7.4 Taylor series

    If you calculate e^0.1 on your calculator, you'll find that it's very close to 1.1. This is because the tangent line at x=0 on the graph of e^x has a slope of 1 (de^x/dx = e^x = 1 at x=0), and the tangent line is a good approximation to the exponential curve as long as we don't get too far away from the point of tangency.


    a / The function e^x, and the tangent line at x=0.

    How big is the error? The actual value of e^0.1 is 1.10517091807565…, which differs from 1.1 by about 0.005. If we go farther from the point of tangency, the approximation gets worse. At x=0.2, the error is about 0.021, which is about four times bigger. In other words, doubling x seems to roughly quadruple the error, so the error is proportional to x^2; it seems to be about x^2/2. Well, if we want a handy-dandy, super-accurate estimate of e^x for small values of x, why not just account for this error? Our new and improved estimate is

    e^x ≈ 1 + x + x^2/2

    for small values of x.
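    To make the claimed error pattern concrete, here is a quick numerical check (a sketch in Python; the function names are my own):

```python
import math

def linear_approx(x):
    """Tangent-line approximation to e^x at x = 0."""
    return 1 + x

def quadratic_approx(x):
    """Improved estimate that also accounts for the x^2/2 error term."""
    return 1 + x + x**2 / 2

# The tangent line's error tracks x^2/2, and the quadratic
# approximation shrinks the error dramatically.
for x in (0.1, 0.2):
    print(x, math.exp(x) - linear_approx(x), x**2 / 2,
          math.exp(x) - quadratic_approx(x))
```

    Running this reproduces the errors quoted above: about 0.005 at x=0.1 and about 0.021 at x=0.2 for the tangent line, with the quadratic estimate far closer.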


    b / The function e^x, and the approximation 1+x+x^2/2.

    Figure b shows that the approximation is now extremely good for sufficiently small values of x. The difference is that whereas 1+x matched both the y-intercept and the slope of the curve, 1+x+x^2/2 matches the curvature as well. Recall that the second derivative is a measure of curvature. The second derivatives of the function and its approximation are

    d^2/dx^2 (e^x) = e^x
    d^2/dx^2 (1 + x + x^2/2) = 1 ,

    both of which equal 1 at x=0.
    c / The function e^x, and the approximation 1+x+x^2/2+x^3/6.

    We can do even better. Suppose we want to match the third derivatives. All the derivatives of e^x, evaluated at x=0, are 1, so we just need to add on a term proportional to x^3 whose third derivative is one. Taking the first derivative will bring down a factor of 3 in front, and taking the second derivative will give a 2, so to cancel these out we need the third-order term to be (1/2)(1/3)x^3 = x^3/6:

    e^x ≈ 1 + x + x^2/2 + x^3/6
    Figure c shows the result. For a significant range of x values close to zero, the approximation is now so good that we can't even see the difference between the two functions on the graph.
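    A similar check (again a Python sketch of my own) shows how good the cubic approximation is close to zero:

```python
import math

def cubic_approx(x):
    """Third-order approximation 1 + x + x^2/2 + x^3/6."""
    return 1 + x + x**2 / 2 + x**3 / 6

# Near x = 0 the cubic is indistinguishable from e^x on a graph:
for x in (0.1, 0.3):
    print(x, math.exp(x) - cubic_approx(x))
```

    At x=0.1 the error is only a few parts per million.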

    On the other hand, figure d shows that the cubic approximation for somewhat larger negative and positive values of x is poor --- worse, in fact, than the linear approximation, or even the constant approximation e^x=1. This is to be expected, because any polynomial will blow up to either positive or negative infinity as x approaches negative infinity, whereas the function e^x is supposed to get very close to zero for large negative x. The idea here is that derivatives are local things: they only measure the properties of a function very close to the point at which they're evaluated, and they don't necessarily tell us anything about points far away.


    d / The function e^x, and the approximation 1+x+x^2/2+x^3/6, on a wider scale.

    It's a remarkable fact, then, that by taking enough terms in a polynomial approximation, we can always get as good an approximation to e^x as necessary --- it's just that a large number of terms may be required for large values of x. In other words, the infinite series

    1 + x + x^2/2 + x^3/6 + …

    always gives exactly e^x. But what is the pattern here that would allow us to figure out, say, the fourth-order and fifth-order terms that were swept under the rug with the symbol “…”? Let's do the fifth-order term as an example. The point of adding in a fifth-order term is to make the fifth derivative of the approximation equal to the fifth derivative of e^x, which is 1. The first, second, … derivatives of x^5 are

    d/dx (x^5) = 5x^4
    d^2/dx^2 (x^5) = 5⋅4 x^3
    d^3/dx^3 (x^5) = 5⋅4⋅3 x^2
    d^4/dx^4 (x^5) = 5⋅4⋅3⋅2 x
    d^5/dx^5 (x^5) = 5⋅4⋅3⋅2⋅1 .
    The notation for a product like 1⋅2⋅…⋅n is n!, read “n factorial.” So to get a term for our polynomial whose fifth derivative is 1, we need x^5/5!. The result for the infinite series is

    e^x = ∑_{n=0}^∞ x^n/n! ,

    where the special case of 0! = 1 is assumed. This infinite series is called the Taylor series for e^x, evaluated around x=0, and it's true, although I haven't proved it, that this particular Taylor series always converges to e^x, no matter how far x is from zero.
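    The whole series is easy to play with numerically. The following Python sketch (function name mine) sums the first few terms:

```python
import math

def exp_series(x, n_terms):
    """Partial sum of the Taylor series: x^n / n! for n = 0 .. n_terms-1."""
    return sum(x**n / math.factorial(n) for n in range(n_terms))

# A handful of terms suffices near zero; larger x needs more terms,
# but the series still closes in on e^x.
print(exp_series(1.0, 10), math.exp(1.0))
print(exp_series(5.0, 30), math.exp(5.0))
```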

    In general, the Taylor series around x=0 for a function y is

    ∑_{n=0}^∞ a_n x^n ,

    where the condition for equality of the nth-order derivatives is

    a_n = (1/n!) (d^n y/dx^n)|_{x=0} .
    Here the notation |_{x=0} means that the derivative is to be evaluated at x=0.

    A Taylor series can be used to approximate other functions besides ex, and when you ask your calculator to evaluate a function such as a sine or a cosine, it may actually be using a Taylor series to do it. Taylor series are also the method Inf uses to calculate most expressions involving infinitesimals. In example 13 on page 29, we saw that when Inf was asked to calculate 1/(1-d), where d was infinitesimal, the result was the geometric series:

       : 1/(1-d)
       1+d+d^2+d^3+d^4

    These are also the first five terms of the Taylor series for the function y=1/(1-x), evaluated around x=0. That is, the geometric series 1+x+x^2+x^3+… is really just one special example of a Taylor series, as demonstrated in the following example.

    Example 5

    ◊ Find the Taylor series of y=1/(1-x) around x=0.

    ◊ Rewriting the function as y=(1-x)^(-1) and applying the chain rule, we have

    dy/dx = (1-x)^(-2)
    d^2 y/dx^2 = 2(1-x)^(-3)
    d^3 y/dx^3 = 6(1-x)^(-4)
    …

    The pattern is that the nth derivative, evaluated at x=0, is n!. The Taylor series therefore has a_n = n!/n! = 1:

    1/(1-x) = 1 + x + x^2 + x^3 + …
    If you flip back to page 106 and compare the rate of convergence of the geometric series for x=0.1 and 0.5, you'll see that the sum converged much more quickly for x=0.1 than for x=0.5. In general, we expect that any Taylor series will converge more quickly when x is smaller. Now consider what happens at x=1. The series is now 1+1+1+…, which gives an infinite result, and we shouldn't have expected any better behavior, since attempting to evaluate 1/(1-x) at x=1 gives division by zero. For x>1, the results become nonsense. For example, 1/(1-2)=-1, which is finite, but the geometric series gives 1+2+4+…, which is infinite.
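    These convergence behaviors are easy to see numerically (a Python sketch; the helper name is mine):

```python
def geometric_partial(x, n_terms):
    """Partial sum 1 + x + x^2 + ... of the Taylor series for 1/(1-x)."""
    return sum(x**n for n in range(n_terms))

# Ten terms nearly nail the answer at x = 0.1, are still visibly off
# at x = 0.5, and blow up rather than approach 1/(1-2) = -1 at x = 2.
for x in (0.1, 0.5, 2):
    print(x, geometric_partial(x, 10))
print(1 / (1 - 0.1), 1 / (1 - 0.5))
```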

    In general, every function's Taylor series around x=0 converges for all values of x in the range defined by |x|<r, where r is some number, known as the radius of convergence. Also, if the function is defined by putting together other functions that are well behaved (in the sense of converging to their own Taylor series in the relevant region), then the Taylor series will not only converge but converge to the correct value. For the function e^x, the radius happens to be infinite, whereas for 1/(1-x) it equals 1. The following example shows a worst-case scenario.


    e / The function e^(-1/x^2) never converges to its Taylor series.

    Example 6

    The function y=e^(-1/x^2), shown in figure e, never converges to its Taylor series, except at x=0. This is because the Taylor series for this function, evaluated around x=0, is exactly zero! At x=0, we have y=0, dy/dx=0, d^2 y/dx^2=0, and so on for every derivative. The zero function matches the function y(x) and all its derivatives to all orders, and yet is useless as an approximation to y(x). The radius of convergence of the Taylor series is infinite, but it doesn't give correct results except at x=0. The reason for this is that y was built by composing two functions, w(x)=-1/x^2 and y(w)=e^w. The function w is badly behaved at x=0 because it blows up there. In particular, it doesn't have a well-defined Taylor series at x=0.
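    A short Python sketch makes the point vivid: every Taylor coefficient of this function at x=0 vanishes, yet the function itself is nonzero everywhere else.

```python
import math

def f(x):
    """y = e^(-1/x^2), with the value at x = 0 defined to be 0."""
    return 0.0 if x == 0 else math.exp(-1 / x**2)

# The Taylor series around 0 is identically zero, but f is not:
print(f(0.1))  # about 3.7e-44: tiny, but not zero
print(f(1.0))  # about 0.37
```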

    Example 7

    ◊ Find the Taylor series of y=sin x, evaluated around x=0.

    ◊ The first few derivatives are

    d/dx (sin x) = cos x
    d^2/dx^2 (sin x) = -sin x
    d^3/dx^3 (sin x) = -cos x
    d^4/dx^4 (sin x) = sin x
    …
    We can see that there will be a cycle of sin, cos, -sin, and -cos, repeating indefinitely. Evaluating these derivatives at x=0, we have 0, 1, 0, -1, …. All the even-order terms of the series are zero, and all the odd-order terms are ±1/n!. The result is

    sin x = x - x^3/3! + x^5/5! - …

    The linear term is the familiar small-angle approximation sin x ≈ x.

    The radius of convergence of this series turns out to be infinite. Intuitively the reason for this is that the factorials grow extremely rapidly, so that the successive terms in the series eventually start to diminish quickly, even for large values of x.
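    This rapid decay of the terms can be seen directly (a Python sketch; the function name is mine):

```python
import math

def sin_series(x, n_terms):
    """Partial sum x - x^3/3! + x^5/5! - ..., using n_terms odd-order terms."""
    return sum((-1)**k * x**(2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(n_terms))

# One term is the small-angle approximation; with enough terms the
# series matches sin x even for x far from zero.
print(sin_series(0.1, 1), math.sin(0.1))
print(sin_series(10.0, 30), math.sin(10.0))
```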

    Example 8

    Suppose that we want to evaluate a limit of the form

    lim_{x→0} u(x)/v(x) ,

    where u(0)=v(0)=0. L'Hôpital's rule tells us that we can do this by taking derivatives on the top and bottom to form u'/v', and that, if necessary, we can do more than one derivative, e.g., u''/v''. This was proved on p. 152 using the mean value theorem. But if u and v are both functions that converge to their Taylor series, then it is much easier to see why this works. For example, suppose that their Taylor series both have vanishing constant and linear terms, so that u=ax^2+… and v=bx^2+…. Then u''=2a+…, and v''=2b+…, so as x→0 the ratio u''/v'' approaches a/b, which is also the limit of u/v.
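    A concrete instance (sketched in Python): u = 1 - cos x and v = x^2 both vanish at x=0, and since cos x = 1 - x^2/2 + …, we have u = x^2/2 - …, so the limit should be (1/2)/1 = u''(0)/v''(0) = 1/2.

```python
import math

# The ratio (1 - cos x)/x^2 should approach the ratio of the
# quadratic Taylor coefficients, (1/2)/1 = 1/2, as x -> 0.
for x in (0.1, 0.01, 0.001):
    print(x, (1 - math.cos(x)) / x**2)
```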

    A function's Taylor series doesn't have to be evaluated around x=0. The Taylor series around some other center x=c is given by

    ∑_{n=0}^∞ a_n (x-c)^n ,

    where

    a_n = (1/n!) (d^n f/dx^n)|_{x=c} .

    To see that this is the right generalization, we can do a change of variable, defining a new function g(x)=f(x+c); the Taylor series of g around x=0 is then the series above. The radius of convergence is to be measured from the center c rather than from 0.
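    As a numerical illustration (a Python sketch of mine), we can recenter the series for e^x at c=2, where every derivative equals e^2:

```python
import math

def exp_series_at(x, c, n_terms):
    """Taylor series of e^x around x = c: sum of e^c (x-c)^n / n!."""
    return sum(math.exp(c) * (x - c)**n / math.factorial(n)
               for n in range(n_terms))

# Only a few terms are needed when x is close to the center c = 2:
print(exp_series_at(2.1, 2.0, 6), math.exp(2.1))
```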

    Example 9

    ◊ Find the Taylor series of ln x, evaluated around x=1.

    ◊ Evaluating a few derivatives, we get

    d/dx (ln x) = x^(-1)
    d^2/dx^2 (ln x) = -x^(-2)
    d^3/dx^3 (ln x) = 2x^(-3)
    d^4/dx^4 (ln x) = -6x^(-4)
    …

    Note that evaluating these at x=0 wouldn't have worked, since division by zero is undefined; this is because ln x blows up to negative infinity at x=0. Evaluating them at x=1, we find that the nth derivative equals ±(n-1)!, so the coefficients of the Taylor series are ±(n-1)!/n! = ±1/n, except for the n=0 term, which is zero because ln 1=0. The resulting series is

    ln x = (x-1) - (x-1)^2/2 + (x-1)^3/3 - …
    We can predict that its radius of convergence can't be any greater than 1, because ln x blows up at 0, which is at a distance of 1 from 1.
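    The radius of convergence shows up clearly in a numerical experiment (Python sketch, helper name mine):

```python
import math

def ln_series(x, n_terms):
    """Partial sum (x-1) - (x-1)^2/2 + (x-1)^3/3 - ..., centered at x = 1."""
    return sum((-1)**(n + 1) * (x - 1)**n / n
               for n in range(1, n_terms + 1))

# Inside the radius (|x-1| < 1) the series homes in on ln x;
# outside it (here |x-1| = 2) the partial sums oscillate wildly.
print(ln_series(1.5, 50), math.log(1.5))
print(ln_series(3.0, 10))
```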