5.3 Methods of integration

    Change of variable

    Sometimes an unfamiliar-looking integral can be made into a familiar one by substituting a new variable for an old one. For example, we know how to integrate 1/x (the answer is ln x), but what about

    \int dx/(2x+1) ?

    Let u=2x+1. Differentiating both sides, we have du=2dx, or dx=du/2, so

    \int dx/(2x+1) = (1/2) \int du/u = (1/2) ln u + c = (1/2) ln(2x+1) + c .
    This technique is known as a change of variable or a substitution. (Because the letter u is often employed, you may also see it called u-substitution.)

    In the case of a definite integral, we have to remember to change the limits of integration to reflect the new variable.

    Example 2

    ◊ Evaluate \int_3^4 dx/(2x+1).

    ◊ As before, let u=2x+1. When x=3, u=7, and when x=4, u=9, so

    \int_3^4 dx/(2x+1) = (1/2) \int_7^9 du/u = (1/2)(ln 9 - ln 7) = (1/2) ln(9/7) .

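    The result of example 2 is easy to sanity-check numerically. The following short Python sketch (our own illustration, not part of the text's Yacas examples; `simpson` is a hand-rolled helper) compares Simpson's rule against (1/2) ln(9/7):

```python
import math

def simpson(f, a, b, n=1000):
    """Approximate the integral of f over [a, b] using Simpson's rule
    with n subintervals (n must be even)."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

numeric = simpson(lambda x: 1 / (2 * x + 1), 3, 4)
exact = 0.5 * math.log(9 / 7)
print(numeric, exact)  # both ≈ 0.12566
```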
    Sometimes, as in the next example, a clever substitution is the secret to doing a seemingly impossible integral.

    Example 3
    ◊ Evaluate

    ◊ The only hope for reducing this to a form we can do is to let u=√x. Then dx = d(u^2) = 2u du, so





    Example 3 really isn't so tricky, since there was only one logical choice for the substitution that had any hope of working. The following is a little more dastardly.

    Example 4
    ◊ Evaluate

    \int dx/(1+x^2) .

    ◊ The substitution that works is x=tan u. First let's see what this does to the expression 1+x^2. The familiar identity

    sin^2 u + cos^2 u = 1 ,

    when divided through by cos^2 u, becomes

    tan^2 u + 1 = sec^2 u ,

    so 1+x^2 becomes sec^2 u. But differentiating both sides of x=tan u gives

    dx = d(tan u) = (sec^2 u) du ,

    so the integral becomes

    \int dx/(1+x^2) = \int (sec^2 u du)/(sec^2 u) = \int du = u + c = tan^{-1} x + c .

    What mere mortal would ever have suspected that the substitution x=tan u was the one that was needed in example 4? One possible answer is to give up and do the integral on a computer:

       Integrate(x) 1/(1+x^2)

    Another possible answer is that you can usually smell the possibility of this type of substitution, involving a trig function, when the thing to be integrated contains something reminiscent of the Pythagorean theorem, as suggested by figure b. The 1+x^2 looks like what you'd get if you had a right triangle with legs 1 and x, and were using the Pythagorean theorem to find its hypotenuse.


    b / The substitution x=tan u.
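    The arctangent result from this substitution is easy to confirm numerically. Here is a small Python check (our own illustration, with a hand-rolled Simpson's-rule helper), integrating 1/(1+x^2) from 0 to 1 and comparing against tan^{-1} 1 = π/4:

```python
import math

def simpson(f, a, b, n=1000):
    """Approximate the integral of f over [a, b] using Simpson's rule
    with n subintervals (n must be even)."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

numeric = simpson(lambda x: 1 / (1 + x**2), 0, 1)
print(numeric, math.pi / 4)  # both ≈ 0.785398
```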

    Example 5

    ◊ Evaluate \int dx/√(1-x^2).

    ◊ The √(1-x^2) looks like what you'd get if you had a right triangle with hypotenuse 1 and a leg of length x, and were using the Pythagorean theorem to find the other leg, as in figure c. This motivates us to try the substitution x=cos u, which gives dx = -sin u du and √(1-x^2) = √(1-cos^2 u) = sin u. The result is

    \int dx/√(1-x^2) = \int (-sin u du)/(sin u) = -u + c = -cos^{-1} x + c .

    c / The substitution x=cos u.

    Integration by parts

    Figure d shows a technique called integration by parts. If the integral \int v du is easier than the integral \int u dv, then we can calculate the easier one, and then by simple geometry determine the one we wanted. Identifying the large rectangle that surrounds both shaded areas, and the small white rectangle on the lower left, we have

    \int u dv + \int v du = (area of the large rectangle) - (area of the small rectangle) ,

    where the two rectangles' areas are the products uv evaluated at the upper and lower limits.

    d / Integration by parts.

    In the case of an indefinite integral, we have a similar relationship derived from the product rule:

    d(uv) = u dv + v du

    Integrating both sides, we have the following relation.

    Integration by parts

    \int u dv = uv - \int v du
    Since a definite integral can always be done by evaluating an indefinite integral at its upper and lower limits, one usually uses this form. Integrals don't usually come prepackaged in a form that makes it obvious that you should use integration by parts. What the equation for integration by parts tells us is that if we can split up the integrand into two factors, one of which (the dv) we know how to integrate, we have the option of changing the integral into a new form in which that factor becomes its integral, and the other factor becomes its derivative. If we choose the right way of splitting up the integrand into parts, the result can be a simplification.

    Example 6

    ◊ Evaluate

    \int x cos x dx .

    ◊ There are two obvious possibilities for splitting up the integrand into factors,

    (u)(dv) = (x)(cos x dx)    or    (u)(dv) = (cos x)(x dx) .

    The first one is the one that lets us make progress. If u=x, then du=dx, and if dv=cos x dx, then integration gives v=sin x.

    \int x cos x dx = \int u dv = uv - \int v du = x sin x - \int sin x dx = x sin x + cos x + c

    Of the two possibilities we considered for u and dv, the reason this one helped was that differentiating x gave dx, which was simpler, and integrating cos x dx gave sin x, which was no more complicated than before. The second possibility would have made things worse rather than better, because integrating x dx would have given x^2/2, which would have been more complicated rather than less.
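    The antiderivative found in example 6 can be checked over a definite range. The Python sketch below (our own check, not from the text; the interval [0, π] is an arbitrary choice) confirms that x sin x + cos x reproduces the numerical integral of x cos x:

```python
import math

def antideriv(x):
    # the result of example 6: an antiderivative of x cos x
    return x * math.sin(x) + math.cos(x)

def simpson(f, a, b, n=2000):
    """Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

numeric = simpson(lambda x: x * math.cos(x), 0, math.pi)
exact = antideriv(math.pi) - antideriv(0)
print(numeric, exact)  # both ≈ -2.0
```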

    Example 7

    ◊ Evaluate \int ln x dx.

    ◊ This one is a little tricky, because it isn't explicitly written as a product, and yet we can attack it using integration by parts. Let u=ln x and dv=dx. Then du=(1/x)dx and v=x, so

    \int ln x dx = uv - \int v du = x ln x - \int x (1/x) dx = x ln x - x + c .


    Example 8
    ◊ Evaluate \int x^2 e^x dx.

    ◊ Integration by parts lets us split the integrand into two factors, integrate one, differentiate the other, and then do that integral. Integrating or differentiating e^x does nothing. Integrating x^2 increases the exponent, which makes the problem look harder, whereas differentiating x^2 knocks the exponent down a step, which makes it look easier. Let u=x^2 and dv=e^x dx, so that du=2x dx and v=e^x. We then have

    \int x^2 e^x dx = x^2 e^x - 2 \int x e^x dx .

    Although we don't immediately know how to evaluate this new integral, we can subject it to the same type of integration by parts, now with u=x and dv=e^x dx. After the second integration by parts, we have:

    \int x^2 e^x dx = x^2 e^x - 2 ( x e^x - \int e^x dx ) = x^2 e^x - 2x e^x + 2e^x + c
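    The double integration by parts works out to the antiderivative e^x (x^2 - 2x + 2), and this is easy to check over a definite range. Here is a Python spot-check (our own, not from the text; the interval [0, 1] is arbitrary, and the exact value there is e - 2):

```python
import math

def antideriv(x):
    # e^x (x^2 - 2x + 2), the result of the double integration by parts
    return math.exp(x) * (x**2 - 2*x + 2)

def simpson(f, a, b, n=2000):
    """Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

numeric = simpson(lambda x: x**2 * math.exp(x), 0, 1)
exact = antideriv(1) - antideriv(0)
print(numeric, exact, math.e - 2)  # all ≈ 0.71828
```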

    Partial fractions

    Given a function like

    -1/(x-1) + 1/(x+1) ,

    we can rewrite it over a common denominator like this:

    (-1/(x-1))((x+1)/(x+1)) + (1/(x+1))((x-1)/(x-1)) = (-x-1+x-1)/((x-1)(x+1)) = -2/(x^2-1) .

    But note that the original form is easily integrated to give

    \int [ -1/(x-1) + 1/(x+1) ] dx = -ln(x-1) + ln(x+1) + c ,

    while faced with the form -2/(x^2-1), we wouldn't have known how to integrate it.

    Note that the original function was of the form (-1)/… + (+1)/… It's not a coincidence that the two constants on top, -1 and +1, are opposite in sign but equal in absolute value. To see why, consider the behavior of this function for large values of x. Looking at the form -1/(x-1)+1/(x+1), we might naively guess that for a large value of x such as 1000, it would come out to be somewhere on the order of thousandths. But looking at the form -2/(x^2-1), we would expect it to be way down in the millionths. This seeming paradox is resolved by noting that for large values of x, the two terms in the form -1/(x-1)+1/(x+1) very nearly cancel. This cancellation could only have happened if the constants on top were opposites like plus and minus one.

    The idea of the method of partial fractions is that if we want to do an integral of the form

    \int dx/P(x) ,

    where P(x) is an nth-order polynomial, we rewrite 1/P as

    1/P(x) = A_1/(x-r_1) + ... + A_n/(x-r_n) ,

    where r_1 ... r_n are the roots of the polynomial, i.e., the solutions of the equation P(r)=0. If the polynomial is second-order, you can find the roots r_1 and r_2 using the quadratic formula; I'll assume for the time being that they're real. For higher-order polynomials, there is no surefire, easy way of finding the roots by hand, and you'd be smart simply to use computer software to do it. In Yacas, you can find the real roots of a polynomial like this:

       FindRealRoots(x^4-5*x^3-25*x^2+65*x+84)

    (I assume it uses Newton's method to find them.) The constants A_i can then be determined by algebra, or by the following trick.

    Numerical method

    Suppose we evaluate 1/P(x) for a value of x very close to one of the roots. In the example of the polynomial x^4-5x^3-25x^2+65x+84, let r_1 ... r_4 be the roots in the order in which they were returned by Yacas. Then A_1 can be found by evaluating 1/P(x) at x=3.000001, which gives approximately -8.93×10^3. We know that for x very close to 3, the expression

    1/P = A_1/(x-3) + A_2/(x-7) + A_3/(x+4) + A_4/(x+1)

    will be dominated by the A_1 term, so

    1/P ≈ A_1/(x-3) ,    A_1 ≈ (x-3)(1/P) ≈ (10^{-6})(-8.93×10^3) ≈ -8.93×10^{-3} .
    By the same method we can find the other three constants:

    A_2 ≈ 2.84×10^{-3} ,   A_3 ≈ -4.33×10^{-3} ,   A_4 ≈ 1.04×10^{-2} .
    (The N( ,30) construct is to tell Yacas to do a numerical calculation rather than an exact symbolic one, and to use 30 digits of precision, in order to avoid problems with rounding errors.) Thus,

    1/P ≈ -8.93×10^{-3}/(x-3) + 2.84×10^{-3}/(x-7) - 4.33×10^{-3}/(x+4) + 1.04×10^{-2}/(x+1) .

    The desired integral is

    \int dx/P(x) ≈ -8.93×10^{-3} ln(x-3) + 2.84×10^{-3} ln(x-7) - 4.33×10^{-3} ln(x+4) + 1.04×10^{-2} ln(x+1) + c .
    As in the simpler example I started off with, where P was second order and we got A_1=-A_2, in this n=4 example we expect that A_1+A_2+A_3+A_4=0, for otherwise the large-x behavior of the partial-fraction form would be 1/x rather than 1/x^4. This is a useful way of checking the result: -8.93+2.84-4.33+10.4=-.02≈0.
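    The whole numerical procedure can be scripted. The Python sketch below (our own translation of the workflow, not the text's Yacas session; the roots 3, 7, -4, -1 are the ones quoted above) estimates each A_i by evaluating 1/P just off the corresponding root, then applies the sum-to-zero check:

```python
# P is the fourth-order polynomial from the text.
P = lambda x: x**4 - 5*x**3 - 25*x**2 + 65*x + 84

roots = [3, 7, -4, -1]  # real roots of P, in the order used above
eps = 1e-6

# Near x = r_i, 1/P is dominated by A_i/(x - r_i), so A_i ≈ (x - r_i)/P(x).
A = [eps / P(r + eps) for r in roots]

print(A)       # ≈ [-8.93e-03, 2.84e-03, -4.33e-03, 1.04e-02]
print(sum(A))  # ≈ 0, the large-x consistency check
```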


    There are two possible complications:

    First, the same factor may occur more than once, as in x^3-5x^2+7x-3=(x-1)(x-1)(x-3). In this example, we have to look for an answer of the form A/(x-1)+B/(x-1)^2+C/(x-3), the solution being -.25/(x-1)-.5/(x-1)^2+.25/(x-3).

    Second, the roots may be complex. This is no show-stopper if you're using computer software that handles complex numbers gracefully. (You can choose a c that makes the result real.) In fact, as discussed in section 8.3, some beautiful things can happen with complex roots. But as an alternative, any polynomial with real coefficients can be factored into linear and quadratic factors with real coefficients. For each quadratic factor Q(x), we then have a partial fraction of the form (A+Bx)/Q(x), where A and B can be determined by algebra. In Yacas, this can be done using the Apart function.
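    The decomposition quoted for the repeated-root example can be spot-checked with exact rational arithmetic. This Python fragment (our own check; the sample point x=5 is arbitrary) evaluates both sides as fractions:

```python
from fractions import Fraction

x = Fraction(5)  # any sample point away from the roots 1 and 3 would do
lhs = 1 / (x**3 - 5*x**2 + 7*x - 3)
rhs = (Fraction(-1, 4) / (x - 1)        # -.25/(x-1)
       + Fraction(-1, 2) / (x - 1)**2   # -.5/(x-1)^2
       + Fraction(1, 4) / (x - 3))      # +.25/(x-3)
print(lhs, rhs)  # both 1/32
```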

    Example 9

    ◊ Evaluate the integral


    using the method of partial fractions.

    ◊ First we use Yacas to look for real roots of the polynomial:


    Unfortunately this polynomial seems to have only two real roots; the rest are complex. We can divide out the factor (x-1)(x-7), but that still leaves us with a second-order polynomial, which has no real roots. One approach would be to factor the polynomial into the form (x-1)(x-7)(x-p)(x-q), where p and q are complex, as in section 8.3. Instead, let's use Yacas to expand the integrand in terms of partial fractions:


    We can now rewrite the integral like this:










    In fact, Yacas should be able to do the whole integral for us from scratch, but it's best to understand how these things work under the hood, and to avoid being completely dependent on one particular piece of software. As an illustration of this gem of wisdom, I found that when I tried to make Yacas evaluate the integral in one gulp, it choked because the calculation became too complicated! Because I understood the ideas behind the procedure, I was still able to get a result through a mixture of computer calculations and working it by hand. Someone who didn't have the knowledge of the technique might have tried the integral using the software, seen it fail, and concluded, incorrectly, that the integral was one that simply couldn't be done. A computer is no substitute for understanding.

    Residue method

    On p. 92 I introduced the trick of carrying out the method of partial fractions by evaluating 1/P(x) numerically at x=r_i+ε, near where 1/P blows up. Sometimes we would like to have an exact result rather than a numerical approximation. We can accomplish this by using an infinitesimal number dx rather than a small but finite ε. For simplicity, let's assume that all of the n roots r_i are distinct, and that P's highest-order term is x^n. We can then write P as the product P(x)=(x-r_1)(x-r_2)…(x-r_n). For products like this, there is a notation Π (capital Greek letter “pi”) that works like Σ does for sums:

    P(x) = Π_{i=1}^{n} (x-r_i) .
    It's not necessary that the roots be real, but for now we assume that they are. We want to find the coefficients A_i such that

    1/P(x) = Σ_{i=1}^{n} A_i/(x-r_i) .
    We then have

    1/P(r_i+dx) = 1/[ (r_i+dx-r_1) … (dx) … (r_i+dx-r_n) ]
                = (1/dx) · 1/[ (r_i-r_1) … (r_i-r_n) ] + …
                = (1/dx) · 1/P'(r_i) + … ,

    where the factor (r_i-r_i) is omitted from the products, and … represents finite terms that are negligible compared to the infinite ones. But the partial-fraction form gives 1/P(r_i+dx) = A_i/dx + … as well. Multiplying on both sides by dx, we have

    A_i = 1/P'(r_i) + … ,

    where the … now stand for infinitesimals which must in fact cancel out, since both A_i and 1/P' are real numbers.

    Example 10

    ◊ The partial-fraction decomposition of the function

    1/(x^4-5x^3-25x^2+65x+84)

    was found numerically on p. 92. The coefficient of the 1/(x-3) term was found numerically to be A_1 ≈ -8.930×10^{-3}. Determine it exactly using the residue method.

    ◊ Differentiation gives P'(x)=4x^3-15x^2-50x+65. We then have A_1 = 1/P'(3) = -1/112.
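    The arithmetic in example 10 is easy to reproduce with exact rational arithmetic. A short Python check (our own, not from the text):

```python
from fractions import Fraction

def P_prime(x):
    # derivative of x^4 - 5x^3 - 25x^2 + 65x + 84
    return 4*x**3 - 15*x**2 - 50*x + 65

A1 = Fraction(1, P_prime(3))  # the residue-method formula A_1 = 1/P'(3)
print(A1)         # -1/112
print(float(A1))  # ≈ -0.00893, matching the numerical estimate
```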

    Integrals that can't be done

    Integral calculus was invented in the age of powdered wigs and harpsichords, so the original emphasis was on expressing integrals in a form that would allow numbers to be plugged in for easy numerical evaluation by scribbling on scraps of parchment with a quill pen. This was an era when you might have to travel to a large city to get access to a table of logarithms.

    In this computationally impoverished environment, one always wanted to get answers in what's known as closed form and in terms of elementary functions.

    A closed-form expression means one written using a finite number of operations, as opposed to something like the geometric series 1+x+x^2+x^3+…, which goes on forever.

    Elementary functions are usually taken to be addition, subtraction, multiplication, division, logs, and exponentials, as well as other functions derivable from these. For example, a cube root is allowed, since ∛x = e^{(1/3) ln x}, and so are trig functions and their inverses, since, as we will see in chapter 8, they can be expressed in terms of logs and exponentials.

    In theory, “closed form” doesn't mean anything unless we state the elementary functions that are allowed. In practice, when people refer to closed form, they usually have in mind the particular set of elementary functions described above.

    A traditional freshman calculus course spends such a vast amount of time teaching you how to do integrals in closed form that it may be easy to miss the fact that this is impossible for the vast majority of integrands that you might randomly write down. The classic example of an impossible integral (among many) is

    \int e^{-x^2} dx .

    This is a form that is extremely important in statistics (it describes the area under the standard “bell curve”), so you can see that impossible integrals aren't just obscure things that don't pop up in real life.

    People who are proficient at doing integrals in closed form generally seem to work by a process of pattern matching. They recognize certain integrals as being of a form that can't be done, so they know not to try.

    Example 11

    ◊ Students! Stand at attention! You will now evaluate \int e^{-x^2+7x} dx in closed form.

    ◊ No sir, I can't do that. By a change of variables of the form u=x+c, where c is a constant, we could clearly put this into the form \int e^{-x^2} dx, which we know is impossible.

    Sometimes an integral such as \int e^{-x^2} dx is important enough that we want to give it a name, tabulate it, and write computer subroutines that can evaluate it numerically. For example, statisticians define the “error function” erf(x) = (2/√π) \int_0^x e^{-t^2} dt. Sometimes if you're not sure whether an integral can be done in closed form, you can put it into computer software, which will tell you that it reduces to one of these functions. You then know that it can't be done in closed form. For example, if you ask the popular web site to do \int e^{-x^2+7x} dx, it spits back (1/2) e^{49/4} √π erf(x-7/2). This tells you both that you shouldn't be wasting your time trying to do the integral in closed form and that if you need to evaluate it numerically, you can do that using the erf function.
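    Python's standard library happens to include erf, so the quoted closed form can be verified numerically. In this sketch (our own; the interval [0, 1] and the Simpson helper are arbitrary choices), we compare a numerical integral of e^{-x^2+7x} with the difference of the erf-based antiderivative at the endpoints:

```python
import math

def F(x):
    # the closed form quoted above: (1/2) e^(49/4) sqrt(pi) erf(x - 7/2)
    return 0.5 * math.exp(49/4) * math.sqrt(math.pi) * math.erf(x - 7/2)

def simpson(f, a, b, n=2000):
    """Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

numeric = simpson(lambda x: math.exp(-x**2 + 7*x), 0, 1)
closed = F(1) - F(0)
print(numeric, closed)  # the two values agree closely
```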

    As shown in the following example, just because an indefinite integral can't be done, that doesn't mean that we can never do a related definite integral.

    Example 12

    ◊ Evaluate \int_0^{π/2} e^{-tan^2 x} (tan^2 x + 1) dx.

    ◊ The obvious substitution to try is u=tan x, and this reduces the integrand to e^{-u^2}, with the limits of integration becoming 0 and ∞. This proves that the corresponding indefinite integral is impossible to express in closed form. However, the definite integral can be expressed in closed form; it turns out to be √π/2. The trick for proving this is given in example 99 on p. 134.
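    The value √π/2 can also be confirmed by brute force. This Python check (our own; stopping the integration just short of π/2 avoids evaluating tan at the singular endpoint, where the integrand has already decayed to zero) integrates the original integrand numerically:

```python
import math

def f(x):
    t = math.tan(x)
    return math.exp(-t*t) * (t*t + 1)  # e^(-tan^2 x) (tan^2 x + 1)

def simpson(f, a, b, n=200000):
    """Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

numeric = simpson(f, 0.0, math.pi/2 - 1e-8)
print(numeric, math.sqrt(math.pi) / 2)  # both ≈ 0.886227
```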

    Sometimes computer software can't say anything about a particular integral at all. That doesn't mean that the integral can't be done. Computers are stupid, and they may try brute-force techniques that fail because the computer runs out of memory or CPU time. For example, the integral \int dx/(x^{10000}-1) (problem 15, p. 127) can be done in closed form using the techniques of chapter 8, and it's not too hard for a proficient human to figure out how to attack it, but every computer program I've tried it on has failed silently.