# 5.1: Second Order Ordinary Differential Equations


Solving second order ordinary differential equations is much more complex than solving first order ODEs. We just saw that there is a general method for solving any linear first order ODE. Unfortunately, this is not true for higher order ODEs. However, we can solve higher order ODEs when the coefficients are constants:

\[y''(x)+ k_1 y'(x) + k_2 y(x)+k_3=0 \nonumber\]

The equation above is said to be homogeneous if \(k_3=0\):

\[\label{eq:2ndorder} y''(x)+ k_1 y'(x) + k_2 y(x)=0\]

It is possible to solve non-homogeneous ODEs, but in this course we will concentrate on the homogeneous cases. Second order linear equations occur in many important applications. For example, the motion of a mass on a spring, and any other simple oscillating system, is described by an equation of the form

\[m\frac{d^2u}{dt^2}+\gamma\frac{du}{dt}+k u=F(t) \nonumber\]

We’ll analyze what the different parts of this equation mean in the examples. The equation above is homogeneous if \(F(t)=0\).

Let’s analyze Equation \ref{eq:2ndorder}, which is linear and homogeneous. The coefficients \(k_1\) and \(k_2\) do not depend on the value of \(x\), and therefore the equation has constant coefficients (in the mass–spring equation above, the parameters \(m\), \(\gamma\) and \(k\) play the same role: they represent physical quantities that do not depend on the independent variable). This equation will be satisfied by a function whose derivatives are multiples of itself. This is the only way that we will get zero after adding a multiple of the function plus a multiple of its first derivative plus a multiple of the second derivative. You may be tempted to say that \(\sin(x)\) satisfies this requirement, but its first derivative is \(\cos(x)\), which is not a multiple of \(\sin(x)\), so the terms will not cancel when added together. The only functions that satisfy this requirement are the exponential functions \(e^{\alpha x}\), with first and second derivatives \(\alpha e^{\alpha x}\) and \(\alpha^2e^{\alpha x}\), respectively. So, let’s assume that the answer we are looking for is an exponential function, \(y(x)=e^{\alpha x}\), and let’s plug these expressions back into Equation \ref{eq:2ndorder}:

\[\alpha^2e^{\alpha x}+ k_1 \alpha e^{\alpha x} + k_2 e^{\alpha x}=0 \nonumber\]

\[e^{\alpha x}\left(\alpha^2+ k_1 \alpha + k_2 \right)=0 \nonumber\]

The above equation tells us that either \(e^{\alpha x}\) or \(\left(\alpha^2+ k_1 \alpha + k_2 \right)\) must be zero. The first factor, \(e^{\alpha x}\), is never zero for any finite \(x\); it only approaches zero as \(x\) goes to plus or minus infinity (depending on whether \(\alpha\) is negative or positive). But this is too restrictive because we want to find a solution that is a function of \(x\), so we don’t want to impose restrictions on our independent variable. We therefore consider

\[\left(\alpha^2+ k_1 \alpha + k_2 \right)=0 \nonumber\]

This is a quadratic equation in \(\alpha\), which we will call the auxiliary equation. The two roots are found from:

\[\alpha_{1,2}=\frac{-k_1\pm \sqrt{k_{1}^{2}-4k_2}}{2} \nonumber\]

This gives two answers, \(\alpha_1\) and \(\alpha_2\), which means there are at least two different exponential functions that are solutions of the differential equation: \(e^{\alpha_1 x}\) and \(e^{\alpha_2 x}\). We will see that any linear combination of these two functions is also a solution, but before continuing, let’s look at a few examples. Notice that the argument of the square root can be positive, negative or zero, depending on the relative values of \(k_1\) and \(k_2\). This means that \(\alpha_{1,2}\) can be imaginary, and the solutions can therefore be complex exponentials. Let’s look at the three situations individually through examples.
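If you want to check the roots with a computer, the quadratic formula above can be evaluated directly. The short Python sketch below is an illustrative aid (the function name `auxiliary_roots` is our own choice, not standard terminology in any library); using `cmath.sqrt` lets the same code handle all three sign possibilities for the discriminant:

```python
import cmath

def auxiliary_roots(k1, k2):
    """Roots of the auxiliary equation alpha^2 + k1*alpha + k2 = 0."""
    disc = k1**2 - 4*k2        # discriminant k1^2 - 4 k2
    root = cmath.sqrt(disc)    # cmath.sqrt also handles disc < 0 (complex roots)
    return (-k1 + root) / 2, (-k1 - root) / 2

# Example: y'' - 5y' + 4y = 0, i.e. k1 = -5, k2 = 4
print(auxiliary_roots(-5, 4))  # two real roots, returned as complex numbers
```

Because the roots are computed with complex arithmetic, real roots come back with a zero imaginary part, and Case II (below) produces a complex-conjugate pair automatically.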

## Case I: \(k_1^2-4k_2>0\)

In this case, \(\sqrt{k_{1}^{2}-4k_2}>0\), and therefore \(\alpha_1\) and \(\alpha_2\) are both real and different.

For example: Find the solution of \(y''(x) -5y'(x) +4 y(x) = 0\) subject to initial conditions \(y(0)=1\) and \(y'(0)=-1\).

As we discussed above, we’ll assume the solution is \(y(x)=e^{\alpha x}\), and we’ll determine which values of \(\alpha\) satisfy this particular differential equation. Let’s replace \(y(x), y'(x)\) and \(y''(x)\) in the differential equation:

\[\alpha^2e^{\alpha x}-5 \alpha e^{\alpha x} +4 e^{\alpha x}=0 \nonumber\]

\[e^{\alpha x}\left(\alpha^2-5 \alpha + 4 \right)=0 \nonumber\]

and with the arguments we discussed above:

\[\left(\alpha^2-5 \alpha +4 \right)=0 \nonumber\]

\[\alpha_{1,2}=\frac{-(-5)\pm \sqrt{(-5)^{2}-4\times 4}}{2}=\frac{5\pm 3}{2} \nonumber\]

from which we obtain \(\alpha_1=1\) and \(\alpha_2=4\). Therefore, \(e^{x}\) and \(e^{4x}\) are both solutions to the differential equation. Let’s prove this is true. If \(y(x) = e^{4x}\), then \(y'(x) = 4e^{4x}\) and \(y''(x) = 16e^{4x}\). Substituting these expressions in the differential equation we get

\[y''(x) - 5y'(x) + 4y(x) = 16e^{4x}-5\times 4e^{4x}+4\times e^{4x}=0 \nonumber\]

so \(y(x) = e^{4x}\) clearly satisfies the differential equation. You can do the same with \(y(x) = e^{x}\) and prove it is also a solution.

However, neither of these solutions satisfies both initial conditions, so clearly we are not finished. We found two independent solutions to the differential equation, and now we will claim that any linear combination of these two independent solutions is also a solution. Mathematically, this means that if \(y_1(x)\) and \(y_2(x)\) are solutions, then \(c_1 y_1(x)+c_2 y_2(x)\) is also a solution, where \(c_1\) and \(c_2\) are constants (i.e. not functions of \(x\)). Coming back to our example, the claim is that \(c_1 e^{4x}+c_2 e^x\) is the general solution of this differential equation. Let’s see if it’s true:

\[ \begin{aligned} y(x)=c_1 e^{4x}+c_2 e^x \\ y'(x)=4c_1 e^{4x}+c_2 e^x \\ y''(x)=16c_1 e^{4x}+c_2 e^x \end{aligned} \nonumber\]

Substituting in the differential equation:

\[y''(x) - 5y'(x) + 4y(x) = 16c_1 e^{4x}+c_2 e^x-5\times \left(4c_1 e^{4x}+c_2 e^x\right)+4\times \left(c_1 e^{4x}+c_2 e^x\right)=0 \nonumber\]

so we just proved that the linear combination is also a solution, independently of the values of \(c_1\) and \(c_2\). It is important to notice that our general solution has now two arbitrary constants, as expected for a second order differential equation. We will determine these constants from the initial conditions to find the particular solution.

The general solution is \(y(x)=c_1e^{4x}+c_2e^{x}\). Let’s apply the first initial condition: \(y(0)=1\).

\[y(0)=c_1+c_2=1 \nonumber\]

This gives a relationship between \(c_1\) and \(c_2\). The second initial condition is \(y'(0)=-1\).

\[y'(x)=4c_1e^{4x}+c_2e^x\rightarrow y'(0)=4c_1+c_2=-1 \nonumber\]

We have two equations with two unknowns that we can solve to get \(c_1=-2/3\) and \(c_2=5/3\).

The particular solution is then:

\[y(x)=-\frac{2}{3}e^{4x}+\frac{5}{3}e^x \nonumber\]
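As a sanity check, we can verify this particular solution numerically. The following Python sketch (an illustrative verification, with the first and second derivatives computed by hand in the comments) confirms that it satisfies both initial conditions and the differential equation:

```python
from math import exp

def y(x):    # particular solution: y = -2/3 e^{4x} + 5/3 e^{x}
    return -2/3 * exp(4*x) + 5/3 * exp(x)

def yp(x):   # y'(x) = -8/3 e^{4x} + 5/3 e^{x}
    return -8/3 * exp(4*x) + 5/3 * exp(x)

def ypp(x):  # y''(x) = -32/3 e^{4x} + 5/3 e^{x}
    return -32/3 * exp(4*x) + 5/3 * exp(x)

# initial conditions: y(0) = 1 and y'(0) = -1
assert abs(y(0) - 1) < 1e-12 and abs(yp(0) + 1) < 1e-12
# the residual y'' - 5y' + 4y should vanish for all x
for x in (0.0, 0.5, 1.0):
    assert abs(ypp(x) - 5*yp(x) + 4*y(x)) < 1e-9
```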

## Case II: \(k_1^2-4k_2<0\)

In this case, \(k_{1}^{2}-4k_2<0\), so \(\sqrt{k_{1}^{2}-4k_2}=i \sqrt{-k_{1}^{2}+4k_2}\), where \(\sqrt{-k_{1}^{2}+4k_2}\) is a real number. Therefore, in this case,

\[\alpha_{1,2}=\frac{-k_1\pm \sqrt{k_{1}^{2}-4k_2}}{2}=\frac{-k_1\pm i \sqrt{-k_{1}^{2}+4k_2}}{2} \nonumber\]

and then the two roots \(\alpha_1\) and \(\alpha_2\) are complex conjugates. Let’s see how it works with an example.

Determine the solution of \(y''(x) - 3y'(x) + \frac{9}{2}y(x) = 0\) subject to the initial conditions \(y(0)=1\) and \(y'(0)=-1\).

Following the same methodology we discussed for the previous example, we assume \(y(x)=e^{\alpha x}\), and use this expression in the differential equation to obtain a quadratic equation in \(\alpha\): \(\alpha^2-3\alpha+\frac{9}{2}=0\). Its roots are

\[\alpha_{1,2}=\frac{3\pm \sqrt{(-3)^{2}-4\times 9/2}}{2}=\frac{3\pm \sqrt{-9}}{2} \nonumber\]

Therefore, \(\alpha_1=\frac{3}{2}+\frac{3}{2}i\) and \(\alpha_2=\frac{3}{2}-\frac{3}{2}i\), which are complex conjugates. The general solution is:

\[ \begin{aligned} y(x)=c_1e^{(\frac{3}{2}+\frac{3}{2}i)x}+c_2e^{(\frac{3}{2}-\frac{3}{2}i)x} \\ y(x)=c_1e^{\frac{3}{2}x}e^{\frac{3}{2}i x}+c_2e^{\frac{3}{2}x}e^{-\frac{3}{2}i x} \\ y(x)=e^{\frac{3}{2}x}\left(c_1e^{\frac{3}{2}i x}+c_2e^{-\frac{3}{2}i x}\right) \end{aligned} \nonumber\]

This expression can be simplified using Euler’s formula: \(e^{\pm ix}=\cos(x) \pm i \sin(x)\) (Equation \(2.2.1\)).

\[y(x)=e^{\frac{3}{2}x}\left[c_1\left(\cos(\frac{3}{2}x)+i \sin(\frac{3}{2}x) \right)+c_2\left(\cos(\frac{3}{2}x)-i \sin(\frac{3}{2}x) \right)\right] \nonumber\]

Grouping the sines and cosines together:

\[y(x)=e^{\frac{3}{2}x}\left[\cos(\frac{3}{2}x)(c_1+c_2)+i \sin(\frac{3}{2}x)(c_1-c_2) \right] \nonumber\]

Renaming the constants \(c_1+c_2=a\) and \(i(c_1-c_2)=b\)

\[y(x)=e^{\frac{3}{2}x}\left[a\cos(\frac{3}{2}x)+b \sin(\frac{3}{2}x)\right] \nonumber\]
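This renaming of constants can itself be checked numerically. The Python sketch below (illustrative only, with arbitrarily chosen values for \(a\) and \(b\)) confirms that the complex-exponential form and the sine/cosine form of the general solution agree when \(c_1=(a-ib)/2\) and \(c_2=(a+ib)/2\), which follows from inverting \(a=c_1+c_2\) and \(b=i(c_1-c_2)\):

```python
import cmath
from math import exp, cos, sin

# arbitrary real constants for the sine/cosine form (illustrative values)
a, b = 2.0, 0.7
# corresponding constants for the complex-exponential form
c1 = (a - 1j*b) / 2   # so that c1 + c2 = a
c2 = (a + 1j*b) / 2   # and i(c1 - c2) = b

def y_complex(x):
    """General solution written with the complex roots 3/2 +/- (3/2)i."""
    return c1*cmath.exp((1.5 + 1.5j)*x) + c2*cmath.exp((1.5 - 1.5j)*x)

def y_real(x):
    """The same solution after applying Euler's formula."""
    return exp(1.5*x) * (a*cos(1.5*x) + b*sin(1.5*x))

for x in (0.0, 0.5, 1.0):
    assert abs(y_complex(x) - y_real(x)) < 1e-9  # the two forms agree
```

Note that although `y_complex` is built from complex exponentials, its imaginary part cancels, so the result is real.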

Our general solution has two arbitrary constants, as expected from a second order ODE. As usual, we’ll use our initial conditions to determine their values. The first initial condition is \(y(0)=1\)

\[y(0)=a = 1 \quad \left(e^0=1,\; \cos(0)=1 \;\text{ and }\; \sin(0)=0 \right) \nonumber\]

So far, we have

\[y(x)=e^{\frac{3}{2}x}\left[\cos(\frac{3}{2}x)+b \sin(\frac{3}{2}x)\right] \nonumber\]

The second initial condition is \(y'(0)=-1\)

\[y'(x)=e^{\frac{3}{2}x}\left[-\sin(\frac{3}{2}x)+b \cos(\frac{3}{2}x)\right]\frac{3}{2}+\frac{3}{2}e^{\frac{3}{2}x}\left[\cos(\frac{3}{2}x)+b \sin(\frac{3}{2}x)\right] \nonumber\]

\[y'(0)=\frac{3}{2}b +\frac{3}{2}=-1\rightarrow b=-\frac{5}{3} \nonumber\]

The particular solution is, therefore:

\[y(x)=e^{\frac{3}{2}x}\left[\cos(\frac{3}{2}x)-\frac{5}{3} \sin(\frac{3}{2}x)\right] \nonumber\]
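As in Case I, we can verify this result numerically. The Python sketch below is an illustrative check that uses central finite differences (with a hypothetical step size \(h=10^{-4}\)) instead of the hand-computed derivatives, and confirms the initial conditions and the residual of \(y''-3y'+\frac{9}{2}y\):

```python
from math import exp, cos, sin

def y(x):    # particular solution for Case II
    return exp(1.5*x) * (cos(1.5*x) - 5/3 * sin(1.5*x))

h = 1e-4     # finite-difference step (chosen to balance truncation and roundoff)

def yp(x):   # central-difference approximation to y'(x)
    return (y(x + h) - y(x - h)) / (2*h)

def ypp(x):  # central-difference approximation to y''(x)
    return (y(x + h) - 2*y(x) + y(x - h)) / h**2

# initial conditions: y(0) = 1 and y'(0) = -1
assert abs(y(0) - 1) < 1e-12 and abs(yp(0) + 1) < 1e-6
# the residual y'' - 3y' + (9/2)y should vanish (up to discretization error)
for x in (0.0, 0.5, 1.0):
    assert abs(ypp(x) - 3*yp(x) + 4.5*y(x)) < 1e-4
```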

Notice that the function is real even though the roots were complex numbers.

## Case III: \(k_1^2-4k_2=0\)

The last case we will analyze is when \(k_1^2-4k_2=0\), which results in

\[\alpha_{1,2}=\frac{-k_1\pm \sqrt{k_{1}^{2}-4k_2}}{2}=\frac{-k_1}{2} \nonumber\]

Therefore, the two roots are real and identical. This means that \(e^{-k_1 x/2}\) is a solution, but this creates a problem: we need a second independent solution to build the general solution as a linear combination, and we have only one. The second solution can be found using a method called reduction of order. We will not discuss the method in detail, although you can see how it is applied in this case at the end of the video http://tinyurl.com/mpl69ju. Applying the method of reduction of order to this differential equation gives \((a+bx)e^{-k_1 x/2}\) as the general solution. The constants \(a\) and \(b\) are arbitrary constants that we will determine from the initial/boundary conditions. Notice that the exponential term is the one we found using the ‘standard’ procedure. Let’s see how it works with an example.
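Although we are accepting the reduction-of-order result without derivation, it is quick to verify directly that the extra piece, \(y_2(x)=x e^{\alpha x}\) with \(\alpha=-k_1/2\), is a solution whenever \(k_1^2=4k_2\). Its derivatives are

\[y_2'(x)=e^{\alpha x}+\alpha x e^{\alpha x} \qquad y_2''(x)=2\alpha e^{\alpha x}+\alpha^2 x e^{\alpha x} \nonumber\]

and substituting into the differential equation gives

\[y_2''+k_1y_2'+k_2y_2=\left(2\alpha+k_1\right)e^{\alpha x}+\left(\alpha^2+k_1\alpha+k_2\right)x e^{\alpha x}=0 \nonumber\]

The first parenthesis vanishes because \(\alpha=-k_1/2\), and the second vanishes because \(\alpha\) is a root of the auxiliary equation.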

Determine the solution of \(y''(x) - 8y'(x) + 16y(x) = 0\) subject to initial conditions \(y(0)=1\) and \(y'(0)= -1\).

We follow the procedure of the previous examples and calculate the two roots:

\[\alpha_{1,2}=\frac{-k_1\pm \sqrt{k_{1}^{2}-4k_2}}{2}=\frac{8\pm \sqrt{(-8)^{2}-4\times 16}}{2}=4 \nonumber\]

Therefore, \(e^{4x}\) is a solution, but we don’t have another one to create the linear combination we need. The method of reduction of order gives:

\[y(x)=(a+bx)e^{4 x} \nonumber\]

Since we accepted the result of the method of reduction of order without seeing the derivation, let’s at least show that this is in fact a solution. The first and second derivatives are:

\[y'(x)=be^{4 x}+4(a+bx)e^{4x} \nonumber\]

\[y''(x)=4be^{4 x}+4be^{4x}+16(a+bx)e^{4x} \nonumber\]

Substituting these expressions in \(y''(x)-8y'(x)+16y(x)=0\):

\[\left[4be^{4 x}+4be^{4x}+16(a+bx)e^{4x}\right]-8\left[be^{4 x}+4(a+bx)e^{4x}\right]+16\left[(a+bx)e^{4 x}\right]=0 \nonumber\]

Because all these terms cancel out to give zero, the function \(y(x)=(a+bx)e^{4 x}\) is indeed a solution of the differential equation.

Coming back to our problem, we need to determine \(a\) and \(b\) from the initial conditions. Let’s start with \(y(0)=1\):

\[y(0)=a=1 \nonumber\]

So far, we have \(y(x)=(1+bx)e^{4 x}\), and therefore \(y'(x)=be^{4 x}+4(1+bx)e^{4x}\). The other initial condition is \(y'(0)=-1\):

\[y'(0)=b+4=-1\rightarrow b=-5 \nonumber\]

The particular solution, therefore, is \(y(x)=(1-5x)e^{4x}\).
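Once more, a short Python sketch (illustrative only; the derivative formulas in the comments follow from the product rule) confirms that this particular solution satisfies the initial conditions and the differential equation:

```python
from math import exp

def y(x):    # particular solution for Case III
    return (1 - 5*x) * exp(4*x)

def yp(x):   # y'(x) = (-1 - 20x) e^{4x}, by the product rule
    return (-1 - 20*x) * exp(4*x)

def ypp(x):  # y''(x) = (-24 - 80x) e^{4x}
    return (-24 - 80*x) * exp(4*x)

# initial conditions: y(0) = 1 and y'(0) = -1
assert y(0) == 1 and yp(0) == -1
# the residual y'' - 8y' + 16y should vanish for all x
for x in (0.0, 0.5, 1.0):
    assert abs(ypp(x) - 8*yp(x) + 16*y(x)) < 1e-9
```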

This video contains an example of each of the three cases discussed above as well as the application of the method of reduction of order to case III. Remember that you can pause, rewind and fast forward so you can watch the videos at your own pace. http://tinyurl.com/mpl69ju