
### Section 3-6 : Fundamental Sets of Solutions

The time has finally come to define “nice enough”. We’ve been using this term throughout the last few sections to describe those solutions that could be used to form a general solution and it is now time to officially define it.

First, because everything that we’re going to do here requires only that the differential equation be linear and homogeneous, we won’t require constant coefficients. So, let’s start with the following IVP.

\[\begin{equation}\begin{array}{c}p\left( t \right)y'' + q\left( t \right)y' + r\left( t \right)y = 0\\ y\left( {{t_0}} \right) = {y_0}\hspace{0.25in}\,\,y'\left( {{t_0}} \right) = {{y'}_0}\end{array}\label{eq:eq1}\end{equation}\]Let’s also suppose that we have already found two solutions to this differential equation, \(y_{1}(t)\) and \(y_{2}(t)\). We know from the Principle of Superposition that

\[\begin{equation}y\left( t \right) = {c_1}{y_1}\left( t \right) + {c_2}{y_2}\left( t \right)\label{eq:eq2}\end{equation}\]will also be a solution to the differential equation. What we want to know is whether or not it will be a general solution. In order for \(\eqref{eq:eq2}\) to be considered a general solution it must satisfy the general initial conditions in \(\eqref{eq:eq1}\).

\[y\left( {{t_0}} \right) = {y_0}\hspace{0.25in}\,\,\,\,\,\,\,y'\left( {{t_0}} \right) = {y'_0}\]This will also imply that any solution to the differential equation can be written in this form.

So, let’s see if we can find constants that will satisfy these conditions. First differentiate \(\eqref{eq:eq2}\) and plug in the initial conditions.

\[\begin{equation}\begin{array}{l}{y_0} = y\left( {{t_0}} \right) = {c_1}{y_1}\left( {{t_0}} \right) + {c_2}{y_2}\left( {{t_0}} \right)\\ {{y'}_0} = y'\left( {{t_0}} \right) = {c_1}{{y'}_1}\left( {{t_0}} \right) + {c_2}{{y'}_2}\left( {{t_0}} \right)\end{array}\label{eq:eq3}\end{equation}\]Since we are assuming that we’ve already got the two solutions everything in this system is technically known and so this is a system that can be solved for \(c_{1}\) and \(c_{2}\). This can be done in general using Cramer’s Rule. Using Cramer’s Rule gives the following solution.

\[\begin{equation}{c_1} = \frac{{\left| {\begin{array}{*{20}{c}}{{y_0}}&{{y_2}\left( {{t_0}} \right)}\\{{{y'}_0}}&{{{y'}_2}\left( {{t_0}} \right)}\end{array}} \right|}}{{\left| {\begin{array}{*{20}{c}}{{y_1}\left( {{t_0}} \right)}&{{y_2}\left( {{t_0}} \right)}\\{{{y'}_1}\left( {{t_0}} \right)}&{{{y'}_2}\left( {{t_0}} \right)}\end{array}} \right|}}\hspace{0.25in}\hspace{0.25in}\hspace{0.25in}{c_2} = \frac{{\left| {\begin{array}{*{20}{c}}{{y_1}\left( {{t_0}} \right)}&{{y_0}}\\{{{y'}_1}\left( {{t_0}} \right)}&{{{y'}_0}}\end{array}} \right|}}{{\left| {\begin{array}{*{20}{c}}{{y_1}\left( {{t_0}} \right)}&{{y_2}\left( {{t_0}} \right)}\\{{{y'}_1}\left( {{t_0}} \right)}&{{{y'}_2}\left( {{t_0}} \right)}\end{array}} \right|}}\label{eq:eq4}\end{equation}\]where,

\[\left| {\begin{array}{*{20}{c}}a&b\\c&d\end{array}} \right| = ad - bc\]is the determinant of a 2×2 matrix. If you don’t know about determinants, that’s okay; just use the formula that we’ve provided above.
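As a small aside, the Cramer’s Rule formulas above translate directly into code. Here is a minimal Python sketch (the function name is ours, not part of the original discussion):

```python
# Hypothetical sketch of Cramer's Rule for the 2x2 system
#   a*c1 + b*c2 = e
#   c*c1 + d*c2 = f
# mirroring the formulas above.
def cramer_2x2(a, b, c, d, e, f):
    det = a * d - b * c  # determinant of the coefficient matrix
    if det == 0:
        # This is exactly the case the Wronskian condition rules out.
        raise ValueError("zero determinant: no unique solution")
    c1 = (e * d - b * f) / det
    c2 = (a * f - e * c) / det
    return c1, c2
```

Notice that the whole computation hinges on the single number `det`; that observation is the point of the discussion that follows.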

Now, \(\eqref{eq:eq4}\) will give the solution to the system \(\eqref{eq:eq3}\). Note that in practice we generally don’t use Cramer’s Rule to solve systems; we just proceed in a straightforward manner and solve the system using basic algebra techniques. So, why did we use Cramer’s Rule here then?

We used Cramer’s Rule because we can use \(\eqref{eq:eq4}\) to develop a condition that will allow us to determine when we can solve for the constants. All three (yes three, the denominators are the same!) of the quantities in \(\eqref{eq:eq4}\) are just numbers and the only thing that will prevent us from actually getting a solution will be when the denominator is zero.

The quantity in the denominator is called the **Wronskian** and is denoted as

\[W\left( {f,g} \right)\left( t \right) = \left| {\begin{array}{*{20}{c}}{f\left( t \right)}&{g\left( t \right)}\\{f'\left( t \right)}&{g'\left( t \right)}\end{array}} \right| = f\left( t \right)g'\left( t \right) - g\left( t \right)f'\left( t \right)\]

When it is clear what the functions and/or \(t\) are we often just denote the Wronskian by \(W\).

Let’s recall what we were after here. We wanted to determine when two solutions to \(\eqref{eq:eq1}\) would be nice enough to form a general solution. The two solutions will form a general solution to \(\eqref{eq:eq1}\) if they satisfy the general initial conditions given in \(\eqref{eq:eq1}\) and we can see from Cramer’s Rule that they will satisfy the initial conditions provided the Wronskian isn’t zero. Or,

\[W\left( {{y_1},{y_2}} \right)\left( {{t_0}} \right) = \left| {\begin{array}{*{20}{c}}{{y_1}\left( {{t_0}} \right)}&{{y_2}\left( {{t_0}} \right)}\\{{{y'}_1}\left( {{t_0}} \right)}&{{{y'}_2}\left( {{t_0}} \right)}\end{array}} \right| = {y_1}\left( {{t_0}} \right){y'_2}\left( {{t_0}} \right) - {y_2}\left( {{t_0}} \right){y'_1}\left( {{t_0}} \right) \ne 0\]So, suppose that \(y_{1}(t)\) and \(y_{2}(t)\) are two solutions to \(\eqref{eq:eq1}\) and that \(W\left( {{y_1},{y_2}} \right)\left( t \right) \ne 0\). Then the two solutions are called a **fundamental set of solutions** and the general solution to \(\eqref{eq:eq1}\) is

\[y\left( t \right) = {c_1}{y_1}\left( t \right) + {c_2}{y_2}\left( t \right)\]
We know now what “nice enough” means. Two solutions are “nice enough” if they are a fundamental set of solutions.
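To make the definition concrete, here is a small sketch that checks the condition symbolically, assuming the SymPy library is available (the helper name `wronskian2` is ours). For \(\cos(t)\) and \(\sin(t)\), which solve \(y'' + y = 0\), the Wronskian is identically 1, so they form a fundamental set of solutions for that equation.

```python
import sympy as sp

def wronskian2(y1, y2, t):
    """Wronskian W(y1, y2)(t) = y1*y2' - y2*y1'."""
    return sp.simplify(y1 * sp.diff(y2, t) - y2 * sp.diff(y1, t))

t = sp.symbols("t")
# cos(t) and sin(t) solve y'' + y = 0; their Wronskian is identically 1,
# so they are "nice enough" to form a general solution for that equation.
W = wronskian2(sp.cos(t), sp.sin(t), t)
print(W)  # 1
```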

So, let’s check one of the claims that we made in a previous section. We’ll leave the other two to you to check if you’d like to. In the complex roots case we claimed that

\[{y_1}\left( t \right) = {{\bf{e}}^{\lambda t}}\cos \left( {\mu \,t} \right)\hspace{0.25in}\hspace{0.25in}{y_2}\left( t \right) = {{\bf{e}}^{\lambda t}}\sin \left( {\mu \,t} \right)\]

were a fundamental set of solutions. Prove that they in fact are.

So, to prove this we will need to take the Wronskian for these two solutions and show that it isn’t zero.

\[\begin{align*}W & = \left| {\begin{array}{*{20}{c}}{{{\bf{e}}^{\lambda t}}\cos \left( {\mu \,t} \right)}&{{{\bf{e}}^{\lambda t}}\sin \left( {\mu \,t} \right)}\\{\lambda {{\bf{e}}^{\lambda t}}\cos \left( {\mu \,t} \right) - \mu {{\bf{e}}^{\lambda t}}\sin \left( {\mu \,t} \right)}&{\lambda {{\bf{e}}^{\lambda t}}\sin \left( {\mu \,t} \right) + \mu {{\bf{e}}^{\lambda t}}\cos \left( {\mu \,t} \right)}\end{array}} \right|\\ & = {{\bf{e}}^{\lambda t}}\cos \left( {\mu \,t} \right)\left( {\lambda {{\bf{e}}^{\lambda t}}\sin \left( {\mu \,t} \right) + \mu {{\bf{e}}^{\lambda t}}\cos \left( {\mu \,t} \right)} \right) - \\ & \hspace{0.25in}\hspace{0.25in}\hspace{0.25in}\hspace{0.25in}{{\bf{e}}^{\lambda t}}\sin \left( {\mu \,t} \right)\left( {\lambda {{\bf{e}}^{\lambda t}}\cos \left( {\mu \,t} \right) - \mu {{\bf{e}}^{\lambda t}}\sin \left( {\mu \,t} \right)} \right)\\ & = \mu {{\bf{e}}^{2\lambda t}}{\cos ^2}\left( {\mu \,t} \right) + \mu {{\bf{e}}^{2\lambda t}}{\sin ^2}\left( {\mu \,t} \right)\\ & = \mu {{\bf{e}}^{2\lambda t}}\left( {{{\cos }^2}\left( {\mu \,t} \right) + {{\sin }^2}\left( {\mu \,t} \right)} \right)\\ & = \mu {{\bf{e}}^{2\lambda t}}\end{align*}\]Now, the exponential will never be zero and \(\mu \ne 0\) (if it were we wouldn’t have complex roots!) and so \(W \ne 0\). Therefore, these two solutions are in fact a fundamental set of solutions and so the general solution in this case is.

\[y\left( t \right) = {c_1}{{\bf{e}}^{\lambda t}}\cos \left( {\mu \,t} \right) + {c_2}{{\bf{e}}^{\lambda t}}\sin \left( {\mu \,t} \right)\]

Next, recall that in an earlier example we used reduction of order to find a second solution to a differential equation given one known solution. Show that this second solution, along with the given solution, forms a fundamental set of solutions for the differential equation.

The two solutions from that example are

\[{y_1}\left( t \right) = {t^{ - 1}}\hspace{0.25in}\hspace{0.25in}{y_2}\left( t \right) = {t^{\frac{3}{2}}}\]Let’s compute the Wronskian of these two solutions.

\[W = \left| {\begin{array}{*{20}{c}}{{t^{ - 1}}}&{{t^{\frac{3}{2}}}}\\{ - {t^{ - 2}}}&{\frac{3}{2}{t^{\frac{1}{2}}}}\end{array}} \right| = \frac{3}{2}{t^{ - \,\frac{1}{2}}} - \left( { - {t^{ - \,\frac{1}{2}}}} \right) = \frac{5}{2}{t^{ - \,\frac{1}{2}}} = \frac{5}{{2\sqrt t }}\]So, the Wronskian will never be zero. Note that we can’t plug \(t = 0\) into the Wronskian. This would be a problem in finding the constants in the general solution, except that we also can’t plug \(t = 0\) into the solution, and so this isn’t the problem that it might appear to be.

So, since the Wronskian isn’t zero for any \(t\) the two solutions form a fundamental set of solutions and the general solution is

\[y\left( t \right) = {c_1}{t^{ - 1}} + {c_2}{t^{\frac{3}{2}}}\]as we claimed in that example.
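If you want to double-check this Wronskian computation symbolically, here is a short sketch assuming SymPy is available:

```python
import sympy as sp

t = sp.symbols("t", positive=True)
y1 = t**-1
y2 = t**sp.Rational(3, 2)

# W = y1*y2' - y2*y1' should come out to (5/2)*t**(-1/2)
W = sp.simplify(y1 * sp.diff(y2, t) - y2 * sp.diff(y1, t))
assert sp.simplify(W - sp.Rational(5, 2) / sp.sqrt(t)) == 0
```

Declaring `t` as positive matches the fact that the solutions themselves are only defined for \(t > 0\).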

To this point we’ve found a set of solutions and then claimed that they are in fact a fundamental set of solutions. Of course, you can now verify all of those claims; however, this does bring up a question: how do we know that a fundamental set of solutions will exist for a given differential equation? The following theorem answers this question.

#### Theorem

Consider the differential equation

\[y'' + p\left( t \right)y' + q\left( t \right)y = 0\]where \(p(t)\) and \(q(t)\) are continuous functions on some interval I. Choose \(t_{0}\) to be any point in the interval I. Let \(y_{1}(t)\) be a solution to the differential equation that satisfies the initial conditions.

\[y\left( {{t_0}} \right) = 1\hspace{0.25in}y'\left( {{t_0}} \right) = 0\]Let \(y_{2}(t)\) be a solution to the differential equation that satisfies the initial conditions.

\[y\left( {{t_0}} \right) = 0\hspace{0.25in}y'\left( {{t_0}} \right) = 1\]Then \(y_{1}(t)\) and \(y_{2}(t)\) form a fundamental set of solutions for the differential equation.

It is easy enough to show that these two solutions form a fundamental set of solutions. Just compute the Wronskian.

\[W\left( {{y_1},{y_2}} \right)\left( {{t_0}} \right) = \left| {\begin{array}{*{20}{c}}{{y_1}\left( {{t_0}} \right)}&{{y_2}\left( {{t_0}} \right)}\\{{{y'}_1}\left( {{t_0}} \right)}&{{{y'}_2}\left( {{t_0}} \right)}\end{array}} \right| = \left| {\begin{array}{*{20}{c}}1&0\\0&1\end{array}} \right| = 1 - 0 = 1 \ne 0\]So, fundamental sets of solutions will exist provided we can solve the two IVP’s given in the theorem.

Let’s use the theorem to find a fundamental set of solutions for

\[y'' + 4y' + 3y = 0\]

using \(t_{0} = 0\).

Using the techniques from the first part of this chapter we can find the two solutions that we’ve been using to this point.

\[y\left( t \right) = {{\bf{e}}^{ - 3t}}\hspace{0.25in}\hspace{0.25in}y\left( t \right) = {{\bf{e}}^{ - t}}\]These do form a fundamental set of solutions as we can easily verify. However, they are NOT the set that will be given by the theorem. Neither of these solutions will satisfy either of the two sets of initial conditions given in the theorem. We will have to use these to find the fundamental set of solutions that is given by the theorem.
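As a quick sanity check (again assuming SymPy), we can evaluate each solution and its derivative at \(t_{0} = 0\) and see that neither pair of values matches \(\left( 1,0 \right)\) or \(\left( 0,1 \right)\):

```python
import sympy as sp

t = sp.symbols("t")

# Evaluate each solution and its derivative at t0 = 0.
results = [
    (y.subs(t, 0), sp.diff(y, t).subs(t, 0))
    for y in (sp.exp(-3 * t), sp.exp(-t))
]
print(results)  # [(1, -3), (1, -1)]: neither matches (1, 0) or (0, 1)
```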

We know that the following is also a solution to the differential equation.

\[y\left( t \right) = {c_1}{{\bf{e}}^{ - 3t}} + {c_2}{{\bf{e}}^{ - t}}\]So, let’s apply the first set of initial conditions and see if we can find constants that will work.

\[y\left( 0 \right) = 1\hspace{0.25in}y'\left( 0 \right) = 0\]We’ll leave it to you to verify that we get the following solution upon doing this.

\[{y_1}\left( t \right) = - \frac{1}{2}{{\bf{e}}^{ - 3t}} + \frac{3}{2}{{\bf{e}}^{ - t}}\]Likewise, if we apply the second set of initial conditions,

\[y\left( 0 \right) = 0\hspace{0.25in}y'\left( 0 \right) = 1\]we will get

\[{y_2}\left( t \right) = - \frac{1}{2}{{\bf{e}}^{ - 3t}} + \frac{1}{2}{{\bf{e}}^{ - t}}\]According to the theorem these should form a fundamental set of solutions. This is easy enough to check.

\[\begin{align*}W & = \left| {\begin{array}{*{20}{c}}{ - \frac{1}{2}{{\bf{e}}^{ - 3t}} + \frac{3}{2}{{\bf{e}}^{ - t}}}&{ - \frac{1}{2}{{\bf{e}}^{ - 3t}} + \frac{1}{2}{{\bf{e}}^{ - t}}}\\{\frac{3}{2}{{\bf{e}}^{ - 3t}} - \frac{3}{2}{{\bf{e}}^{ - t}}}&{\frac{3}{2}{{\bf{e}}^{ - 3t}} - \frac{1}{2}{{\bf{e}}^{ - t}}}\end{array}} \right|\\ & = \left( { - \frac{1}{2}{{\bf{e}}^{ - 3t}} + \frac{3}{2}{{\bf{e}}^{ - t}}} \right)\left( {\frac{3}{2}{{\bf{e}}^{ - 3t}} - \frac{1}{2}{{\bf{e}}^{ - t}}} \right) - \left( { - \frac{1}{2}{{\bf{e}}^{ - 3t}} + \frac{1}{2}{{\bf{e}}^{ - t}}} \right)\left( {\frac{3}{2}{{\bf{e}}^{ - 3t}} - \frac{3}{2}{{\bf{e}}^{ - t}}} \right)\\ & = {{\bf{e}}^{ - 4t}} \ne 0\end{align*}\]So, we got a completely different set of fundamental solutions from the theorem than what we’ve been using up to this point. This is not a problem. There are an infinite number of pairs of functions that we could use as a fundamental set of solutions for this problem.
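The constant-solving steps that we left to you, and the Wronskian check above, can both be reproduced symbolically; here is a sketch assuming SymPy is available:

```python
import sympy as sp

t, c1, c2 = sp.symbols("t c1 c2")
y = c1 * sp.exp(-3 * t) + c2 * sp.exp(-t)
yp = sp.diff(y, t)

# Solve for c1, c2 under each set of initial conditions from the theorem
# at t0 = 0: first y(0)=1, y'(0)=0, then y(0)=0, y'(0)=1.
sols = [
    sp.solve([sp.Eq(y.subs(t, 0), y0), sp.Eq(yp.subs(t, 0), yp0)], [c1, c2])
    for y0, yp0 in [(1, 0), (0, 1)]
]
# First set gives c1 = -1/2, c2 = 3/2; second gives c1 = -1/2, c2 = 1/2.

y1 = y.subs(sols[0])
y2 = y.subs(sols[1])

# The Wronskian of the resulting pair should simplify to e^(-4t).
W = sp.simplify(y1 * sp.diff(y2, t) - y2 * sp.diff(y1, t))
assert sp.simplify(W - sp.exp(-4 * t)) == 0
```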

So, which set of fundamental solutions should we use? Well, if we use the ones that we originally found, the general solution would be,

\[y\left( t \right) = {c_1}{{\bf{e}}^{ - 3t}} + {c_2}{{\bf{e}}^{ - t}}\]Whereas, if we used the set from the theorem the general solution would be,

\[y\left( t \right) = {c_1}\left( { - \frac{1}{2}{{\bf{e}}^{ - 3t}} + \frac{3}{2}{{\bf{e}}^{ - t}}} \right) + {c_2}\left( { - \frac{1}{2}{{\bf{e}}^{ - 3t}} + \frac{1}{2}{{\bf{e}}^{ - t}}} \right)\]This would not be very fun to work with when it came to determining the coefficients to satisfy a general set of initial conditions.

So, which set of fundamental solutions should we use? We should always use the set that is most convenient for the given problem.