Section 3.7 : More on the Wronskian
In the previous section we introduced the Wronskian to help us determine whether two solutions were a fundamental set of solutions. In this section we will look at another application of the Wronskian as well as an alternate method of computing the Wronskian.
Let’s start with the application. We need to introduce a couple of new concepts first.
Given two non-zero functions \(f(x)\) and \(g(x)\) write down the following equation.
\[\begin{equation}c\,f\left( x \right) + k\,g\left( x \right) = 0\label{eq:eq1}\end{equation}\]Notice that \(c = 0\) and \(k = 0\) will make \(\eqref{eq:eq1}\) true for all \(x\) regardless of the functions that we use.

Now, if we can find non-zero constants \(c\) and \(k\) for which \(\eqref{eq:eq1}\) will also be true for all \(x\) then we call the two functions linearly dependent. On the other hand, if the only two constants for which \(\eqref{eq:eq1}\) is true are \(c = 0\) and \(k = 0\) then we call the functions linearly independent.
Example 1 Determine if the following sets of functions are linearly dependent or linearly independent.

- \(f\left( x \right) = 9\cos \left( {2x} \right)\hspace{0.25in}g\left( x \right) = 2{\cos ^2}\left( x \right) - 2{\sin ^2}\left( x \right)\)
- \(f\left( t \right) = 2{t^2}\hspace{0.25in}g\left( t \right) = {t^4}\)
a) \(f\left( x \right) = 9\cos \left( {2x} \right)\hspace{0.25in}g\left( x \right) = 2{\cos ^2}\left( x \right) - 2{\sin ^2}\left( x \right)\)

We’ll start by writing down \(\eqref{eq:eq1}\) for these two functions.
\[c\left( {9\cos \left( {2x} \right)} \right) + k\left( {2{{\cos }^2}\left( x \right) - 2{{\sin }^2}\left( x \right)} \right) = 0\]We need to determine if we can find non-zero constants \(c\) and \(k\) that will make this true for all \(x\) or if \(c\) = 0 and \(k\) = 0 are the only constants that will make this true for all \(x\). This is often a fairly difficult process. The process can be simplified with a good intuition for this kind of thing, but that’s hard to come by, especially if you haven’t done many of these kinds of problems.
In this case the problem can be simplified by recalling
\[{\cos ^2}\left( x \right) - {\sin ^2}\left( x \right) = \cos \left( {2x} \right)\]Using this fact our equation becomes.
\[\begin{align*}9c\,\cos \left( {2x} \right) + 2k\,\cos \left( {2x} \right) & = 0\\ \left( {9c + 2k} \right)\cos \left( {2x} \right) & = 0\end{align*}\]With this simplification we can see that this will be zero for any pair of constants \(c\) and \(k\) that satisfy
\[9c + 2k = 0\]Among the possible pairs of constants that we could use are the following.
\[\begin{align*}c & = 1, & \hspace{0.25in}k & = - \frac{9}{2}\\ c & = \frac{2}{9}, & \hspace{0.25in}k & = - 1\\ c & = - 2, & \hspace{0.25in}k & = 9\\ c & = - \frac{7}{6}, & \hspace{0.25in}k & = \frac{{21}}{4}\\ & etc.\end{align*}\]As we’re sure you can see there are literally thousands of possible pairs and they can be made as “simple” or as “complicated” as you want them to be.
So, we’ve managed to find a pair of non-zero constants that will make the equation true for all \(x\) and so the two functions are linearly dependent.
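As a quick numerical sanity check (an addition for illustration, not part of the original discussion), a short Python snippet can confirm that one of these pairs really does make the combination vanish at arbitrary sample points:

```python
import math

def f(x):
    return 9 * math.cos(2 * x)

def g(x):
    return 2 * math.cos(x) ** 2 - 2 * math.sin(x) ** 2

# With c = 2/9 and k = -1 the combination c*f(x) + k*g(x) should be
# zero (up to round-off) no matter which x we sample.
c, k = 2 / 9, -1
for x in (0.0, 0.5, 1.3, 2.7, -4.1):
    assert abs(c * f(x) + k * g(x)) < 1e-12
```

Any of the other pairs listed above would pass the same check.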
b) \(f\left( t \right) = 2{t^2}\hspace{0.25in}g\left( t \right) = {t^4}\)
As with the last part, we’ll start by writing down \(\eqref{eq:eq1}\) for these functions.
\[2c{t^2} + k{t^4} = 0\]In this case there isn’t any quick and simple formula to write one of the functions in terms of the other as we did in the first part. So, we’re just going to have to see if we can find constants. We’ll start by noticing that if the original equation is true, then if we differentiate everything we get a new equation that must also be true. In other words, we’ve got the following system of two equations in two unknowns.
\[\begin{align*}2c{t^2} + k{t^4} & = 0\\ 4ct + 4k{t^3} & = 0\end{align*}\]We can solve this system for \(c\) and \(k\) and see what we get. We’ll start by solving the second equation for \(c\).
\[c = - k{t^2}\]Now, plug this into the first equation.
\[\begin{align*}2\left( { - k{t^2}} \right){t^2} + k{t^4} & = 0\\ - k{t^4} & = 0\end{align*}\]Recall that we are after constants that will make this true for all \(t\). The only way that this will ever be zero for all \(t\) is if \(k = 0\)! So, if \(k = 0\) we must also have \(c = 0\).
Therefore, we’ve shown that the only way that
\[2c{t^2} + k{t^4} = 0\]will be true for all \(t\) is to require that \(c = 0\) and \(k = 0\). The two functions, therefore, are linearly independent.
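The independence conclusion can also be seen numerically: if the two functions were dependent, the ratio \(g(t)/f(t)\) would be the same constant for every \(t \ne 0\). A small check (an added aside, not part of the original argument) shows that it isn't:

```python
def f(t):
    return 2 * t ** 2

def g(t):
    return t ** 4

# g(t) / f(t) = t^2 / 2, which changes with t, so neither function is a
# constant multiple of the other.
ratios = [g(t) / f(t) for t in (1.0, 2.0, 3.0)]
assert ratios == [0.5, 2.0, 4.5]
assert len(set(ratios)) > 1  # the ratio varies -> linearly independent
```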
As we saw in the previous examples determining whether two functions are linearly independent or linearly dependent can be a fairly involved process. This is where the Wronskian can help.
Fact
Suppose that \(f(x)\) and \(g(x)\) are two functions that are differentiable on some interval \(I\). Then:
- If \(W\left( {f,g} \right)\left( {{x_0}} \right) \ne 0\) for some \(x_{0}\) in I, then \(f(x)\) and \(g(x)\) are linearly independent on the interval I.
- If \(f(x)\) and \(g(x)\) are linearly dependent on I then \(W(f,g)(x) = 0\) for all \(x\) in the interval I.
Be very careful with this fact. It DOES NOT say that if \(W(f,g)(x) = 0\) then \(f(x)\) and \(g(x)\) are linearly dependent! In fact, it is possible for two linearly independent functions to have a zero Wronskian!
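To see why the converse fails, consider the standard counterexample (not taken from this section) \(f\left( t \right) = {t^2}\) and \(g\left( t \right) = t\left| t \right|\): their Wronskian is zero for every \(t\), yet they are linearly independent on any interval containing \(t = 0\). A quick numerical check:

```python
def f(t):
    return t * t

def fp(t):
    return 2 * t

def g(t):
    return t * abs(t)

def gp(t):
    return 2 * abs(t)  # derivative of t|t|, including at t = 0

# The Wronskian f*g' - g*f' vanishes at every sample point...
for t in (-2.0, -0.5, 0.0, 1.5):
    assert f(t) * gp(t) - g(t) * fp(t) == 0

# ...but g/f is +1 for t > 0 and -1 for t < 0, so no single constant
# relates the two functions: they are linearly independent.
assert g(1.0) / f(1.0) != g(-1.0) / f(-1.0)
```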
This fact is used to quickly identify linearly independent functions and functions that are liable to be linearly dependent.
Example 2 Verify the fact using the functions from the previous example.

- \(f\left( x \right) = 9\cos \left( {2x} \right)\hspace{0.25in}g\left( x \right) = 2{\cos ^2}\left( x \right) - 2{\sin ^2}\left( x \right)\)
- \(f\left( t \right) = 2{t^2}\hspace{0.25in}g\left( t \right) = {t^4}\)
a) \(f\left( x \right) = 9\cos \left( {2x} \right)\hspace{0.25in}g\left( x \right) = 2{\cos ^2}\left( x \right) - 2{\sin ^2}\left( x \right)\)

In this case if we compute the Wronskian of the two functions we should get zero since we have already determined that these functions are linearly dependent.
\[\begin{align*}W & = \left| {\begin{array}{*{20}{c}}{9\cos \left( {2x} \right)}&{2{{\cos }^2}\left( x \right) - 2{{\sin }^2}\left( x \right)}\\{ - 18\sin \left( {2x} \right)}&{ - 4\cos \left( x \right)\sin \left( x \right) - 4\sin \left( x \right)\cos \left( x \right)}\end{array}} \right|\\ & = \left| {\begin{array}{*{20}{c}}{9\cos \left( {2x} \right)}&{2\cos \left( {2x} \right)}\\{ - 18\sin \left( {2x} \right)}&{ - 2\sin \left( {2x} \right) - 2\sin \left( {2x} \right)}\end{array}} \right|\\ & = \left| {\begin{array}{*{20}{c}}{9\cos \left( {2x} \right)}&{2\cos \left( {2x} \right)}\\{ - 18\sin \left( {2x} \right)}&{ - 4\sin \left( {2x} \right)}\end{array}} \right|\\ & = - 36\cos \left( {2x} \right)\sin \left( {2x} \right) - \left( { - 36\cos \left( {2x} \right)\sin \left( {2x} \right)} \right) = 0\end{align*}\]So, we get zero as we should have. Notice the heavy use of trig formulas to simplify the work!
b) \(f\left( t \right) = 2{t^2}\hspace{0.25in}g\left( t \right) = {t^4}\)
Here we know that the two functions are linearly independent and so we should get a non-zero Wronskian.
\[W = \left| {\begin{array}{*{20}{c}}{2{t^2}}&{{t^4}}\\{4t}&{4{t^3}}\end{array}} \right| = 8{t^5} - 4{t^5} = 4{t^5}\]The Wronskian is non-zero as we expected provided \(t \ne 0\). This is not a problem. As long as the Wronskian is not identically zero for all \(t\) we are okay.
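For a quick cross-check (added here as an aside), the Wronskian can be approximated with central differences and compared against the closed form \(4{t^5}\):

```python
def f(t):
    return 2 * t ** 2

def g(t):
    return t ** 4

def wronskian(f, g, t, h=1e-6):
    # central-difference approximations of f'(t) and g'(t)
    fp = (f(t + h) - f(t - h)) / (2 * h)
    gp = (g(t + h) - g(t - h)) / (2 * h)
    return f(t) * gp - g(t) * fp

# compare against the exact result W = 4t^5 at a few sample points
for t in (0.5, 1.0, 2.0):
    assert abs(wronskian(f, g, t) - 4 * t ** 5) < 1e-4
```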
Example 3 Determine if the following sets of functions are linearly dependent or linearly independent.

- \(f\left( t \right) = \cos t\hspace{0.25in}g\left( t \right) = \sin t\)
- \(f\left( x \right) = {6^x}\hspace{0.25in}g\left( x \right) = {6^{x + 2}}\)
a) \(f\left( t \right) = \cos t\hspace{0.25in}g\left( t \right) = \sin t\)

Now that we have the Wronskian to use here let’s first check that. If it’s non-zero then we will know that the two functions are linearly independent and if it’s zero then we can be pretty sure that they are linearly dependent.
\[W = \left| {\begin{array}{*{20}{c}}{\cos t}&{\sin t}\\{ - \sin t}&{\cos t}\end{array}} \right| = {\cos ^2}t + {\sin ^2}t = 1 \ne 0\]So, by the fact these two functions are linearly independent. Much easier this time around!
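The identity behind this computation, \({\cos ^2}t + {\sin ^2}t = 1\), is easy to spot-check numerically (a small added aside):

```python
import math

# W(cos t, sin t) = cos t * (sin t)' - sin t * (cos t)'
#                 = cos^2 t + sin^2 t = 1 for every t
for t in (0.0, 0.8, 2.5, -1.1):
    W = math.cos(t) * math.cos(t) - math.sin(t) * (-math.sin(t))
    assert abs(W - 1.0) < 1e-12
```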
b) \(f\left( x \right) = {6^x}\hspace{0.25in}g\left( x \right) = {6^{x + 2}}\)
We’ll do the same thing here as we did in the first part. Recall that
\[{\left( {{a^x}} \right)^\prime } = {a^x}\ln a\]Now compute the Wronskian.
\[W = \left| {\begin{array}{*{20}{c}}{{6^x}}&{{6^{x + 2}}}\\{{6^x}\ln 6}&{{6^{x + 2}}\ln 6}\end{array}} \right| = {6^x}{6^{x + 2}}\ln 6 - {6^{x + 2}}{6^x}\ln 6 = 0\]Now, this does not say that the two functions are linearly dependent! However, we can guess that they probably are linearly dependent. To prove that they are in fact linearly dependent we’ll need to write down \(\eqref{eq:eq1}\) and see if we can find non-zero \(c\) and \(k\) that will make it true for all \(x\).
\[\begin{align*}c\,{6^x} + k\,{6^{x + 2}} & = 0\\ c\,{6^x} + k\,{6^x}{6^2} & = 0\\ c\,{6^x} + 36k\,{6^x} & = 0\\ \left( {c\, + 36k} \right)\,{6^x} & = 0\end{align*}\]So, it looks like we could use any constants that satisfy
\[c + 36k = 0\]to make this zero for all \(x\). In particular we could use
\[\begin{align*}c & = 36 & \hspace{0.25in}k & = - 1\\ c & = - 36 & \hspace{0.25in}k & = 1\\ c & = 9 & \hspace{0.25in}k & = - \frac{1}{4}\\ & etc.\end{align*}\]We have non-zero constants that will make the equation true for all \(x\). Therefore, the functions are linearly dependent.
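Since \({6^{x + 2}} = 36 \cdot {6^x}\), any of these pairs can be verified directly; here is a quick numerical confirmation (an added aside) using \(c = 36\), \(k = -1\):

```python
def f(x):
    return 6.0 ** x

def g(x):
    return 6.0 ** (x + 2)

# g is exactly 36 times f, so 36*f(x) - g(x) vanishes for every x
for x in (0.0, 0.5, -1.25, 3.0):
    assert abs(36 * f(x) - g(x)) <= 1e-9 * g(x)
```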
Before proceeding to the next topic in this section let’s talk a little more about linearly independent and linearly dependent functions. Let’s start off by assuming that \(f(x)\) and \(g(x)\) are linearly dependent. So, that means there are non-zero constants \(c\) and \(k\) so that
\[c\,f\left( x \right) + k\,g\left( x \right) = 0\]is true for all \(x\).
Now, we can solve this in either of the following two ways.
\[f\left( x \right) = - \frac{k}{c}g\left( x \right)\hspace{0.25in}{\mbox{OR}}\hspace{0.25in}g\left( x \right) = - \frac{c}{k}f\left( x \right)\]Note that this can be done because we know that \(c\) and \(k\) are non-zero and hence the divisions can be done without worrying about division by zero.
So, this means that two linearly dependent functions can be written in such a way that one is nothing more than a constant times the other. Go back and look at both of the sets of linearly dependent functions that we wrote down and you will see that this is true for both of them.
Two functions that are linearly independent can’t be written in this manner and so we can’t get from one to the other simply by multiplying by a constant.
Next, we don’t want to leave you with the impression that linear independence and linear dependence is only for two functions. We can easily extend the idea to as many functions as we’d like.
Let’s suppose that we have \(n\) non-zero functions, \(f_{1}(x)\), \(f_{2}(x)\),…, \(f_{n}(x)\). Write down the following equation.
\[\begin{equation}{c_1}{f_1}\left( x \right) + {c_2}{f_2}\left( x \right) + \cdots + {c_n}{f_n}\left( x \right) = 0 \label{eq:eq2}\end{equation}\]If we can find constants \(c_{1}\), \(c_{2}\), …, \(c_{n}\) with at least two non-zero so that \(\eqref{eq:eq2}\) is true for all \(x\) then we call the functions linearly dependent. If, on the other hand, the only constants that make \(\eqref{eq:eq2}\) true for all \(x\) are \(c_{1}=0\), \(c_{2}=0\), …, \(c_{n}=0\) then we call the functions linearly independent.
Note that unlike the two function case we can have some of the constants be zero and still have the functions be linearly dependent.
In this case just what does it mean for the functions to be linearly dependent? Well, let’s suppose that they are. So, this means that we can find constants, with at least two non-zero so that \(\eqref{eq:eq2}\) is true for all \(x\). For the sake of argument let’s suppose that \(c_{1}\) is one of the non-zero constants. This means that we can do the following.
\[\begin{align*}{c_1}{f_1}\left( x \right) + {c_2}{f_2}\left( x \right) + \cdots + {c_n}{f_n}\left( x \right) & = 0\\ {c_1}{f_1}\left( x \right) & = - \left( {{c_2}{f_2}\left( x \right) + \cdots + {c_n}{f_n}\left( x \right)} \right)\\ {f_1}\left( x \right) & = - \frac{1}{{{c_1}}}\left( {{c_2}{f_2}\left( x \right) + \cdots + {c_n}{f_n}\left( x \right)} \right)\end{align*}\]In other words, if the functions are linearly dependent then we can write at least one of them in terms of the other functions.
Okay, let’s move on to the other topic of this section. There is an alternate method of computing the Wronskian. The following theorem gives this alternate method.
Abel’s Theorem
If \(y_{1}(t)\) and \(y_{2}(t)\) are two solutions to
\[y'' + p\left( t \right)y' + q\left( t \right)y = 0\]then the Wronskian of the two solutions is
\[W\left( {{y_1},{y_2}} \right)\left( t \right) = W\left( {{y_1},{y_2}} \right)\left( {{t_0}} \right){{\bf{e}}^{ - \int_{{\,{t_0}}}^{{\,t}}{{p\left( x \right)\,dx}}}}\]for some \(t_{0}\).
Because we don’t know the Wronskian and we don’t know \(t_{0}\), it might appear that this won’t do us a lot of good. However, we can rewrite this as
\[\begin{equation}W\left( {{y_1},{y_2}} \right)\left( t \right) = c\,{{\bf{e}}^{ - \int{{p\left( t \right)\,dt}}}}\label{eq:eq3}\end{equation}\]where the original Wronskian sitting in front of the exponential is absorbed into the \(c\) and the evaluation of the integral at \(t_{0}\) will put a constant in the exponential that can also be brought out and absorbed into the constant \(c\). If you don’t recall how to do this go back and take a look at the linear, first order differential equation section as we did something similar there.
With this rewrite we can compute the Wronskian up to a multiplicative constant, which isn’t too bad. Notice as well that we don’t actually need the two solutions to do this. All we need is the coefficient of the first derivative from the differential equation (provided the coefficient of the second derivative is one of course…).
Let’s take a look at a quick example of this.
Example 4 Without solving, determine the Wronskian of two solutions to the following differential equation.

\[{t^4}y'' - 2{t^3}y' - {t^8}y = 0\]

The first thing that we need to do is divide the differential equation by the coefficient of the second derivative as that needs to be a one. This gives us
\[y'' - \frac{2}{t}y' - {t^4}y = 0\]Now, using \(\eqref{eq:eq3}\) the Wronskian is
\[W = c{{\bf{e}}^{ - \int{{ - \frac{2}{t}dt}}}} = c{{\bf{e}}^{2\ln t}} = c{{\bf{e}}^{\ln {t^2}}} = c{t^2}\]
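As a numerical sanity check (an addition, not part of the original notes), we can integrate two solutions of \(y'' - \frac{2}{t}y' - {t^4}y = 0\) with a hand-rolled Runge-Kutta scheme starting at \(t_0 = 1\) and confirm that the Wronskian really does behave like \(c\,{t^2}\):

```python
def rhs(t, y, yp):
    # the ODE rewritten as y'' = (2/t) y' + t^4 y
    return (2.0 / t) * yp + t ** 4 * y

def rk4_step(t, y, yp, h):
    # one classical Runge-Kutta step for the first-order system (y, y')
    k1y, k1p = yp, rhs(t, y, yp)
    k2y, k2p = yp + h / 2 * k1p, rhs(t + h / 2, y + h / 2 * k1y, yp + h / 2 * k1p)
    k3y, k3p = yp + h / 2 * k2p, rhs(t + h / 2, y + h / 2 * k2y, yp + h / 2 * k2p)
    k4y, k4p = yp + h * k3p, rhs(t + h, y + h * k3y, yp + h * k3p)
    y += h / 6 * (k1y + 2 * k2y + 2 * k3y + k4y)
    yp += h / 6 * (k1p + 2 * k2p + 2 * k3p + k4p)
    return y, yp

def wronskian_at(t_end, n=2000):
    # y1(1) = 1, y1'(1) = 0 and y2(1) = 0, y2'(1) = 1, so W(1) = 1
    t, h = 1.0, (t_end - 1.0) / n
    y1, y1p, y2, y2p = 1.0, 0.0, 0.0, 1.0
    for _ in range(n):
        y1, y1p = rk4_step(t, y1, y1p, h)
        y2, y2p = rk4_step(t, y2, y2p, h)
        t += h
    return y1 * y2p - y2 * y1p

# Abel's theorem predicts W(t) = W(1) * t^2 = t^2 here
assert abs(wronskian_at(2.0) - 4.0) < 1e-5
```

With \(W(1) = 1\) built into the initial conditions, the numerically computed \(W(2)\) matches the predicted \(2^2 = 4\) to well within the integrator's accuracy, even though we never wrote down the solutions themselves.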