Linear Systems with Constant Coefficients

Here is a system of n differential equations in n unknowns:

$$ \eqalign { x_1' &= a_{11}x_1 + \cdots + a_{1n} x_n,\cr x_2' &= a_{21}x_1 + \cdots + a_{2n} x_n,\cr &\vdots \cr x_n' &= a_{n1}x_1 + \cdots + a_{nn} x_n.\cr } $$

This is a constant coefficient linear homogeneous system. That is, the coefficients $a_{ij}$ are constants, and the equations are linear in the variables $x_1$ , ..., $x_n$ and their derivatives. The reason for the term "homogeneous" will be clear once I've written the system in matrix form.

The primes on $x_1'$ , ..., $x_n'$ denote differentiation with respect to an independent variable t. The problem is to solve for $x_1$ , ..., $x_n$ in terms of t.

Write the system in matrix form as

$$\left[\matrix{x_1' \cr \vdots \cr x_n' \cr}\right] = \left[\matrix{a_{11} & \cdots & a_{1n} \cr a_{21} & \cdots & a_{2n} \cr \vdots & & \vdots \cr a_{n1} & \cdots & a_{nn} \cr}\right] \left[\matrix{x_1 \cr \vdots \cr x_n \cr}\right].$$

Equivalently,

$$\vec x' = A\vec x.$$

(A nonhomogeneous system would look like $\vec x' = A\vec x + \vec b$ .)

It's possible to solve such a system if you know the eigenvalues (and possibly the eigenvectors) for the coefficient matrix

$$\left[\matrix{a_{11} & \cdots & a_{1n} \cr a_{21} & \cdots & a_{2n} \cr \vdots & & \vdots \cr a_{n1} & \cdots & a_{nn} \cr}\right]$$

First, I'll do an example which shows that you can solve small linear systems by brute force.


Example. Consider the system of differential equations

$$\der {x_1} t = x_1' = -29x_1 - 48x_2,$$

$$\der {x_2} t = x_2' = 16x_1 + 27x_2.$$

The idea is to solve for $x_1$ and $x_2$ in terms of t.

One approach is to use brute force. Solve the first equation for $x_2$ , then differentiate to find $x_2'$ :

$$x_2 = \dfrac{1}{48} (-x_1' - 29x_1), \quad x_2' = \dfrac{1}{48} (-x_1'' - 29x_1').$$

Plug these into the second equation:

$$\dfrac{1}{48} (-x_1'' - 29x_1') = 16x_1 + 27\cdot \dfrac{1}{48} (-x_1' - 29x_1), \quad x_1'' + 2x_1' - 15x_1 = 0.$$

This is a constant coefficient linear homogeneous equation in $x_1$ . The characteristic equation is $m^2 + 2m - 15 = 0$ , which factors as $(m + 5)(m - 3) = 0$ . The roots are $m = -5$ and $m = 3$ . Therefore,

$$x_1 = c_1 e^{-5t} + c_2 e^{3t}.$$

Plug back in to find $x_2$ :

$$x_2 = \dfrac{1}{48} (-x_1' - 29x_1) = \dfrac{1}{48} \left(-(-5c_1 e^{-5t} + 3c_2 e^{3t}) - 29(c_1 e^{-5t} + c_2 e^{3t})\right) = -\dfrac{1}{2} c_1 e^{-5t} - \dfrac{2}{3} c_2 e^{3t}.\quad\halmos$$
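Although it's not part of the derivation, the closed-form solution can be checked numerically; this is a quick sketch, assuming numpy is available:

```python
import numpy as np

# Arbitrary constants and sample times for the check
c1, c2 = 1.7, -0.4
t = np.linspace(0.0, 1.0, 5)

x1 = c1 * np.exp(-5 * t) + c2 * np.exp(3 * t)
x2 = -0.5 * c1 * np.exp(-5 * t) - (2 / 3) * c2 * np.exp(3 * t)

# Exact derivatives of the closed-form solution
x1p = -5 * c1 * np.exp(-5 * t) + 3 * c2 * np.exp(3 * t)
x2p = 2.5 * c1 * np.exp(-5 * t) - 2 * c2 * np.exp(3 * t)

# Both equations of the system should hold identically
assert np.allclose(x1p, -29 * x1 - 48 * x2)
assert np.allclose(x2p, 16 * x1 + 27 * x2)
```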


The procedure works, but it's clear that the computations would be pretty horrible for larger systems.

To describe a better approach, look at the coefficient matrix:

$$A = \left[\matrix{-29 & -48 \cr 16 & 27 \cr}\right]$$

Find the eigenvalues:

$$|A - \lambda I| = \left|\matrix{-29 - \lambda & -48 \cr 16 & 27 - \lambda \cr}\right| = (\lambda + 29)(\lambda - 27) + 16\cdot 48 = \lambda^2 + 2\lambda - 15.$$

This is the same polynomial that appeared in the example. Since $\lambda^2 + 2\lambda - 15 = (\lambda +
   5)(\lambda - 3)$ , the eigenvalues are $\lambda = -5$ and $\lambda =
   3$ .
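If you want to confirm this numerically, numpy (assumed to be available) computes the eigenvalues of the coefficient matrix directly:

```python
import numpy as np

A = np.array([[-29.0, -48.0],
              [16.0, 27.0]])

# Eigenvalues of the coefficient matrix; these are the exponents
# appearing in x1 = c1 e^{-5t} + c2 e^{3t}
eigenvalues = np.sort(np.linalg.eigvals(A).real)
print(eigenvalues)  # approximately [-5.  3.]
```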

Thus, you don't need to go through the process of eliminating $x_2$ and isolating $x_1$ . You know that

$$x_1 = c_1 e^{-5t} + c_2 e^{3t}$$

once you know the eigenvalues of the coefficient matrix. You can now finish the problem as above by plugging $x_1$ back in to solve for $x_2$ .

This is better than brute force, but it's still cumbersome if the system has more than two variables.

I can improve things further by making use of eigenvectors as well as eigenvalues. Consider the system

$$\vec x' = A\vec x.$$

Suppose $\lambda$ is an eigenvalue of A with eigenvector v. This means that

$$Av = \lambda v.$$

I claim that $\vec x = c e^{\lambda t}
   v$ is a solution to the equation, where c is a constant. To see this, plug it in:

$$\vec x' = c \lambda e^{\lambda t} v = c e^{\lambda t} (\lambda v) = c e^{\lambda t} (Av) = A(c e^{\lambda t} v) = A\vec x.$$

To obtain the general solution to $\vec x' = A\vec x$ , you should have "one arbitrary constant for each differentiation". In this case, you'd expect n arbitrary constants. This discussion should make the following result plausible: if the $n \times n$ matrix A has eigenvalues $\lambda_1$ , ..., $\lambda_n$ with independent eigenvectors $v_1$ , ..., $v_n$ , then the general solution is

$$\vec x = c_1 e^{\lambda_1 t} v_1 + \cdots + c_n e^{\lambda_n t} v_n.$$


Example. Solve

$$\der {x_1} t = x_1' = -29x_1 - 48x_2,$$

$$\der {x_2} t = x_2' = 16x_1 + 27x_2.$$

The matrix form is

$$\left[\matrix{x_1' \cr x_2' \cr}\right] = \left[\matrix{-29 & -48 \cr 16 & 27 \cr}\right] \left[\matrix{x_1 \cr x_2 \cr}\right].$$

The matrix

$$A = \left[\matrix{-29 & -48 \cr 16 & 27 \cr}\right]$$

has eigenvalues $\lambda = -5$ and $\lambda =
   3$ . I need to find the eigenvectors.

Consider $\lambda = -5$ :

$$A + 5I = \left[\matrix{-24 & -48 \cr 16 & 32 \cr}\right] \quad \rightarrow \quad \left[\matrix{1 & 2 \cr 0 & 0 \cr}\right]$$

The last matrix says $a + 2b = 0$ , or $a = -2b$ . Therefore,

$$\left[\matrix{a \cr b \cr}\right] = \left[\matrix{-2b \cr b \cr}\right] = b \left[\matrix{-2 \cr 1 \cr}\right].$$

Take $b = 1$ . The eigenvector is $(-2, 1)$ .

Now consider $\lambda = 3$ :

$$A - 3I = \left[\matrix{-32 & -48 \cr 16 & 24 \cr}\right] \quad \rightarrow \quad \left[\matrix{1 & \dfrac{3}{2} \cr \noalign{\vskip2pt} 0 & 0 \cr}\right]$$

The last matrix says $a + \dfrac{3}{2}b
   = 0$ , or $a = -\dfrac{3}{2}b$ . Therefore,

$$\left[\matrix{a \cr b \cr}\right] = \left[\matrix{-\dfrac{3}{2}b \cr \noalign{\vskip2pt} b \cr}\right] = b \left[\matrix{-\dfrac{3}{2} \cr \noalign{\vskip2pt}1 \cr}\right].$$

Take $b = 2$ . The eigenvector is $(-3, 2)$ .

You can check that the vectors $(-2, 1)$ and $(-3, 2)$ are independent. Hence, the solution is

$$\vec x = \left[\matrix{x_1 \cr x_2 \cr}\right] = c_1 e^{-5t} \left[\matrix{-2 \cr 1 \cr}\right] + c_2 e^{3t} \left[\matrix{-3 \cr 2 \cr}\right].\quad\halmos$$
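As a check (assuming numpy), the eigenpairs and their independence can be verified:

```python
import numpy as np

A = np.array([[-29.0, -48.0],
              [16.0, 27.0]])

# Check the eigenpairs found above: A v = lambda v
v1 = np.array([-2.0, 1.0])
v2 = np.array([-3.0, 2.0])
assert np.allclose(A @ v1, -5 * v1)
assert np.allclose(A @ v2, 3 * v2)

# The eigenvectors are independent: the matrix with columns v1, v2
# has nonzero determinant
assert abs(np.linalg.det(np.column_stack([v1, v2]))) > 1e-12
```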


Example. Find the general solution $(x(t), y(t))$ to the linear system

$$\eqalign{ \der x t & = x + y \cr \noalign{\vskip2pt} \der y t & = 6 x + 2 y \cr}$$ The matrix form is

$$\left[\matrix{x' \cr y' \cr}\right] = \left[\matrix{ 1 & 1 \cr 6 & 2 \cr}\right] \left[\matrix{x \cr y \cr}\right].$$

Let

$$A = \left[\matrix{ 1 & 1 \cr 6 & 2 \cr}\right].$$

$$\det (A - \lambda I) = \left|\matrix{ 1 - \lambda & 1 \cr 6 & 2 - \lambda \cr}\right| = \lambda^2 - 3 \lambda - 4 = (\lambda - 4)(\lambda + 1).$$

The eigenvalues are $\lambda = 4$ and $\lambda = -1$ .

For $\lambda = 4$ , I have

$$A - 4 I = \left[\matrix{ -3 & 1 \cr 6 & -2 \cr}\right] \quad \to \quad \left[\matrix{ 3 & -1 \cr 0 & 0 \cr}\right].$$

If $(a, b)$ is an eigenvector, then

$$3 a - b = 0, \quad b = 3 a.$$

So

$$\left[\matrix{a \cr b \cr}\right] = \left[\matrix{a \cr 3 a \cr}\right] = a \cdot \left[\matrix{1 \cr 3 \cr}\right].$$

$(1, 3)$ is an eigenvector.

For $\lambda = -1$ , I have

$$A + I = \left[\matrix{ 2 & 1 \cr 6 & 3 \cr}\right] \quad \to \quad \left[\matrix{ 2 & 1 \cr 0 & 0 \cr}\right].$$

If $(a, b)$ is an eigenvector, then

$$2 a + b = 0, \quad b = -2 a.$$

So

$$\left[\matrix{a \cr b \cr}\right] = \left[\matrix{a \cr -2 a \cr}\right] = a \cdot \left[\matrix{1 \cr -2 \cr}\right].$$

$(1, -2)$ is an eigenvector.

The solution is

$$\left[\matrix{x \cr y \cr}\right] = c_1 e^{4 t} \left[\matrix{1 \cr 3 \cr}\right] + c_2 e^{-t} \left[\matrix{1 \cr -2 \cr}\right].\quad\halmos$$


Example. (Complex roots) Solve

$$x_1' = 5x_1 + 5x_2,$$

$$x_2' = -4x_1 - 3x_2.$$

The characteristic polynomial is

$$\left|\matrix{5 - \lambda & 5 \cr -4 & -3 - \lambda \cr}\right| = \lambda^2 - 2\lambda + 5.$$

The eigenvalues are $\lambda = 1 \pm
   2i$ . You can check that the eigenvectors are:

$$ \eqalign { \lambda = 1 - 2i: \quad & b\cdot \left[\matrix{-2 + i \cr 2 \cr}\right]\cr \lambda = 1 + 2i: \quad & b\cdot \left[\matrix{-2 - i \cr 2 \cr}\right]\cr } $$

Observe that the eigenvectors are conjugates of one another. This is always the case for the complex eigenvalues of a real matrix.

The eigenvector method gives the following complex solution:

$$\left[\matrix{x_1 \cr x_2 \cr}\right] = c_1 e^{(1-2i)t} \left[\matrix{-2 + i \cr 2 \cr}\right] + c_2 e^{(1+2i)t} \left[\matrix{-2 - i \cr 2 \cr}\right] =$$

$$e^t \left[\matrix{\left(-2(c_1 + c_2) + i(c_1 - c_2)\right) \cos 2t + \left((c_1 + c_2) + 2i(c_1 - c_2)\right) \sin 2t \cr 2(c_1 + c_2) \cos 2t - 2i(c_1 - c_2) \sin 2t \cr}\right].$$

Note that the constants occur in the combinations $c_1 + c_2$ and $i(c_1 - c_2)$ . Something like this will always happen in the complex case. Set $d_1 = c_1 + c_2$ and $d_2 = i(c_1 -
   c_2)$ . The solution is

$$\left[\matrix{x_1 \cr x_2 \cr}\right] = e^t \left[\matrix{\left(-2d_1 + d_2\right) \cos 2t + \left(d_1 + 2d_2\right) \sin 2t \cr 2d_1 \cos 2t - 2d_2 \sin 2t \cr}\right].$$

In fact, if you're given initial conditions for $x_1$ and $x_2$ , the new constants $d_1$ and $d_2$ will turn out to be real numbers.
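A numerical sanity check of the real-form solution, assuming numpy:

```python
import numpy as np

A = np.array([[5.0, 5.0],
              [-4.0, -3.0]])

# Complex eigenvalues of a real matrix come in conjugate pairs
lams = np.linalg.eigvals(A)
assert np.allclose(sorted(lams.real), [1.0, 1.0])
assert np.allclose(sorted(lams.imag), [-2.0, 2.0])

# Check the real-form solution at a sample t (d1, d2 arbitrary)
d1, d2 = 0.3, -1.1
t = 0.7
x = np.exp(t) * np.array([(-2*d1 + d2)*np.cos(2*t) + (d1 + 2*d2)*np.sin(2*t),
                          2*d1*np.cos(2*t) - 2*d2*np.sin(2*t)])
# Derivative of x, computed term by term with the product rule
xp = x + np.exp(t) * 2 * np.array([-(-2*d1 + d2)*np.sin(2*t) + (d1 + 2*d2)*np.cos(2*t),
                                   -2*d1*np.sin(2*t) - 2*d2*np.cos(2*t)])
assert np.allclose(xp, A @ x)
```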


You can get a picture of the solution curves for a system $\vec x\,' = f(\vec x)$ , even if you can't solve it, by sketching the direction field. Suppose you have a two-variable linear system

$$\left[\matrix{x' \cr y' \cr}\right] = \left[\matrix{a & b \cr c & d \cr}\right] \left[\matrix{x \cr y \cr}\right].$$

This is equivalent to the equations

$$\der x t = ax + by \quad\hbox{and}\quad \der y t = cx + dy.$$

Then

$$\der y x = \dfrac{\der y t}{\der x t} = \dfrac{cx + dy}{ax + by}.$$

That is, the expression on the right gives the slope of the solution curve at the point $(x, y)$ .

To sketch the direction field, pick a set of sample points --- for example, the points on a grid. At each point $(x, y)$ , draw the vector $(ax + by, cx + dy)$ starting at the point $(x, y)$ . The collection of vectors is the direction field. You can approximate the solution curves by sketching in curves which are tangent to the direction field.


Example. Sketch the direction field for

$$x' = x - y, \quad y' = x + y.$$

I've computed the vectors for 9 points:

$$\vbox{\offinterlineskip \halign{& \vrule # & \strut \hfil \quad # \quad \hfil \cr \noalign{\hrule} height2pt & \omit & & \omit & & \omit & & \omit & & \omit & \cr & x & & y & & $x - y$ & & $x + y$ & & vector & \cr height2pt & \omit & & \omit & & \omit & & \omit & & \omit & \cr \noalign{\hrule} height2pt & \omit & & \omit & & \omit & & \omit & & \omit & \cr & -1 & & -1 & & 0 & & -2 & & $(0, -2)$ & \cr height2pt & \omit & & \omit & & \omit & & \omit & & \omit & \cr \noalign{\hrule} height2pt & \omit & & \omit & & \omit & & \omit & & \omit & \cr & -1 & & 0 & & -1 & & -1 & & $(-1, -1)$ & \cr height2pt & \omit & & \omit & & \omit & & \omit & & \omit & \cr \noalign{\hrule} height2pt & \omit & & \omit & & \omit & & \omit & & \omit & \cr & -1 & & 1 & & -2 & & 0 & & $(-2, 0)$ & \cr height2pt & \omit & & \omit & & \omit & & \omit & & \omit & \cr \noalign{\hrule} height2pt & \omit & & \omit & & \omit & & \omit & & \omit & \cr & 0 & & -1 & & 1 & & -1 & & $(1, -1)$ & \cr height2pt & \omit & & \omit & & \omit & & \omit & & \omit & \cr \noalign{\hrule} height2pt & \omit & & \omit & & \omit & & \omit & & \omit & \cr & 0 & & 0 & & 0 & & 0 & & $(0, 0)$ & \cr height2pt & \omit & & \omit & & \omit & & \omit & & \omit & \cr \noalign{\hrule} height2pt & \omit & & \omit & & \omit & & \omit & & \omit & \cr & 0 & & 1 & & -1 & & 1 & & $(-1, 1)$ & \cr height2pt & \omit & & \omit & & \omit & & \omit & & \omit & \cr \noalign{\hrule} height2pt & \omit & & \omit & & \omit & & \omit & & \omit & \cr & 1 & & -1 & & 2 & & 0 & & $(2, 0)$ & \cr height2pt & \omit & & \omit & & \omit & & \omit & & \omit & \cr \noalign{\hrule} height2pt & \omit & & \omit & & \omit & & \omit & & \omit & \cr & 1 & & 0 & & 1 & & 1 & & $(1, 1)$ & \cr height2pt & \omit & & \omit & & \omit & & \omit & & \omit & \cr \noalign{\hrule} height2pt & \omit & & \omit & & \omit & & \omit & & \omit & \cr & 1 & & 1 & & 0 & & 2 & & $(0, 2)$ & \cr height2pt & \omit & & \omit & & \omit & & \omit & & \omit & \cr \noalign{\hrule} }} $$

Thus, from the second line of the table, I'd draw the vector $(-1, -1)$ starting at the point $(-1, 0)$ .

Here's a sketch of the vectors:

$$\hbox{\epsfysize=1.5in \epsffile{system1.eps}}$$

While it's possible to plot fields this way, it's very tedious. You can use software to plot fields quickly. Here is the same field as plotted by Mathematica:

$$\hbox{\epsfysize=1.5in \epsffile{system2a.eps}} \hskip0.5in \hbox{\epsfysize=1.5in \epsffile{system2b.eps}}$$

The first picture shows the field as it would be if you plotted it by hand. As you can see, the vectors overlap each other, making the picture a bit ugly. The second picture is the way Mathematica draws the field by default: The vectors' lengths are scaled so that the vectors don't overlap. In subsequent examples, I'll adopt the second alternative when I display a direction field picture.

The arrows in the pictures show the direction of increasing t on the solution curves. You can see from these pictures that the solution curves for this system appear to spiral out from the origin.
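The table of field vectors can also be generated programmatically. Here is a minimal Python sketch (the helper name direction_field is made up for illustration):

```python
def direction_field(f, xs, ys):
    """Return the field vector f(x, y) at each grid point."""
    return {(x, y): f(x, y) for x in xs for y in ys}

# The system x' = x - y, y' = x + y on the 3-by-3 grid used above
field = direction_field(lambda x, y: (x - y, x + y), [-1, 0, 1], [-1, 0, 1])

# Matches the table: at (-1, 0) the vector is (-1, -1)
assert field[(-1, 0)] == (-1, -1)
assert field[(1, 1)] == (0, 2)
```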


Example. (A compartment model) Two tanks hold 50 gallons of liquid each. The first tank starts with 25 pounds of dissolved salt, while the second starts with pure water. Pure water flows into the first tank at 3 gallons per minute; the well-stirred mixture flows into tank 2 at 4 gallons per minute. The mixture in tank 2 is pumped back into tank 1 at 1 gallon per minute, and also drains out at 3 gallons per minute. Find the amount of salt in each tank after t minutes.

Let x be the number of pounds of salt dissolved in the first tank at time t and let y be the number of pounds of salt dissolved in the second tank at time t. The rate equations are

$$\der x t = \left(3\ \dfrac{\rm gal}{\rm min}\right) \left(0\ \dfrac{\rm lbs}{\rm gal}\right) + \left(1\ \dfrac{\rm gal}{\rm min}\right) \left(\dfrac{y {\rm lbs}}{50 {\rm gal}}\right) - \left(4\ \dfrac{\rm gal}{\rm min}\right) \left(\dfrac{x {\rm lbs}}{50 {\rm gal}}\right),$$

$$\der y t = \left(4\ \dfrac{\rm gal}{\rm min}\right) \left(\dfrac{x {\rm lbs}}{50 {\rm gal}}\right) - \left(1\ \dfrac{\rm gal}{\rm min}\right) \left(\dfrac{y {\rm lbs}}{50 {\rm gal}}\right) - \left(3\ \dfrac{\rm gal}{\rm min}\right) \left(\dfrac{y {\rm lbs}}{50 {\rm gal}}\right).$$

Simplify:

$$x' = -0.08x + 0.02y, \quad y' = 0.08x - 0.08y.$$

Next, find the characteristic polynomial:

$$\left|\matrix{-0.08 - \lambda & 0.02 \cr 0.08 & -0.08 - \lambda \cr}\right| = \lambda^2 + 0.16\lambda + 0.0048 = (\lambda + 0.04)(\lambda + 0.12).$$

The eigenvalues are $\lambda = -0.04$ , $\lambda = -0.12$ .

Consider $\lambda = -0.04$ :

$$A + 0.04I = \left[\matrix{-0.04 & 0.02 \cr 0.08 & -0.04 \cr}\right] \quad \rightarrow \quad \left[\matrix{1 & -\dfrac{1}{2} \cr \noalign{\vskip2pt} 0 & 0 \cr}\right]$$

This says $a - \dfrac{1}{2}b = 0$ , so $a = \dfrac{1}{2}b$ . Therefore,

$$\left[\matrix{a \cr b \cr}\right] = \left[\matrix{\dfrac{1}{2}b \cr \noalign{\vskip2pt} b \cr}\right] = b\left[\matrix{\dfrac{1}{2} \cr \noalign{\vskip2pt} 1 \cr}\right].$$

Set $b = 2$ . The eigenvector is $(1, 2)$ .

Now consider $\lambda = -0.12$ :

$$A + 0.12I = \left[\matrix{0.04 & 0.02 \cr 0.08 & 0.04 \cr}\right] \quad \rightarrow \quad \left[\matrix{1 & \dfrac{1}{2} \cr \noalign{\vskip2pt} 0 & 0 \cr}\right]$$

This says $a + \dfrac{1}{2}b = 0$ , so $a = -\dfrac{1}{2}b$ . Therefore,

$$\left[\matrix{a \cr b \cr}\right] = \left[\matrix{-\dfrac{1}{2}b \cr \noalign{\vskip2pt}b \cr}\right] = b\left[\matrix{-\dfrac{1}{2} \cr \noalign{\vskip2pt}1 \cr}\right].$$

Set $b = 2$ . The eigenvector is $(-1, 2)$ .

The solution is

$$\vec x = c_1 e^{-0.04t} \left[\matrix{ 1 \cr 2 \cr}\right] + c_2 e^{-0.12t} \left[\matrix{ -1 \cr 2 \cr}\right].$$

When $t = 0$ , $x = 25$ and $y = 0$ . Plug in:

$$\left[\matrix{25 \cr 0 \cr}\right] = c_1 \left[\matrix{1 \cr 2 \cr}\right] + c_2 \left[\matrix{-1 \cr 2 \cr}\right] = \left[\matrix{1 & -1 \cr 2 & 2 \cr}\right] \left[\matrix{c_1 \cr c_2 \cr}\right].$$

Solving for the constants, I obtain $c_1 = 12.5$ , $c_2 = -12.5$ . Thus,

$$\vec x = 12.5 e^{-0.04t} \left[\matrix{ 1 \cr 2 \cr}\right] - 12.5 e^{-0.12t} \left[\matrix{-1 \cr 2 \cr}\right] = \left[\matrix{12.5 e^{-0.04t} + 12.5 e^{-0.12t} \cr 25 e^{-0.04t} - 25 e^{-0.12t} \cr}\right].$$
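As a check on this answer, a hand-coded fourth-order Runge-Kutta integration (a standard method, sketched here rather than taken from any library) can be compared with the closed-form solution:

```python
import math

def rk4_step(f, t, u, h):
    """One classical Runge-Kutta step for u' = f(t, u)."""
    k1 = f(t, u)
    k2 = f(t + h/2, [u[i] + h/2*k1[i] for i in range(2)])
    k3 = f(t + h/2, [u[i] + h/2*k2[i] for i in range(2)])
    k4 = f(t + h, [u[i] + h*k3[i] for i in range(2)])
    return [u[i] + h/6*(k1[i] + 2*k2[i] + 2*k3[i] + k4[i]) for i in range(2)]

def tank(t, u):
    x, y = u
    return [-0.08*x + 0.02*y, 0.08*x - 0.08*y]

# Integrate from t = 0 to t = 10 with the initial condition (25, 0)
u, t, h = [25.0, 0.0], 0.0, 0.01
while t < 10.0 - 1e-9:
    u = rk4_step(tank, t, u, h)
    t += h

# Compare against the closed-form solution at t = 10
x_exact = 12.5*math.exp(-0.4) + 12.5*math.exp(-1.2)
y_exact = 25*math.exp(-0.4) - 25*math.exp(-1.2)
assert abs(u[0] - x_exact) < 1e-6
assert abs(u[1] - y_exact) < 1e-6
```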

The direction field for the system is shown in the first picture. In the second picture, I've sketched in some solution curves.

$$\hbox{\epsfysize=2in \epsffile{system3.eps}} \hskip0.5in \hbox{\epsfysize=2in \epsffile{system4.eps}}$$

The solution curve picture is referred to as the phase portrait.

The eigenvectors $(1, 2)$ and $(-1,
   2)$ have slopes 2 and -2, respectively. These appear as the two lines (linear solutions).


Consider the linear system

$$\vec x\,' = A\vec x.$$

Suppose it has conjugate complex eigenvalues $\lambda$ , $\lambda^*$ with eigenvectors $\vec v$ , $\vec v\,^*$ , respectively. This yields solutions

$$e^{\lambda t} \vec v, \quad e^{\lambda^* t} \vec v\,^*.$$

If $a + bi$ is a complex number,

$$\re (a + bi) = a = \dfrac{1}{2}\left( (a + bi) + (a - bi)\right) = \dfrac{1}{2}\left((a + bi) + (a + bi)^*\right),$$

$$\im (a + bi) = b = \dfrac{1}{2i}\left( (a + bi) - (a - bi)\right) = \dfrac{1}{2i}\left((a + bi) - (a + bi)^*\right).$$

I'll apply this to $e^{\lambda t} \vec
   v$ , using the fact that

$$\left(e^{\lambda t} \vec v\right)^* = e^{\lambda^* t} \vec v\,^*.$$

Then

$$\re \left(e^{\lambda t} \vec v\right) = \dfrac{1}{2}\left( e^{\lambda t} \vec v + e^{\lambda^* t} \vec v\,^*\right),$$

$$\im \left(e^{\lambda t} \vec v\right) = \dfrac{1}{2i}\left( e^{\lambda t} \vec v - e^{\lambda^* t} \vec v\,^*\right).$$

The point is that since the terms on the right are independent solutions, so are the terms on the left. The terms on the left, however, are real solutions. Here is what this means.


Example. Solve the system

$$x' = x - y,$$

$$y' = x + y.$$

Set

$$A = \left[\matrix{1 & -1 \cr 1 & 1 \cr}\right].$$

The eigenvalues are $\lambda = 1 \pm
   i$ .

Consider $\lambda = 1 + i$ :

$$A - (1 + i)I = \left[\matrix{-i & -1 \cr 1 & -i \cr}\right] \quad \to \quad \left[\matrix{1 & -i \cr 0 & 0 \cr}\right]$$

The last matrix says $a - bi = 0$ , so $a = bi$ . The eigenvectors are

$$\left[\matrix{a \cr b \cr}\right] = \left[\matrix{bi \cr b \cr}\right] = b \left[\matrix{i \cr 1 \cr}\right].$$

Take $b = 1$ . This yields the eigenvector $(i, 1)$ .

Write down the complex solution

$$e^{(1+i)t} \left[\matrix{i \cr 1 \cr}\right] = e^t e^{it} \left[\matrix{i \cr 1 \cr}\right] = e^t (\cos t + i \sin t) \left[\matrix{i \cr 1 \cr}\right] = e^t \left[\matrix{-\sin t & + & i \cos t \cr \cos t & + & i \sin t \cr}\right].$$

Take the real and imaginary parts:

$$\re e^t \left[\matrix{-\sin t & + & i \cos t \cr \cos t & + & i \sin t \cr}\right] = e^t \left[\matrix{-\sin t \cr \cos t \cr}\right],$$

$$\im e^t \left[\matrix{-\sin t & + & i \cos t \cr \cos t & + & i \sin t \cr}\right] = e^t \left[\matrix{\cos t \cr \sin t \cr}\right].$$

The general solution is

$$\vec x = c_1 e^t \left[\matrix{-\sin t \cr \cos t \cr}\right] + c_2 e^t \left[\matrix{\cos t \cr \sin t \cr}\right].\quad\halmos$$
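Assuming numpy, you can verify directly that the two real solutions satisfy the system:

```python
import numpy as np

A = np.array([[1.0, -1.0],
              [1.0, 1.0]])

t = 0.5
# The two real solutions found above and their exact derivatives
x1 = np.exp(t) * np.array([-np.sin(t), np.cos(t)])
x1p = x1 + np.exp(t) * np.array([-np.cos(t), -np.sin(t)])
x2 = np.exp(t) * np.array([np.cos(t), np.sin(t)])
x2p = x2 + np.exp(t) * np.array([-np.sin(t), np.cos(t)])

# Each solution satisfies x' = A x
assert np.allclose(x1p, A @ x1)
assert np.allclose(x2p, A @ x2)
```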


The eigenvector method produces a solution to a (constant coefficient homogeneous) linear system whenever there are "enough eigenvectors". There might not be "enough eigenvectors" if the characteristic polynomial has repeated roots.

I'll consider the case of repeated roots with multiplicity two or three (i.e. double or triple roots). The general case can be handled by using the exponential of a matrix.

Consider the following linear system:

$$\vec x\,' = A\vec x.$$

Suppose $\lambda$ is an eigenvalue of A of multiplicity 2, and $\vec v$ is an eigenvector for $\lambda$ . $e^{\lambda t} \vec v$ is one solution; I want to find a second independent solution.

Recall that the constant coefficient equation $(D - 3)^2 y = 0$ has independent solutions $e^{3t}$ and $t e^{3t}$ .

By analogy, it's reasonable to guess a solution of the form

$$\vec x = t e^{\lambda t} \vec w.$$

Here $\vec w$ is a constant vector.

Plug the guess into $\vec x\,' = A\vec
   x$ :

$$\vec x\,' = t e^{\lambda t} \lambda \vec w + e^{\lambda t} \vec w = A (t e^{\lambda t} \vec w).$$

Compare terms in $t e^{\lambda t}$ and $e^{\lambda t}$ on the left and right:

$$A \vec w = \lambda \vec w \quad {\rm and} \quad \vec w = 0.$$

While it's true that $t e^{\lambda
   t}\cdot 0 = 0$ is a solution, it's not a very useful solution. I'll try again, this time using

$$\vec x = t e^{\lambda t} \vec w_1 + e^{\lambda t} \vec w_2.$$

Then

$$\vec x\,' = t e^{\lambda t} \lambda \vec w_1 + e^{\lambda t} \vec w_1 + \lambda e^{\lambda t} \vec w_2.$$

Note that

$$A \vec x = t e^{\lambda t} A \vec w_1 + e^{\lambda t} A \vec w_2.$$

Hence,

$$t e^{\lambda t} \lambda \vec w_1 + e^{\lambda t} \vec w_1 + \lambda e^{\lambda t} \vec w_2 = t e^{\lambda t} A \vec w_1 + e^{\lambda t} A \vec w_2.$$

Equate coefficients in $e^{\lambda t}$ , $t
   e^{\lambda t}$ :

$$A \vec w_1 = \lambda \vec w_1 \quad\hbox{so}\quad (A - \lambda I) \vec w_1 = 0,$$

$$A \vec w_2 = \vec w_1 + \lambda \vec w_2 \quad\hbox{so}\quad (A - \lambda I) \vec w_2 = \vec w_1.$$

In other words, $\vec w_1$ is an eigenvector, and $\vec w_2$ is a vector which is mapped by $A - \lambda I$ to the eigenvector. $\vec w_2$ is called a generalized eigenvector.


Example. Solve

$$\vec x\,' = \left[\matrix{-3 & -8 \cr 2 & 5 \cr}\right] \vec x.$$

The characteristic polynomial is

$$\left|\matrix{-3 - \lambda & -8 \cr 2 & 5 - \lambda \cr}\right| = (\lambda + 3)(\lambda - 5) + 16 = \lambda^2 - 2\lambda + 1 = (\lambda - 1)^2.$$

Therefore, $\lambda = 1$ is an eigenvalue of multiplicity 2.

Now

$$A - I = \left[\matrix{-4 & -8 \cr 2 & 4 \cr}\right] \quad \rightarrow \quad \left[\matrix{1 & 2 \cr 0 & 0 \cr}\right]$$

The last matrix says $a + 2b = 0$ , or $a = -2b$ . Therefore,

$$\left[\matrix{a \cr b \cr}\right] = \left[\matrix{-2b \cr b \cr}\right] = b \left[\matrix{-2 \cr 1 \cr}\right].$$

Take $b = 1$ . The eigenvector is $(-2, 1)$ . This gives a solution

$$e^t \left[\matrix{-2 \cr 1 \cr}\right].$$

Next, I'll try to find a vector $\vec
   w$ such that

$$(A - I)\vec w = \left[\matrix{-2 \cr 1 \cr}\right].$$

Write $\vec w = (c, d)$ . The equation becomes

$$\left[\matrix{-4 & -8 \cr 2 & 4 \cr}\right] \left[\matrix{c \cr d \cr}\right] = \left[\matrix{-2 \cr 1 \cr}\right].$$

Row reduce:

$$\left[\matrix{-4 & -8 & -2 \cr 2 & 4 & 1 \cr}\right] \quad \rightarrow \quad \left[\matrix{1 & 2 & \dfrac{1}{2} \cr \noalign{\vskip2pt} 0 & 0 & 0 \cr}\right]$$

The last matrix says that $c + 2d =
   \dfrac{1}{2}$ , so $c = -2d + \dfrac{1}{2}$ . In this situation, I may take $d = 0$ ; doing so produces $\vec w =
   \left(\dfrac{1}{2}, 0\right)$ .

This work generates the solution

$$t e^t \left[\matrix{-2 \cr 1 \cr}\right] + e^t \left[\matrix{\dfrac{1}{2} \cr \noalign{\vskip2pt} 0 \cr}\right].$$

The general solution is

$$\vec x = c_1 e^t \left[\matrix{-2 \cr 1 \cr}\right] + c_2 \left(t e^t \left[\matrix{-2 \cr 1 \cr}\right] + e^t \left[\matrix{\dfrac{1}{2} \cr \noalign{\vskip2pt} 0 \cr}\right]\right).$$

The first picture shows the direction field; the second shows the phase portrait, with some typical solution curves. This kind of phase portrait is called an improper node.

$$\hbox{\epsfysize=2in \epsffile{system5.eps}} \hskip0.5in \hbox{\epsfysize=2in \epsffile{system6.eps}}\quad\halmos$$
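A quick numpy check of the eigenvector relations and the second solution for this example:

```python
import numpy as np

A = np.array([[-3.0, -8.0],
              [2.0, 5.0]])
v = np.array([-2.0, 1.0])   # eigenvector for lambda = 1
w = np.array([0.5, 0.0])    # generalized eigenvector

# The defining relations
assert np.allclose(A @ v, v)                  # (A - I)v = 0
assert np.allclose((A - np.eye(2)) @ w, v)    # (A - I)w = v

# Check the second solution x = t e^t v + e^t w at a sample t
t = 1.3
x = t*np.exp(t)*v + np.exp(t)*w
xp = np.exp(t)*v + t*np.exp(t)*v + np.exp(t)*w  # product rule
assert np.allclose(xp, A @ x)
```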


Example. Solve the system

$$\vec x\,' = \left[\matrix{1 & 0 & 0 \cr 1 & 1 & 0 \cr 2 & -1 & 2 \cr}\right] \vec x.$$

Since the matrix is lower triangular, the eigenvalues are its diagonal entries: $\lambda = 2$ and $\lambda = 1$ (double).

I'll do $\lambda = 2$ first.

$$A - 2I = \left[\matrix{-1 & 0 & 0 \cr 1 & -1 & 0 \cr 2 & -1 & 0 \cr}\right] \to \left[\matrix{1 & 0 & 0 \cr 0 & 1 & 0 \cr 0 & 0 & 0 \cr}\right]$$

The last matrix implies that $a = 0$ and $b = 0$ , so the eigenvectors are

$$\left[\matrix{a \cr b \cr c \cr}\right] = \left[\matrix{0 \cr 0 \cr c \cr}\right] = c\cdot \left[\matrix{0 \cr 0 \cr 1 \cr}\right].$$

For $\lambda = 1$ ,

$$A - I = \left[\matrix{0 & 0 & 0 \cr 1 & 0 & 0 \cr 2 & -1 & 1 \cr}\right] \to \left[\matrix{1 & 0 & 0 \cr 0 & 1 & -1 \cr 0 & 0 & 0 \cr}\right]$$

The last matrix implies that $a = 0$ and $b = c$ , so the eigenvectors are

$$\left[\matrix{a \cr b \cr c \cr}\right] = \left[\matrix{0 \cr c \cr c \cr}\right] = c\cdot \left[\matrix{0 \cr 1 \cr 1 \cr}\right].$$

I'll use $\vec v = (0, 1, 1)$ .

Next, I find a generalized eigenvector $\vec w = (a', b', c')$ . It must satisfy

$$(A - I)\vec w = \vec v.$$

That is,

$$\left[\matrix{0 & 0 & 0 \cr 1 & 0 & 0 \cr 2 & -1 & 1 \cr}\right] \left[\matrix{a' \cr b' \cr c' \cr}\right] = \left[\matrix{0 \cr 1 \cr 1 \cr}\right].$$

Solving this system yields $a' = 1$ , $b' = c' + 1$ . I can take $c' = 0$ , so $b' = 1$ , and

$$\vec w = \left[\matrix{a' \cr b' \cr c' \cr}\right] = \left[\matrix{1 \cr 1 \cr 0 \cr}\right].$$

The solution is

$$\vec x = c_1 e^{2t} \left[\matrix{0 \cr 0 \cr 1 \cr}\right] + c_2 e^t \left[\matrix{0 \cr 1 \cr 1 \cr}\right] + c_3 \left(te^t \left[\matrix{0 \cr 1 \cr 1 \cr}\right] + e^t \left[\matrix{1 \cr 1 \cr 0 \cr}\right]\right).\quad\halmos$$


I'll give a brief description of the situation for an eigenvalue $\lambda$ of multiplicity 3. First, if there are three {\it independent} eigenvectors $\vec u$ , $\vec v$ , $\vec w$ , the solution is

$$\vec x = c_1 e^{\lambda t} \vec u + c_2 e^{\lambda t} \vec v + c_3 e^{\lambda t} \vec w.$$

Suppose there is one independent eigenvector, say $\vec u$ . One solution is

$$e^{\lambda t} \vec u.$$

Find a generalized eigenvector $\vec
   v$ by solving

$$(A - \lambda I)\vec v = \vec u.$$

A second solution is

$$t e^{\lambda t} \vec u + e^{\lambda t} \vec v.$$

Next, obtain another generalized eigenvector $\vec w$ by solving

$$(A - \lambda I)\vec w = \vec v.$$

A third independent solution is

$$\dfrac{1}{2} t^2 e^{\lambda t} \vec u + t e^{\lambda t} \vec v + e^{\lambda t} \vec w.$$

Finally, combine the solutions to obtain the general solution.
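To illustrate the chain construction, here is a sketch using a made-up matrix with a triple eigenvalue and a single independent eigenvector; numpy's lstsq is used to solve the (consistent) singular systems:

```python
import numpy as np

# A hypothetical matrix with eigenvalue 2 of multiplicity 3 and
# only one independent eigenvector, (1, 0, 0)
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 1.0],
              [0.0, 0.0, 2.0]])
lam = 2.0
N = A - lam * np.eye(3)

u = np.array([1.0, 0.0, 0.0])                 # eigenvector
v = np.linalg.lstsq(N, u, rcond=None)[0]      # solves (A - 2I)v = u
w = np.linalg.lstsq(N, v, rcond=None)[0]      # solves (A - 2I)w = v
assert np.allclose(N @ v, u) and np.allclose(N @ w, v)

# The third solution x = (t^2/2)e^{2t}u + t e^{2t}v + e^{2t}w
t = 0.4
e = np.exp(2 * t)
x = (t**2 / 2) * e * u + t * e * v + e * w
xp = (t + t**2) * e * u + (1 + 2 * t) * e * v + 2 * e * w  # exact derivative
assert np.allclose(xp, A @ x)
```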

The only other possibility is that there are two independent eigenvectors $\vec u$ and $\vec v$ . These give solutions

$$e^{\lambda t} \vec u \quad {\rm and} \quad e^{\lambda t} \vec v.$$

Find a generalized eigenvector $\vec
   w$ by solving

$$(A - \lambda I)\vec w = a \vec u + b \vec v.$$

The constants a and b are chosen so that the equation is solvable.

$\vec w$ yields the solution

$$t e^{\lambda t} (a \vec u + b \vec v) + e^{\lambda t} \vec w.$$

The best way of explaining why this works involves something called the Jordan canonical form for matrices. It's also possible to circumvent these technicalities by using the exponential of a matrix.



Copyright 2015 by Bruce Ikenaga