Math 285-3 ODE Notes
Revised Spring 2007
E. Kosygina, C. Robinson, M. Stein
Ordinary Differential Equations
Definition. For a vector field F on R^n, the differential equation determined by F is denoted by x′ = F(x). A solution to the differential equation is a differentiable path x(t) in R^n such that x′(t) = F(x(t)), i.e., the velocity vector x′(t) of the path at time t is given by the vector field F at the point x(t). (In the calculus book by Colley, a solution is called a flow line. See p. 211.) We often specify the initial condition x0 at some time t0, i.e., we seek a solution x(t) such that x(t0) = x0.
We shall concentrate on the following two questions:
• How can we describe all possible solutions to a given differential equation?
• How can we find a solution with given initial condition (t0 , x0 )?
We will mainly consider linear differential equations of the form x′ = Ax, but will consider a few nonlinear examples. We will consider first n = 1 and then n ≥ 2.
1. Scalar Differential Equations
When n = 1 and we have a vector field on the line, we shall denote it by a non-boldfaced
F. Even though in this case F is just a function from R to R, we shall think of F(x) as a
one-dimensional vector emanating from the point x on the line.
Theorem 1. a. For a constant a ∈ R, all solutions y(t) to the linear differential equation y′ = ay are of the form y(t) = C e^{at}, where C is an arbitrary constant.
b. Given an initial condition (t0, y0), there is exactly one solution y(t) = y0 e^{a(t−t0)} to y′ = ay with y(t0) = y0.
Proof. (a) It is very easy to check that y(t) of the form C e^{at} are solutions. (Do it!) We shall show that there are no other solutions. Let y(t) be an arbitrary solution of our equation. We compare this unknown solution y(t) to the known solution e^{at} by defining z(t) = e^{−at} y(t). Then
z′(t) = −a e^{−at} y(t) + e^{−at} y′(t) = −a e^{−at} y(t) + e^{−at} a y(t) = 0,
since y′(t) = a y(t). Therefore z(t) has to be a constant. Denoting the constant by C, we have that C = e^{−at} y(t), or y(t) = C e^{at} for all t.
(b) Suppose that we want to find a solution y(t) = C e^{at} that satisfies the initial condition (t0, y0), i.e., y0 = y(t0) = C e^{a t0} and C = y0 e^{−a t0}. Thus, our solution is
y(t) = y0 e^{−a t0} e^{at} = y0 e^{a(t−t0)}.
Problem 1. Find the solution of the differential equation y′ = 2y which satisfies the following initial conditions: a. y(0) = 0; b. y(0) = −1; c. y(2) = 3.
Problem 2. Assume that a : R → R is a continuous function. Imitate the proof of Theorem 1 to show that the solution y(t) to the differential equation y′ = a(t)y with y(t0) = y0 is given by y(t) = y0 e^{b(t)}, where b(t) = ∫_{t0}^{t} a(s) ds. Hint: b′(t) = a(t).
Problem 3. Find the solution of the differential equation y′ = 2t y, which satisfies the initial condition y(0) = 1.
Problem 4. Find the solution of the differential equation y′ = cos(t) y, which satisfies the initial condition y(−π/2) = −1.
Theorem 2. Let a, g : R → R be continuous functions. Then the solution of the nonhomogeneous differential equation y′ = a(t)y + g(t) with y(t0) = y0 is given by
(1)    y(t) = y0 e^{b(t)} + e^{b(t)} ∫_{t0}^{t} e^{−b(s)} g(s) ds,
where b(t) = ∫_{t0}^{t} a(s) ds.
Proof. Let y(t) be the solution with y(t0) = y0. We use the solution of the corresponding homogeneous equation found in Problem 2 to form what is called an integrating factor e^{−b(t)}. We define z(t) = e^{−b(t)} y(t). Note that b(t0) = 0, so e^{b(t0)} = 1 and z(t0) = y0. Also,
z′(t) = −e^{−b(t)} b′(t) y(t) + e^{−b(t)} y′(t) = −e^{−b(t)} a(t) y(t) + e^{−b(t)} [ a(t) y(t) + g(t) ] = e^{−b(t)} g(t).
Since g(t) and a(t) are given, and b(t) is determined by integration, we know the derivative of z(t). Integrating from t0 to t,
z(t) − z(t0) = ∫_{t0}^{t} e^{−b(s)} g(s) ds,
and
y(t) = e^{b(t)} z(t) = e^{b(t)} [ y0 + ∫_{t0}^{t} e^{−b(s)} g(s) ds ],
where we used the fact that z(t0) = y0. Notice that (i) y0 e^{b(t)} is a solution of the associated homogeneous equation with y(t0) = y0 given in Problem 2 and (ii) e^{b(t)} ∫_{t0}^{t} e^{−b(s)} g(s) ds is a particular solution of the nonhomogeneous equation with initial condition (t0, 0).
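The formula in Theorem 2 is easy to test numerically. The following is a minimal sketch (assuming NumPy and SciPy are available; the choices of a(t), g(t), t0, and y0 are arbitrary) comparing formula (1) with a direct numerical integration of y′ = a(t)y + g(t).

    import numpy as np
    from scipy.integrate import quad, solve_ivp

    a = lambda t: np.cos(t)        # an arbitrary coefficient a(t)
    g = lambda t: np.exp(-t)       # an arbitrary forcing term g(t)
    t0, y0 = 0.0, 2.0

    def b(t):                      # b(t) = integral of a(s) from t0 to t
        return quad(a, t0, t)[0]

    def y_formula(t):              # formula (1): y0 e^{b(t)} + e^{b(t)} * integral of e^{-b(s)} g(s)
        integral = quad(lambda s: np.exp(-b(s)) * g(s), t0, t)[0]
        return np.exp(b(t)) * (y0 + integral)

    sol = solve_ivp(lambda t, y: a(t) * y + g(t), (t0, 3.0), [y0], rtol=1e-9)
    print(y_formula(3.0), sol.y[0, -1])   # the two values should agree closely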
Example 1. An initial amount of money y0 is put in an account on which interest is compounded continuously at a rate of r > 0, for an effective annual rate of e^r − 1. (For r = 0.05, the effective rate is 0.0513.) Assume that money is added continuously to the account at the rate of g. The differential equation governing the amount of money in the account is given by
y′ = r y + g.
The solution is determined as follows: a(t) = r, b(t) = r t, and
y(t) = y0 e^{rt} + e^{rt} ∫_0^t e^{−rs} g ds
     = y0 e^{rt} + e^{rt} [ −(g/r) e^{−rs} ]_{s=0}^{t}
     = y0 e^{rt} + e^{rt} (g/r) ( 1 − e^{−rt} )
     = [ y0 + g/r ] e^{rt} − g/r.
In the long term, the amount in the account approaches the amount of an initial deposit of y0 + g/r with no money added.
Problem 5. Find the solution of the differential equation y′ = 2t y + t, which satisfies the condition y(0) = 0.
Problem 6. For k = −2, −1, 0, 1, 2 find the solution (for t > 0) of the differential equation y′ = y/t + t, which satisfies the initial condition y(1) = k. Graph these solutions in the (t, y) plane. What happens to these solutions when t → 0? Notice that at t = 0 the function a(t) = 1/t is undefined!
Problem 7. Consider the nonhomogeneous linear equation (NH) y′ = a(t)y + g(t) with the associated homogeneous linear equation (H) y′ = a(t)y.
a. If y_p(t) is one (particular) solution of the nonhomogeneous equation (NH) and y_h(t) is a solution of the homogeneous equation (H), show that y_p(t) + y_h(t) is a solution of the nonhomogeneous equation (NH).
b. Assume that y_p(t) is a particular solution of the nonhomogeneous equation (NH). Show that the general solution of the nonhomogeneous equation (NH) is y_p(t) + C e^{b(t)}, where b(t) is given as in Problem 2 and C is an arbitrary constant. Hint: For any solution y(t) of (NH), show that y(t) − y_p(t) satisfies (H).
Example 2 (Logistic equation). We consider one nonlinear differential equation, the so-called logistic differential equation,
y′ = r y ( 1 − y/K ),
where K > 0 is a given constant. There are two constant solutions, y1(t) ≡ 0 and y2(t) ≡ K. To find solutions with y(0) ≠ 0, K, this equation is solved by the method of separation of variables, which converts it into a problem of integrals, and then we use partial fractions:
K y′ / [ y (K − y) ] = r,
y′/y + y′/(K − y) = r.
Integrating with respect to t, the term y′ dt changes it to an integral with respect to y:
∫ (1/y) dy + ∫ 1/(K − y) dy = ∫ r dt,
ln(|y|) − ln(|K − y|) = r t + C1,
|y| / |K − y| = C2 e^{rt},   where C2 = e^{C1}.
Assuming 0 < y < K so we can drop the absolute value signs, we solve for y:
y = C2 K e^{rt} − C2 e^{rt} y
(1 + C2 e^{rt}) y = C2 K e^{rt}
y = C2 K e^{rt} / (1 + C2 e^{rt}) = K / (C e^{−rt} + 1),
where C = 1/C2. If y0 is the initial condition at t = 0, then some more algebra shows that C = (K − y0)/y0, so
y(t; y0) = y0 K / [ y0 + (K − y0) e^{−rt} ].
It can be shown that this form of the solution is valid for any y0 and not just those with 0 < y0 < K.
For the logistic equation, a solution y(t; y0) for an initial condition 0 < y0 < K has y′ > 0, and it increases toward K. Also, for y0 > K, y′ < 0, and the solution y(t; y0) decreases toward K. So we can conclude, even without solving the differential equation, that for any y0 > 0 the solution y(t; y0) tends toward K as t goes to infinity. For this reason, K is called the carrying capacity. For y0 < 0, the denominator becomes 0 for t1 = (1/r) ln( (K − y0)/(−y0) ), and the solution goes to −∞ as t increases to t1.
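As a sanity check, the closed form y(t; y0) can be compared against a direct numerical integration of the logistic equation. A minimal sketch, with made-up values of r, K, and y0, assuming SciPy is available:

    import numpy as np
    from scipy.integrate import solve_ivp

    r, K, y0 = 0.8, 10.0, 0.5      # made-up parameter values

    def y_exact(t):                # y(t; y0) = y0 K / (y0 + (K - y0) e^{-rt})
        return y0 * K / (y0 + (K - y0) * np.exp(-r * t))

    sol = solve_ivp(lambda t, y: r * y * (1 - y / K), (0.0, 12.0), [y0], rtol=1e-9)
    print(np.allclose(sol.y[0], y_exact(sol.t)))   # True: the formula matches
    print(y_exact(12.0))                           # already close to K = 10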
Remark. Notice that the solution y(t) of a linear differential equation or a nonhomogeneous linear differential equation is given by an expression found by means of integrals. A nonlinear equation of the form y′ = f(y) is solved by separation of variables and then integrals. However, the result gives an expression relating some combination of y to t. It is then necessary to solve this implicit relation for y(t). In general, this can be difficult.
Problem 8. Solve the nonlinear differential equation y′ = y² − 1 = (y + 1)(y − 1) by separation of variables.
Example 3 (Economic growth). The Solow–Swan model of economic growth is given by
K′ = s F(K, L) − δ K    and    L′ = n L,
where K is the capital, L is the labor force, F(K, L) is the production function that we take to be A K^a L^{1−a} where 0 < a < 1, 0 < s ≤ 1 is the rate of reinvestment of income, δ > 0 is the rate of depreciation of capital, and n > 0 is the rate of growth of the labor force. A new variable k = K/L is introduced that is the capital per capita (of labor). The differential equation that k satisfies is as follows:
k′ = (1/L) K′ − (K/L²) L′
   = (1/L) [ s A K^a L^{1−a} − δ K ] − (K/L²) n L
   = s A k^a − (δ + n) k
   = k^a [ s A − (δ + n) k^{1−a} ].
Notice the similarity to the logistic equation. The equilibrium where k′ = 0 and k > 0 occurs for
s A = (n + δ) k^{1−a},
k* = ( s A / (n + δ) )^{1/(1−a)}.
For 0 < k < k*, k′ > 0 and k increases toward k*. For k > k*, k′ < 0 and k decreases toward k*. It can be shown that for any initial capital k0 > 0, the solution k(t; k0) limits to k* as t goes to infinity. Therefore, all solutions with k0 > 0 tend to the steady state of the capital to labor ratio k*.
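A short numerical experiment illustrates the convergence of k(t; k0) to k* claimed above. This is only a sketch with made-up parameter values, assuming SciPy is available.

    import numpy as np
    from scipy.integrate import solve_ivp

    s, A_, a, delta, n = 0.3, 1.0, 0.5, 0.05, 0.02       # made-up parameters
    k_star = (s * A_ / (n + delta)) ** (1.0 / (1.0 - a)) # equilibrium ratio k*

    f = lambda t, k: s * A_ * k**a - (delta + n) * k
    for k0 in (0.5, 5.0, 40.0):                          # initial ratios below and above k*
        sol = solve_ivp(f, (0.0, 400.0), [k0], rtol=1e-8)
        print(sol.y[0, -1], k_star)                      # final value is close to k*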
2. Exponential Matrix Solutions of Constant Coefficient Systems
We now turn to the case of the differential equation for a linear vector field on R^n,
(2)    x′ = Ax,
where A is an n × n matrix. This is called a constant coefficient linear differential equation or system of linear differential equations.
We will show that the solution x(t) of (2) that satisfies the condition x(t0) = x0 is given by
(3)    x(t) = e^{(t−t0)A} x0.
This expression is similar to the one-dimensional case, and the only problem is to make sense out of e^{(t−t0)A}, where A is a matrix. Even though Theorems 6, 7, and 8 give a more efficient way to solve the differential equation, the exponential is a convenient notation for solutions and helps us understand the form of solutions.
We shall use the following fact from calculus: for every z ∈ R,
e^z = 1 + z + (1/2) z² + ⋯ + (1/k!) z^k + ⋯ = Σ_{k=0}^{∞} (1/k!) z^k.
For a square n × n matrix A, define
e^A = I + A + (1/2) A² + ⋯ + (1/k!) A^k + ⋯ = Σ_{k=0}^{∞} (1/k!) A^k,
where I is the n × n identity matrix. Notice that each term on the right hand side is an n × n matrix, and at least finite sums make perfect sense. The following result states that this infinite sum converges.
Theorem 3. Let A, B, and P be n × n matrices.
a. The series Σ_{k=0}^{∞} (1/k!) A^k converges to a matrix that we denote by e^A.
b. Assume that P is non-singular. Then e^{PAP^{−1}} = P e^A P^{−1}.
c. If AB = BA, then e^{A+B} = e^A e^B.
Proof. (a) We indicate the ingredients of the proof and skip the details.
First, there are several norms of square matrices:
‖A‖ = sup_{x ≠ 0} ‖Ax‖/‖x‖,    ‖A‖₁ = Σ_{i,j=1}^{n} |a_{ij}|,    ‖A‖₂ = max_{i,j} |a_{ij}|.
Any of these norms has the following properties for two n × n matrices A and B and a vector x ∈ R^n:
(i) ‖A + B‖ ≤ ‖A‖ + ‖B‖
(ii) ‖AB‖ ≤ ‖A‖ · ‖B‖
(iii) ‖Ax‖ ≤ ‖A‖ · ‖x‖
Given an n × n matrix A, the exponential e^{‖A‖} converges (as the exponential of a real number), so given an ε > 0, there is a K(‖A‖, ε) such that Σ_{k=k0}^{∞} (1/k!) ‖A‖^k < ε for any k0 ≥ K(‖A‖, ε). Therefore for k0 ≥ K(‖A‖, ε), the tail of the series has norm less than ε,
‖ Σ_{k=k0}^{∞} (1/k!) A^k ‖ ≤ Σ_{k=k0}^{∞} (1/k!) ‖A‖^k < ε,
and the sequence of matrices converges in the set of all n × n matrices.
The proofs of parts (b) and (c) are left to Problems 12 and 13.
Example 4. Let A = [ a 0; 0 d ] be a 2 × 2 diagonal matrix. Then it is easy to see that e^{tA} is also a diagonal matrix:
e^{tA} = Σ_{k=0}^{∞} (1/k!) [ t^k a^k, 0; 0, t^k d^k ] = [ e^{at}, 0; 0, e^{dt} ].
Now, further assume that a = 2, d = −1, t0 = 1, and x0 = (1, 3)^T. Then,
x(t) = e^{(t−1)A} x0 = [ e^{2(t−1)}, 0; 0, e^{−(t−1)} ] ( 1; 3 ) = ( e^{2(t−1)}; 3e^{1−t} ).
Moreover, x(1) = (1, 3)^T, and x′(t) = (2e^{2(t−1)}, −3e^{1−t})^T = Ax(t). Therefore, we have shown that in this particular case formula (3) gives a solution of (2).
Notice that x(t) = e^{2(t−1)} e1 + 3e^{1−t} e2, where the standard unit vectors e1 and e2 are the eigenvectors for the eigenvalues 2 and −1 respectively.
Also notice that in this case, where A is a 2 × 2 diagonal matrix, the system of two equations splits into two independent one dimensional equations x1′ = 2x1 and x2′ = −x2, which have solutions x1(t) = e^{2(t−1)} and x2(t) = 3e^{1−t} with x1(1) = 1 and x2(1) = 3.

Theorem 4. Let A be an n × n matrix.
a. If I_n and 0_n are the n × n identity matrix and zero matrix, then e^{0_n} = I_n.
b. e^{tA} is a matrix solution of (2): (d/dt) e^{tA} = A e^{tA}.
c. x(t) = e^{(t−t0)A} x0 is a solution of (2) with x(t0) = x0.
Proof. (a) e^{0_n} = I_n + 0_n + (1/2) 0_n² + ⋯ = I_n.
(b) Differentiating the series for e^{tA} term by term (which can be justified by the convergence of the series),
(d/dt) e^{tA} = (d/dt) [ I_n + tA + (1/2) t²A² + (1/3!) t³A³ + ⋯ + (1/j!) t^j A^j + ⋯ ]
 = 0_n + A + (1/2)(2t)A² + (1/3!)(3t²)A³ + ⋯ + (1/j!)(j t^{j−1})A^j + ⋯
 = A [ I_n + tA + (1/2) t²A² + ⋯ + (1/(j−1)!) t^{j−1} A^{j−1} + ⋯ ]
 = A e^{tA}.
(c) By part (b), if x(t) = e^{(t−t0)A} x0, then x′(t) = A e^{(t−t0)A} x0 = Ax(t). Moreover, by part (a), x(t0) = e^0 x0 = I x0 = x0. This proves that x(t) = e^{(t−t0)A} x0 is a solution of (2) which satisfies the condition x(t0) = x0.
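Theorem 4 can be checked numerically using a matrix exponential routine. A minimal sketch, assuming SciPy is available (the matrix and data are chosen arbitrarily):

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[1.0, 1.0],
                  [4.0, 1.0]])
    t0, x0 = 1.0, np.array([3.0, 2.0])

    def x(t):                       # the candidate solution x(t) = e^{(t-t0)A} x0
        return expm((t - t0) * A) @ x0

    t, h = 2.0, 1e-6                # approximate x'(t) by a central difference
    deriv = (x(t + h) - x(t - h)) / (2 * h)
    print(np.allclose(deriv, A @ x(t), atol=1e-4))   # True: x' = Ax
    print(np.allclose(x(t0), x0))                    # True: x(t0) = x0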
The following examples and problems give a few cases where it is possible to calculate the exponential directly, although it is not always easy to do.
Problem 9. a. Let A = [ λ1 0 0; 0 λ2 0; 0 0 λ3 ]. Show that e^{tA} = [ e^{λ1 t} 0 0; 0 e^{λ2 t} 0; 0 0 e^{λ3 t} ].
b. Let A = [ λ1 0 . . . 0; 0 λ2 . . . 0; ⋮ ⋮ ⋱ ⋮; 0 0 . . . λn ] be a diagonal matrix. What is e^{tA}?
Example 5. Let J = [ 0 1; −1 0 ]. Then J² = −I, J³ = −J, J⁴ = I, J⁵ = J, J⁶ = J² = −I, etc. Therefore,
e^{tJ} = I + tJ − (t²/2!) I − (t³/3!) J + (t⁴/4!) I + (t⁵/5!) J − (t⁶/6!) I − (t⁷/7!) J + ⋯
 = [ 1 − t²/2! + t⁴/4! − t⁶/6! + ⋯ ] I + [ t − t³/3! + t⁵/5! − t⁷/7! + ⋯ ] J
 = cos(t) I + sin(t) J
 = [ cos(t), sin(t); −sin(t), cos(t) ].
Example 6. Let N = [ 0 1; 0 0 ]. Then N² = 0, so N^k = 0 for k ≥ 2. Then, using the power series,
e^{tN} = I + tN + (1/2) t² 0 + ⋯ = [ 1 t; 0 1 ].
Problem 10. a. Let N = [ 0 1 0; 0 0 1; 0 0 0 ]. Calculate e^{tN}. Hint: N³ = 0.
b. Let N = [ 0 1 0 0; 0 0 1 0; 0 0 0 1; 0 0 0 0 ]. Calculate e^{tN}. Hint: N⁴ = 0.
c. Let N be the n × n matrix of the same form, with 1's just above the diagonal and 0's elsewhere. What is the smallest integer k satisfying N^k = 0? What is e^{tN}?
Problem 11. a. Let A = [ λ 1; 0 λ ]. Calculate e^{tA}. Hint: Use A = λI + (A − λI). Note that λI and N = A − λI commute. Also, use Example 6.
b. Let A = [ λ 1 0; 0 λ 1; 0 0 λ ]. Calculate e^{tA}. Hint: Use Problem 10a.
c. Let A be the n × n matrix of the same form, with λ on the diagonal and 1's just above it. What is e^{tA} = e^{λt} e^{t(A−λI)}? Hint: Use Problem 10c.
Problem 12. Let A be an n × n matrix, and let P be a non-singular n × n matrix. Show that e^{PAP^{−1}} = P e^A P^{−1}. Hint: Write out the power series for e^{PAP^{−1}}.
Problem 13. Let A and B be two n × n matrices such that AB = BA. Prove that e^{A+B} = e^A e^B. Hint: You may assume that the binomial theorem applies to commuting matrices, i.e., (A + B)^k = Σ_{i+j=k} (k!/(i! j!)) A^i B^j.
Problem 14. Let A = [ 0 1; 0 0 ] and B = [ 0 0; −1 0 ].
a. Show that AB ≠ BA.
b. Show that e^A = [ 1 1; 0 1 ] and e^B = [ 1 0; −1 1 ]. Hint: Use Example 6 with t = 1 for e^A and a similar calculation for e^B.
c. Show that e^A e^B ≠ e^{A+B}. Hint: A + B = J, where e^{tJ} was calculated in Example 5.
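A numerical preview of Problem 14 (a check, not a proof); the sketch below assumes SciPy's expm is available.

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[0.0, 1.0], [0.0, 0.0]])
    B = np.array([[0.0, 0.0], [-1.0, 0.0]])
    print(np.allclose(A @ B, B @ A))                    # False: A and B do not commute
    print(np.allclose(expm(A) @ expm(B), expm(A + B))) # False: e^A e^B differs from e^{A+B}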
3. Systems of Linear Differential Equations: Some General Facts
The following theorem shows that the set of solutions forms a finite dimensional vector space.
Theorem 5. Assume that A is an n × n matrix.
a. For every pair (t0, x0) ∈ R^{n+1}, there is a unique solution x(t) to (2) which satisfies the initial condition x(t0) = x0, and this solution is defined for all t ∈ R.
b. The set of all solutions of equation (2) forms a subspace of the vector space of C¹ functions from R to R^n, C¹(R, R^n).
c. Assume that B = {v1, . . . , vn} is any basis of R^n, t0 ∈ R, and x^i : R → R^n is the solution of (2) with the initial condition x^i(t0) = vi for each i = 1, 2, . . . , n. Then {x^i(t)}_{i=1}^{n} forms a basis of the set of solutions of (2). Therefore, the set of solutions is an n-dimensional subspace of C¹(R, R^n). Also, any solution x(t) can be written as c1 x^1(t) + ⋯ + cn x^n(t) for some choice of the constants c_j. To get a solution with the initial condition x(t0) = x0, the c_j are taken to be the coordinates of the vector x0 relative to the basis B, (c1, . . . , cn)^T = [x0]_B.
Definition. Any basis {x^i(t)}_{i=1}^{n} for the set of solutions of (2) is called a fundamental set of solutions, and c1 x^1(t) + ⋯ + cn x^n(t) is called the general solution.
Proof. (a) Given any initial condition (t0, x0), Theorem 4c proves that x(t) = e^{(t−t0)A} x0 is a solution with the correct initial condition, i.e., this theorem proves existence. Assume y(t) is another solution with y(t0) = x0. Then we compare this solution with the exponential:
(d/dt) [ e^{−(t−t0)A} y(t) ] = −e^{−(t−t0)A} A y(t) + e^{−(t−t0)A} A y(t) = 0.
Therefore, e^{−(t−t0)A} y(t) is a constant. Evaluating it at t = t0, e^{−(t−t0)A} y(t) = x0 and y(t) = e^{(t−t0)A} x0. This proves uniqueness.
The proof of part (b) is left to Problem 15.
(c) We need to show that the set {x^i(t)}_{i=1}^{n} is linearly independent. Assume that {d_i}_{i=1}^{n} are scalars such that
0 = d1 x^1(t) + d2 x^2(t) + ⋯ + dn x^n(t)    for all t.
Since the above equality holds for all t, it holds for t = t0 and
0 = d1 x^1(t0) + d2 x^2(t0) + ⋯ + dn x^n(t0) = d1 v1 + d2 v2 + ⋯ + dn vn,
because x^i(t0) = vi. Since the set of vectors {vi}_{i=1}^{n} is linearly independent, d1 = d2 = ⋯ = dn = 0. This shows that the set of solutions is linearly independent.
n
The next step is to show that the set {xi (t)}i=1
spans the subspace of solutions. Let x(t)
be any solution of (2). Taking the initial condition at t = t0 , the vector x(t0 ) can be written
as a linear combination of the basis {v1 , . . . , vn } of Rn : x(t0 ) = c1 v1 + · · · + cn vn . Using
these coefficients, set y(t) ≡ c1 x1 (t) + c2 x2 (t) + · · · + cn xn (t). Then, since each xi (t) is
a solution, y(t) is a solution by Problem 15 or part (b). Moreover, y(t) and x(t) have the
same initial conditions at t = t0 ,
y(t0 ) = c1 x1 (t0 ) + c2 x2 (t0 ) + · · · + cn xn (t0 ) = c1 v1 + c2 v2 + · · · + cn vn = x(t0 ).
By part (a) such a solution is unique, so x(t) ≡ y(t). This proves that every solution of (2)
n
is in the span of {xi (t)}i=1
. We have also shown how to prescribe the initial conditions. Problem 15. Show that the sum of any two solutions of (2) is again a solution, and any
scalar multiple of a solution is a solution. This means that the set of solutions of (2) is
closed under addition and scalar multiplication, and therefore is a subspace of C 1 (R, Rn ).
4. Eigenvalue and Eigenvector Solutions of Constant Coefficient Equations
Rather than calculate the matrix exponential e^{tA}, we want to find vectors for which e^{tA} v has a simple form. The vectors we seek are real and complex eigenvectors and generalized eigenvectors.
For a matrix A with characteristic equation 0 = (r1 − λ)^{m1} ⋯ (rk − λ)^{mk} with r_j ≠ r_ℓ for j ≠ ℓ, the exponent m_j is called the algebraic multiplicity of the eigenvalue r_j. The dimension of the eigenspace E(r_j) = { v : (A − r_j I)v = 0 } is called the geometric multiplicity of the eigenvalue r_j. We know that the geometric multiplicity satisfies 1 ≤ dim(E(r_j)) ≤ m_j. The generalized eigenspace for the eigenvalue r_j is defined to be E_gen(r_j) = { v : (A − r_j I)^{m_j} v = 0 }. A vector in E_gen(r_j) is called a generalized eigenvector for r_j. A theorem in linear algebra states that the dimension of E_gen(r_j) is always equal to the algebraic multiplicity of r_j, so there are always enough generalized eigenvectors.
We break down the construction of solutions to equation (2) into four cases. Case 1: A has a real eigenvector for a real eigenvalue. Case 2: A has a complex eigenvector for a complex eigenvalue. Case 3: A has a generalized real eigenvector for a real eigenvalue. Case 4: A has a generalized complex eigenvector for a complex eigenvalue.
Case 1: A real eigenvector for a real eigenvalue.
Theorem 6. Assume that r is a real eigenvalue of A with eigenvector v. Then x(t) = e^{rt} v and y(t) = e^{r(t−t0)} v are solutions of x′ = Ax with x(0) = v and y(t0) = v.
Proof. We will prove this result in two ways, first using the exponential. Since Av = rv, A^k v = r^k v for all k ≥ 1, and
e^{tA} v = Iv + tAv + (t²/2!) A²v + ⋯ + (t^k/k!) A^k v + ⋯
 = v + t r v + (t² r²/2!) v + ⋯ + (t^k r^k/k!) v + ⋯
 = e^{rt} v.
This shows that e^{rt} v = e^{tA} v is a solution. The calculation for e^{r(t−t0)} v is similar.
Alternatively, we can say that the solution of scalar equations gives an indication that a solution should be of the form x(t) = e^{rt} v. Using this function, the following calculation shows that it is a solution provided that v is an eigenvector for the eigenvalue r:
(d/dt) e^{rt} v = e^{rt} r v = e^{rt} Av = A e^{rt} v.
Example 7. Find the general solution of the equation
x′ = [ 1 1; 4 1 ] x.
Then find a solution which passes through (3, 2)^T at time t0 = 1.
First we need the eigenvalues and eigenvectors of the matrix A. We find that the eigenvalues are r1 = 3 and r2 = −1. Since the eigenvalues are distinct, the corresponding eigenvectors v1 = (1, 2)^T and v2 = (1, −2)^T are independent and form a basis in R².
By Theorem 5c, the general solution is given by
x(t) = c1 e^{3t} ( 1; 2 ) + c2 e^{−t} ( 1; −2 ).
To find a particular solution we can either set t = 1, x(1) = (3, 2)^T in the above formula and solve the linear system for c1 and c2 (do it!), or use our theorem, and write (3, 2)^T = 2(1, 2)^T + (1, −2)^T (we also solved a linear system here), and set
x(t) = 2e^{3(t−1)} ( 1; 2 ) + e^{−(t−1)} ( 1; −2 ).
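The computation in Example 7 can be reproduced with a numerical eigendecomposition; the following sketch assumes NumPy is available.

    import numpy as np

    A = np.array([[1.0, 1.0],
                  [4.0, 1.0]])
    evals, evecs = np.linalg.eig(A)
    print(evals)    # approximately 3 and -1 (possibly in a different order)

    # Columns of evecs are unit eigenvectors, i.e., scalar multiples of (1, 2)^T and (1, -2)^T.
    c = np.linalg.solve(evecs, np.array([3.0, 2.0]))       # coordinates of x(1) in the eigenbasis
    x = lambda t: evecs @ (c * np.exp(evals * (t - 1.0)))  # x(t) = sum of c_i e^{r_i (t-1)} v_i
    print(x(1.0))   # [3., 2.], the required initial condition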
Problem 16. In each of problems (a) through (c) find the general solution of the equation x′ = Ax and a particular solution for the given initial condition.
(a) A = [ 3 2; 2 2 ] and x(0) = ( 1; 1 );
(b) A = [ 4 3; 8 6 ] and x(−1) = ( 1; 0 );
(c) A = [ 2 1; 1 2 ] and x(2) = ( 1; 5 ).
Case 2: A complex eigenvector for a complex eigenvalue.
We always assume that A is an n × n matrix with real entries. We want to consider the case when λ = a + ib is a complex eigenvalue, where a and b ≠ 0 are real.
Problem 17. Assume that u + iw is an eigenvector for the eigenvalue a + ib with b ≠ 0.
a. Show that u − iw is an eigenvector for the eigenvalue a − ib.
b. The eigenvectors u + iw and u − iw must be linearly independent with complex scalars since they correspond to distinct eigenvalues. Use this fact to show that u and w are linearly independent.
As noted in the Linear Algebra book by Lay, e^{at+ibt} = e^{at}[ cos(bt) + i sin(bt) ], as can be seen by comparing power series expansions. The following theorem indicates how to get two real solutions from the pair of complex eigenvalues a ± ib.
Theorem 7. Let A be a real matrix.
a. If z(t) = x(t) + i y(t) is a complex solution to (2), where x(t) and y(t) are real, then x(t) and y(t) are each real solutions to (2).
b. If v = u + i w is a complex eigenvector for a complex eigenvalue a + ib, then
x^1(t) = e^{tA} u = e^{at}[ cos(bt) u − sin(bt) w ]   and   x^2(t) = e^{tA} w = e^{at}[ sin(bt) u + cos(bt) w ]
are two real solutions for the pair of complex eigenvalues a ± ib with x^1(0) = u and x^2(0) = w.
Proof. (a) The real and imaginary parts of the two sides of the equation
(d/dt) x(t) + i (d/dt) y(t) = Ax(t) + i Ay(t)
must be equal. This proves part (a).
Part (b) follows from part (a) and the following calculation:
e^{at+ibt}[ u + i w ] = e^{at}[ cos(bt) u − sin(bt) w ] + i e^{at}[ sin(bt) u + cos(bt) w ]
is a complex solution. Taking the real and imaginary parts gives the two real solutions. Note that the complex solution for a − ib is
e^{at−ibt}[ u − i w ] = e^{at}[ cos(bt) u − sin(bt) w ] − i e^{at}[ sin(bt) u + cos(bt) w ].
This gives the same two real solutions. Because they have initial conditions u and w respectively, they equal e^{tA} u and e^{tA} w respectively. Compare with the exponential in Example 5.
Example 8. The matrix of the linear system of differential equations
x′ = [ −3 0 2; 1 −1 0; −2 −1 0 ] x
has characteristic equation 0 = −(λ + 2)(λ² + 2λ + 3) and eigenvalues −2 and −1 ± √2 i. The eigenvector for −2 is (2, −2, 1)^T. To find the eigenvector for −1 + √2 i, we need to row reduce the following matrix:
A − (−1 + √2 i)I = [ −2 − √2 i, 0, 2; 1, −√2 i, 0; −2, −1, 1 − √2 i ]
 ∼ [ 6, 0, −4 + 2√2 i; 1, −√2 i, 0; −2, −1, 1 − √2 i ]    (multiplying row 1 by −2 + √2 i)
 ∼ [ 1, −√2 i, 0; 3, 0, −2 + √2 i; −2, −1, 1 − √2 i ]    (interchanging rows 1 & 2 and dividing the new row 2 by 2)
 ∼ [ 1, −√2 i, 0; 0, 3√2 i, −2 + √2 i; 0, −1 − 2√2 i, 1 − √2 i ]    (clearing column 1)
 ∼ [ 1, −√2 i, 0; 0, 6, 2 + 2√2 i; 0, 9, 3 + 3√2 i ]    (multiplying row 2 by −√2 i and row 3 by −1 + 2√2 i)
 ∼ [ 1, −√2 i, 0; 0, 3, 1 + √2 i; 0, 0, 0 ]    (dividing row 2 by 2 and eliminating row 3).
These give us the equations v1 = √2 i v2 and (1 + √2 i)v3 = −3v2, so one solution is v3 = 3, v2 = −1 − √2 i, and v1 = √2 i(−1 − √2 i) = 2 − √2 i:
v = ( 2 − √2 i; −1 − √2 i; 3 ) = ( 2; −1; 3 ) − i ( √2; √2; 0 ).
We have the three desired independent solutions, and the general solution is
x(t) = c1 e^{−2t} ( 2; −2; 1 ) + c2 e^{−t} [ cos(√2 t) ( 2; −1; 3 ) + sin(√2 t) ( √2; √2; 0 ) ] + c3 e^{−t} [ sin(√2 t) ( 2; −1; 3 ) − cos(√2 t) ( √2; √2; 0 ) ].
Problem 18. Consider the matrix A = [ −2 0 1; 0 −2 1; −2 0 0 ], with eigenvalues −2, −1 + i, and −1 − i. Find the general real solution of the differential equation x′ = Ax.
Case 3: A generalized real eigenvector for a real eigenvalue.
Consider the simplest case where A has a real eigenvalue r with algebraic multiplicity 2 and geometric multiplicity 1. Let v1 be an eigenvector corresponding to r. Since E_gen(r) must have dimension 2, there must be a generalized eigenvector w that is not an eigenvector, so (A − rI)w ≠ 0 but (A − rI)² w = 0. It follows that (A − rI)w must be a scalar multiple of v1. By scaling w, we can find a generalized eigenvector v2 such that (A − rI)v2 = v1.
Problem 19. If (A − rI)v2 = v1 ≠ 0 and (A − rI)v1 = 0, show that v1 and v2 are linearly independent.
Theorem 8. a. Assume that A has a real eigenvalue r with multiplicity m and generalized eigenvector w. Then there is a solution of x′ = Ax with x(0) = w of the form
x(t) = e^{tA} w = e^{rt} [ w + t (A − rI)w + ⋯ + (t^{m−1}/(m−1)!) (A − rI)^{m−1} w ].
b. Assume A has a real eigenvalue r with algebraic multiplicity 2 but geometric multiplicity 1. Assume that v is an eigenvector and w is a generalized eigenvector solving (A − rI)w = v. Then (2) has two independent solutions x^1(t) = e^{tA} v = e^{rt} v and x^2(t) = e^{tA} w = e^{rt} w + t e^{rt} v. Notice that x^1(0) = v and x^2(0) = w.
Proof. (a) We note that rI and A − rI commute, so that e^{tA} = e^{trI + t(A−rI)} = e^{rtI} e^{t(A−rI)} = e^{rt} e^{t(A−rI)}, and
e^{tA} w = e^{rt} e^{t(A−rI)} w
 = e^{rt} [ Iw + t (A − rI)w + ⋯ + (t^j/j!) (A − rI)^j w + ⋯ ]
 = e^{rt} [ w + t (A − rI)w + ⋯ + (t^{m−1}/(m−1)!) (A − rI)^{m−1} w ],
since (t^j/j!) (A − rI)^j w = 0 for j ≥ m. Compare with the exponential in Problem 11.
Part (b) follows from part (a).
Remark. The proof of this theorem should be compared with the exponential matrices of Problem 11(a-c). For each of those matrices, there is a single eigenvalue of algebraic multiplicity 2, 3, and n respectively, and there is only one eigenvector. The exponential contains polynomial terms of degree 1, 2, and n − 1 respectively. This is similar to the solution given in the last theorem.
Example 9. The matrix of the linear system of differential equations
x′ = [ 0 −1 1; 2 −3 1; 1 −1 −1 ] x
has eigenvalues −1, −1, −2. An eigenvector for −2 is v2 = (0, 1, 1)^T. Since dim(E_gen(−2)) = 1, this eigenvector v2 spans both E_gen(−2) and E(−2).
The other eigenvalue −1 has dim(E_gen(−1)) = 2.
(A + I) = [ 1 −1 1; 2 −2 1; 1 −1 0 ] ∼ [ 1 −1 1; 0 0 −1; 0 0 −1 ] ∼ [ 1 −1 0; 0 0 1; 0 0 0 ].
Thus, there is only one independent eigenvector, which can be taken to be v1 = (1, 1, 0)^T. This eigenvector v1 spans E(−1) but not E_gen(−1).
To find another generalized eigenvector for λ = −1, we solve the nonhomogeneous equation (A + I)w = v1 by considering the following augmented matrix:
[ 1 −1 1 | 1; 2 −2 1 | 1; 1 −1 0 | 0 ] ∼ [ 1 −1 1 | 1; 0 0 −1 | −1; 0 0 −1 | −1 ] ∼ [ 1 −1 0 | 0; 0 0 1 | 1; 0 0 0 | 0 ].
Thus, the solution is w1 = w2 and w3 = 1, or w = w2 (1, 1, 0)^T + (0, 0, 1)^T = w2 v1 + (0, 0, 1)^T. Notice that the solution involves an arbitrary multiple of the eigenvector v1: this is always the case. We take w2 = 0 and get w = (0, 0, 1)^T as the generalized eigenvector. The vectors v1 and w span E_gen(−1).
We have found three independent solutions, and the general solution is
x(t) = c1 e^{−2t} ( 0; 1; 1 ) + c2 e^{−t} ( 1; 1; 0 ) + c3 e^{−t} [ ( 0; 0; 1 ) + t ( 1; 1; 0 ) ].
Problem 20. For the linear system x′ = [ 3 −4; 1 −1 ] x, find the general solution and the solution with initial condition (5, 1)^T at time t = 0.
Problem 21. Find the general solution of the system x′ = [ 0 6 6; 5 3 1; 1 5 1 ] x. Hint: The eigenvalues are 6, 6, and 6.
Case 4: A generalized complex eigenvector for a complex eigenvalue.
Problem 22. Assume that A is a real matrix with a complex eigenvalue r = a + bi with one eigenvector v = v_r + i v_i, and a generalized eigenvector w = w_r + i w_i such that (A − rI)w = v. Find two real solutions associated with the complex solution e^{tA} w. Hint: Write e^{t(A−rI)} w as a finite sum involving v and w, and then expand e^{tA} w = e^{(a+bi)t} e^{t(A−rI)} w to find the real and imaginary parts.
Fundamental matrix solution and e^{tA}.
By Theorem 5, for an n × n matrix A, we need n solutions of (2) x′ = Ax with linearly independent initial conditions. We write the characteristic equation of A as 0 = (r1 − λ)^{m1} ⋯ (rk − λ)^{mk} with r_j ≠ r_ℓ for j ≠ ℓ. We have denoted the eigenspace and generalized eigenspace by E(r_j) and E_gen(r_j). For a complex eigenvalue r_j, we denote the real space of vectors generated by the real and imaginary parts of the eigenvectors by E(r_j, r̄_j) and the subspace generated by the generalized complex eigenvectors by E_gen(r_j, r̄_j). For r_j real, m_j = dim(E_gen(r_j)) ≥ dim(E(r_j)) ≥ 1; for r_j complex, 2m_j = dim(E_gen(r_j, r̄_j)) ≥ dim(E(r_j, r̄_j)) ≥ 2. For each eigenvector, the real and imaginary parts of each complex eigenvector, and each generalized eigenvector v^i, by Theorems 6, 7b, 8a and Problem 22, we can find a real solution x^i(t) of (2) with x^i(0) = v^i that has the form e^{rt} v^i, e^{rt}[ v^i + t z ], or e^{at}[ cos(bt) v^i + sin(bt) w ]. There are n such solutions with linearly independent initial conditions, so they form a basis of solutions for the system of differential equations.
There are several situations where we want to put these different solutions together into a single matrix solution. A curve of matrices M(t) is called a fundamental matrix solution of (d/dt) x = Ax provided that (d/dt) M(t) = AM(t) and M(0) is invertible, i.e., det(M(0)) ≠ 0. We have shown that e^{tA} is always a fundamental matrix solution, but it is not always easy to compute. The next theorem indicates how the solutions for eigenvectors and generalized eigenvectors can be used to form a fundamental matrix solution and e^{tA}. The point is that the solutions x^i(t) are found by the (generalized) eigenvector method and not by exponentiating the matrix.
Theorem 9. Let A be a real n × n matrix. Let {v^i}_{i=1}^{n} be a basis of R^n of eigenvectors, real and imaginary parts of complex eigenvectors, and generalized eigenvectors. By Theorems 6, 7b, 8a and Problem 22, we know solutions x^i(t) = e^{tA} v^i of (2) with x^i(0) = v^i. Let M(t) = [ x^1(t), . . . , x^n(t) ] be the matrix whose columns are the solutions x^i(t).
a. M(t) is a fundamental matrix solution and M(t)c = c1 x^1(t) + ⋯ + cn x^n(t) is the general solution.
b. e^{tA} = M(t)M(0)^{−1}.
c. The solution of (d/dt) x = Ax with initial condition x(0) = x0 is given by x(t) = M(t)M(0)^{−1} x0, or M(t)c = c1 x^1(t) + ⋯ + cn x^n(t) where c = [x0]_B are the coefficients of x0 with respect to the basis B.
Proof. (a) The derivative of the matrix M(t) satisfies the following:
(d/dt) M(t) = [ (d/dt) x^1(t), . . . , (d/dt) x^n(t) ] = [ Ax^1(t), . . . , Ax^n(t) ] = A [ x^1(t), . . . , x^n(t) ] = AM(t).
This shows that M(t) is a matrix solution. Also det(M(0)) = det[ v^1, . . . , v^n ] ≠ 0 because the vectors {v^i}_{i=1}^{n} are a basis. The fact that M(t)c is the general solution follows from Theorem 5.
(b) The matrix M̃(t) = M(t)M(0)^{−1} satisfies
(d/dt) M̃(t) = [ (d/dt) M(t) ] M(0)^{−1} = AM(t)M(0)^{−1} = A M̃(t),
so M̃(t) is a fundamental matrix solution. At t = 0, M̃(0) = M(0)M(0)^{−1} = I = e^{0A}, so M̃(t) = e^{tA} for all t.
(c) For an initial condition x0 at t = 0, a direct calculation shows that M̃(t)x0 is a solution and it satisfies the initial condition M̃(0)x0 = x0. The vector x0 = [ v^1, . . . , v^n ][x0]_B = M(0)[x0]_B, so e^{tA} x0 = M(t)M(0)^{−1} M(0)[x0]_B = M(t)[x0]_B is a solution with the correct initial condition.
Example 10. For the differential equation in Example 9, a fundamental matrix solution is
M(t) = [ 0, e^{−t}, t e^{−t}; e^{−2t}, e^{−t}, t e^{−t}; e^{−2t}, 0, e^{−t} ].
The exponential is
e^{tA} = M(t)M(0)^{−1}
 = [ 0, e^{−t}, t e^{−t}; e^{−2t}, e^{−t}, t e^{−t}; e^{−2t}, 0, e^{−t} ] [ −1, 1, 0; 1, 0, 0; 1, −1, 1 ]
 = [ e^{−t} + t e^{−t}, −t e^{−t}, t e^{−t}; −e^{−2t} + e^{−t} + t e^{−t}, e^{−2t} − t e^{−t}, t e^{−t}; −e^{−2t} + e^{−t}, e^{−2t} − e^{−t}, e^{−t} ].
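The identity e^{tA} = M(t)M(0)^{−1} of Theorem 9b can be verified numerically for Example 10; the sketch below assumes SciPy is available.

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[0.0, -1.0,  1.0],
                  [2.0, -3.0,  1.0],
                  [1.0, -1.0, -1.0]])

    def M(t):   # columns are the three solutions found in Example 9
        v1, v2, w = np.array([1.0, 1.0, 0.0]), np.array([0.0, 1.0, 1.0]), np.array([0.0, 0.0, 1.0])
        return np.column_stack([np.exp(-2*t) * v2,
                                np.exp(-t) * v1,
                                np.exp(-t) * (w + t * v1)])

    t = 0.7
    print(np.allclose(M(t) @ np.linalg.inv(M(0)), expm(t * A)))   # True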
5. Geometry of Solutions: Phase Plane
For x′ = Ax, the x-space R^n is called the phase plane for n = 2 and the phase space for n > 2. For n = 2, we can plot the curves that are traced by solutions x(t) in the phase plane. We consider a few simple examples, which represent generic types of behavior.
Example 11. A solution of the differential equation x′ = [ −1 0; 0 −2 ] x is of the form x(t) = (x1(t), x2(t))^T = (c1 e^{−t}, c2 e^{−2t})^T = c1 e^{−t} e1 + c2 e^{−2t} e2. As t goes to infinity, any solution x(t) converges to the origin, so the origin is called stable or an attractor.
For c1 ≠ 0, x(t) = e^{−t}[ c1 e1 + c2 e^{−t} e2 ] and the solution approaches the origin in a direction asymptotic to e1. In fact, we can express x2 as a function of x1: x2 = c2 x1²/c1². Thus, each solution lies on a (half) parabola. If c1 = 0 then we get the half line x1 = 0 with x2 > 0 if c2 > 0 and the half line x1 = 0 with x2 < 0 if c2 < 0. If c2 = 0, the solution lies on half of the line x2 = 0. See Figure 1. When there are two distinct negative eigenvalues, the origin is called a stable node.
Figure 1. Example 11: Stable node.
Remark. For any linear system in R² with two real negative eigenvalues, r1 and r2, the origin will be an attractor, but the shape of the flow lines depends on the ratio of r1 and r2. If we take r1 = 1 and r2 = 2 (x1′(t) = x1(t), x2′(t) = 2x2(t)), then the flow lines will still be parabolas, but the origin becomes unstable or a repeller.
Example 12. A solution of the differential equation x′ = [ −1 0; 0 2 ] x is of the form x(t) = (x1(t), x2(t))^T = (c1 e^{−t}, c2 e^{2t})^T = c1 e^{−t} e1 + c2 e^{2t} e2. If both c1 ≠ 0 and c2 ≠ 0, then the position vector is asymptotic to the x2-axis as t goes to infinity and is asymptotic to the x1-axis as t goes to minus infinity. See Figure 2.
Figure 2. Example 12: Saddle.
For c1 ≠ 0 so x1 ≠ 0, we can express x2 as a function of x1: x2 = c2 c1²/x1². These curves look similar to hyperbolas. If c1 = 0 then we get the half line x1 = 0 with x2 > 0 if c2 > 0 and the half line x1 = 0 with x2 < 0 if c2 < 0. If c2 = 0, the solution lies on half of the line x2 = 0. When there is one positive and one negative eigenvalue, the origin is called a saddle.
Remark. For a linear system in R² with one negative and one positive real eigenvalue, r1 and r2, the shape of the flow lines depends on the ratio of the absolute values of r1 and r2. For example, if we take r1 = −1 and r2 = 1, then the flow lines will be usual hyperbolas.
Example 13. The general solution of x′ = [ −2 1; 0 −2 ] x is c1 e^{−2t} ( 1; 0 ) + c2 e^{−2t} ( t; 1 ).
Figure 3. Example 13: Improper stable node.
As t goes to infinity, the quantity t e^{−2t} goes to 0, so all the solutions go to the origin. Also, t e^{−2t} ( 1; 1/t ) approaches the direction of the eigenvector ( 1; 0 ), so all solutions approach the origin asymptotic to the line of the eigenspace. See Figure 3. When there is a repeated negative eigenvalue with geometric multiplicity one, the origin is called an improper stable node.
Example 14. The system x′ = [ −2 1; −1 −2 ] x has complex eigenvalues −2 ± i, and a general solution
c1 e^{−2t} ( cos(t); −sin(t) ) + c2 e^{−2t} ( sin(t); cos(t) ).
These solutions spiral around because of the imaginary part of the eigenvalue, and they approach the origin since the real part of the eigenvalue is negative, −2. See Figure 4. When there is a pair of complex eigenvalues with negative real part, the origin is called a stable focus.
Figure 4. Example 14: Stable focus.
Problem 23. Show that all solutions to the system x′ = [ a b; c d ] x approach zero as t → +∞ if and only if τ = tr(A) = a + d < 0 and Δ = det(A) = ad − bc > 0. Hint: look at the signs of the real parts of the eigenvalues.
6. Nonhomogeneous Linear Systems
The solution of a nonhomogeneous linear system is similar to the nonhomogeneous linear scalar differential equation, and the proof is basically the same.
Theorem 10 (Variation of Parameters). Let A be an n × n matrix and g : R → R^n be a continuous function. Then the solution of the nonhomogeneous equation x′ = Ax + g(t) with x(0) = x0 is given by
x(t) = e^{tA} x0 + e^{tA} ∫_0^t e^{−sA} g(s) ds = M(t)M(0)^{−1} x0 + M(t) ∫_0^t M(s)^{−1} g(s) ds,
where M(t) is any fundamental matrix solution.
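A minimal numerical check of the variation of parameters formula, assuming SciPy provides expm and quad_vec (the forcing term below is chosen arbitrarily):

    import numpy as np
    from scipy.linalg import expm
    from scipy.integrate import quad_vec, solve_ivp

    A = np.array([[0.0, 1.0], [-1.0, 0.0]])
    g = lambda t: np.array([0.0, np.sin(2.0 * t)])   # arbitrary forcing
    x0 = np.array([1.0, 0.0])

    def x_formula(t):   # x(t) = e^{tA} x0 + e^{tA} * integral of e^{-sA} g(s) over [0, t]
        integral, _ = quad_vec(lambda s: expm(-s * A) @ g(s), 0.0, t)
        return expm(t * A) @ (x0 + integral)

    sol = solve_ivp(lambda t, x: A @ x + g(t), (0.0, 4.0), x0, rtol=1e-9)
    print(x_formula(4.0), sol.y[:, -1])   # the two results should agree closely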
The calculation of a solution using the formula in the theorem is tedious, but we work
one simple example.
Example 15. Consider the nonhomogeneous linear system of differential equations
(d/dt) ( x1; x2 ) = [ 0 1; −1 0 ] ( x1; x2 ) + B ( 0; sin(ωt) ),
with ω ≠ 1. The fundamental matrix solution of the homogeneous equation is
e^{At} = [ cos(t), sin(t); −sin(t), cos(t) ]   and   e^{−As} = [ cos(s), −sin(s); sin(s), cos(s) ].
The integral term in the expression for the solution gives a particular solution x^p(t). Using some trigonometric identities,
x^p(t) = B e^{At} ∫_0^t e^{−As} ( 0; sin(ωs) ) ds
 = B e^{At} ∫_0^t ( −sin(s) sin(ωs); cos(s) sin(ωs) ) ds
 = (B/2) e^{At} ∫_0^t ( cos((1 + ω)s) − cos((1 − ω)s); sin((1 + ω)s) + sin((−1 + ω)s) ) ds
 = (B/2) e^{At} ( (1/(1 + ω)) sin((1 + ω)t) − (1/(1 − ω)) sin((1 − ω)t); −(1/(1 + ω)) cos((1 + ω)t) − (1/(ω − 1)) cos((ω − 1)t) )
   + (B/2) e^{At} ( 0; 1/(1 + ω) + 1/(ω − 1) )
 = (B/(1 − ω²)) ( sin(ωt); ω cos(ωt) ) − (B/(1 − ω²)) e^{At} ( 0; ω ).
(The last equality requires some calculation.)
The solution with the initial condition x0 is
x(t) = e^{At} x0 + x^p(t)
 = e^{At} [ x0 − (B/(1 − ω²)) ( 0; ω ) ] + (B/(1 − ω²)) ( sin(ωt); ω cos(ωt) ).
Problem 24. Consider the nonhomogeneous equation (NH) (d/dt) x = Ax + g(t) with the associated homogeneous equation (H) (d/dt) x = Ax.
a. If x^p(t) is one (particular) solution of the nonhomogeneous equation (NH) and x^h(t) is a solution of the homogeneous equation (H), show that x^p(t) + x^h(t) is a solution of the nonhomogeneous equation (NH).
b. Assume that x^p(t) is a particular solution of the nonhomogeneous equation (NH). Show that the general solution of the nonhomogeneous equation (NH) is given by x^p(t) + e^{tA} c for vectors of constants c. Hint: For any solution x(t) of (NH), x(t) − x^p(t) satisfies (H).
Problem 25. Find the general solution of
(d/dt) x = [ 1 0; 0 2 ] x + ( e^t; 0 )
with x(0) = (1, 3)^T by using the variation of parameters formula given in Theorem 10.
7. Second Order Scalar Equations
There is a close relationship between second order scalar linear differential equations and systems of linear differential equations. Consider
(4)    y″ + a y′ + b y = 0,
where a and b are constants. "Solve" means we are looking for a C² function y(t) which satisfies the above equation. We say that this equation is second order since it involves derivatives up to order two. Assume that y(t) is a solution of (4), set x1(t) = y(t), x2(t) = y′(t), and consider the vector x(t) = (y(t), y′(t))^T = (x1(t), x2(t))^T. Then
x′(t) = ( y′(t); y″(t) ) = ( x2(t); −a x2(t) − b x1(t) ) = [ 0 1; −b −a ] ( x1(t); x2(t) ),
since y″(t) = −a y′(t) − b y(t) = −a x2(t) − b x1(t). We have shown that if y(t) is a solution of the equation (4) then x(t) = (x1(t), x2(t))^T = (y(t), y′(t))^T is a solution of the equation (2), where
(5)    A = [ 0 1; −b −a ].
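The correspondence between (4) and (5) is easy to exercise numerically. The sketch below (assuming SciPy; the coefficients a, b and the initial data are arbitrary) solves the system for x(t) and checks that its first coordinate satisfies the scalar equation.

    import numpy as np
    from scipy.linalg import expm

    a, b = 1.0, 2.0                       # arbitrary coefficients in y'' + a y' + b y = 0
    A = np.array([[0.0, 1.0], [-b, -a]])  # the matrix of equation (5)
    x0 = np.array([3.0, -1.0])            # (y(0), y'(0))

    y = lambda t: (expm(t * A) @ x0)[0]   # first coordinate of the vector solution
    t, h = 0.9, 1e-4                      # finite differences for y' and y''
    y1 = (y(t + h) - y(t - h)) / (2 * h)
    y2 = (y(t + h) - 2 * y(t) + y(t - h)) / h**2
    print(abs(y2 + a * y1 + b * y(t)) < 1e-4)   # True: y'' + a y' + b y = 0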
Problem 26. Show that if x(t) = (x1 (t), x2 (t))T is a solution of the differential equation
(2), with A given in equation (5), then the first coordinate y(t) = x1 (t) is a solution of (4).
Remark. Since an equation of the form (4) can be converted to a system of the type (2) and vice versa, we do not need to consider equations of type (4) separately. We can use the solutions of (2) to give solutions of (4).
For a second order equation (4), we have to specify the initial conditions for both y(t0) and y′(t0). Why?
Problem 27. Consider the second order scalar differential equation y″ − 5y′ + 4y = 0, with initial conditions y(0) = 3, y′(0) = 6.
a. Write down the corresponding 2 × 2 linear system and solve it for the general vector solution and the solution with the given initial conditions. Use this vector solution to get the general scalar solution for the second order scalar differential equation and the solution with the given initial conditions.
b. Solve the second order scalar equation a direct way as follows: Look for a solution in the form y(t) = e^{rt}, where r is to be determined. Put it in the equation and solve for all possible values of r. You will get a quadratic equation and find two solutions y⁽¹⁾(t) = e^{r1 t} and y⁽²⁾(t) = e^{r2 t}. Convince yourself that the general solution is found by taking all linear combinations of these two solutions. To find a particular solution solve for the unknown constants in that linear combination.
Problem 28. Consider the second order scalar differential equation y″ − 2y′ + y = 0.
a. Write down the corresponding 2 × 2 linear system and solve it for the general vector solution. Show that two solutions of the scalar equation are of the form e^{rt} and t e^{rt} for the correct choice of r.
b. Find the general solution to this equation a second, direct way as follows: Try to find a solution in the form y⁽¹⁾(t) = e^{rt}. You will see that r is a double root of the quadratic equation. To obtain all solutions you need to find another linearly independent solution. Try to look for it by setting y⁽²⁾(t) = t e^{rt} and substituting into the second order scalar equation. Take an arbitrary linear combination of y⁽¹⁾(t) and y⁽²⁾(t). This will be the general solution.
c. Find a solution which satisfies the conditions y(0) = 2, y′(0) = 5.
Problem 29. Consider the differential equation y″ − 4y′ + 25y = 0. Find two real solutions by looking for solutions of the form e^{rt}. Hint: For r = a + ib complex, what does e^{at+ibt} equal?
Problem 30. For the 2 × 2 linear system x′ = Jx given in Example 5, write down the corresponding second order equation. Find the solution of this equation which satisfies the conditions y(0) = 2 and y′(0) = 1.
8. Applications
Example 16 (Market model with price expectations). A dynamic market model with price expectations is given in [1]. For a given product, the quantity demanded is denoted by Q_d, the quantity supplied by Q_s, and the price by P. The assumption is that Q_d and Q_s are determined by
Q_s = −γ + δ P    and    Q_d = α − β P + m (dP/dt) + n (d²P/dt²).
We are assuming that the supply only depends on the current price, but the demand increases with increasing price and positive second derivative. The market is assumed to clear at each time, Q_s = Q_d. The result is the following nonhomogeneous second order differential equation:
n (d²P/dt²) + m (dP/dt) − (β + δ) P = −(α + γ).
A dynamic equilibrium occurs for P* = (α + γ)/(β + δ) > 0. This is a particular solution of the nonhomogeneous equation. The characteristic equation of the homogeneous equation is 0 = n r² + m r − (β + δ). If all the parameters are positive, then there are two real roots
r1, r2 = −(m/2n) ± (1/2) √( (m/n)² + 4 (β + δ)/n ),
one positive and one negative. The general solution of the nonhomogeneous differential equation is
P(t) = c1 e^{r1 t} + c2 e^{r2 t} + (α + γ)/(β + δ),
and the equilibrium is dynamically unstable.
However, if n < 0, m < 0, and the rest of the parameters are positive, then both roots have negative real part, and the equilibrium is dynamically stable. The eigenvalues can be real distinct, real and equal, or complex, depending on the size of the constants.
Example 17 (Model for inflation and unemployment). A model for inflation and unemployment is given in [1] based on a Phillips relation as applied by M. Friedman. The variables are the expected rate of inflation π and the rate of unemployment U. There are two other auxiliary variables which are the actual rate of inflation p and the rate of growth of wages w. The rate of monetary expansion is m > 0 and the increase in productivity is T > 0, which are taken as parameters. The model assumes the following:
w = α − β U,
p = h π − β U + α − T    with 0 < h ≤ 1,
dπ/dt = j (p − π) = −j(1 − h) π − jβ U + j(α − T),
dU/dt = k (p − m) = kh π − kβ U + k(α − T − m),
so
(d/dt) ( π; U ) = [ −j(1 − h), −jβ; kh, −kβ ] ( π; U ) + ( j(α − T); k(α − T − m) ).
All the parameters are assumed to be positive.
The equilibrium is obtained by finding the values where the time derivatives are zero, dπ/dt = 0 = dU/dt. This yields a system of two linear equations that can be solved to give π* = m and U* = (1/β)[ α − T − m(1 − h) ]. Then the variables x1 = π − π* and x2 = U − U* satisfy x′ = Ax, where A is the coefficient matrix given above. A direct calculation shows that the trace and determinant are as follows:
tr(A) = −j(1 − h) − kβ < 0,    det(A) = jkβ > 0.
Since r1 + r2 = tr(A) < 0 and r1 r2 = det(A) > 0, the real parts of both eigenvalues must be negative. Therefore, any solution of the linear equations in the x-variables goes to 0 and (π(t), U(t))^T goes to (π*, U*)^T as t goes to infinity. At the equilibrium, the expected rate of inflation equals the rate of monetary expansion, the unemployment is given by U*, and the rate of inflation p* can be calculated by its formula.
Example 18 (Monetary policy). A model of N. Obst for inflation and monetary policy is given in [1]. The amount of the national product is Q, the "price" of the national product is P, and the value of the national product is PQ. The money supply and demand are given by Ms and Md, and the variable µ = Md/Ms is the ratio. It is assumed that Md = a P Q where a > 0 is a constant, so µ = a P Q/Ms. The following rates of change are given:
(1/P)(dP/dt) = p is the rate of inflation,
(1/Q)(dQ/dt) = q is the rate of growth of national product, and
(1/Ms)(dMs/dt) = m is the rate of monetary expansion.
We assume q is a constant, and p and m are variables. The rate of change of inflation is assumed to be given by
dp/dt = h (Ms − Md)/Ms = h(1 − µ),
where h > 0 is a constant. Obst argued that the policy that sets m should not be a function of p but a function of dp/dt, so of µ − 1. For simplicity of discussion, we assume that m = m1 µ + m0 with m1 > 0. Differentiating ln(µ) with respect to t, we get the following:
ln(µ) = ln(a) + ln(P) + ln(Q) − ln(Ms)
(1/µ)(dµ/dt) = 0 + (1/P)(dP/dt) + (1/Q)(dQ/dt) − (1/Ms)(dMs/dt) = p + q − m.
Combining, we have the system of nonlinear equations
dp/dt = h (1 − µ)
dµ/dt = ( p + q − m1 µ − m0 ) µ.
At the equilibrium, µ* = 1 and p* = m1 + m0 − q. To avoid deflation, the monetary policy should be made with m1 + m0 ≥ q, i.e., the monetary growth needs to be large enough to sustain the growth of the national product. The system can be linearized at (p*, µ*) by forming the matrix of partial derivatives,
[ 0, −h; µ, (p + q − m1 µ − m0) − m1 µ ] |_{(p*, µ*)} = [ 0, −h; 1, −m1 ].
The determinant is h > 0, and the trace is −m1 < 0. Therefore, the eigenvalues at the equilibrium have negative real parts. For 4h > m1², they are complex, −m1/2 ± i √( h − m1²/4 ). Just like for critical points of a real valued function, the linearization dominates the behavior near the equilibrium, and solutions near the equilibrium spiral in toward the equilibrium. See Figure 5.
Figure 5. Example 18: Monetary policy.
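The spiral toward the equilibrium can be reproduced by integrating the nonlinear system directly. A sketch with made-up parameter values, assuming SciPy is available:

    import numpy as np
    from scipy.integrate import solve_ivp

    h, m1, m0, q = 1.0, 0.5, 1.0, 0.8     # made-up parameters with 4h > m1^2

    def f(t, z):
        p, mu = z
        return [h * (1 - mu), (p + q - m1 * mu - m0) * mu]

    p_star, mu_star = m1 + m0 - q, 1.0
    sol = solve_ivp(f, (0.0, 60.0), [p_star + 0.3, 1.4], rtol=1e-8)
    print(sol.y[:, -1], (p_star, mu_star))   # the final state is close to the equilibrium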
By itself, the linearization does not tell what happens to solutions far from the equilibrium, but another method shows that all solutions with µ0 > 0 do converge to the equilibrium. We form a real valued Lyapunov function (by judicious guessing and integrating),
L(p, µ) = p²/2 + (q − m1 − m0) p + h µ − h ln(µ).
This function is defined on the upper half plane, µ > 0, and has a minimum at (p*, µ*). The time derivative of L along solutions of the differential equation is given as follows:
(d/dt) L = p (dp/dt) + (q − m1 − m0)(dp/dt) + h (dµ/dt) − (h/µ)(dµ/dt)
 = p h(1 − µ) + (q − m1 − m0) h(1 − µ) + h( p + q − m1 µ − m0 )µ − h( p + q − m1 µ − m0 )
 = h m1 (µ − 1) − h m1 µ² + h m1 µ
 = −h m1 (µ − 1)² ≤ 0.
This function decreases along solutions, and it can be shown that it must go to the minimum value, so the solution goes to the equilibrium.
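The algebraic identity derived above can be spot-checked numerically; the sketch below (made-up parameter values, assuming NumPy) evaluates dL/dt along the vector field at random points.

    import numpy as np

    h, m1, m0, q = 1.3, 0.7, 0.9, 0.4     # made-up positive parameters

    def dLdt(p, mu):                      # gradient of L dotted with the vector field
        dp = h * (1 - mu)
        dmu = (p + q - m1 * mu - m0) * mu
        return (p + q - m1 - m0) * dp + (h - h / mu) * dmu

    rng = np.random.default_rng(0)
    for p, mu in rng.uniform(0.1, 3.0, size=(5, 2)):
        print(np.isclose(dLdt(p, mu), -h * m1 * (mu - 1) ** 2))   # True each time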
Problem 31. Consider the nonlinear system of equations
x′ = x + y
y′ = x y − 1.
a. Find all the fixed points, i.e., points where x′ = 0 = y′.
b. Linearize the system at each fixed point to determine its stability type. (Attracting, repelling, or saddle.)
Problem 32. Consider the nonlinear system of equations
x′ = −x³ + x y²
y′ = −3x² y − y³.
a. Let L(x, y) = (x² + y²)/2. Show that (d/dt) L(x, y) < 0 for (x, y) ≠ (0, 0).
b. Explain why all solutions converge to the origin and the origin is an attracting fixed point.
References
[1] Chiang, A., Fundamental Methods of Mathematical Economics, McGraw Hill, Inc., New York, 1984.