2 Systems of first order equations
In this section, we discuss the basic theory of existence and uniqueness for first order systems, and the notion of a general solution that determines all solutions.
Consider the first order system:
\[
\begin{aligned}
x_1'(t) &= f_1(t, x_1, x_2, \dots, x_n), & x_1(t_0) &= x_1^0, \\
x_2'(t) &= f_2(t, x_1, x_2, \dots, x_n), & x_2(t_0) &= x_2^0, \\
&\;\;\vdots & &\;\;\vdots \\
x_n'(t) &= f_n(t, x_1, x_2, \dots, x_n), & x_n(t_0) &= x_n^0.
\end{aligned}
\]
In vector form, this may be written as
\[
X'(t) = f(t, X), \quad X(t_0) = X_0, \tag{2.1}
\]
where
\[
X(t) = (x_1(t), x_2(t), \dots, x_n(t)), \quad f = (f_1, f_2, \dots, f_n) \quad \text{and} \quad X_0 = (x_1^0, x_2^0, \dots, x_n^0).
\]
Definition 1. A vector valued continuous function $f(t, X) : [a,b] \times \mathbb{R}^n \to \mathbb{R}^n$ is Lipschitz continuous in $X$ if there exists $L > 0$ such that
\[
\|f(t, X) - f(t, Y)\| \le L \|X - Y\|, \quad \forall\, X, Y \in \mathbb{R}^n,
\]
where $\|X\| = (x_1^2 + x_2^2 + \cdots + x_n^2)^{1/2}$.
Similarly, we can define locally Lipschitz continuous functions around $(t_0, X_0)$.
Example: If $f(t, X)$ is continuous and linear (affine) in $X$, then it is Lipschitz continuous. Indeed, $f(t, X) = A(t) + B(t)X$ for some vector $A(t)$ and $n \times n$ matrix $B(t)$ whose entries $a_i(t)$ and $b_{ij}(t)$ are all continuous functions. Since continuous functions on $[a,b]$ are bounded, there is $C > 0$ with
\[
\|f(t, X) - f(t, Y)\| = \|B(t)(X - Y)\| \le C \|X - Y\|.
\]
Remark 1. Suppose each component $f_i(t, X)$ satisfies
\[
|f_i(t, X) - f_i(t, Y)| \le L \|X - Y\|
\]
for all $1 \le i \le n$. Then $f = (f_1, f_2, \dots, f_n)$ is Lipschitz continuous. The proof follows from the inequality $(a^2 + b^2)^{1/2} \le |a| + |b|$.
Now we can integrate the system in (2.1) to see that
\[
X(t) = X(t_0) + \int_{t_0}^{t} f(s, X(s))\, ds. \tag{2.2}
\]
Using the idea of the Picard iteration for the first order scalar ODE, we can define the iteration scheme in this case as
\[
X_{n+1}(t) = X_0 + \int_{t_0}^{t} f(s, X_n(s))\, ds, \quad X_0(t) = X_0. \tag{2.3}
\]
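To make the scheme (2.3) concrete, here is a minimal numerical sketch in Python (not part of the original notes): it computes the Picard iterates on a fixed time grid, approximating the integral with the trapezoidal rule. The test system and the helper name `picard` are illustrative choices.

```python
import numpy as np

def picard(f, X0, t_grid, iterations=8):
    """Compute Picard iterates (2.3) on a fixed time grid starting at t_grid[0].

    f : callable f(t, X) -> np.ndarray; X0 : initial vector at t_grid[0].
    Returns the final iterate sampled on t_grid (rows are times).
    """
    X = np.tile(X0.astype(float), (len(t_grid), 1))   # X_0(t) = X0
    for _ in range(iterations):
        F = np.array([f(t, x) for t, x in zip(t_grid, X)])
        dt = np.diff(t_grid)[:, None]
        # cumulative trapezoidal rule approximates the integral in (2.3)
        integral = np.vstack([np.zeros((1, len(X0))),
                              np.cumsum(0.5 * (F[1:] + F[:-1]) * dt, axis=0)])
        X = X0 + integral
    return X

# X' = (x2, -x1), X(0) = (1, 0); the exact solution is (cos t, -sin t)
t = np.linspace(0.0, 1.0, 201)
X = picard(lambda t, x: np.array([x[1], -x[0]]), np.array([1.0, 0.0]), t)
print(np.abs(X[-1] - [np.cos(1.0), -np.sin(1.0)]).max())  # small (iteration + quadrature error)
```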
Definition 2. A sequence of vector valued functions $\{X_k(t) : I \to \mathbb{R}^n\}$ converges to $X(t)$ uniformly if for each $\varepsilon > 0$ there exists $N$ such that
\[
k \ge N \implies \max_{t \in I} \|X_k(t) - X(t)\| < \varepsilon.
\]
If the sequence $\{X_n(t)\}$ defined in (2.3) converges uniformly to $X(t)$, then the limit $X(t)$ satisfies the system of integral equations (2.2). The existence and uniqueness theorem in this case can be stated as follows:
Theorem 1. Let $f(t, X)$ be a Lipschitz continuous function in an "(n+1)-rectangle" $R = \{(t, X) : |t - t_0| \le a, \|X - X_0\| \le b\}$ and let $M = \max_R \|f(t, X)\|$. Then the initial value problem (2.1) admits a unique solution in $|t - t_0| \le h = \min(a, b/M)$.
Corollary: Suppose $f(t, X)$ is Lipschitz continuous on $[a,b] \times \mathbb{R}^n$. Then the solution exists and is unique on $[a,b]$.
We have the following uniqueness theorem for linear systems of equations.
Theorem 2. Let $A$ be an $n \times n$ matrix with real entries. Then the initial value problem
\[
X'(t) = AX(t), \quad X(t_0) = X_0
\]
admits a unique solution.
As in the scalar case, we have the following Peano's theorem on existence of solutions (uniqueness is not guaranteed):
Theorem 3. Suppose $f(t, X)$ is continuous in an $(n+1)$-rectangle around $(t_0, X_0)$. Then the initial value problem $X' = f(t, X)$, $X(t_0) = X_0$ admits a solution in a neighbourhood of $t_0$.
2.1 Second order equations
If we take $f_1(t, x_1, x_2) = x_2$ and $f_2(t, x_1, x_2) = g(t, x_1)$, then we get
\[
x_1'' = x_2' = f_2(t, x_1, x_2) = g(t, x_1).
\]
This is a second order ODE in $x_1$. From the above it is also clear that we can write the second order equation $x'' + a(t)x' + b(t)x = 0$ as a first order system by taking
\[
x_1(t) = x(t), \quad x_2(t) = x'(t).
\]
Then it is not difficult to see that
\[
\begin{aligned}
x_1' &= x_2 \\
x_2' &= -a(t)x_2 - b(t)x_1,
\end{aligned}
\]
and that $f_1 = x_2$, $f_2 = -a x_2 - b x_1$ satisfy
\[
|f_1(t, x_1, x_2) - f_1(t, y_1, y_2)| \le |x_2 - y_2|
\]
and
\[
\begin{aligned}
|f_2(t, x_1, x_2) - f_2(t, y_1, y_2)| &\le |a(x_2 - y_2)| + |b(x_1 - y_1)| \\
&\le 2 \max\{\|a\|_\infty, \|b\|_\infty\} \left( |x_2 - y_2|^2 + |x_1 - y_1|^2 \right)^{1/2}.
\end{aligned}
\]
In the above we used $(|a| + |b|)^2 \le 2(a^2 + b^2)$. Now, by Remark 1, $f = (f_1, f_2)$ is Lipschitz continuous. So from Theorem 1, we conclude the following:
Theorem 4. Let $a(t)$, $b(t)$ be bounded, continuous functions on $I$ and let $\alpha, \beta$ be two real numbers. Then the initial value problem $x'' + a(t)x' + b(t)x = 0$, $x(0) = \alpha$, $x'(0) = \beta$ has a unique solution on $I$.
Definition 3. Two functions $x_1(t)$ and $x_2(t)$ are linearly dependent on $I$ if there exist constants $c_1, c_2$ (not both zero) such that
\[
c_1 x_1(t) + c_2 x_2(t) = 0 \quad \text{for all } t \in I.
\]
If there are no such non-trivial constants, we say $x_1(t), x_2(t)$ are linearly independent on $I$.
Examples
1. $e^x$, $\sin x$ are linearly independent on any interval.
2. $e^x$, $e^{x+2}$ are linearly dependent on any interval (since $e^{x+2} = e^2 e^x$).
We define the Wronskian as follows.

Definition 4. The Wronskian of two functions $x_1$ and $x_2$ is
\[
W(x_1, x_2)(t) = \begin{vmatrix} x_1(t) & x_2(t) \\ x_1'(t) & x_2'(t) \end{vmatrix} = x_1 x_2' - x_2 x_1'.
\]

Theorem 5. Suppose $x_1(t)$ and $x_2(t)$ are linearly dependent. Then $W(x_1, x_2)(t) \equiv 0$.
Proof. There exist constants $c_1, c_2$ (not both zero) such that
\[
c_1 x_1(t) + c_2 x_2(t) = 0.
\]
Differentiating this equation with respect to $t$,
\[
c_1 x_1' + c_2 x_2' = 0.
\]
This may be written as the linear system of equations
\[
\begin{pmatrix} x_1(t) & x_2(t) \\ x_1'(t) & x_2'(t) \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.
\]
Since $(c_1, c_2)$ is a non-trivial solution, the coefficient matrix must be singular for every $t$, i.e., $W(x_1, x_2)(t) \equiv 0$.
The converse of the above statement is not true; i.e., if $W(x_1, x_2) = 0$, then $x_1, x_2$ need not be linearly dependent. For example, take $x_1(t) = t|t|$ and $x_2(t) = t^2$. Then
\[
W(x_1, x_2)(t) = \begin{vmatrix} t|t| & t^2 \\ 2|t| & 2t \end{vmatrix} = 0.
\]
But $t|t|$ and $t^2$ are not linearly dependent on $[-1, 1]$.
In contrast, if $x_1, x_2$ are solutions of a differential equation, then the converse of the above theorem holds.

Theorem 6. Suppose $x_1, x_2$ are two solutions of $x'' + a(t)x' + b(t)x = 0$. Then
\[
W(x_1, x_2)(t) = 0 \iff x_1, x_2 \text{ are L.D.}
\]
Before proving this theorem, we prove the following Abel's formula.

Theorem 7. Suppose $x_1, x_2$ are two solutions of $x'' + a(t)x' + b(t)x = 0$. Then
\[
W(x_1, x_2)(t) = C\, e^{-\int a(t)\,dt}.
\]
Proof. $x_1, x_2$ satisfy
\[
x_1'' + a(t)x_1' + b(t)x_1 = 0, \tag{2.4}
\]
\[
x_2'' + a(t)x_2' + b(t)x_2 = 0. \tag{2.5}
\]
Multiplying (2.4) by $x_2$ and (2.5) by $x_1$ and subtracting the resulting equations, we get
\[
(x_1 x_2'' - x_2 x_1'') + a(t)(x_1 x_2' - x_2 x_1') = 0.
\]
That is,
\[
W' + a(t)W = 0.
\]
Therefore, $W(t) = C\, e^{-\int a(t)\,dt}$.
A very important consequence of the above formula is: "The Wronskian of two solutions of a linear equation is either always zero or never zero."
Proof of Theorem 6: It is enough to prove that if $W(x_1, x_2)(t_0) = 0$ then $x_1, x_2$ are linearly dependent. If $W(x_1, x_2)(t_0) = 0$, then the system of equations
\[
\begin{pmatrix} x_1(t_0) & x_2(t_0) \\ x_1'(t_0) & x_2'(t_0) \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}
\]
has a non-trivial solution $(c_1 = \alpha, c_2 = \beta)$. Now consider $x(t) = \alpha x_1(t) + \beta x_2(t)$. Then $x(t)$ and $0$ are both solutions of
\[
x'' + a(t)x' + b(t)x = 0, \quad x(t_0) = 0, \; x'(t_0) = 0.
\]
Therefore, by the uniqueness theorem, we get $x(t) \equiv 0$, i.e., $\alpha x_1(t) + \beta x_2(t) \equiv 0$.
Next we show that all solutions of the second order equation can be determined, i.e., we can write the "general solution".
Theorem 8. Let $x_1(t), x_2(t)$ be two linearly independent solutions of $x'' + a(t)x' + b(t)x = 0$ and let $x(t)$ be any solution. Then there exist $c_1, c_2$ such that
\[
x(t) = c_1 x_1(t) + c_2 x_2(t).
\]
Proof. Since $x_1, x_2$ are linearly independent, $W(x_1, x_2)(t_0) \ne 0$. Therefore, the system of equations
\[
\begin{pmatrix} x_1(t_0) & x_2(t_0) \\ x_1'(t_0) & x_2'(t_0) \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = \begin{pmatrix} x(t_0) \\ x'(t_0) \end{pmatrix}
\]
has a unique solution $c_1 = \alpha$, $c_2 = \beta$. Now consider the function
\[
z(t) = \alpha x_1(t) + \beta x_2(t).
\]
Then it is easy to check that $x(t)$ and $z(t)$ both satisfy $x'' + a(t)x' + b(t)x = 0$ with $x(t_0) = z(t_0)$, $x'(t_0) = z'(t_0)$. Therefore, by the uniqueness theorem, $x(t) \equiv z(t)$.
Let $x_1(t)$ be the solution of $x'' + a(t)x' + b(t)x = 0$ satisfying $x_1(0) = 1$, $x_1'(0) = 0$ and let $x_2(t)$ be the solution satisfying $x_2(0) = 0$, $x_2'(0) = 1$. Then $x_1, x_2$ are linearly independent as $W(x_1, x_2)(0) = 1$. These solutions are called fundamental solutions.
By the above theorem, any solution $x(t)$ of $x'' + a(t)x' + b(t)x = 0$ belongs to the linear span of $x_1, x_2$. So we have:

Theorem 9. The set of all solutions of $x'' + a(t)x' + b(t)x = 0$ is a two dimensional vector space over the field of complex numbers.
Problem 1: Let $f(x)$ be locally Lipschitz around $x = 0$. Show that the solution of $x'' = f(x)$, $x(0) = x'(0) = 0$ is even.

Solution: Let $z(t) = x(-t)$. Then $z(t)$ and $x(t)$ both satisfy the given IVP. By the uniqueness theorem, $x(t) \equiv z(t)$.
Problem 2: Let $f(x)$ be locally Lipschitz and let $x(t)$ be a solution of $x'' = f(x)$ defined for all $t \in \mathbb{R}$, satisfying $x(0) = x(T)$, $x'(0) = x'(T)$. Then $x(t)$ is periodic with period $T$.

Solution: Setting $z(t) = x(t + T)$, one sees that $z(t)$ satisfies $z'' = f(z)$ and $z(0) = x(T) = x(0)$, $z'(0) = x'(T) = x'(0)$. Then, by the uniqueness theorem, $x(t) \equiv z(t) = x(t + T)$.
Equations with constant coefficients

We consider the equation $x'' + ax' + bx = 0$, where $a, b$ are real constants. From the form of the equation, one intuitively expects a solution of the form $e^{mt}$ for some constant $m$. Substituting $x(t) = e^{mt}$ in the equation, we get
\[
e^{mt}(m^2 + am + b) = 0.
\]
Since $e^{mt}$ is never zero, we get $m^2 + am + b = 0$. This polynomial is called the characteristic polynomial. We have the following cases depending on the nature of its roots:
Case 1: $m_1, m_2$ are distinct real roots.
In this case $x_1(t) = e^{m_1 t}$, $x_2(t) = e^{m_2 t}$ are two solutions and they are linearly independent. Therefore, the general solution is
\[
x(t) = c_1 e^{m_1 t} + c_2 e^{m_2 t}.
\]
Case 2: $m_1, m_2$ are distinct complex roots.
Since $a, b$ are real, $m_1 = \alpha + i\beta$ and $m_2 = \alpha - i\beta$. Then
\[
c_1 e^{m_1 t} + c_2 e^{m_2 t} = (c_1 + c_2) e^{\alpha t} \cos \beta t + i(c_1 - c_2) e^{\alpha t} \sin \beta t.
\]
Therefore, the general solution is
\[
x(t) = C_1 e^{\alpha t} \cos \beta t + C_2 e^{\alpha t} \sin \beta t,
\]
where $C_1, C_2$ are arbitrary constants.
Case 3: Repeated roots $m_1 = m_2 = m$.
Then $x_1(t) = e^{mt}$ is a solution. The second solution can be found as follows. Let $x_2 = e^{mt} f(t)$. Substituting in the equation, we get
\[
x_2'' + ax_2' + bx_2 = e^{mt}\big(f'' + (2m + a) f'\big) = 0.
\]
Since $m$ is a repeated root of $m^2 + am + b = 0$, we have $2m + a = 0$. Therefore, from the above equation, we get $f'' = 0$. Hence $f(t) = A + Bt$, and the second linearly independent solution is $t e^{mt}$.
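The three cases can be mechanized. The following Python sketch (an illustration, not from the notes; the helper name `general_solution_form` is ours) classifies the roots of the characteristic polynomial with numpy and prints the form of the general solution.

```python
import numpy as np

def general_solution_form(a, b, tol=1e-9):
    """Describe the general solution of x'' + a x' + b x = 0 from the roots."""
    m1, m2 = np.roots([1.0, a, b])
    if abs(m1 - m2) < tol:                       # Case 3: repeated real root
        return f"(c1 + c2*t) * exp({m1.real:g}*t)"
    if abs(m1.imag) > tol:                       # Case 2: complex conjugate pair
        al, be = m1.real, abs(m1.imag)
        return f"exp({al:g}*t) * (C1*cos({be:g}*t) + C2*sin({be:g}*t))"
    return f"c1*exp({m1.real:g}*t) + c2*exp({m2.real:g}*t)"   # Case 1

print(general_solution_form(-3, 2))   # roots 1, 2
print(general_solution_form(0, 1))    # roots +-i
print(general_solution_form(2, 1))    # root -1 (twice)
```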
In general, if we know one solution $x_1(t)$ of $x'' + a(t)x' + b(t)x = 0$, then we can find a second linearly independent solution of the form $x_2 = f(t) x_1(t)$. Indeed, $x_2$ is a solution if
\[
0 = x_2'' + ax_2' + bx_2 = (x_1'' + ax_1' + bx_1) f + (2x_1' + ax_1) f' + x_1 f'' = (2x_1' + ax_1) f' + x_1 f''.
\]
Therefore,
\[
\frac{f''}{f'} = -a(t) - 2\,\frac{x_1'}{x_1}.
\]
Integrating this equation and rearranging the terms, we get
\[
f'(t) = \frac{1}{x_1^2}\, e^{-\int a(t)\,dt}.
\]
Integrating this, we get $f(t)$ and hence $x_2$.
2.2 Higher order equations
Now let us consider the higher order linear ODE
\[
x^{(n)} + a_1(t)x^{(n-1)} + a_2(t)x^{(n-2)} + \cdots + a_n(t)x = 0, \tag{2.6}
\]
where $a_1(t), \dots, a_n(t)$ are bounded, continuous functions.
Now, defining the transformation $x_1 = x$, $x_2 = x_1'$, $x_3 = x_2'$, ..., $x_n = x_{n-1}'$, we get
\[
\begin{pmatrix} x_1' \\ x_2' \\ x_3' \\ \vdots \\ x_n' \end{pmatrix}
=
\begin{pmatrix}
0 & 1 & 0 & 0 & \cdots & 0 \\
0 & 0 & 1 & 0 & \cdots & 0 \\
0 & 0 & 0 & 1 & \cdots & 0 \\
\vdots & & & & \ddots & \vdots \\
-a_n & -a_{n-1} & -a_{n-2} & -a_{n-3} & \cdots & -a_1
\end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ \vdots \\ x_n \end{pmatrix}.
\]
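As a quick illustration of this reduction (our own sketch; the helper name `companion` is hypothetical), the matrix above can be assembled programmatically. For constant coefficients, its eigenvalues are exactly the roots of the characteristic polynomial.

```python
import numpy as np

def companion(coeffs):
    """Companion matrix A of x^(n) + a1 x^(n-1) + ... + an x = 0,

    with coeffs = [a1, ..., an], so that X' = A X for X = (x, x', ..., x^(n-1))^T.
    """
    n = len(coeffs)
    A = np.zeros((n, n))
    A[:-1, 1:] = np.eye(n - 1)                 # superdiagonal of ones
    A[-1, :] = [-a for a in reversed(coeffs)]  # last row: -an, ..., -a1
    return A

# x''' + 2x'' + 3x' + 4x = 0: eigenvalues of A match roots of m^3 + 2m^2 + 3m + 4
A = companion([2, 3, 4])
print(np.sort(np.linalg.eigvals(A)))
print(np.sort(np.roots([1, 2, 3, 4])))   # same values
```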
Definition 5. The functions $x_1(t), x_2(t), \dots, x_n(t)$ are linearly dependent if there exist constants $c_1, c_2, \dots, c_n$ (not all zero) such that
\[
c_1 x_1(t) + c_2 x_2(t) + \cdots + c_n x_n(t) = 0.
\]
They are linearly independent if
\[
c_1 x_1(t) + c_2 x_2(t) + \cdots + c_n x_n(t) = 0 \iff c_1 = c_2 = \cdots = c_n = 0.
\]
Definition 6. The Wronskian of $x_1(t), x_2(t), \dots, x_n(t)$ is defined as
\[
W(x_1, x_2, \dots, x_n)(t) =
\begin{vmatrix}
x_1 & x_2 & x_3 & \cdots & x_n \\
x_1' & x_2' & x_3' & \cdots & x_n' \\
x_1'' & x_2'' & x_3'' & \cdots & x_n'' \\
\vdots & & & & \vdots \\
x_1^{(n-1)} & x_2^{(n-1)} & x_3^{(n-1)} & \cdots & x_n^{(n-1)}
\end{vmatrix}.
\]
Then we have the following Abel's lemma for the $n$th order equation.

Theorem 10. Let $x_1(t), x_2(t), \dots, x_n(t)$ be solutions of (2.6). Then
\[
W(x_1, x_2, \dots, x_n)(t) = C\, e^{-\int a_1(t)\,dt}.
\]
Proof. We shall do the proof for $n = 3$. Differentiating the Wronskian row by row,
\[
\frac{d}{dt} W(t) =
\begin{vmatrix} x_1' & x_2' & x_3' \\ x_1' & x_2' & x_3' \\ x_1'' & x_2'' & x_3'' \end{vmatrix}
+
\begin{vmatrix} x_1 & x_2 & x_3 \\ x_1'' & x_2'' & x_3'' \\ x_1'' & x_2'' & x_3'' \end{vmatrix}
+
\begin{vmatrix} x_1 & x_2 & x_3 \\ x_1' & x_2' & x_3' \\ x_1''' & x_2''' & x_3''' \end{vmatrix}.
\]
The first two determinants vanish since each has a repeated row. Substituting $x_i''' = -a_1 x_i'' - a_2 x_i' - a_3 x_i$ in the last determinant and using linearity in the third row,
\[
\frac{d}{dt} W(t) = -a_1(t)
\begin{vmatrix} x_1 & x_2 & x_3 \\ x_1' & x_2' & x_3' \\ x_1'' & x_2'' & x_3'' \end{vmatrix}
- a_2(t) \cdot 0 - a_3(t) \cdot 0 = -a_1(t)\, W(t).
\]
Therefore, $W(t) = C\, e^{-\int a_1(t)\,dt}$.
Theorem 11. The Wronskian of solutions $x_1, x_2, \dots, x_n$ of (2.6) is either identically equal to zero or never zero.

Next we can prove the following theorem, similar to the case $n = 2$.
Theorem 12. Let $x_1, x_2, \dots, x_n$ be solutions of (2.6). Then
\[
W(x_1, x_2, \dots, x_n)(t) = 0 \iff x_1, x_2, \dots, x_n \text{ are L.D.}
\]
Proof. It is enough to show that $W(t_0) = 0$ implies $x_1, x_2, \dots, x_n$ are L.D. So let $W(t_0) = 0$. Then the following linear system of algebraic equations
\[
\begin{pmatrix}
x_1(t_0) & x_2(t_0) & \cdots & x_n(t_0) \\
x_1'(t_0) & x_2'(t_0) & \cdots & x_n'(t_0) \\
\vdots & & & \vdots \\
x_1^{(n-1)}(t_0) & x_2^{(n-1)}(t_0) & \cdots & x_n^{(n-1)}(t_0)
\end{pmatrix}
\begin{pmatrix} c_1 \\ c_2 \\ \vdots \\ c_n \end{pmatrix}
=
\begin{pmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{pmatrix}
\]
has a nontrivial solution $(\alpha_1, \alpha_2, \dots, \alpha_n)$. Now consider the function $z(t) = \alpha_1 x_1(t) + \alpha_2 x_2(t) + \cdots + \alpha_n x_n(t)$. Then $z(t)$ satisfies (2.6) and $z(t_0) = z'(t_0) = \cdots = z^{(n-1)}(t_0) = 0$. Therefore, by the uniqueness theorem, $z(t) \equiv 0$, as $0$ is also a solution of the same IVP.
Corollary: Let $x_1, x_2, \dots, x_n$ be solutions of (2.6). Then
\[
x_1(t), x_2(t), \dots, x_n(t) \text{ are L.I.} \iff W(x_1, x_2, \dots, x_n)(t) \ne 0.
\]
Similarly, we can show the following theorem on the "general solution".

Theorem 13. Let $x_1(t), x_2(t), \dots, x_n(t)$ be linearly independent solutions of (2.6). Then any solution of (2.6) belongs to the linear span of $x_1(t), x_2(t), \dots, x_n(t)$.
Proof. Let $x(t)$ be a solution of (2.6). Then the system
\[
\begin{pmatrix}
x_1(t_0) & x_2(t_0) & \cdots & x_n(t_0) \\
x_1'(t_0) & x_2'(t_0) & \cdots & x_n'(t_0) \\
\vdots & & & \vdots \\
x_1^{(n-1)}(t_0) & x_2^{(n-1)}(t_0) & \cdots & x_n^{(n-1)}(t_0)
\end{pmatrix}
\begin{pmatrix} c_1 \\ c_2 \\ \vdots \\ c_n \end{pmatrix}
=
\begin{pmatrix} x(t_0) \\ x'(t_0) \\ \vdots \\ x^{(n-1)}(t_0) \end{pmatrix} \tag{2.7}
\]
has a unique solution, say $(\beta_1, \beta_2, \dots, \beta_n)$. Now consider the function $z(t) = \beta_1 x_1(t) + \beta_2 x_2(t) + \cdots + \beta_n x_n(t)$. Then $z(t)$ satisfies (2.6), and from (2.7) we see that $z(t_0) = x(t_0)$, $z'(t_0) = x'(t_0)$, ..., $z^{(n-1)}(t_0) = x^{(n-1)}(t_0)$. Again, from the uniqueness theorem, we get
\[
x(t) \equiv z(t) = \beta_1 x_1(t) + \beta_2 x_2(t) + \cdots + \beta_n x_n(t).
\]
Third order equations with constant coefficients:

Let $a_1, a_2, a_3$ be real constants and consider the equation $x''' + a_1 x'' + a_2 x' + a_3 x = 0$. Substituting $x(t) = e^{mt}$, we get the characteristic polynomial $m^3 + a_1 m^2 + a_2 m + a_3 = 0$. Let $m_1, m_2, m_3$ be the roots of this equation. Then we have the following cases:
Case 1: $m_1, m_2, m_3$ are distinct roots.
In this case, consider the three solutions $x_1(t) = e^{m_1 t}$, $x_2(t) = e^{m_2 t}$, $x_3(t) = e^{m_3 t}$. Then the Wronskian of these functions is
\[
W(x_1, x_2, x_3)(0) =
\begin{vmatrix} 1 & 1 & 1 \\ m_1 & m_2 & m_3 \\ m_1^2 & m_2^2 & m_3^2 \end{vmatrix}
= (m_3 - m_2)(m_2 - m_1)(m_3 - m_1) \ne 0.
\]
Therefore, $x_1(t), x_2(t), x_3(t)$ are linearly independent. Hence the general solution is
\[
c_1 e^{m_1 t} + c_2 e^{m_2 t} + c_3 e^{m_3 t}.
\]
Case 2: $m_1 \ne m_2 = m_3$.
In this case, motivated by the situation in the second order case, we take
\[
x_1(t) = e^{m_1 t}, \quad x_2(t) = e^{m_2 t}, \quad x_3(t) = t e^{m_2 t}.
\]
Then
\[
W(x_1, x_2, x_3)(0) =
\begin{vmatrix} 1 & 1 & 0 \\ m_1 & m_2 & 1 \\ m_1^2 & m_2^2 & 2m_2 \end{vmatrix}
= (m_2 - m_1)^2 \ne 0.
\]
Therefore, the general solution is
\[
c_1 e^{m_1 t} + c_2 e^{m_2 t} + c_3 t e^{m_2 t}.
\]
Case 3: $m_1 = m_2 = m_3 = m$.
In this case, we take $x_1(t) = e^{mt}$, $x_2(t) = t e^{mt}$, $x_3(t) = t^2 e^{mt}$. Then
\[
W(x_1, x_2, x_3)(0) =
\begin{vmatrix} 1 & 0 & 0 \\ m & 1 & 0 \\ m^2 & 2m & 2 \end{vmatrix} = 2.
\]
Hence $x_1(t), x_2(t), x_3(t)$ are linearly independent solutions.
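The determinant computations in Cases 2 and 3 are easy to verify symbolically; a small sympy check (ours, purely for illustration):

```python
import sympy as sp

m, m1, m2 = sp.symbols('m m1 m2')
# Case 2: Wronskian at t = 0 of e^{m1 t}, e^{m2 t}, t e^{m2 t}
W2 = sp.Matrix([[1, 1, 0], [m1, m2, 1], [m1**2, m2**2, 2*m2]]).det()
print(sp.factor(W2))   # (m1 - m2)**2
# Case 3: Wronskian at t = 0 of e^{mt}, t e^{mt}, t^2 e^{mt}
W3 = sp.Matrix([[1, 0, 0], [m, 1, 0], [m**2, 2*m, 2]]).det()
print(W3)              # 2
```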
2.3 Linear system of equations
In this section, we define the concept of the Wronskian for linear systems of equations. Recall the third order equation case: let $x, y, z$ be solutions of $x''' + a_1 x'' + a_2 x' + a_3 x = 0$. Then the Wronskian of these solutions is
\[
W(x, y, z)(t) = \begin{vmatrix} x & y & z \\ x' & y' & z' \\ x'' & y'' & z'' \end{vmatrix}.
\]
Now let us convert the equation into a $3 \times 3$ system and look at the Wronskian in the new variables. Let $x_1 = x$, $x_2 = x_1'$, $x_3 = x_2'$ (and similarly for $y$ and $z$). Then $x' = x_2$, $x'' = x_3$, so the Wronskian becomes
\[
W = \begin{vmatrix} x_1 & y_1 & z_1 \\ x_2 & y_2 & z_2 \\ x_3 & y_3 & z_3 \end{vmatrix}.
\]
Motivated by the above, we define:
Definition 7. The Wronskian of $n$ vector valued functions $x^1(t), x^2(t), \dots, x^n(t) : [a,b] \to \mathbb{R}^n$ is defined as
\[
W(x^1, x^2, \dots, x^n)(t) =
\begin{vmatrix}
x_1^1(t) & x_1^2(t) & \cdots & x_1^n(t) \\
x_2^1(t) & x_2^2(t) & \cdots & x_2^n(t) \\
\vdots & & & \vdots \\
x_n^1(t) & x_n^2(t) & \cdots & x_n^n(t)
\end{vmatrix},
\]
where we use the notation $x^i(t) = (x_1^i(t), x_2^i(t), \dots, x_n^i(t))^T$ for $i = 1, \dots, n$.
Definition 8. The vector valued functions $x^1(t), x^2(t), \dots, x^n(t) : [a,b] \to \mathbb{R}^n$ are linearly dependent if there exist $c_1, c_2, \dots, c_n$ (not all zero) such that
\[
c_1 x^1(t) + c_2 x^2(t) + \cdots + c_n x^n(t) = 0.
\]
That is, the following system of equations has a non-trivial solution:
\[
\begin{pmatrix}
x_1^1(t) & x_1^2(t) & \cdots & x_1^n(t) \\
x_2^1(t) & x_2^2(t) & \cdots & x_2^n(t) \\
\vdots & & & \vdots \\
x_n^1(t) & x_n^2(t) & \cdots & x_n^n(t)
\end{pmatrix}
\begin{pmatrix} c_1 \\ c_2 \\ \vdots \\ c_n \end{pmatrix}
=
\begin{pmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{pmatrix} \tag{2.8}
\]
Then, as an immediate consequence, we have the following:

Theorem 14. If the vector valued functions $x^1(t), x^2(t), \dots, x^n(t) : [a,b] \to \mathbb{R}^n$ are linearly dependent, then $W(x^1, x^2, \dots, x^n)(t) = 0$.
Now we can use the uniqueness theorem to show the following:

Theorem 15. Let $x^1(t), x^2(t), \dots, x^n(t) : [a,b] \to \mathbb{R}^n$ be solutions of $X' = A(t)X$. Then $W(x^1, x^2, \dots, x^n)(t_0) = 0$ for some $t_0$ implies $W(x^1, x^2, \dots, x^n)(t) = 0$ for all $t$.

Proof. From the above definitions, $W(x^1, x^2, \dots, x^n)(t_0) = 0$ implies the existence of a non-trivial solution $(\alpha_1, \alpha_2, \dots, \alpha_n)$ of the system (2.8) at $t = t_0$. Now define $x(t) = \alpha_1 x^1 + \alpha_2 x^2 + \cdots + \alpha_n x^n$. By the linearity of $X' = A(t)X$, $x(t)$ is a solution. Also, from (2.8), it is easy to see that $x(t_0) = 0$. Therefore, by the uniqueness theorem, $x(t) \equiv 0$. In particular, $x^1, \dots, x^n$ are linearly dependent, so $W(t) \equiv 0$ by Theorem 14.
Corollary: Let $x^1(t), x^2(t), \dots, x^n(t) : [a,b] \to \mathbb{R}^n$ be solutions of $X' = A(t)X$. Then
\[
x^1(t), x^2(t), \dots, x^n(t) \text{ are L.I.} \iff W(x^1, x^2, \dots, x^n)(t) \ne 0.
\]
Theorem 16 (Abel's formula). Let $x^1(t), x^2(t), \dots, x^n(t) : [a,b] \to \mathbb{R}^n$ be solutions of $X' = A(t)X$. Then their Wronskian is given by
\[
W(t) = C\, e^{\int_{t_0}^{t} (a_{11}(s) + a_{22}(s) + \cdots + a_{nn}(s))\, ds}.
\]
Proof. We give the proof for $n = 2$. In this case $x^i$, $i = 1, 2$, satisfies the system
\[
\begin{pmatrix} (x_1^i)' \\ (x_2^i)' \end{pmatrix}
=
\begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}
\begin{pmatrix} x_1^i \\ x_2^i \end{pmatrix}.
\]
Differentiating the Wronskian row by row,
\[
\frac{d}{dt} W(t) =
\begin{vmatrix} (x_1^1)' & (x_1^2)' \\ x_2^1 & x_2^2 \end{vmatrix}
+
\begin{vmatrix} x_1^1 & x_1^2 \\ (x_2^1)' & (x_2^2)' \end{vmatrix}
=
\begin{vmatrix} a_{11}x_1^1 + a_{12}x_2^1 & a_{11}x_1^2 + a_{12}x_2^2 \\ x_2^1 & x_2^2 \end{vmatrix}
+
\begin{vmatrix} x_1^1 & x_1^2 \\ a_{21}x_1^1 + a_{22}x_2^1 & a_{21}x_1^2 + a_{22}x_2^2 \end{vmatrix}
= a_{11} W + a_{22} W.
\]
Integrating this, we get the required formula.
The next theorem is about the "general solution".

Theorem 17. Let $x^1(t), x^2(t), \dots, x^n(t) : [a,b] \to \mathbb{R}^n$ be linearly independent solutions of $X' = A(t)X$. Then all solutions of this system are in the linear span of $x^1(t), x^2(t), \dots, x^n(t)$. In other words, the set of all solutions of $X' = A(t)X$ is a vector space of dimension $n$ over the field of complex numbers.
Proof. Let $Y(t) = (y^1(t), y^2(t), \dots, y^n(t))^T$ be a solution of $X' = AX$. Then, using the fact that $x^1, \dots, x^n$ are L.I., the system of equations
\[
\begin{pmatrix}
x_1^1(t_0) & x_1^2(t_0) & \cdots & x_1^n(t_0) \\
x_2^1(t_0) & x_2^2(t_0) & \cdots & x_2^n(t_0) \\
\vdots & & & \vdots \\
x_n^1(t_0) & x_n^2(t_0) & \cdots & x_n^n(t_0)
\end{pmatrix}
\begin{pmatrix} c_1 \\ c_2 \\ \vdots \\ c_n \end{pmatrix}
=
\begin{pmatrix} y^1(t_0) \\ y^2(t_0) \\ \vdots \\ y^n(t_0) \end{pmatrix} \tag{2.9}
\]
has a unique solution $(\alpha_1, \alpha_2, \dots, \alpha_n)$. Now, considering the function
\[
Z(t) = \alpha_1 x^1(t) + \alpha_2 x^2(t) + \cdots + \alpha_n x^n(t),
\]
we see that $Y(t)$ and $Z(t)$ both satisfy
\[
X' = AX, \quad Y(t_0) = Z(t_0).
\]
Now we conclude the proof using the uniqueness theorem for systems.
Linear systems with constant coefficients:

We consider the homogeneous system
\[
X' = AX,
\]
where the entries $a_{ij}$ of $A$ are constants. In the case of higher order equations, we found the general solution by substituting $x(t) = e^{mt}$. This suggests that we try substituting $X(t) = e^{\lambda t} v$ in $X' = AX$. Then we get
\[
\lambda e^{\lambda t} v = e^{\lambda t} A v.
\]
This gives rise to the equation
\[
(A - \lambda I)v = 0,
\]
where $I$ is the $n \times n$ identity matrix. So it is clear that $\lambda$ is an eigenvalue and $v$ is a corresponding eigenvector.
We have the following cases.

Case 1: $A$ has distinct eigenvalues $\lambda_1, \lambda_2, \dots, \lambda_n$.
In this case, let $v^i$ be the eigenvector corresponding to $\lambda_i$. We consider $x^1(t) = e^{\lambda_1 t} v^1, \dots, x^n(t) = e^{\lambda_n t} v^n$. Then $x^1, x^2, \dots, x^n$ are linearly independent as $v^1, v^2, \dots, v^n$ are linearly independent, i.e.,
\[
W(x^1, x^2, \dots, x^n)(0) =
\begin{vmatrix}
v_1^1 & v_1^2 & \cdots & v_1^n \\
v_2^1 & v_2^2 & \cdots & v_2^n \\
\vdots & & & \vdots \\
v_n^1 & v_n^2 & \cdots & v_n^n
\end{vmatrix} \ne 0.
\]

Case 2: $A$ has one (or more) eigenvalues repeated, but the eigenvectors form a basis of $\mathbb{R}^n$.
In this case, say $\lambda_1, \lambda_2, \dots, \lambda_m$ are the distinct eigenvalues ($m < n$), and let $v^1, v^2, \dots, v^n$ be eigenvectors that form a basis of $\mathbb{R}^n$. Then again we can take $x^i(t) = e^{\lambda t} v^i$, where $\lambda$ is the eigenvalue corresponding to $v^i$, and as in the previous case we see $W(x^1, x^2, \dots, x^n)(0) \ne 0$.
Example: $X' = AX$, with $A = \begin{pmatrix} 3 & 0 & 0 \\ 0 & 3 & 0 \\ 1 & 1 & 2 \end{pmatrix}$. Here
\[
\det(A - \lambda I) = (3 - \lambda)^2 (2 - \lambda),
\]
so the eigenvalues are $\lambda = 3$ (double) and $\lambda = 2$. The eigenvectors are
\[
v^1 = \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix}, \quad v^2 = \begin{pmatrix} 0 \\ 1 \\ 1 \end{pmatrix}, \quad v^3 = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}.
\]
These are linearly independent. So the linearly independent solutions are $x^1 = v^1 e^{3t}$, $x^2 = v^2 e^{3t}$, $x^3 = v^3 e^{2t}$.
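Since the eigenvectors form a basis here, the solution with any given initial vector can be computed numerically. A minimal sketch (ours; `solve` is a hypothetical helper name) using numpy:

```python
import numpy as np

A = np.array([[3., 0., 0.],
              [0., 3., 0.],
              [1., 1., 2.]])
lam, V = np.linalg.eig(A)       # columns of V are eigenvectors; lam = 3, 3, 2 up to order

def solve(t, x0):
    """x(t) = sum_i c_i e^{lam_i t} v^i, with c chosen so that x(0) = x0."""
    c = np.linalg.solve(V, x0)  # solvable because the eigenvectors form a basis
    return (V * np.exp(lam * t)) @ c

x0 = np.array([1., 1., 0.])
print(solve(0.0, x0))           # returns x0
print(solve(0.5, x0))
```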
Case 3: The geometric multiplicity of $\lambda_i$ is not equal to the algebraic multiplicity of $\lambda_i$.
Let $\lambda$ be an eigenvalue repeated twice with $v^1$ the only (linearly independent) eigenvector. Then $x^1(t) = e^{\lambda t} v^1$ is a solution. We look for a second solution of the form $x^2(t) = v^1 t e^{\lambda t} + u e^{\lambda t}$ and determine $u$ so that $x^1, x^2$ are linearly independent. Substituting $x^2$ in the system, we get
\[
\lambda t e^{\lambda t} v^1 + e^{\lambda t} v^1 + \lambda e^{\lambda t} u = A(t e^{\lambda t} v^1 + e^{\lambda t} u) = A v^1 t e^{\lambda t} + A u e^{\lambda t}.
\]
Since $A v^1 = \lambda v^1$, we have
\[
\lambda t e^{\lambda t} v^1 + e^{\lambda t} v^1 + \lambda e^{\lambda t} u = \lambda v^1 t e^{\lambda t} + A u e^{\lambda t}.
\]
Cancelling $\lambda t e^{\lambda t} v^1$, we obtain $e^{\lambda t} v^1 + \lambda e^{\lambda t} u = A u e^{\lambda t}$, and hence
\[
v^1 + \lambda u = A u.
\]
That is, $u$ is a solution of the system
\[
(A - \lambda I) u = v^1.
\]
This system always has a solution (see the exercise below); in fact, any solution is non-trivial (because $v^1 \ne 0$). Moreover, one can show that $u$ and $v^1$ are linearly independent: if there exist $c_1, c_2$ such that $c_1 u + c_2 v^1 = 0$, then applying $(A - \lambda I)$,
\[
c_1 (A - \lambda I) u + c_2 (A - \lambda I) v^1 = 0.
\]
Since $v^1$ is an eigenvector, $(A - \lambda I)v^1 = 0$, so $c_1 (A - \lambda I) u = 0$, i.e., $c_1 v^1 = 0$. This implies $c_1 = 0$ and hence $c_2 = 0$.
Exercise: Let $\lambda$ be the repeated eigenvalue of a $2 \times 2$ matrix $A$ and let $v^1$ be an eigenvector corresponding to $\lambda$. If the dimension of the eigenspace is $1$, then $(A - \lambda I)u = v^1$ has a solution.

Solution: $(A - \lambda I)^2 = 0$ (by the Cayley–Hamilton theorem). Therefore, $\dim \operatorname{Ker}(A - \lambda I) = 1$ and $\dim \operatorname{Ker}(A - \lambda I)^2 = 2$. Hence there exists $z \in \operatorname{Ker}(A - \lambda I)^2$ such that $z \notin \operatorname{Ker}(A - \lambda I)$. Then $(A - \lambda I)z$ is a non-zero element of $\operatorname{Ker}(A - \lambda I)$, i.e., a non-zero multiple of $v^1$; rescaling $z$ gives the required solution.
Problem: Find L.I. solutions of $X' = AX$, with $A = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$.

Solution: The eigenvalue is $\lambda = 1$ (twice). The eigenvector is $v^1 = \begin{pmatrix} 1 \\ 0 \end{pmatrix}$, so
\[
x^1 = \begin{pmatrix} 1 \\ 0 \end{pmatrix} e^{t}.
\]
Now we solve the system $(A - I)u = v^1$, namely
\[
\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} u_1 \\ u_2 \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \end{pmatrix}.
\]
This yields $u_2 = 1$ and $u_1$ arbitrary, say $u_1 = 0$. So $u = \begin{pmatrix} 0 \\ 1 \end{pmatrix}$, and the second linearly independent solution is
\[
x^2(t) = \begin{pmatrix} 1 \\ 0 \end{pmatrix} t e^{t} + \begin{pmatrix} 0 \\ 1 \end{pmatrix} e^{t}.
\]
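A numerical version of this computation (our sketch; note that least squares simply picks one particular solution $u$ of the singular but consistent system):

```python
import numpy as np

A = np.array([[1., 1.],
              [0., 1.]])
lam = 1.0
v1 = np.array([1., 0.])                      # eigenvector of the double eigenvalue
# (A - I)u = v1 is singular but consistent; lstsq returns one solution
u, *_ = np.linalg.lstsq(A - lam * np.eye(2), v1, rcond=None)
print(u)                                     # [0, 1]

def x2(t):
    """Second solution x2(t) = v1 * t * e^t + u * e^t."""
    return (v1 * t + u) * np.exp(lam * t)

# sanity check: x2'(t) - A x2(t) ~ 0, via a central finite difference
h, t = 1e-6, 0.7
print((x2(t + h) - x2(t - h)) / (2 * h) - A @ x2(t))   # ~ [0, 0]
```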
Exponential of a matrix: Let $A$ be an $n \times n$ matrix. Define
\[
e^{At} = I + tA + \frac{t^2 A^2}{2!} + \cdots + \frac{t^n A^n}{n!} + \cdots
\]
Taking $t = 1$, we get $e^A$, the exponential of the matrix. The partial sums of the series on the right hand side can always be calculated, and the sum of the series can be found explicitly for simple matrices:

Case 1: $A$ is a nilpotent matrix. Then $A^k = 0$ for some $k$, so the series on the right hand side is simply a polynomial in $t$.
Case 2: $A = D$, a diagonal matrix with diagonal entries $d_1, d_2, \dots, d_n$. Then
\[
e^{tD} = \sum_{n=0}^{\infty} \frac{t^n D^n}{n!} =
\begin{pmatrix}
\sum \frac{d_1^n t^n}{n!} & 0 & \cdots & 0 \\
0 & \sum \frac{d_2^n t^n}{n!} & \cdots & 0 \\
\vdots & & \ddots & \vdots \\
0 & 0 & \cdots & \sum \frac{d_n^n t^n}{n!}
\end{pmatrix}
=
\begin{pmatrix}
e^{t d_1} & 0 & \cdots & 0 \\
0 & e^{t d_2} & \cdots & 0 \\
\vdots & & \ddots & \vdots \\
0 & 0 & \cdots & e^{t d_n}
\end{pmatrix}.
\]
Case 3: $A$ is diagonalizable.
In this case, there exists a non-singular $S$ such that $S^{-1} A S = D$. Then $A = S D S^{-1}$ and
\[
e^{At} = e^{S D S^{-1} t} = \sum \frac{t^n (S D S^{-1})^n}{n!} = S \left( \sum \frac{t^n D^n}{n!} \right) S^{-1} = S\, e^{tD} S^{-1}.
\]
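For a diagonalizable matrix, this factorization is exactly how one can compute $e^{At}$ numerically. A small check (ours, for illustration) against scipy's general-purpose expm:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[3., 0., 0.],
              [0., 3., 0.],
              [1., 1., 2.]])     # diagonalizable (see the earlier example)
t = 0.3
lam, S = np.linalg.eig(A)        # S^{-1} A S = D with D = diag(lam)
etD = np.diag(np.exp(lam * t))   # e^{tD}
print(np.allclose(S @ etD @ np.linalg.inv(S), expm(A * t)))   # True
```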
Without going into the details of the proof, we state the following theorem:

Theorem 18. Let $A$ be an $n \times n$ matrix. Then

1. Fix $M > 0$. Then the series $I + tA + \frac{t^2 A^2}{2!} + \cdots$ converges absolutely and uniformly for $|t| \le M$.
2. $e^{A(t+h)} = e^{At} e^{Ah}$.
3. $\frac{d}{dt} e^{At} = A e^{At}$.
4. $(e^A)^{-1} = e^{-A}$.
Finally, if $A$ is not diagonalizable, then by the uniqueness theorem and (3) of the above theorem, $e^{tA}$ is the fundamental matrix $X$ with $X(0) = I$. So we can find $e^{tA}$ as follows:

Step 1: Let $e^i$ be the solution of $X' = AX$, $X(0) = (0, \dots, 1, \dots, 0)^T$ ($1$ in the $i$th place).
Step 2: Form the matrix with $e^i$ as the $i$th column. This matrix is $e^{tA}$.

Alternatively,

Step 1: Find $n$ linearly independent solutions and form the matrix $X(t)$ with these solutions as its columns.
Step 2: Define $B = X(0)^{-1}$ (which exists!); then $e^{tA} = X(t)\, B$.
Example: Find $e^{tA}$ for $A = \begin{pmatrix} 4 & 3 & 1 \\ -4 & -4 & -2 \\ 8 & 12 & 6 \end{pmatrix}$.

The eigenvalues are $\lambda_1 = \lambda_2 = \lambda_3 = 2$, and the linearly independent eigenvectors are
\[
e^1 = \begin{pmatrix} 1 \\ 0 \\ -2 \end{pmatrix} \quad \text{and} \quad e^2 = \begin{pmatrix} 0 \\ 1 \\ -3 \end{pmatrix}.
\]
So two linearly independent solutions are $x^1 = e^1 e^{2t}$, $x^2 = e^2 e^{2t}$. The third linearly independent solution corresponding to $\lambda = 2$ is of the form $(\alpha t + \beta) e^{2t}$, where $\alpha$ satisfies $(A - 2I)\alpha = 0$ and $\beta$ satisfies $(A - 2I)\beta = \alpha$. We take $\alpha = k_1 e^1 + k_2 e^2$, i.e.,
\[
\alpha = \begin{pmatrix} k_1 \\ k_2 \\ -2k_1 - 3k_2 \end{pmatrix},
\]
and $\beta$ is a solution of
\[
\begin{pmatrix} 2 & 3 & 1 \\ -4 & -6 & -2 \\ 8 & 12 & 4 \end{pmatrix}
\begin{pmatrix} \beta_1 \\ \beta_2 \\ \beta_3 \end{pmatrix}
=
\begin{pmatrix} k_1 \\ k_2 \\ -2k_1 - 3k_2 \end{pmatrix}.
\]
The first two equations imply $k_2 = -2k_1$. A simple non-trivial choice is $k_1 = 1$, $k_2 = -2$. With this choice,
\[
\alpha = \begin{pmatrix} 1 \\ -2 \\ 4 \end{pmatrix},
\]
and we need $\beta$ to be a solution of
\[
\begin{pmatrix} 2 & 3 & 1 \\ -4 & -6 & -2 \\ 8 & 12 & 4 \end{pmatrix}
\begin{pmatrix} \beta_1 \\ \beta_2 \\ \beta_3 \end{pmatrix}
=
\begin{pmatrix} 1 \\ -2 \\ 4 \end{pmatrix}.
\]
A solution of this system is $\beta = (0, 0, 1)^T$. Therefore,
\[
x^3 = \begin{pmatrix} t e^{2t} \\ -2t e^{2t} \\ (4t + 1) e^{2t} \end{pmatrix},
\]
and a fundamental matrix is
\[
X(t) = \begin{pmatrix} e^{2t} & 0 & t e^{2t} \\ 0 & e^{2t} & -2t e^{2t} \\ -2e^{2t} & -3e^{2t} & (4t + 1) e^{2t} \end{pmatrix}, \quad
X(0) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ -2 & -3 & 1 \end{pmatrix}.
\]
From this we can find $e^{tA} = X(t)\, X(0)^{-1}$.
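This example is easy to verify numerically; the following check (ours, purely for illustration) compares $X(t) X(0)^{-1}$ with scipy's expm:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[ 4.,  3.,  1.],
              [-4., -4., -2.],
              [ 8., 12.,  6.]])

def X(t):
    """Fundamental matrix built from the three solutions found above."""
    e = np.exp(2.0 * t)
    return np.array([[ e,        0.0,      t * e            ],
                     [ 0.0,      e,       -2.0 * t * e      ],
                     [-2.0 * e, -3.0 * e, (4.0 * t + 1) * e]])

t = 0.4
print(np.allclose(X(t) @ np.linalg.inv(X(0.0)), expm(A * t)))   # True
```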
2.4 Non-homogeneous problems
The problem
\[
x' = A(t)x + f(t)
\]
is called a non-homogeneous problem if $f(t) \not\equiv 0$. Assuming that the homogeneous part of the problem, $x' = A(t)x$, is solvable, we would like to study the general solution of the non-homogeneous problem.
Scalar case: Consider the equation $x' + ax = f(t)$ with $a$ constant and $f(t)$ continuous. We know that $x_h(t) = c e^{-at}$ is the general solution of $x' + ax = 0$. For a solution of the non-homogeneous equation, we consider $x_p(t) = c(t) e^{-at}$. Substituting this in the non-homogeneous equation,
\[
f(t) = x_p' + a x_p = c'(t) e^{-at} - a c(t) e^{-at} + a c(t) e^{-at} = c'(t) e^{-at}.
\]
That is, $c'(t) = e^{at} f(t)$. Therefore, $x_p(t) = e^{-at} \int e^{as} f(s)\, ds$ is a solution of the non-homogeneous equation.
Linear systems: Consider the first order system $x' = Ax + f$, where $A$ is an $n \times n$ matrix and $X$ is the fundamental matrix of the homogeneous system $x' = Ax$. For simplicity, we take $n = 2$, i.e.,
\[
\begin{aligned}
x_1' &= a_{11} x_1 + a_{12} x_2 \\
x_2' &= a_{21} x_1 + a_{22} x_2.
\end{aligned}
\]
Suppose $(x_1, x_2)^T$, $(y_1, y_2)^T$ are two solutions. Then we can write
\[
\begin{pmatrix} x_1' & y_1' \\ x_2' & y_2' \end{pmatrix}
=
\begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}
\begin{pmatrix} x_1 & y_1 \\ x_2 & y_2 \end{pmatrix}.
\]
Let $X$ be the fundamental matrix $\begin{pmatrix} x_1 & y_1 \\ x_2 & y_2 \end{pmatrix}$. Then we can expect a solution of the non-homogeneous system of the form
\[
y_p = X u,
\]
where $u$ is a function of $t$. Substituting this in the system $x' = Ax + f$, we get
\[
X u' + X' u = A X u + f.
\]
Since $X' = AX$, the above equation reduces to
\[
X u' = f,
\]
in other words, $u' = X^{-1} f$. Therefore,
\[
y_p(t) = X(t) u(t) = X(t) \int_{t_0}^{t} X^{-1}(s) f(s)\, ds
\]
is a solution of the non-homogeneous system. When $A$ is constant, $X(t) = e^{tA}$, and by (2) and (4) of Theorem 18 this equals
\[
\int_{t_0}^{t} X(t - s) f(s)\, ds,
\]
which is the same formula obtained using Duhamel's principle. Therefore, the general solution is
\[
y(t) = y_h(t) + y_p(t),
\]
where $y_h(t)$ is a solution of the homogeneous system $X' = AX$. This is known as the variation of parameters formula.
Higher order equations: Let us take the second order equation $x'' + a_1 x' + a_2 x = f(t)$. This equation can be written as an equivalent $2 \times 2$ system. Let $\varphi, \psi$ be two linearly independent solutions of the homogeneous part $x'' + a_1 x' + a_2 x = 0$. Then the fundamental matrix of the system is
\[
X = \begin{pmatrix} \varphi & \psi \\ \varphi' & \psi' \end{pmatrix}
\quad \text{and} \quad
X^{-1} = \frac{1}{W} \begin{pmatrix} \psi' & -\psi \\ -\varphi' & \varphi \end{pmatrix}.
\]
Therefore,
\[
X^{-1} f = \frac{1}{W} \begin{pmatrix} \psi' & -\psi \\ -\varphi' & \varphi \end{pmatrix} \begin{pmatrix} 0 \\ f(t) \end{pmatrix}
= \begin{pmatrix} -\frac{\psi f}{W} \\[2pt] \frac{\varphi f}{W} \end{pmatrix}
\]
and
\[
y = X \int X^{-1} f = \begin{pmatrix} \varphi & \psi \\ \varphi' & \psi' \end{pmatrix} \begin{pmatrix} -\int \frac{\psi f}{W} \\[2pt] \int \frac{\varphi f}{W} \end{pmatrix}.
\]
Taking the first component, we get a solution of the second order equation $x'' + a_1 x' + a_2 x = f$ as
\[
y(t) = -\varphi(t) \int \frac{\psi f}{W} + \psi(t) \int \frac{\varphi f}{W}.
\]
This is called the variation of parameters formula; it gives a particular solution of the non-homogeneous equation.
Suppose $x_h$ is the general solution of the homogeneous equation and $x_p$ is a particular solution of the non-homogeneous equation. Let $y$ be any solution of the non-homogeneous equation; then $y - x_p$ is a solution of the homogeneous equation. Therefore, $y - x_p = x_h$, i.e., $y = x_p + x_h$ is the general solution of the non-homogeneous equation.
Example 1 ($2 \times 2$ system): Consider the problem
\[
x' = \begin{pmatrix} x \\ y \end{pmatrix}' = \begin{pmatrix} 1 & 1 \\ -3 & 5 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} + \begin{pmatrix} 2t - 2 \\ -4t \end{pmatrix}.
\]
The general solution of the homogeneous part is
\[
c_1 \begin{pmatrix} e^{2t} \\ e^{2t} \end{pmatrix} + c_2 \begin{pmatrix} e^{4t} \\ 3e^{4t} \end{pmatrix}.
\]
So
\[
X = \begin{pmatrix} e^{2t} & e^{4t} \\ e^{2t} & 3e^{4t} \end{pmatrix}, \quad
X^{-1} = \frac{e^{-6t}}{2} \begin{pmatrix} 3e^{4t} & -e^{4t} \\ -e^{2t} & e^{2t} \end{pmatrix}.
\]
Therefore,
\[
X^{-1} f = \frac{e^{-6t}}{2} \begin{pmatrix} 3e^{4t} & -e^{4t} \\ -e^{2t} & e^{2t} \end{pmatrix} \begin{pmatrix} 2t - 2 \\ -4t \end{pmatrix}
= \begin{pmatrix} 5t e^{-2t} - 3e^{-2t} \\ -3t e^{-4t} + e^{-4t} \end{pmatrix}.
\]
Hence
\[
u = \int \begin{pmatrix} 5t e^{-2t} - 3e^{-2t} \\ -3t e^{-4t} + e^{-4t} \end{pmatrix} dt
= \begin{pmatrix} -\frac{5}{2} t e^{-2t} + \frac{1}{4} e^{-2t} \\[2pt] \frac{3}{4} t e^{-4t} - \frac{1}{16} e^{-4t} \end{pmatrix}.
\]
Therefore,
\[
y_p = X u = \begin{pmatrix} e^{2t} & e^{4t} \\ e^{2t} & 3e^{4t} \end{pmatrix}
\begin{pmatrix} -\frac{5}{2} t e^{-2t} + \frac{1}{4} e^{-2t} \\[2pt] \frac{3}{4} t e^{-4t} - \frac{1}{16} e^{-4t} \end{pmatrix}
= \begin{pmatrix} -\frac{7}{4} t + \frac{3}{16} \\[2pt] -\frac{1}{4} t + \frac{1}{16} \end{pmatrix}.
\]
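One can check the computed particular solution directly, without redoing the integrals. A quick numerical check (ours) compares $y_p'$ with $A y_p + f$ via a finite difference:

```python
import numpy as np

A = np.array([[1., 1.], [-3., 5.]])
f = lambda t: np.array([2 * t - 2, -4 * t])
yp = lambda t: np.array([-7 * t / 4 + 3 / 16, -t / 4 + 1 / 16])

h = 1e-6
for t in (0.0, 0.5, 1.3):
    lhs = (yp(t + h) - yp(t - h)) / (2 * h)          # ~ yp'(t)
    print(np.allclose(lhs, A @ yp(t) + f(t)))        # True at every test point
```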
Example 2 (second order equation): Consider the problem $x'' + x = \csc t$. Linearly independent solutions of the homogeneous part are $\varphi = \sin t$, $\psi = \cos t$, and $W = -1$. By the formula given above, we get
\[
y_p = -\varphi(t) \int \frac{\psi f}{W} + \psi(t) \int \frac{\varphi f}{W}
= \sin t \int \cos t \csc t\, dt - \cos t \int \sin t \csc t\, dt
= \sin t \, \log(\sin t) - t \cos t.
\]
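Again, a one-line symbolic verification (ours, using sympy) that this $y_p$ solves $x'' + x = \csc t$:

```python
import sympy as sp

t = sp.symbols('t', positive=True)
y = sp.sin(t) * sp.log(sp.sin(t)) - t * sp.cos(t)
print(sp.simplify(y.diff(t, 2) + y - sp.csc(t)))   # 0
```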
Method of undetermined coefficients

This method is also known as the annihilator method. It can be applied to equations with constant coefficients, or to equations which can be transformed into equations with constant coefficients. Let us consider a linear second order differential operator $L(x) = x'' + a_1 x' + a_2 x$, where $a_1, a_2$ are constants, and the equation $L(x) = f(t)$, where $f(t)$ is a function such that $M(f) = 0$ for some linear differential operator $M$ with constant coefficients. Let $m_1, m_2$ be the roots of the characteristic polynomial $p(m)$ corresponding to $L$, and let $\varphi_1, \varphi_2$ be the linearly independent solutions of $L(x) = 0$. Then we note that for any solution $x$ of $L(x) = f$, we have
\[
M(L(x)) = M(f) = 0,
\]
and the operator $M \circ L$ is again a differential operator with constant coefficients. It is easy to check that the characteristic polynomial $r(m)$ of $M \circ L$ is the product $p(m) q(m)$, where $q(m)$ is the characteristic polynomial corresponding to $M$. Let $m_1, m_2, m_3, m_4, \dots, m_n$ be the roots of $r(m)$ and let $\varphi_1, \varphi_2, \dots, \varphi_n$ be linearly independent solutions of $M(L(x)) = 0$. Among these $\varphi$'s, the first two satisfy $L(x) = 0$. So we choose the candidate for a particular solution of $L(x) = f$ as
\[
x_p = c_3 \varphi_3 + c_4 \varphi_4 + \cdots + c_n \varphi_n.
\]
Substituting this in the equation $L(x) = f$ and comparing coefficients, we can solve for the $c_i$'s.
Example 1: Find a particular solution of $x'' + x = \sin t$.

Solution: In this case $L(x) = x'' + x$ and $M(x) = x'' + x$. The linearly independent solutions of $L(x) = 0$ are $\sin t, \cos t$; the linearly independent solutions of $M(L(x)) = 0$ are $\sin t, \cos t, t \sin t, t \cos t$. Since $\sin t$ and $\cos t$ are already solutions of the homogeneous part $x'' + x = 0$, the candidate for a particular solution is
\[
x_p(t) = c_1 t \sin t + c_2 t \cos t.
\]
Substituting this in the equation, we get
\[
-2c_2 \sin t + 2c_1 \cos t = \sin t.
\]
Therefore, $c_1 = 0$, $c_2 = -\frac{1}{2}$, and the required solution is $x_p = -\frac{1}{2}\, t \cos t$.
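sympy automates exactly this procedure; a minimal check of the example above (ours, using the undetermined-coefficients hint of dsolve):

```python
import sympy as sp

t = sp.symbols('t')
x = sp.Function('x')
sol = sp.dsolve(sp.Eq(x(t).diff(t, 2) + x(t), sp.sin(t)), x(t),
                hint='nth_linear_constant_coeff_undetermined_coefficients')
print(sol)   # x(t) = C1*sin(t) + C2*cos(t) - t*cos(t)/2  (up to formatting)
```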
We have the following simple observations from the above method:

Case 1: If $f(t)$ is not a solution of $L(x) = 0$. In the table below, $f(t)$ is the right hand side and $x_p$ is the candidate for a particular solution.

   f(t)                                     x_p
1. $a e^{t}$                                $A e^{t}$
2. $a \sin \beta t$ or $a \cos \beta t$     $A \sin \beta t + B \cos \beta t$
3. $n$th degree polynomial                  $A_n t^n + \cdots + A_0$

Case 2: In case $f(t)$ is a solution of $L(x) = 0$, as was observed in the above example, we multiply by a factor of $t$ to get the possible $x_p$ (see the table below).

   f(t)                                     x_p
1. $a e^{t}$                                $A t e^{t}$
2. $a \sin \beta t$ or $a \cos \beta t$     $A t \sin \beta t + B t \cos \beta t$
3. $n$th degree polynomial                  $(n+1)$th degree polynomial
Example 2: Using the method of undetermined coefficients, find a particular solution of
\[
x' = \begin{pmatrix} x \\ y \end{pmatrix}' = \begin{pmatrix} 1 & 1 \\ -3 & 5 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} + \begin{pmatrix} 2t - 2 \\ -4t \end{pmatrix}.
\]

Solution: By observation, it seems reasonable to try
\[
x_p = \begin{pmatrix} at + b \\ ct + d \end{pmatrix}
\]
and determine the constants $a, b, c, d$. Substituting this in the given system, we obtain
\[
\begin{aligned}
a &= (at + b) + (ct + d) + 2t - 2 \\
c &= -3(at + b) + 5(ct + d) - 4t.
\end{aligned}
\]
This reduces to solving the system of equations
\[
a + c + 2 = 0, \quad 3a - 5c + 4 = 0, \quad b + d - 2 - a = 0, \quad c + 3b - 5d = 0.
\]
Solving the first two equations, we get $a = -\frac{7}{4}$, $c = -\frac{1}{4}$. Now, from the other two equations, we find $b = \frac{3}{16}$, $d = \frac{1}{16}$. Therefore,
\[
x_p = \begin{pmatrix} -\frac{7}{4} t + \frac{3}{16} \\[2pt] -\frac{1}{4} t + \frac{1}{16} \end{pmatrix}.
\]
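The four linear equations for $(a, b, c, d)$ can also be solved in one shot numerically; a small sketch (ours):

```python
import numpy as np

# rows encode: a + c = -2, 3a - 5c = -4, -a + b + d = 2, 3b + c - 5d = 0
M = np.array([[ 1., 0.,  1.,  0.],
              [ 3., 0., -5.,  0.],
              [-1., 1.,  0.,  1.],
              [ 0., 3.,  1., -5.]])
rhs = np.array([-2., -4., 2., 0.])
print(np.linalg.solve(M, rhs))   # [-1.75  0.1875 -0.25  0.0625] = (a, b, c, d)
```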
2.5 Problems
1. Show that the following IVPs have a unique solution for all $a, b \in \mathbb{R}$:
(a) $x'' = x|x|$, $x(0) = a$, $x'(0) = b$,
(b) $x'' = \max\{0, x|x|\}$, $x(0) = a$, $x'(0) = b$.
2. Let $f(t, x)$ be $T$-periodic with respect to $t$ and let $x(t)$ be a solution of $x'' = f(t, x)$ defined for all $t \in \mathbb{R}$, such that $x(0) = x(T)$ and $x'(0) = x'(T)$. Show that $x(t)$ is periodic.
3. Let $f(x)$ be a differentiable function and let $x : \mathbb{R} \to \mathbb{R}$ satisfy $x^{(4)} = f(x)$, $x^{(i)}(0) = 0$ for $i = 1, 2, 3$. Show that $x(t)$ is even.
4. Find the solutions of the following initial value problems:
(a) $y'' - 2y' - 3y = 0$, $y(0) = 0$, $y'(0) = 1$; (b) $y'' + 10y = 0$, $y(0) = \pi$, $y'(0) = \pi^2$.
5. Find a function $\varphi$ which has a continuous derivative on $0 \le x \le 2$, satisfies $\varphi(0) = 0$, $\varphi'(0) = 1$, and solves $y'' - y = 0$ for $0 \le x \le 1$ and $y'' - 9y = 0$ for $1 \le x \le 2$.
6. Let $\varphi_1, \varphi_2$ be two differentiable functions on an interval $I$, which are not necessarily solutions of an equation $L(y) = 0$. Prove the following:
(a) If $\varphi_1, \varphi_2$ are linearly dependent on $I$, then $W(\varphi_1, \varphi_2)(x) = 0$ for all $x$ in $I$.
(b) If $W(\varphi_1, \varphi_2)(x_0) \ne 0$ for some $x_0$ in $I$, then $\varphi_1, \varphi_2$ are linearly independent on $I$.
(c) $W(\varphi_1, \varphi_2)(x) = 0$ for all $x$ in $I$ does not imply that $\varphi_1, \varphi_2$ are linearly dependent on $I$.
(d) $W(\varphi_1, \varphi_2)(x) = 0$ for all $x$ in $I$, together with $\varphi_2(x) \ne 0$ on $I$, implies that $\varphi_1, \varphi_2$ are linearly dependent on $I$.
7. Find all solutions of the following equations:
(a) $4y'' - y = e^x$ (b) $y'' + 4y = \cos x$ (c) $y'' + 9y = \sin 3x$.
8. Let $L(y) = y'' + a_1 y' + a_2 y$, where $a_1, a_2$ are constants, and let $p$ be the characteristic polynomial $p(r) = r^2 + a_1 r + a_2$.
(a) If $A, \alpha$ are constants and $p(\alpha) \ne 0$, show that there is a solution $\varphi$ of $L(y) = A e^{\alpha x}$ of the form $\varphi(x) = B e^{\alpha x}$, where $B$ is a constant.
(b) Compute a particular solution of $L(y) = A e^{\alpha x}$ in case $p(\alpha) = 0$.
9. Are the following sets of functions defined on $-\infty < x < \infty$ linearly dependent or independent there? Why?
(a) $\varphi_1(x) = 1$, $\varphi_2(x) = x$, $\varphi_3(x) = x^3$
(c) $\varphi_1(x) = x$, $\varphi_2(x) = e^{2x}$, $\varphi_3(x) = |x|$.
10. Use the method of undetermined coefficients to find a particular solution of each of the following equations:
(a) $y'' + 4y = \cos x$ (b) $y'' + 4y = \sin 2x$ (c) $y'' - y' - 2y = x^2 + \cos x$
11. Find a real solution of the following Cauchy–Euler equations:
(a) $x^2 y'' - 4x y' + 6y = 0$, (b) $4x^2 y'' + 12x y' + 3y = 0$, (c) $x^2 y'' + 7x y' + 9y = 0$,
(d) $x^2 y'' - 2.5x y' - 2y = 0$, (e) $x^2 y'' + 7x y' + 13y = 0$.
12. Solve the initial value problems:
(a) $x^2 y'' - 2x y' + 2y = 0$, $y(1) = 1.5$, $y'(1) = 1$.
(b) $x^2 y'' + 3x y' + y = 0$, $y(1) = 3$, $y'(1) = -4$.
(c) $x^2 y'' - 3x y' + 4y = 0$, $y(1) = 0$, $y'(1) = 3$.
13. Find all linearly independent solutions of the following equations:
(a) $y''' - 8y = 0$ (b) $y^{(4)} + 16y = 0$ (c) $y^{(100)} + 100y = 0$ (d) $y^{(4)} - 16y = 0$
14. Use the variation of parameters method to find a particular solution of the following equations:
(a) $y''' - y' = x$, (b) $y^{(4)} + 16y = \cos x$, (c) $y^{(4)} - 4y^{(3)} + 6y'' - 4y' + y = e^x$.
15. Solve $X' = AX$, where
(a) $A = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 2 & 1 \\ 1 & 0 & 3 \end{pmatrix}$
(b) $A = \begin{pmatrix} 1 & 0 & 1 \\ 0 & -1 & 0 \\ 0 & 0 & 4 \end{pmatrix}$, $X(0) = \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix}$
16. Solve the non-homogeneous system of equations
\[
\begin{aligned}
x' &= x + 2y + 2t \\
y' &= 3y + t^2
\end{aligned}
\]
17. Solve the non-homogeneous system of equations
\[
\begin{aligned}
x' &= 2x + 6y + e^t \\
y' &= x + 3y - e^t
\end{aligned}
\]