
Ordinary differential equations
Lecture 6
April 23, 2013
Existence and uniqueness
    x' = f(t, x),    x(t_0) = x_0.    (1)
f ∈ C(D, R^n), D ⊂ R^{n+1} open, (t_0, x_0) ∈ D.
Theorem (Picard-Lindelöf). Assume that f satisfies a Lipschitz condition with respect to x in D. There is a unique local solution x ∈ C^1(I, R^n) of (1) in some open interval I containing t_0.
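The proof is based on the successive approximations x_{m+1}(t) = x_0 + ∫_{t_0}^t f(s, x_m(s)) ds (Picard iteration). As a minimal sketch, not part of the lecture, here is Picard iteration for the made-up test problem x' = x, x(0) = 1, whose iterates are exactly the Taylor partial sums of e^t:

```python
import math

# Hypothetical test problem (not from the lecture): x' = x, x(0) = 1,
# with exact solution e^t.  The Picard iterates
#     x_{m+1}(t) = 1 + ∫_0^t x_m(s) ds
# are polynomials; represent them by coefficient lists [a_0, a_1, ...].

def picard_step(coeffs):
    """One Picard iterate: integrate term by term, then add x(0) = 1."""
    return [1.0] + [c / (k + 1) for k, c in enumerate(coeffs)]

def eval_poly(coeffs, t):
    return sum(c * t**k for k, c in enumerate(coeffs))

x = [1.0]                  # x_0(t) ≡ 1
for _ in range(10):
    x = picard_step(x)     # x is now the degree-10 Taylor polynomial of e^t

print(eval_poly(x, 1.0))   # close to e ≈ 2.71828
```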
When does f satisfy a Lipschitz cond. w.r.t. x?
Proposition. Assume that f is continuous with continuous partial derivatives
with respect to x in D. Then f satisfies a Lipschitz condition with respect to x in
D.
Proof. Choose a, b > 0 such that

    Q := {(t, x) : |t − t_0| ≤ a, ‖x − x_0‖ ≤ b} ⊂ D.
Let D_x f(t, x) ∈ M_n(R) denote the Jacobian matrix of f with respect to x and

    ‖A‖ = sup_{v ∈ R^n, v ≠ 0} ‖Av‖/‖v‖ = max_{‖v‖=1} ‖Av‖

the matrix norm (induced by the Euclidean norm). Then (t, x) ↦ ‖D_x f(t, x)‖ is continuous on D and hence on the compact set Q, so

    K := sup_{(t,x)∈Q} ‖D_x f(t, x)‖ = max_{(t,x)∈Q} ‖D_x f(t, x)‖ < ∞.
We have

    f(t, x) − f(t, y) = ∫_0^1 d/ds f(t, y + s(x − y)) ds
                      = ∫_0^1 Σ_{k=1}^n ∂f/∂x_k (t, y + s(x − y)) (x_k − y_k) ds
                      = ∫_0^1 D_x f(t, y + s(x − y)) (x − y) ds,

[Figure: the segment y + s(x − y), 0 ≤ s ≤ 1, joining y and x.]
so

    ‖f(t, x) − f(t, y)‖ ≤ ∫_0^1 ‖D_x f(t, y + s(x − y)) (x − y)‖ ds
                        ≤ ∫_0^1 ‖D_x f(t, y + s(x − y))‖ ‖x − y‖ ds
                        ≤ K ‖x − y‖.
Remark: The book assumes convexity with respect to x, but this is not needed
here (Q is automatically convex in x).
Corollary. Assume f and D_x f cont. in a neighbourhood of (t_0, x_0). Then the IVP (1) has a unique solution on an open interval containing t_0.
Corollary. The IVP

    u^(n) = F(t, u, u', . . . , u^(n−1)),
    u(t_0) = u_0, u'(t_0) = u_1, . . . , u^(n−1)(t_0) = u_{n−1}

has a unique solution in some open interval containing t_0 if F(t, x_1, . . . , x_n) and its partial derivatives w.r.t. x_1, . . . , x_n are cont. in a neighb. of (t_0, u_0, . . . , u_{n−1}).
Numerical methods and Peano’s theorem
We can construct approximate solutions by replacing derivatives with finite differences; this is especially useful in combination with computers.
Taylor expansion:
    x(t_0 + h) = x(t_0) + x'(t_0)h + O(h^2).
Euler’s method
Set

    x_h(t_{m+1}) = x_h(t_m) + f(t_m, x_h(t_m)) h,    t_m = t_0 + mh.
If needed, use linear interpolation in between (can’t define x for every t on a
computer anyway).
Also of theoretical interest. As h → 0 one can prove that a sequence of approximate
solutions converges to a solution (not required for the exam).
Theorem (Peano’s theorem). Suppose that f ∈ C(D, Rn ). There exists a solution
of the IVP on some interval containing t0 .
Remark: The solution need not be unique.
In practice, one uses better algorithms, e.g. 4th order Runge-Kutta.
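For comparison, here is a sketch of one step of the classical 4th order Runge-Kutta method (standard textbook formula; the test problem is again the hypothetical x' = x from above):

```python
import math

def rk4_step(f, t, x, h):
    """One step of the classical 4th order Runge-Kutta method."""
    k1 = f(t, x)
    k2 = f(t + h/2, x + h*k1/2)
    k3 = f(t + h/2, x + h*k2/2)
    k4 = f(t + h, x + h*k3)
    return x + h*(k1 + 2*k2 + 2*k3 + k4)/6

# Made-up test problem x' = x, x(0) = 1: ten steps of size 0.1 already
# beat Euler's method with a thousand steps of size 0.001.
t, x, h = 0.0, 1.0, 0.1
for _ in range(10):
    x = rk4_step(lambda s, y: y, t, x, h)
    t += h
print(abs(x - math.e))  # global error is O(h^4)
```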
Dependence on the data
It is important to know that the solution depends continuously on the ‘data’ (x0
and f ). E.g. if x0 comes from physical measurements there will be measuring
errors. We will show that x depends continuously on the data in a certain sense.
Lemma (Grönwall’s inequality — simple version). Assume that u ∈ C[0, T] satisfies

    u(t) ≤ α + ∫_0^t (βu(s) + γ) ds,    t ∈ [0, T],

where α, γ ∈ R and β ≥ 0. Then

    u(t) ≤ αe^{βt} + (γ/β)(e^{βt} − 1),    t ∈ [0, T]

(for β = 0 the right-hand side is interpreted as its limit α + γt).
Proof. Introduce the auxiliary function

    z(t) = α + ∫_0^t (βu(s) + γ) ds.

Then, since u(t) ≤ z(t),

    z'(t) = βu(t) + γ ≤ βz(t) + γ,    t ∈ [0, T].

This implies that

    z'(t) − βz(t) ≤ γ  ⇒  (e^{−βt} z(t))' ≤ γe^{−βt}
                       ⇒  e^{−βt} z(t) − z(0) ≤ (γ/β)(1 − e^{−βt}),    z(0) = α,
                       ⇒  u(t) ≤ z(t) ≤ αe^{βt} + (γ/β)(e^{βt} − 1),    t ∈ [0, T].
Note: more advanced versions in Ch. 8.10 (not needed for exam).
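As a numerical sanity check (my own illustration, not from the lecture): the function z(t) = αe^{βt} + (γ/β)(e^{βt} − 1) solves z' = βz + γ, z(0) = α, so it turns the integral inequality into an equality, i.e. the bound is sharp. The parameter values below are arbitrary:

```python
import math

# Arbitrary parameter values for the check (my choice, not from the lecture).
alpha, beta, gamma, T = 2.0, 0.5, 1.0, 1.0

def bound(t):
    """Right-hand side of Grönwall's conclusion."""
    return alpha*math.exp(beta*t) + gamma/beta*(math.exp(beta*t) - 1)

def rhs(u, t, n=10000):
    """Trapezoid approximation of alpha + ∫_0^t (beta u(s) + gamma) ds."""
    h = t/n
    s = 0.5*(beta*u(0.0) + gamma) + 0.5*(beta*u(t) + gamma)
    s += sum(beta*u(k*h) + gamma for k in range(1, n))
    return alpha + h*s

# z = bound satisfies z' = beta z + gamma, z(0) = alpha, so it turns the
# integral inequality into an equality (the bound is sharp):
print(bound(T), rhs(bound, T))   # the two numbers agree

# Any smaller u satisfying the hypothesis stays below the bound, e.g. u ≡ alpha:
u = lambda t: alpha
print(u(T) <= bound(T))          # True
```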
Theorem. Suppose that f, g ∈ C(D, R^n) and that f satisfies a uniform Lipschitz condition with respect to x in D. If x(t) and y(t) are solutions of

    x' = f(t, x),  x(t_0) = x_0        and        y' = g(t, y),  y(t_0) = y_0,

then

    ‖x(t) − y(t)‖ ≤ ‖x_0 − y_0‖ e^{L|t−t_0|} + (M/L)(e^{L|t−t_0|} − 1),

where

    L = sup_{(t,x),(t,y)∈K, x≠y} ‖f(t, x) − f(t, y)‖ / ‖x − y‖,
    M = sup_{(t,x)∈K} ‖f(t, x) − g(t, x)‖,

and K ⊂ D is a compact set containing the graphs of x and y.
Proof. Assume w.l.o.g. that t_0 = 0 and t ≥ 0. Using the integral formulation of the IVP, we obtain

    ‖x(t) − y(t)‖ ≤ ‖x_0 − y_0‖ + ∫_0^t ‖f(s, x(s)) − g(s, y(s))‖ ds,

where

    ‖f(s, x(s)) − g(s, y(s))‖ ≤ ‖f(s, x(s)) − f(s, y(s))‖ + ‖f(s, y(s)) − g(s, y(s))‖
                              ≤ L‖x(s) − y(s)‖ + M.

Apply Grönwall’s ineq. with u(t) = ‖x(t) − y(t)‖, α = ‖x_0 − y_0‖, β = L and γ = M.
If f = g we obtain

    ‖x(t) − y(t)‖ ≤ ‖x_0 − y_0‖ e^{L|t−t_0|},

so the solution depends continuously on the initial value. Note that the bound grows exponentially in t!
Note: From the proof it follows that one can take M = sup_{t∈I} ‖f(t, y(t)) − g(t, y(t))‖.
Example (Simple pendulum).
No air resistance or friction. Movement in a plane. Ignore mass of rod.

[Figure: pendulum of length ℓ and mass m, displaced by angle x; the weight mg has tangential component mg sin x.]

Newton’s 2nd law (tangential acceleration):

    mℓx'' = −mg sin x  ⇔  x'' + (g/ℓ) sin x = 0.

Non-dimensionalise by setting x(t) = x̃(t̃), t̃ = √(g/ℓ) t:

    x̃'' + sin x̃ = 0.

Remove tilde for notational convenience:

    x'' + sin x = 0.

Assume x ≈ 0 ⇒ sin x ≈ x. Approximate model:

    x'' + x = 0.
2nd order ⇒ 2 initial conditions. How big is the difference between the solutions if

    x(0) = 0,  x'(0) = 1?

First order systems:

    x_1' = x_2,  x_2' = −sin x_1        and        y_1' = y_2,  y_2' = −y_1,

with x_1(0) = y_1(0) = 0 and x_2(0) = y_2(0) = 1.
Explicit solution:

    y_1(t) = sin t,  y_2(t) = cos t.
    ‖f(t, y(t)) − g(t, y(t))‖ = ‖(y_2(t), −sin y_1(t)) − (y_2(t), −y_1(t))‖ = |sin y_1(t) − y_1(t)|
                              ≤ |y_1(t)|^3 / 3! ≤ |t|^3 / 3! ≤ 1/(2^3 · 3!) = 1/48,

if |t| ≤ 1/2 (note |y_1(t)| = |sin t| ≤ |t|), so M = 1/48. Can take L = 1:

    ‖f(t, x) − f(t, y)‖^2 = (x_2 − y_2)^2 + (sin x_1 − sin y_1)^2 ≤ (x_2 − y_2)^2 + (x_1 − y_1)^2 = ‖x − y‖^2,

since (sin x_1 − sin y_1)^2 ≤ (x_1 − y_1)^2 by the mean value theorem.
The difference at t = 1/2 can be estimated by

    (M/L)(e^{Lt} − 1) = (1/48)(e^{0.5} − 1) ≈ 0.01.
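One can check this estimate numerically. The sketch below (my own illustration, not from the lecture) integrates both systems with a standard RK4 stepper and compares the actual difference at t = 1/2 with the bound (1/48)(e^{0.5} − 1):

```python
import math

def rk4(f, x, t, h, steps):
    """Integrate x' = f(t, x) with the classical RK4 method (x a tuple)."""
    for _ in range(steps):
        k1 = f(t, x)
        k2 = f(t + h/2, tuple(xi + h/2*ki for xi, ki in zip(x, k1)))
        k3 = f(t + h/2, tuple(xi + h/2*ki for xi, ki in zip(x, k2)))
        k4 = f(t + h, tuple(xi + h*ki for xi, ki in zip(x, k3)))
        x = tuple(xi + h/6*(a + 2*b + 2*c + d)
                  for xi, a, b, c, d in zip(x, k1, k2, k3, k4))
        t += h
    return x

pend = lambda t, x: (x[1], -math.sin(x[0]))   # x'' + sin x = 0
lin  = lambda t, y: (y[1], -y[0])             # y'' + y = 0

x = rk4(pend, (0.0, 1.0), 0.0, 0.0005, 1000)  # integrate up to t = 1/2
y = rk4(lin,  (0.0, 1.0), 0.0, 0.0005, 1000)
diff  = math.hypot(x[0] - y[0], x[1] - y[1])
bound = (math.exp(0.5) - 1)/48
print(diff <= bound)  # True
```

The actual difference comes out well below the theoretical bound, as expected.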
If f is C^1 one can show that φ ∈ C^1 (w.r.t. all variables), where φ(t, x_0) is the solution with initial value x_0 (Ch. 8.3).
Extendability of solutions
    x' = f(t, x),    x(t_0) = x_0,
f continuous and satisfies a Lipschitz cond. w.r.t. x.
Suppose that x and y are solutions defined on I, J (open intervals). y is called
an extension of x if I ⊂ J and y(t) = x(t), ∀t ∈ I.
In the book it is only assumed that f is continuous, in which case extensions are
not unique in general. In our case they are.
Proposition. Suppose that y_1, y_2 are both extensions of x to J_1 and J_2, respectively. Then y_1(t) = y_2(t) for all t ∈ J_1 ∩ J_2.
Proof. Suppose not and let J_1 ∩ J_2 = (a, b). Then there exists a maximal interval (t_l, t_r) such that y_1(t) = y_2(t) for t ∈ (t_l, t_r), and either t_l > a or t_r < b. Suppose t_r < b. Continuity ⇒ y_1(t_r) = y_2(t_r). Picard-Lindelöf ⇒ the solutions agree in a neighbourhood of t_r. Contradiction!
This implies that there is a maximal interval (α, ω) to which the solution can be extended and that the extension is unique. From now on, we always assume that x is defined on (α, ω).
Theorem. Let f ∈ C(D, R^n) satisfy a Lipschitz condition with respect to x. If K ⊂ D is compact, then (t, x(t)) ∉ K for t sufficiently close to α or ω.
Proof. Suppose not. Then ∃{P_k}, P_k = (t_k, x(t_k)), s.t. P_k ∈ K ∀k and t_k → ω^−, say. Bolzano-Weierstrass ⇒ ∃ convergent subsequence

    P_{k_j} = (t_{k_j}, x(t_{k_j})) → (ω, y_0) ∈ K ⊂ D.

D open ⇒ ∃δ > 0 s.t.

    Q := {(t, x) : ω − δ ≤ t ≤ ω + δ, ‖x − y_0‖ ≤ δ} ⊂ D.
Let

    M = max_{(t,x)∈Q} ‖f(t, x)‖.

Then

    Q_{k_j} := {(t, x) : t_{k_j} ≤ t ≤ ω + δ, ‖x − x(t_{k_j})‖ ≤ δ/2} ⊂ Q

for j sufficiently large and we can solve the ODE starting from (t_{k_j}, x(t_{k_j})) using Picard-Lindelöf with solution defined on [t_{k_j}, a], with

    a := min(ω + δ, t_{k_j} + δ/(2M)).

But a > ω for j large, so that we have extended the solution beyond ω. This is a contradiction.
We say that (t, x(t)) → ∂D as t → ω − or α+ .
Four possibilities (as t → ω − ):
(1) ω = ∞, global solution.
(2) ω < ∞, ‖x(t)‖ → ∞, the solution blows up in finite time.
(3) (t, x(t)) → (ω, y) ∈ ∂D.
(4) (t, x(t)) approaches ∂D but has no limit (e.g. x(t) = sin(1/t), x'(t) = −cos(1/t)/t^2, t < 0).
Example. First order scalar eq. x' = f(t, x), f, f_x continuous on (a, b) × (c, d). If ω < b, then x(t) → c or d as t → ω^− (Thm. 1.3).
Corollary. Suppose f is as in the previous theorem, defined on R^{n+1} and bounded. Then (α, ω) = (−∞, ∞).
We can do a bit better.
Theorem. Suppose f is as in the previous theorem with D = I × R^n, and that for every [T_1, T_2] ⊂ I there exist M, L ∈ R s.t.

    ‖f(t, x)‖ ≤ M + L‖x‖,    (t, x) ∈ [T_1, T_2] × R^n.

Then all solutions of (1) are defined for all t ∈ I.
Proof. Take t_0 = 0 w.l.o.g. and assume that ω < ∞. Choosing [T_1, T_2] = [0, ω], we find that

    ‖x(t)‖ ≤ ‖x_0‖ + ∫_0^t (M + L‖x(s)‖) ds,    0 ≤ t < ω.

Grönwall’s inequality ⇒

    ‖x(t)‖ ≤ ‖x_0‖ e^{Lω} + (M/L)(e^{Lω} − 1),    0 ≤ t < ω.

Thus ‖x(t)‖ does not tend to ∞ as t → ω^−. Contradiction!
Corollary. Let A(t) and b(t) be continuous on I and t_0 ∈ I. The linear system x' = A(t)x + b(t), x(t_0) = x_0, has a unique solution x ∈ C^1(I, R^n) for each x_0 ∈ R^n.
Proof. f(t, x) = A(t)x + b(t) is continuous on I × R^n and satisfies a Lip. cond. w.r.t. x on [T_1, T_2] × R^n for every [T_1, T_2] ⊂ I. We can take

    M = max_{t∈[T_1,T_2]} ‖b(t)‖,    L = max_{t∈[T_1,T_2]} ‖A(t)‖.
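As an illustration of the corollary (a made-up example, not from the lecture): for A(t) = [[0, 1], [−1, 0]] and b(t) = (0, cos t) one may take L = 1 and M = 1 on any interval, and the a priori bound from the theorem can be checked numerically:

```python
import math

def rk4(f, x, t, h, steps):
    """Classical RK4 for x' = f(t, x), x a list."""
    for _ in range(steps):
        k1 = f(t, x)
        k2 = f(t + h/2, [xi + h/2*ki for xi, ki in zip(x, k1)])
        k3 = f(t + h/2, [xi + h/2*ki for xi, ki in zip(x, k2)])
        k4 = f(t + h, [xi + h*ki for xi, ki in zip(x, k3)])
        x = [xi + h/6*(a + 2*b + 2*c + d)
             for xi, a, b, c, d in zip(x, k1, k2, k3, k4)]
        t += h
    return x

# A(t) = [[0, 1], [-1, 0]] has ‖A(t)‖ = 1 and b(t) = (0, cos t) has
# ‖b(t)‖ ≤ 1, so we may take L = 1, M = 1 on any [T1, T2].
f = lambda t, x: [x[1], -x[0] + math.cos(t)]

T = 2.0
x = rk4(f, [1.0, 0.0], 0.0, 0.001, 2000)   # x0 = (1, 0), integrate to t = T
norm  = math.hypot(x[0], x[1])
bound = 1.0*math.exp(T) + (math.exp(T) - 1)  # ‖x0‖e^{LT} + (M/L)(e^{LT} − 1)
print(norm <= bound)  # True
```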