Differential Equations I
800345A
Lecture notes summary
Summer 2013
Markus Harju
Contents

Introduction

1 First order equations
  1.1 Separable equation
  1.2 Homogeneous ODE
  1.3 Rational form ODE
      1.3.1 Case aq − bp ≠ 0
      1.3.2 Case aq − bp = 0, b ≠ 0
  1.4 Linear equation
  1.5 Bernoulli equation
  1.6 Exact equation
  1.7 General equation
      1.7.1 Solution with respect to y'
      1.7.2 Form F(y') = 0
      1.7.3 Variable x or y is a function of the derivative y'
      1.7.4 Method based on differentiation
      1.7.5 Geometric application: orthogonal trajectories to the family of curves

2 Higher order equations
  2.1 Second order: special cases
      2.1.1 Equation does not contain y
      2.1.2 Equation does not contain x
  2.2 Linear equations: general facts
  2.3 Linear homogeneous equation with constant coefficients
      2.3.1 Roots of (KY) real and simple
      2.3.2 Roots of (KY) real but not simple
      2.3.3 (KY) has complex roots
  2.4 Nonhomogeneous linear equation with constant coefficients
      2.4.1 Method of undetermined coefficients
  2.5 Linear equation of second order
      2.5.1 Elimination of first order derivative
      2.5.2 Nonhomogeneous equation: variation of constants
      2.5.3 Euler's equation
  2.6 Systems of differential equations
  2.7 Power series method
  2.8 Laplace transform
Introduction
An ordinary differential equation (ODE) is a mathematical equation involving an unknown function y = y(x) of one real variable x and its derivatives. The task is to solve the equation, that is, to determine the unknown function as completely as possible.

ODEs are used to model changes in the observable (physical) world.

Example 0.1 (Newton's law of gravity). An object is dropped and it falls down as a result of gravitation. Let m be the mass of the object and let h(t) be its height at time t > 0. Recall that

total force = mass × acceleration,    F = ma.

Model:

m d²h/dt² = −mg,

where g is a constant and h''(t) is the acceleration (h'(t) is the speed or velocity). Integrating this twice gives

h(t) = −gt²/2 + C1 t + C2,

where C1, C2 are arbitrary constants (of integration). These constants can be determined if the initial speed and height are known, i.e.

C1 = h'(0),    C2 = h(0).
Example 0.2 (Population growth). Suppose that a population at time t0 is P0. Find the population at time t > t0.

We assume that the birth rate and mortality rate on short time intervals [t, t + ∆t] are directly proportional to the product of the population size P(t) and the interval length ∆t. Let b and c be the proportionality constants of the birth and mortality rates, respectively.

Model:

P(t + ∆t) = P(t) + bP(t)∆t − cP(t)∆t + ∆t ε(∆t)

Letting ∆t → 0 gives

dP/dt = (b − c)P(t) = kP(t),    k = b − c.

We shall see that this equation is solved by

P(t) = Ce^{kt},

where C is an arbitrary constant. The constant C may be determined by the initial condition P(t0) = P0:

C = P0 e^{−kt0}.

Thus we obtain the so-called exponential growth model

P(t) = P0 e^{k(t−t0)}.

If the quantity k = b − c is not known, it can be determined provided that the population is known at another time instant, P(t1) = P1. Calculations give

k = (ln P1 − ln P0)/(t1 − t0).
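The formula for k and the resulting model are easy to check numerically. The sketch below uses made-up illustration values (a population doubling from 1000 to 2000 over 10 time units), not data from the notes.

```python
import math

def growth_rate(P0, P1, t0, t1):
    # k = (ln P1 - ln P0) / (t1 - t0), from P(t) = P0 * exp(k (t - t0))
    return (math.log(P1) - math.log(P0)) / (t1 - t0)

def population(P0, k, t0, t):
    # the exponential growth model P(t) = P0 * exp(k (t - t0))
    return P0 * math.exp(k * (t - t0))

# Illustration: a population doubling from 1000 to 2000 over 10 time units
k = growth_rate(1000, 2000, 0, 10)
print(k)                           # ln(2)/10
print(population(1000, k, 0, 10))  # reproduces the known data point, 2000
```

Note that the model then predicts a further doubling over every subsequent interval of the same length.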
Notation. Notation for derivatives (differentiation): for y = y(x),

y' = dy/dx = Dy
y'' = d²y/dx² = D²y = D(Dy)
...
y^(n) = dⁿy/dxⁿ = Dⁿy

(brush up on your differentiation and integration skills). Partial differentiation is denoted by

∂f(x,y)/∂x,    ∂f(x,y)/∂y,    ∂²f(x,y)/∂y∂x,    ...

Constants: C, C1, C2, ..., Cn, C', C'', C̃, ...

General form of ODE:

F(x, y, y', y'', ..., y^(n)) = 0
Example 0.3.

dy/dx = x + 5,
x dz/dx = z + x/(1 − x),
y''' + 3(y'')³ + y' = cos x.
Key concepts

The order of an ODE is the highest order of derivative of the unknown function appearing in it.

Example 0.4 (Order?).

xy' + y = 2,
√(y') + ln y' + y⁵ = 0,
(y'')² + (y')³ + 2y = x² + 1,
(y')² = 2y,
d²y/dx² + 6 dy/dx + y = 0
Linear equation

pn(x) y^(n) + p_{n−1}(x) y^(n−1) + · · · + p1(x) y' + p0(x) y = q(x)

The functions pi(x) are called the coefficients of the ODE. If the equation is not linear it is said to be nonlinear.

Example 0.5 (Linear or not?).

x²y'' + xy' + (x² − p²)y = 0,
y''' − y'' + y' − 2y = 0,
(y')³ − y⁴ = 0,
y'' + sin y = 0,
y'' + sin x = 0
Solution of equation. A function y = y(x) ∈ Cⁿ(I), I ⊂ R, is a solution of an equation if it satisfies the equation (at the points of I). Note that a solution may not exist (e.g. |y'| + 1 = 0).

Solution types/classes

The full solution consists of all possible solutions to the equation (in a maximal subset I ⊂ R). It is usually impossible to find.

A general solution contains n arbitrary constants C1, ..., Cn, i.e. the general solution is of the form

y = y(x, C1, ..., Cn).

Each set of concrete values of the Cj corresponds to a particular solution.

A special solution is a solution which cannot be obtained from the general solution by any choice of the constants Cj.

Example 0.6. The function

y(x) = C1 e^x + C2 e^{−x}

is the general solution of the equation y'' − y = 0 for all x ∈ R, when C1, C2 are constants (Why?). A particular solution is

y(x) = e^x + e^{−x}.

There are no special solutions.
Example 0.7. The function

y(x) = (x² + 1)/x,    x ≠ 0,

solves the equation

x²y'' + xy' − y = 0.

One finds by elementary differentiation that y(x) solves the equation outside of the origin.

Example 0.8. The function

y(x) = 1/(x + C),    x ≠ −C,

is the general solution of

y' = −y².

The function y(x) ≡ 0 is a special solution (which cannot be obtained from the general solution). Note though that

1/(x + C) → 0 as C → ±∞.

The full solution is

y(x) = 0, x ∈ R,    or    y(x) = 1/(x + C), x ≠ −C.
Solution satisfying initial values. In practical problems one often seeks a solution satisfying some given condition. For example, one might require that the solution y satisfies the initial condition

y(x0) = y0,

where x0 and y0 are given numbers. Sometimes there are more conditions, e.g.

y'(x1) = y1

etc. In these cases one speaks about the solution of an initial value problem. In general, an nth order equation is accompanied by n initial conditions

y(x0) = y0,    y'(x1) = y1,    ...,    y^(n−1)(x_{n−1}) = y_{n−1}

(so that the arbitrary constants Cj can be determined).

Example 0.9.

y'' − x = 0,    y(0) = 0,    y'(1) = 1/2
Solution methods for initial value problems

Consider the first order initial value problem

y' = f(x, y),    y(x0) = y0.    (IVP)

Picard(–Lindelöf) method. Suppose f is a (sufficiently regular) function. Define a new function

y1(x) = y0 + ∫_{x0}^{x} f(t, y0) dt.

By replacing y0 inside the integral by the function y1(x) we obtain another new function

y2(x) = y0 + ∫_{x0}^{x} f(t, y1(t)) dt.

Continuing in this manner we obtain the function sequence

y0(x) ≡ y0,    yn(x) = y0 + ∫_{x0}^{x} f(t, y_{n−1}(t)) dt,    n = 1, 2, ...

It can be proved that this sequence converges uniformly to the unique solution of the initial value problem (near the point x0).

The method works because the initial value problem and the integral equation

y(x) = y0 + ∫_{x0}^{x} f(t, y(t)) dt

have the same set of solutions. This in turn follows from the fundamental theorem of calculus, or

y(x) − y0 = y(x) − y(x0) = ∫_{x0}^{x} y'(t) dt = ∫_{x0}^{x} f(t, y(t)) dt.

Example 0.10.

y' = ky,    y(0) = 1
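For the right-hand side f(x, y) = ky of Example 0.10 every Picard iterate is a polynomial, so the iteration can be carried out exactly with rational coefficients. A minimal sketch, following the recursion above directly:

```python
from fractions import Fraction

def picard_iterates(k, steps):
    """Picard iteration for y' = k*y, y(0) = 1.
    A polynomial sum_j c[j] x^j is stored as its coefficient list c.
    The recursion y_n(x) = 1 + integral_0^x k*y_{n-1}(t) dt becomes
    c_new[0] = 1 and c_new[j+1] = k*c[j]/(j+1)."""
    c = [Fraction(1)]  # y_0(x) = 1
    for _ in range(steps):
        c = [Fraction(1)] + [Fraction(k) * cj / (j + 1) for j, cj in enumerate(c)]
    return c

# Four steps with k = 1 give the Taylor partial sum of e^x:
# 1 + x + x^2/2 + x^3/6 + x^4/24
print(picard_iterates(1, 4))
```

Each iterate is exactly the next partial sum of the series of e^{kx}, which is the unique solution the theorem promises.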
Euler method. Consider the interval [x0, x0 + r], r > 0. Divide this interval into subintervals by denoting

h = r/n,    x_{i+1} = x_i + h = x0 + (i + 1)h,
y_{i+1} = y_i + h f(x_i, y_i),    i = 0, 1, ..., n − 1.

The condition

yn(x_i + th) = y_i + t h f(x_i, y_i),    0 ≤ t ≤ 1,

defines a function yn on the interval [x0, x0 + r] whose graph is the polyline connecting the points (x_i, y_i) (reason: yn(x_i) = y_i). Refining the subintervals (increasing n) one obtains a better approximation for the solution.

Peano proved in 1890 that a subsequence of (yn)_{n=1}^{∞} converges uniformly on [x0, x0 + r] to a solution of the initial value problem provided that f is continuous. If also ∂f/∂y is continuous, then the sequence yn itself converges uniformly to the unique solution of the initial value problem.

The method is based on approximating the integral by the left endpoint (rectangle) rule:

y_i = yn(x_i) = y0 + Σ_{j=1}^{i} h f(x_{j−1}, y_{j−1}) ≈ y0 + ∫_{x0}^{x_i} f(t, yn(t)) dt.

The method can be used in two ways:

1. approximating the solution at a given point
2. constructing the solution function

Usually this method is regarded as a numerical method (computers).
Example 0.11. Approximate the solution of the initial value problem

y' = y,    y(0) = 1

at x = 0.2.

Now f(x, y) = y, x0 = 0, y0 = 1. Choosing n = 2 we have h = 0.1, the points

x1 = 0.1,    x2 = 0.2

and the approximate values

y1 = y0 + h f(x0, y0) = 1.1,
y2 = y1 + h f(x1, y1) = 1.21.

Bigger values of n give the following approximations at x = 0.2:

n  | 4      | 8      | 16     | 32     | 64     | 128
yn | 1.2155 | 1.2184 | 1.2199 | 1.2206 | 1.2210 | 1.2212

The correct solution is y(0.2) = e^{0.2} ≈ 1.2214.
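The table above is easy to reproduce; the sketch below implements exactly the recursion y_{i+1} = y_i + h f(x_i, y_i).

```python
def euler(f, x0, y0, r, n):
    """Approximate the solution of y' = f(x, y), y(x0) = y0
    at x0 + r using n Euler steps of length h = r/n."""
    h = r / n
    x, y = x0, y0
    for _ in range(n):
        y = y + h * f(x, y)
        x = x + h
    return y

# Example 0.11: y' = y, y(0) = 1, approximated at x = 0.2
print(euler(lambda x, y: y, 0.0, 1.0, 0.2, 2))    # ~1.21
print(euler(lambda x, y: y, 0.0, 1.0, 0.2, 128))  # ~1.2212
```

Doubling n roughly halves the error, consistent with the method being of first order.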
Example 0.12. Consider the initial value problem

y' = ky,    y(0) = 1,

and the construction of its solution.

Now f(x, y) = ky, x0 = 0, y0 = 1. Choose r = x and consider the interval [0, x]. So h = x/n and

y_{i+1} = y_i + h f(x_i, y_i) = (1 + kx/n) y_i.

Hence the approximate solution at x is

yn = (1 + kx/n) y_{n−1} = ... = (1 + kx/n)ⁿ.

By the above discussion the solution at x is

y(x) = lim_{n→∞} yn = lim_{n→∞} (1 + kx/n)ⁿ = e^{kx}.
Runge–Kutta method. Consider again the interval [x0, x0 + r]. Denote again

x_i = x0 + ih,    h = r/n,    i = 0, 1, ..., n.

Define the numbers

k1 = h f(x0, y0)
k2 = h f(x0 + h/2, y0 + k1/2)
k3 = h f(x0 + h/2, y0 + k2/2)
k4 = h f(x0 + h, y0 + k3)

Compute the approximation

y1 = y0 + (1/6)(k1 + 2k2 + 2k3 + k4)

(justification: application of the Simpson quadrature rule to the integral).

This process is continued by replacing the point (x0, y0) by the point (x1, y1) etc. So we set the numbers

k1 = h f(x_i, y_i)
k2 = h f(x_i + h/2, y_i + k1/2)
k3 = h f(x_i + h/2, y_i + k2/2)
k4 = h f(x_i + h, y_i + k3)

and

y_{i+1} = y_i + (1/6)(k1 + 2k2 + 2k3 + k4).

This gives us the approximate solutions y_i at the points x_i, i = 0, 1, ..., n.
Example 0.13. Apply the Runge–Kutta method to the initial value problem

y' = y,    y(0) = 1

to find the approximate solution at x = 0.2.

As above f(x, y) = y, x0 = 0, y0 = 1. Choosing n = 1 we get h = 0.2 and the numbers

k1 = h f(x0, y0) = 0.2
k2 = h f(x0 + h/2, y0 + k1/2) = 0.22
k3 = h f(x0 + h/2, y0 + k2/2) = 0.222
k4 = h f(x0 + h, y0 + k3) = 0.2444

and the approximate solution

y1 = 1 + (1/6)(k1 + 2k2 + 2k3 + k4) = 1.2214.

Note that this gives a more accurate approximation than the Euler method. This is due to the fact that the Runge–Kutta method is of order O(h⁴), which means that |y_i − y(x_i)| ≤ Ch⁴.
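Repeating the step above over the grid points gives the classical fourth-order method; the sketch below follows the formulas verbatim.

```python
def runge_kutta(f, x0, y0, r, n):
    """Classical Runge-Kutta (RK4) for y' = f(x, y), y(x0) = y0,
    approximating the solution at x0 + r with n steps of length h = r/n."""
    h = r / n
    x, y = x0, y0
    for _ in range(n):
        k1 = h * f(x, y)
        k2 = h * f(x + h / 2, y + k1 / 2)
        k3 = h * f(x + h / 2, y + k2 / 2)
        k4 = h * f(x + h, y + k3)
        y = y + (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x = x + h
    return y

# Example 0.13: a single step already matches e^0.2 to four decimals
print(runge_kutta(lambda x, y: y, 0.0, 1.0, 0.2, 1))  # ~1.2214
```

Compare with the Euler table in Example 0.11: one RK4 step here is more accurate than 128 Euler steps.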
Chapter 1
First order equations
General form
F (x, y, y ′ ) = 0
Normal form
y ′ = f (x, y)
Here F and f are some given functions.
Example 1.1.

x − y² − y' = 0:    F(x, y, z) = ____,    f(x, y) = ____
sin y' − x² − y² = 0:    F(x, y, z) = ____,    f(x, y) = ____
y' + sin y' − y + cos x = 0:    F(x, y, z) = ____,    f(x, y) = ____

1.1 Separable equation

A separable equation is of the form

dy/dx = g(x) h(y)

Example 1.2. Are these separable?

y' = (xy + y)/x
y' = e^{x−y}
y' = sin(xy)
y' = (x + y)/(x − y)
y' = e^{xy}
Solving separable equations

1. Special solutions: if y0 is a zero of h, i.e. h(y0) = 0, then the constant function

y(x) ≡ y0

is a special solution of the equation (why?).

2. General solution: for h(y) ≠ 0 we may divide the equation to get

y'/h(y) = g(x)

or

dy/h(y) = g(x) dx.

Here the variables have been separated. Integration gives

∫ dy/h(y) = ∫ g(x) dx + C.

Performing the integration yields

H(y) = G(x) + C,

which can usually be solved for y = y(x). Sometimes the solution must be left in implicit form.
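As a concrete sketch of the recipe, take g(x) = x and h(y) = 1/y² (the equation of Example 1.3 below). Separation gives ∫ y² dy = ∫ x dx, i.e. y³/3 = x²/2 + C, and the solution branch through y(0) = 1 is y(x) = (3x²/2 + 1)^{1/3}. The initial condition is an illustration added here, not part of the notes; the code checks the result against the ODE numerically.

```python
def y(x):
    # from y^3/3 = x^2/2 + C with y(0) = 1, so C = 1/3:
    # y(x) = (3x^2/2 + 1)^(1/3)
    return (1.5 * x ** 2 + 1.0) ** (1.0 / 3.0)

def rhs(x):
    # right-hand side g(x) h(y) = x / y^2 evaluated along the solution
    return x / y(x) ** 2

# central-difference check that y' = x / y^2 at a few points
for x in (0.5, 1.0, 2.0):
    d = (y(x + 1e-6) - y(x - 1e-6)) / 2e-6
    print(x, d, rhs(x))  # the last two columns agree
```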
Example 1.3.

dy/dx = x/y²

Example 1.4. Solve the separable equations of Example 1.2.

Example 1.5.

dy/dx = ay + b,    a ≠ 0.

1.2 Homogeneous ODE

A homogeneous ODE is of the form

dy/dx = g(y/x)
Example 1.6. Are these homogeneous ODEs?

y' = (y + x)/x
y' = 2xy e^{x/y} / (x² + y² sin(x/y))
y' = y²/x
y' = (x² + y)/x³
Solving homogeneous ODEs. Introduce the new function(!)

u = u(x) = y/x.

Then y = xu and

y' = u + xu'.

Substituting these into the ODE

dy/dx = g(y/x)

gives

u + xu' = g(u).

So

u' = (g(u) − u)/x.

This separable ODE can be solved for u(x) as above, and finally one returns to

y = xu(x).
Example 1.7.

x dy/dx = x + y

Example 1.8 ("J15b)").

y' = (x² + y²)/(xy)

Example 1.9 ("J17d)").

2xyy' = y² − x²
1.3 Rational form ODE

A form more general than the preceding ones:

dy/dx = f((ax + by + c)/(px + qy + r))

This can be transformed into a simpler form in order to solve it.

1.3.1 Case aq − bp ≠ 0

In this case the system of equations

ax + by + c = 0
px + qy + r = 0

has exactly one solution (since the determinant is not zero)

x = h,    y = k.
Substitute t = x − h and set the new function

z(t) = y(x) − k = y(t + h) − k.

The derivative of the new function z(t) with respect to the new variable t is, by the chain rule,

dz/dt = (dy/dx)(dx/dt) − 0 = dy/dx,

since

dx/dt = 1.

Substituting these expressions into the original ODE implies

dz/dt = f((a(t + h) + b(z + k) + c)/(p(t + h) + q(z + k) + r)) = f((at + bz)/(pt + qz)),

since x = h, y = k solve the aforementioned system. So we have obtained the homogeneous ODE

dz/dt = f((a + bz/t)/(p + qz/t)),

which can be solved for z(t). Finally we return to the solution of the original equation as

y(x) = z(x − h) + k.
Example 1.10.

dy/dx = (1/2) ((x + y − 1)/(x + 2))²

Example 1.11.

dy/dx = 2 ((y + 2)/(x + y − 1))²
1.3.2 Case aq − bp = 0, b ≠ 0

Denote

α = q/b.

Since aq = bp, then

p = aq/b = αa.

Moreover q = αb, so

(ax + by + c)/(px + qy + r) = (ax + by + c)/(α(ax + by) + r).
Introduce the new function

v = ax + by.

Then

y = (v − ax)/b

or

dy/dx = (1/b) dv/dx − a/b.

Substituting these into the original ODE gives

(1/b) dv/dx − a/b = f((v + c)/(αv + r)),

which is separable. Solve this for v(x) and return to the solution of the original ODE via v = ax + by.

Example 1.12.

dy/dx = (2x + 4y − 3)/(x + 2y + 1)

In particular, equations of the form

y' = f(ax + by),    b ≠ 0,

can be solved by substituting v = ax + by.

Example 1.13 ("J17a)").

y' = (y − x)² + y − x + 1
1.4 Linear equation

p1(x) dy/dx + p0(x) y = q1(x)

Normalized form:

dy/dx + p(x) y = q(x)

Homogeneous equation (q(x) ≡ 0):

y' + p(x) y = 0

This equation always has the trivial solution y(x) ≡ 0. Other solutions may be found by separating variables (or as follows).

Example 1.14. Are these linear?

dy/dx + 3xy = cos x
sin x · dy/dx + (3x² + 1) y = ln x
dy/dx + 3xy³ = cos x
y' + sin y = 0
(y')² = x
Solving the normalized equation

1. Integrate

P(x) = ∫ p(x) dx

and form the integrating factor e^{P(x)}.

2. Multiply the equation by the integrating factor to get

e^{P(x)} y' + p(x) e^{P(x)} y(x) = e^{P(x)} q(x).

3. Note that the left hand side is the derivative of a product, or

d/dx (e^{P(x)} y) = e^{P(x)} q(x).

4. Cancel the differentiation by integrating:

e^{P(x)} y = ∫ e^{P(x)} q(x) dx + C.

5. Finally it is easy to solve

y = e^{−P(x)} ( ∫ e^{P(x)} q(x) dx + C ).

This way we obtain the general solution. Instead of the above formulas one should remember the solution principle.
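For Example 1.15 below, y' + 3xy = x, the recipe gives P(x) = 3x²/2 and y = e^{−3x²/2}(∫ x e^{3x²/2} dx + C) = 1/3 + C e^{−3x²/2}. This closed form is worked out here as a sketch, not stated in the notes; the code verifies it numerically.

```python
import math

C = -1.0  # arbitrary constant of integration, chosen for illustration

def y(x):
    # general solution of y' + 3x y = x, from the integrating-factor recipe
    return 1.0 / 3.0 + C * math.exp(-1.5 * x ** 2)

# check y' + 3x y = x by central differences
for x in (0.0, 0.7, 1.5):
    d = (y(x + 1e-6) - y(x - 1e-6)) / 2e-6
    print(x, d + 3 * x * y(x))  # prints x back, up to rounding
```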
Example 1.15.

y' + 3xy = x

Example 1.16.

(x + a) dy/dx + y = x − a,    a > 0

a) general solution?
b) solution satisfying the initial condition y(0) = 1?

Example 1.17 ("J7a)").

xy' = 2y + 3x³,    y(1) = 1
1.5 Bernoulli equation

dy/dx + p(x) y = q(x) y^α

If α = 0 or 1, then this is a linear equation, which is solved as above. Consider the case α ≠ 0, 1. If α > 0, then y(x) ≡ 0 is a solution.

To find other solutions, divide by the nonlinear term y^α (or multiply by the term y^{−α}). We obtain

y^{−α} dy/dx + p(x) y^{1−α} = q(x).

Substituting

z = y^{1−α},    dz/dx = (1 − α) y^{−α} dy/dx,

we obtain (note the factor (1 − α))

dz/dx + (1 − α) p(x) z = (1 − α) q(x).

This linear equation is solved for z(x), after which y(x) = z(x)^{1/(1−α)}.
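For Example 1.18 below, y' + y = x y^{2/3}, we have α = 2/3, so z = y^{1/3} satisfies z' + z/3 = x/3, a linear equation with solution z = x − 3 + C e^{−x/3}. Taking C = 0 gives the particular solution y = (x − 3)³. This worked solution is a sketch added here, not given in the notes; the code checks it on x > 3, where y^{2/3} = (x − 3)².

```python
def y(x):
    # particular solution y = (x - 3)^3 of y' + y = x * y^(2/3)
    return (x - 3.0) ** 3

for x in (3.5, 4.0, 6.0):             # x > 3, so y^(2/3) = (x - 3)^2
    lhs = 3 * (x - 3.0) ** 2 + y(x)   # y' + y, with y' computed by hand
    rhs = x * (x - 3.0) ** 2          # x * y^(2/3)
    print(lhs, rhs)  # equal
```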
Example 1.18.

y' + y = x y^{2/3}

Example 1.19.

y' − y/x = −x³y⁴

Example 1.20.

y' − y = xy⁵

Example 1.21 ("J13").

y' − 3xy = xy²

Example 1.22 ("J14a)").

y' = (xy² + y)/x

1.6 Exact equation
The equation

M(x, y) dx + N(x, y) dy = 0

is exact if there exists a function g = g(x, y) such that

∂g/∂x = M(x, y)    and    ∂g/∂y = N(x, y).

Such a function g is called the integral of the equation.

A practical way of determining exactness is to apply the criterion for exactness: an equation is exact if and only if

∂M/∂y = ∂N/∂x.

Let us investigate this claim more closely, as it also leads to the solution method.
If the equation is exact and g is its integral, then for (sufficiently regular) functions M and N it holds that

∂M/∂y = ∂/∂y (∂g/∂x) = ∂/∂x (∂g/∂y) = ∂N/∂x,

i.e. the criterion holds.

Conversely, if the criterion holds, then

N(x, y) − ∂/∂y ∫ M(x, y) dx = N(x, y) − ∫ ∂N/∂x dx = N(x, y) − N(x, y) + φ(y) = φ(y),

where φ(y) is the function of y arising as the constant of integration. So

N(x, y) = ∂/∂y ∫ M(x, y) dx + φ(y).

Denoting

g(x, y) = ∫ M(x, y) dx + ∫ φ(y) dy

we obtain

∂g/∂x = M(x, y)

and

∂g/∂y = ∂/∂y ∫ M(x, y) dx + φ(y) = N(x, y).

Thus g is the integral of the equation and the equation is exact.

The integral g of the equation has a central role in solving the equation, since the following holds: if g is the integral of an exact equation, then the function y = y(x) solves the equation if and only if

g(x, y(x)) = C

for some constant C. Reason: integrate the equation

0 = M(x, y(x)) + N(x, y(x)) y'(x) = ∂g/∂x + (∂g/∂y) y'(x) = d/dx g(x, y(x))

on both sides.
Solving an exact equation. By the above discussion, solving an exact equation consists of two steps:

1. find the integral g from the equations

g(x, y) = ∫ M(x, y) dx + h(y)    and    ∂g/∂y = N(x, y).

2. find the general solution y = y(x) from the equation

g(x, y(x)) = C.

Sometimes this step leaves y(x) in implicit form.

Example 1.23.

e^{−y} dx − (2y + x e^{−y}) dy = 0

Example 1.24. Solve the initial value problem

(x² + y²) dx + 2xy dy = 0,    y(1) = 1.
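For Example 1.24, M = x² + y² and N = 2xy, so ∂M/∂y = 2y = ∂N/∂x and the equation is exact with integral g(x, y) = x³/3 + x y². The initial condition fixes g = 4/3, and solving for y (positive branch) gives an explicit solution. This explicit formula is worked out here as a sketch; the code checks that g stays constant along it and that it satisfies the ODE.

```python
import math

def g(x, y):
    # integral of the exact equation (x^2 + y^2) dx + 2xy dy = 0
    return x ** 3 / 3 + x * y ** 2

def y(x):
    # explicit branch through y(1) = 1, solved from g(x, y) = 4/3
    return math.sqrt((4 - x ** 3) / (3 * x))

# the solution keeps g constant and satisfies y' = -(x^2 + y^2)/(2xy)
for x in (0.5, 1.0, 1.4):
    print(g(x, y(x)))  # 4/3 at every point
    d = (y(x + 1e-6) - y(x - 1e-6)) / 2e-6
    print(d, -(x ** 2 + y(x) ** 2) / (2 * x * y(x)))  # columns agree
```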
Method of integrating factor. A function μ(x, y) ≠ 0 is called an integrating factor of the equation

M(x, y) dx + N(x, y) dy = 0

if multiplying the equation by μ makes it exact, i.e.

μ(x, y) M(x, y) dx + μ(x, y) N(x, y) dy = 0

is exact. These equations have the same solutions, since μ(x, y) ≠ 0. In finding the integrating factor μ(x, y) we restrict to the following special cases (the general case is difficult):

1. If

(1/N)(∂M/∂y − ∂N/∂x) = f(x),

then

μ(x) = exp(∫ f(x) dx) = e^{∫ f(x) dx}.

2. If

(1/M)(∂M/∂y − ∂N/∂x) = f(y),

then

μ(y) = exp(−∫ f(y) dy) = e^{−∫ f(y) dy}.

These are justified during lectures. Advanced readers may think about how this integrating factor is related to that of the linear equation.

Example 1.25.

(2y³ + 2) dx + 3xy² dy = 0,    y(1) = 1.

Remark 1.26. If M and N are polynomials in x, y, then one may try the ansatz μ = x^r y^s and determine the exponents r, s from the exactness condition.

Example 1.27.

2y(x² − y) dx − x³ dy = 0
1.7 General equation

Let us consider the solution of the general form

F(x, y, y') = 0

in some special cases.

1.7.1 Solution with respect to y'

If the equation can be solved with respect to y', i.e.

y' = f_i(x, y),    i = 1, 2, ..., n,

then we solve these separately. Solutions obtained this way are also solutions of the original equation.

Example 1.28.

(y')² + yy' − x² − xy = 0

Example 1.29.

(y')² + 4xyy' + 3x²y² = 0
1.7.2 Form F(y') = 0

If the equation is of the form

F(y') = 0,

then substituting

y' = p

we obtain F(p) = 0. The real roots p1, ..., pn of this equation lead, after integration, to the solutions

y = p_i x + C_i,    i = 1, ..., n.

Example 1.30.

(y')³ − y' = 0

1.7.3 Variable x or y is a function of the derivative y'

If

x = f(y')    or    y = f(y'),

then applying the inverse function f^{−1} (provided it exists, of course) we get

y' = f^{−1}(x)    or    y' = f^{−1}(y).

These separable equations are solved as above.

Example 1.31.

y = arctan(y'),    y(0) = π/2.

1.7.4 Method based on differentiation
Consider the special case

y = f(x, y').

A function y = y(x) solves this equation if and only if

y(x) = f(x, p(x)),

where p = p(x) solves the equation

p = f_x(x, p) + f_p(x, p) dp/dx.

So:

1. substitute y'(x) = p(x)
2. differentiate the obtained equation (with respect to x)

Example 1.32.

y = dy/dx − x + ln(dy/dx)
Clairaut equation. A special case of the previous one:

y = xy' + f(y')

The substitution y' = p yields

y = xp + f(p).

Differentiating this gives

p = p + xp' + f'(p)p'

or

(x + f'(p)) p' = 0.

So we have two cases:

1. if p' = 0, then p = C and y = Cx + f(C)

2. in the case x + f'(p) = 0 we consider the system

x = −f'(p)
y = f(p) − f'(p)p

Eliminate p from this system to obtain a solution.

Example 1.33.

y = xy' − √((y')² + 1)

1.7.5 Geometric application: orthogonal trajectories to the family of curves
Consider the family of curves

G(x, y, C) = 0,

where the constant C is regarded as a parameter. Examples of families of curves:

1. circle: (x − x0)² + (y − y0)² = C

2. ellipse: ((x − x0)/a)² + ((y − y0)/b)² = C

3. parabola: y = Cx² + 2x

4. hyperbola: x²/a² − y²/b² = C

5. y = Ce^x etc.

An orthogonal trajectory to the family of curves is a (regular) curve that intersects each curve in the given family perpendicularly (i.e. the tangents at the intersection points are perpendicular to each other).

These orthogonal trajectories are found as follows:

1. form a system of equations from the equation of the family of curves and the one obtained by differentiating it. Formally:

G(x, y, C) = 0
G_x(x, y, C) + G_y(x, y, C) y' = 0

2. eliminate the parameter C to obtain the ODE of the family of curves. Suppose it is

F(x, y, y') = 0.    (KPDY)

3. replace y' → −1/y' in this ODE (perpendicularity: the product of the slopes is −1) to obtain the ODE for the orthogonal trajectories

F(x, y, −1/y') = 0.    (KLDY)

4. solve (KLDY)

Example 1.34. Orthogonal trajectories to the family of curves

y² = Cx³

Example 1.35. Orthogonal trajectories to the family of curves

x² + y² − 2y = C

Example 1.36. Orthogonal trajectories to the family of curves

x² − y² = Cx

Example 1.37 ("J16a)"). Orthogonal trajectories to the family of curves

y = C/x²
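For Example 1.37, y = C/x², differentiating gives y' = −2C/x³, and eliminating C = yx² yields the family ODE y' = −2y/x. Replacing y' by −1/y' gives y' = x/(2y), whose solutions are y² − x²/2 = K. This derivation is a sketch added here, not worked in the notes; the code checks the perpendicularity numerically.

```python
import math

K = 1.0  # illustration constant for the trajectory family y^2 - x^2/2 = K

def traj(x):
    # orthogonal trajectory candidate for the family y = C / x^2
    return math.sqrt(x ** 2 / 2 + K)

for x in (0.5, 1.0, 2.0):
    d = (traj(x + 1e-6) - traj(x - 1e-6)) / 2e-6  # trajectory slope
    family_slope = -2 * traj(x) / x               # slope of the curve y = C/x^2 through the same point
    print(d * family_slope)  # -1 at every point, up to rounding
```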
Chapter 2
Higher order equations
2.1 Second order: special cases

2.1.1 Equation does not contain y

F(x, y', y'') = 0

Substituting z = y' we obtain the first order equation F(x, z, z') = 0, which is solved for z (cf. algebraic equations). Then one integrates to obtain y(x).

Example 2.1.

d²y/dx² + 2x (dy/dx)² = 0

Example 2.2 ("J34").

(1 + x²) y'' = 2xy'

General solution and the solution satisfying the initial conditions y(0) = 1, y'(0) = 3?

2.1.2 Equation does not contain x

F(y, y', y'') = 0

In this case one substitutes y' = p(y) and solves for p. Finally y is determined from the ODE

y' = p(y).

Example 2.3.

yy'' = (y')²,    y(0) = 1, y'(0) = 2

Example 2.4.

y'' = yy',    y(0) = y'(0) = 2
2.2 Linear equations: general facts

The nth order nonhomogeneous linear equation (normalized, cf. Introduction):

y^(n) + p_{n−1}(x) y^(n−1) + · · · + p1(x) y' + p0(x) y = q(x)

The corresponding homogeneous linear equation is

y^(n) + p_{n−1}(x) y^(n−1) + · · · + p1(x) y' + p0(x) y = 0.

Let y0 be a solution of the nonhomogeneous equation. Then y solves the nonhomogeneous equation if and only if

y = y0 + yh,

where yh is the general solution of the corresponding homogeneous equation.

Moreover, for solutions y1, ..., yk of the homogeneous equation the linear combination

y(x) = C1 y1(x) + · · · + Ck yk(x)

is also a solution of the homogeneous equation.

Wronskian. Let y1, ..., yn be solutions of the homogeneous equation. If the Wronskian determinant

         | y1         y2         · · ·   yn         |
         | y1'        y2'        · · ·   yn'        |
W(x) =   | ...        ...                ...        |  ≢ 0,
         | y1^(n−1)   y2^(n−1)   · · ·   yn^(n−1)   |

then the general solution of the homogeneous equation is

y(x) = C1 y1(x) + · · · + Cn yn(x).

Such a set of functions {y1, ..., yn} is said to be a basic system of solutions. This basic system always exists (for the homogeneous equation).

Summary

1. the homogeneous equation has a basic system of solutions {y1, ..., yn} (to be determined somehow)

2. the general solution of the homogeneous equation is

yh(x) = C1 y1(x) + · · · + Cn yn(x)

3. the general solution of the nonhomogeneous equation is

y = y0 + yh,

where y0 is a particular solution of the nonhomogeneous equation (to be determined somehow).
2.3 Linear homogeneous equation with constant coefficients

a0 y^(n) + a1 y^(n−1) + · · · + a_{n−1} y' + a_n y = 0,    a_j ∈ R, a0 ≠ 0    (HY)

The differential operator makes it possible to write this also as

P(D) y = (a0 Dⁿ + a1 D^{n−1} + · · · + a_{n−1} D + a_n) y = 0.

One can do arithmetic and factoring with these operators like with polynomials.

Example 2.5.

y'' − 2y' + y = 0:    P(D) = D² − 2D + 1 = (D − 1)²
y'' − 3y' = 0:    P(D) = ____
y''' − 4y'' + 4y' = 0:    P(D) = ____

We try to solve the homogeneous equation (HY) using the ansatz

y(x) = e^{rx}.

Substituting this ansatz into the homogeneous equation leads us to the characteristic (or auxiliary) equation

P(r) = a0 rⁿ + a1 r^{n−1} + · · · + a_{n−1} r + a_n = 0.    (KY)

This equation has n roots (some may be multiple and/or complex-valued), and they correspond to the solutions

y_j(x) = e^{r_j x},    j = 1, ..., n.

Depending on the nature of the roots, the solution is divided into subcases.

2.3.1 Roots of (KY) real and simple

Consider the case where (KY) has n simple real roots r1, ..., rn. Then each r_j corresponds, via the ansatz, to the (basic) solution

y_j(x) = e^{r_j x},    j = 1, ..., n.

This way we obtain the basic system of solutions {y1, ..., yn}, so the linear combination

yh(x) = C1 e^{r1 x} + C2 e^{r2 x} + · · · + Cn e^{rn x}

is the general solution of (HY).

Example 2.6.

y'' + 2y' − 15y = 0

Example 2.7.

y'' − 3y' + 2y = 0

Example 2.8 ("J6b)").

y''' + 5y'' + 4y' = 0

Example 2.9 ("J6c)").

y''' + 3y'' − y' − 3y = 0
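For Example 2.6, y'' + 2y' − 15y = 0, the characteristic equation r² + 2r − 15 = 0 has roots 3 and −5, so y = C1 e^{3x} + C2 e^{−5x}. A minimal numerical sketch of this computation (the quadratic formula plus a spot check):

```python
import math

# characteristic equation of y'' + 2y' - 15y = 0 (Example 2.6): r^2 + 2r - 15 = 0
a, b, c = 1.0, 2.0, -15.0
disc = math.sqrt(b * b - 4 * a * c)
r1, r2 = (-b + disc) / (2 * a), (-b - disc) / (2 * a)
print(r1, r2)  # 3.0 -5.0

# spot check: y = e^{r1 x} solves the equation, since y' = r1 y and y'' = r1^2 y
x = 0.7
y = math.exp(r1 * x)
print(r1 * r1 * y + 2 * r1 * y - 15 * y)  # 0, up to rounding
```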
2.3.2 Roots of (KY) real but not simple

Let us consider the situation where the roots of (KY) are real but not all are simple. Let the distinct roots be

r1, r2, ..., rm,    m < n,

and let the multiplicity of r_j be k_j, so that

n = Σ_{j=1}^{m} k_j.

Then the root r_j corresponds to the solution

y_j(x) = (C1 + C2 x + · · · + C_{k_j} x^{k_j − 1}) e^{r_j x},    j = 1, ..., m.

Again the sum (linear combination) of these gives the general solution of (HY).

Note that in this case some root may be of multiplicity 1. Then one proceeds as in the previous section; on the other hand, this principle works also for roots of multiplicity 1.

Example 2.10.

y'' + 6y' + 9y = 0

Example 2.11.

y''' + y'' − y' − y = 0

Example 2.12 ("J11b)").

y''' + 4y'' + 5y' + 2y = 0

(HY) might be accompanied by some initial conditions. In such cases one proceeds as before, i.e. first finds the general solution and then determines the constants using the initial conditions.

Example 2.13 ("J10a)").

y'' + 20y' + 100y = 0,    y(0) = 0, y'(0) = 5

2.3.3 (KY) has complex roots

Let us finally consider the case where (KY) has one or more complex roots.

Let r = α + iβ be a simple root of (KY). This root corresponds to the solution

y(x) = e^{αx}(C1 cos βx + C2 sin βx).

Reason: by the Euler formula

e^{rx} = e^{(α+iβ)x} = e^{αx} e^{iβx} = e^{αx}(cos βx + i sin βx) = e^{αx} cos βx + i e^{αx} sin βx.

The linear combination of these elementary functions is the aforementioned y(x).

If α + iβ is a root of multiplicity k, then the solution consists of linear combinations of the functions

x^j e^{αx} cos βx,    x^j e^{αx} sin βx,    j = 0, ..., k − 1.
Example 2.14.

y'' − 4y' + 13y = 0

Example 2.15 ("J10b)").

y'' + 100y = 0,    y(0) = 0, y'(0) = 2

Example 2.16 ("J11a)").

y'' + 2y' + 5y = 0

Example 2.17 ("J11c)").

y^(4) + 20y'' + 100y = 0
2.4 Nonhomogeneous linear equation with constant coefficients

a0 y^(n) + a1 y^(n−1) + · · · + a_{n−1} y' + a_n y = q(x)

According to Section 2.2, the general solution of this nonhomogeneous equation is obtained by adding a particular solution to the general solution of the corresponding homogeneous equation. The general solution of the corresponding homogeneous equation

a0 y^(n) + a1 y^(n−1) + · · · + a_{n−1} y' + a_n y = 0

is determined as in the previous section.

One method to find the particular solution is the following.

2.4.1 Method of undetermined coefficients

We make an ansatz y0 whose coefficients are determined by substituting the ansatz into the nonhomogeneous equation. The form of the ansatz depends on the right hand side q(x). We restrict to the following special cases:

• q(x) = Cx^j

1. If r = 0 is not a root of (KY), then

y0 = A1 + A2 x + · · · + A_{j+1} x^j

2. If r = 0 is a root of (KY) with multiplicity k, then

y0 = x^k (A1 + A2 x + · · · + A_{j+1} x^j)

Example 2.18 ("J38e)").

y'' − y' = x

Example 2.19 ("J38f)").

y'' − y' − 2y = x

• q(x) = Cx^j e^{αx}

1. If α is not a root of (KY), then

y0 = (A1 + A2 x + · · · + A_{j+1} x^j) e^{αx}

2. If α is a root of (KY) with multiplicity k, then

y0 = x^k (A1 + A2 x + · · · + A_{j+1} x^j) e^{αx}

Example 2.20.

y'' − 3y' + 2y = e^{5x}

Example 2.21.

y'' − 3y' + 2y = e^x

Example 2.22 ("J38b)").

y'' − 16y = e^{−4x}

Example 2.23 ("J38c)").

y'' + 8y' + 16y = e^{−4x}

• q(x) = Cx^j e^{αx} cos βx or q(x) = Cx^j e^{αx} sin βx

1. If α + iβ is not a root of (KY), then

y0 = (A1 + A2 x + · · · + A_{j+1} x^j) e^{αx} cos βx + (B1 + B2 x + · · · + B_{j+1} x^j) e^{αx} sin βx

2. If α + iβ is a root of (KY) with multiplicity k, then

y0 = x^k (A1 + A2 x + · · · + A_{j+1} x^j) e^{αx} cos βx + x^k (B1 + B2 x + · · · + B_{j+1} x^j) e^{αx} sin βx

Note that even if q(x) contains only a cosine or a sine, the ansatz must contain both! (Why?)

Example 2.24.

y'' + 4y = sin 3x

Example 2.25.

y'' + 4y = sin 2x

Tip: if the calculations do not lead to any meaningful result, then the ansatz is badly formed. Fix the ansatz and try again.
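For Example 2.20, y'' − 3y' + 2y = e^{5x}, α = 5 is not a root of r² − 3r + 2 = (r − 1)(r − 2), so the ansatz y0 = A e^{5x} works: substituting gives (25 − 15 + 2)A = 1, i.e. A = 1/12. This worked value is a sketch added here; the code checks it numerically.

```python
import math

A = 1.0 / 12.0  # from (25 - 15 + 2) A = 1

def y0(x):
    # candidate particular solution of y'' - 3y' + 2y = e^{5x}
    return A * math.exp(5 * x)

for x in (0.0, 0.3, 1.0):
    # derivatives of A e^{5x} computed by hand: y0' = 5 y0, y0'' = 25 y0
    lhs = 25 * y0(x) - 3 * 5 * y0(x) + 2 * y0(x)
    print(lhs, math.exp(5 * x))  # columns agree
```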
Principle of superposition. If the right hand side q(x) of the nonhomogeneous equation consists of several parts of the above form, then the particular solution is the sum of the particular solutions corresponding to each individual part. The same formally: if q(x) = q1(x) + · · · + qk(x), then one finds the particular solutions of

P(D) y_j = q_j(x),    j = 1, ..., k.

By linearity, the sum of these particular solutions is a particular solution of the original equation.

Example 2.26.

y''' − y'' − y' + y = 3x + (24x − 4)e^x
2.5
Linear equation of second order
Homogeneous equation
y ′′ + p(x)y ′ + q(x)y = 0
Using the notation of section 2.2 let us consider the basic system of solutions {y1 , y2 }. The
Wronskian is
y1 y2 = y1 y2′ − y2 y1′ 6= 0.
W = W (x) = ′
y1 y2′ So neither y1 nor y2 is the zero function.
Differentiating this expression and using the fact that y1 , y2 are solutions leads to the equation
W ′ + pW = 0,
and further (according to Section 1.4)
W = C exp(−
These preparations imply that
y2
y1
Z
p(x)dx).
′
R
C exp(− p(x)dx)
W
= 2 =
.
y1
y12
Z
R
C exp(− p(x)dx)
dx + C1
y12
Integrating both sides gives
y2
=
y1
or
y2 = y1
Z
R
C exp(− p(x)dx)
dx + C1
y12
(2.1)
So: if y1 is a solution of the homogeneous equation then the other one is obtained by (2.1).
Their linear combination
R
Z
exp(− p(x)dx)
y(x) = C1 y1 (x) + C2 y1 (x)
dx
y12
is the general solution of the homogeneous equation.
Example 2.27 (”J52a)” ). The function y(x) = x2 solves the equation
x2 y ′′ − 6xy ′ + 10y = 0.
Find the general solution of the equation.
Example 2.28. Solve
x^2 y′′ − 3xy′ + 4y = 0,
when we know that some power of x solves the equation.
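Formula (2.1) can be carried out mechanically. The sketch below (sympy assumed available) applies it to Example 2.27's equation in normalized form y′′ − (6/x)y′ + (10/x^2)y = 0, taking C = 1:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
C1 = sp.symbols('C1')

y1 = x**2
p = -6/x                                           # normalized coefficient of y'
integrand = sp.exp(-sp.integrate(p, x)) / y1**2    # integrand of (2.1) with C = 1
y2 = sp.simplify(y1 * (sp.integrate(integrand, x) + C1))
print(y2)   # x**2*(C1 + x**3/3): the second solution is x**5 up to constants

# substitute back into the original (unnormalized) equation
residual = x**2*y2.diff(x, 2) - 6*x*y2.diff(x) + 10*y2
print(sp.simplify(residual))   # 0
```

So the general solution of Example 2.27 is y = A x^2 + B x^5, matching the "power of x" hint in Example 2.28.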
2.5.1 Elimination of first order derivative
It is possible to eliminate the first order derivative from the equation
y ′′ + p(x)y ′ + q(x)y = 0
by the following procedure.
Substitute (u = u(x))
y = u exp(−(1/2)∫ p(x) dx),
which leads to the expressions
y′ = (u′ − (1/2)pu) exp(−(1/2)∫ p(x) dx)
and
y′′ = (u′′ − pu′ + ((1/4)p^2 − (1/2)p′)u) exp(−(1/2)∫ p(x) dx).
Inserting these expressions into the original equation we obtain
u′′ + (q − (1/2)p′ − (1/4)p^2)u = 0,
which no longer contains the first order derivative. If this can be solved for u, then y can be
obtained from the preceding formula.
Example 2.29.
y′′ + 2xy′ + x^2 y = 0
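For this example the reduced equation happens to have constant coefficients, which is what makes the method pay off. A sympy sketch (sympy assumed available) computes the coefficient q − p′/2 − p^2/4 and verifies the resulting solution:

```python
import sympy as sp

x = sp.symbols('x')
C1, C2 = sp.symbols('C1 C2')

# Example 2.29: p = 2x, q = x**2
p, q = 2*x, x**2
coef = sp.simplify(q - p.diff(x)/2 - p**2/4)
print(coef)   # -1, so the reduced equation is u'' - u = 0

# hence u = C1*e**x + C2*e**(-x), and y = u * exp(-x**2/2)
y = (C1*sp.exp(x) + C2*sp.exp(-x)) * sp.exp(-x**2/2)
residual = y.diff(x, 2) + 2*x*y.diff(x) + x**2*y
print(sp.simplify(residual))   # 0
```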
Example 2.30.
x^2 y′′ + xy′ − (1/4)y = 0
2.5.2 Nonhomogeneous equation: variation of constants
y ′′ + p(x)y ′ + q(x)y = r(x)
Let {y1 , y2 } be the basic system of solutions of the homogeneous equation
y ′′ + p(x)y ′ + q(x)y = 0
(found e.g. as above or given directly). The general solution of the homogeneous equation is
therefore
yh (x) = C1 y1 + C2 y2 .
We attempt to find the particular solution of the nonhomogeneous equation in the form
y0 (x) = C1 (x)y1 (x) + C2 (x)y2 (x),
where the constants have been varied. By direct calculation we get
y0′ = C1′ y1 + C1 y1′ + C2′ y2 + C2 y2′ = C1 y1′ + C2 y2′ ,
if
C1′ y1 + C2′ y2 = 0
(this assumption simplifies the next steps). Similarly we continue to obtain
y0′′ = C1′ y1′ + C1 y1′′ + C2′ y2′ + C2 y2′′ .
Substituting these expressions into the original equation (and noting that y1 , y2 are solutions)
we have
C1′ y1′ + C2′ y2′ = r(x).
These equations can be used to determine C1 (x), C2 (x).
In practice one forms the system of equations
C1′ y1 + C2′ y2 = 0
C1′ y1′ + C2′ y2′ = r(x).
This system is solved for C1′, C2′ (which is possible because the determinant W ≠ 0). Integration gives C1, C2, which in turn lead to the particular solution y0.
Example 2.31.
y′′ + y = 1/cos x
Example 2.32 (”J48b)”).
x^2 y′′ − 5xy′ + 8y = x^2
Tip: a power x^k solves the homogeneous equation.
2.5.3 Euler's equation
x^2 y′′ + axy′ + by = r(x)
Let us substitute
x = et ,
t = ln x,
x>0
and denote (with a slight abuse of notation)
y(t) = y(e^t), i.e. y(x) = y(ln x).
The derivatives are transformed here as:
dy/dx = (dy/dt)(dt/dx) = x^{-1} dy/dt = e^{-t} dy/dt
and
d^2y/dx^2 = d/dt (e^{-t} dy/dt) · dt/dx = e^{-2t} (d^2y/dt^2 − dy/dt).
Substituting these into the original equation we obtain a constant coefficient(!) equation
d^2y/dt^2 + (a − 1) dy/dt + by = r(e^t).
This is solved for y(t) and using it we get
y(x) = y(ln x)
In the case x < 0 we substitute x = −e^t, i.e. t = ln(−x), which gives analogously
d^2y/dt^2 + (a − 1) dy/dt + by = r(−e^t).
Note! If r(x) is even then these equations are the same.
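The substitution can be checked end to end on Example 2.35 below, where a = 2 and b = 0 give the transformed equation y′′(t) + y′(t) = 10e^t. A sympy sketch (sympy assumed available) solves it and substitutes t = ln x back:

```python
import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')

# Example 2.35 transformed (a = 2, b = 0): y'' + (a - 1) y' = 10*e**t
ode_t = sp.Eq(y(t).diff(t, 2) + y(t).diff(t), 10*sp.exp(t))
sol = sp.dsolve(ode_t)
print(sol.rhs)   # C1 + C2*exp(-t) + 5*exp(t), up to constant naming

# back-substitute t = ln x and check against x**2 y'' + 2x y' = 10x
x = sp.symbols('x', positive=True)
yx = sp.simplify(sol.rhs.subs(t, sp.log(x)))
residual = x**2*yx.diff(x, 2) + 2*x*yx.diff(x) - 10*x
print(sp.simplify(residual))   # 0
```

In x-variables the general solution reads y = C1 + C2/x + 5x.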
Example 2.33.
x^2 y′′ − 3xy′ + 5y = 5 ln |x|
Example 2.34.
x^2 y′′ + 7xy′ + 5y = x
Example 2.35 (”J40a)”).
x^2 y′′ + 2xy′ = 10x,
x>0
Example 2.36 (”J44”).
x^2 y′′ + axy′ = 0,    x > 0, a ∈ R
2.6 Systems of differential equations
The preceding differential equations can also be combined into systems of two or more equations.
A constant coefficient system involving the functions
xj = xj (t)
is best transformed into the form
P11 (D)x1 + · · · + P1n (D)xn = g1 (t)
P21 (D)x1 + · · · + P2n (D)xn = g2 (t)
...
Pn1(D)x1 + · · · + Pnn(D)xn = gn(t),
where each Pij is a known polynomial operator (see Chapter 2.3). In this system one may
perform the following operations (cf. the method of elimination for algebraic systems):
1. changing the order of appearance of equations
2. multiplying an equation by a constant c ≠ 0
3. applying a polynomial operator to an equation (so e.g. differentiation)
4. adding or subtracting equations
The goal is to obtain an equation (with constant coefficients) which involves only one unknown function xj (t). The solution xj (t) of that equation can then be used to determine other
unknown functions from other equations (either from the original equations or those obtained
during elimination).
In the case of few equations (at most 3), the unknown functions are sometimes denoted by
x(t), y(t), z(t).
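Elimination is meant to be carried out by hand, but the answer can be cross-checked symbolically. The sketch below (sympy assumed available; the system itself is made up for illustration) solves a first order constant coefficient system directly and verifies the result by substitution:

```python
import sympy as sp

t = sp.symbols('t')
x, y = sp.Function('x'), sp.Function('y')

# an illustrative system of the kind elimination handles:
# x' = x + y, y' = 4x + y  (characteristic roots 3 and -1)
eqs = [sp.Eq(x(t).diff(t), x(t) + y(t)),
       sp.Eq(y(t).diff(t), 4*x(t) + y(t))]
sol = sp.dsolve(eqs)
print(sol)

# verify by substituting the solution back into both equations
repl = {s.lhs: s.rhs for s in sol}
residuals = [sp.simplify((eq.lhs - eq.rhs).subs(repl).doit()) for eq in eqs]
print(residuals)   # [0, 0]
```

By elimination one would instead apply (D − 1) to the first equation and substitute y = x′ − x, obtaining (D − 1)^2 x = 4x, i.e. x′′ − 2x′ − 3x = 0, with the same characteristic roots.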
Example 2.37.
2x′(t) + y′(t) − 4x(t) − y(t) = e^t
x′(t) + 3x(t) + y(t) = 0
Example 2.38.
(D − 1)x + Dy = 2t + 1
(2D + 1)x + 2Dy = t
Example 2.39.
(D + 2)x + 3y = 0
3x + (D + 2)y = 2e^{2t}
Example 2.40.
dx/dt = 3x − 4y
dy/dt = 4x − 7y
Example 2.41.
x′ + x = y′
x′′ + 3x = y′ + y
2.7 Power series method
A sufficiently regular function can be represented by the power series (about the origin x = 0)
∑_{k=0}^∞ a_k x^k,
where the numbers ak are the coefficients of the series. These coefficients determine the function
uniquely.
This representation can be used to solve differential equations as follows: one sets up an
ansatz
y(x) = ∑_{k=0}^∞ a_k x^k,
and attempts to determine the coefficients (which are now unknown) in such a way that y(x) solves the given differential equation. Note that a power series can be differentiated (and integrated) term by term, i.e.
y′(x) = ∑_{k=0}^∞ k a_k x^{k−1}
etc.
Example 2.42.
y ′ = y + x2 ,
y(0) = 1
Example 2.43.
y ′′ + y = 0
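For Example 2.42, substituting the ansatz into y′ = y + x^2 and matching powers of x gives the recursion (k+1) a_{k+1} = a_k + [k = 2], with a_0 = y(0) = 1. A short stdlib sketch computes the coefficients exactly:

```python
from fractions import Fraction

# Example 2.42: y' = y + x**2, y(0) = 1
# matching the coefficient of x**k gives (k+1) a_{k+1} = a_k + (1 if k == 2 else 0)
a = [Fraction(1)]                     # a_0 = y(0) = 1
for k in range(8):
    a.append((a[k] + (k == 2)) / (k + 1))
print([str(q) for q in a[:6]])        # ['1', '1', '1/2', '1/2', '1/8', '1/40']
```

The exact solution is y = 3e^x − x^2 − 2x − 2, whose Taylor coefficients are a_k = 3/k! for k ≥ 3; the computed values 1/2, 1/8, 1/40 agree.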
2.8 Laplace transform
The Laplace transform of a function f : [0, ∞) → R is
L{f(t)}(s) = ∫_0^∞ e^{−st} f(t) dt = F(s),
provided that the integral exists (converges). The inverse transform is denoted by L^{−1}{F} = f(t) (its formula is cumbersome; we use tables below).
The Laplace transform can sometimes be used to solve differential equations with constant coefficients (either homogeneous or nonhomogeneous).
Example 2.44.
L{e^{at}}(s) = ∫_0^∞ e^{at−st} dt = [e^{(a−s)t}/(a − s)]_{t=0}^∞ = 1/(s − a),    s > a
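Transforms like this one can be reproduced symbolically; a sympy sketch (sympy assumed available, `noconds=True` suppresses the convergence conditions):

```python
import sympy as sp

t, s, a = sp.symbols('t s a', positive=True)

# Example 2.44 recomputed: the transform of e**(a*t) is 1/(s - a) for s > a
F = sp.laplace_transform(sp.exp(a*t), t, s, noconds=True)
print(F)
```

The same call can be used to rebuild any entry of Table 2.1 below.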
The function f(t) = 1/t does not have a Laplace transform, because the improper integral diverges in this case. It can be proved that if the function f(t) satisfies the condition
|f(t)| ≤ M e^{at},    t ≥ T
with some constants M, a, T, then f(t) has a Laplace transform for s > a.
Example 2.45. If f(t) = 1/√t, then
L{f(t)}(s) = ... = √(π/s).
Since integration is a linear operation, so is the Laplace transform:
L {af (t) + bg(t)} (s) = aL {f (t)} (s) + bL {g(t)} (s) = aF (s) + bG(s).
The inverse transform is also linear.
Example 2.46. Let
f(t) = cosh at = (e^{at} + e^{−at})/2.
Then Example 2.44 implies that
L{f(t)}(s) = (1/2)(1/(s − a) + 1/(s + a)) = s/(s^2 − a^2),    s > |a|.
Similarly
L{sinh at}(s) = a/(s^2 − a^2),    s > |a|.
Example 2.47. By the Euler formula
e^{iat} = cos at + i sin at.
Thus
L{e^{iat}}(s) = 1/(s − ia) = (s + ia)/(s^2 + a^2),    s > 0.
On the other hand
L{e^{iat}}(s) = L{cos at + i sin at}(s) = L{cos at}(s) + iL{sin at}(s).
Comparing real and imaginary parts we see that
L{cos at}(s) = s/(s^2 + a^2)  and  L{sin at}(s) = a/(s^2 + a^2),    s > 0.
Example 2.48.
L{t^n}(s) = n!/s^{n+1},    n = 0, 1, 2, . . . ,    s > 0.
f(t)        F(s)              domain of definition
e^{at}      1/(s − a)         s > a
1/√t        √(π/s)            s > 0
cosh at     s/(s^2 − a^2)     s > |a|
sinh at     a/(s^2 − a^2)     s > |a|
e^{iat}     1/(s − ia)        s > 0
cos at      s/(s^2 + a^2)     s > 0
sin at      a/(s^2 + a^2)     s > 0
t^n         n!/s^{n+1}        s > 0

Table 2.1: Some transforms
These known transforms are usually collected into a table which can be used to find the
inverse transforms too.
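Inverse transforms of rational functions reduce to table entries via partial fractions. A sympy sketch (sympy assumed available) of this workflow for 1/(s^3 + 4s^2 + 3s) = 1/(s(s + 1)(s + 3)):

```python
import sympy as sp

s, t = sp.symbols('s t', positive=True)
F = 1/(s**3 + 4*s**2 + 3*s)

# partial fractions turn F into a sum of Table 2.1 entries
print(sp.apart(F, s))   # 1/(3s) - 1/(2(s+1)) + 1/(6(s+3)), up to printing order

# each term inverts by the table: 1/3 - exp(-t)/2 + exp(-3*t)/6
f = sp.inverse_laplace_transform(F, s, t)
print(sp.simplify(f))
```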
Example 2.49 (”J21,22”). Use the table to find
1. L{2 + 2t + 5t^3}
2. L{sin(5t) + cos(5t)}
3. L^{−1}{1/(s^3 + 4s^2 + 3s)}
4. L^{−1}{2/((s + 1)(s^2 + 4))}.
Shifting theorem
L{e^{at} f(t)}(s) = F(s − a) = L{f(t)}(s − a)
Transform of derivatives
L{f′(t)}(s) = sL{f(t)}(s) − f(0)
L{f′′(t)}(s) = s^2 L{f(t)}(s) − sf(0) − f′(0)
etc.
L{f^{(n)}(t)}(s) = s^n L{f(t)}(s) − s^{n−1} f(0) − s^{n−2} f′(0) − · · · − f^{(n−1)}(0)
Example 2.50.
L{sin^2 at}(s) = 2a^2/(s(s^2 + 4a^2))
The formula for derivatives can be applied to the solution of a constant coefficient differential equation when it is accompanied by the initial conditions
y(0) = y0 , y ′ (0) = y1 , . . . , y (n−1) (0) = yn−1 .
Example 2.51.
y ′′ − 4y = 0,
y(0) = 1, y ′ (0) = 2
Applying Laplace transform to the left and right sides gives
L {y ′′ } − 4L {y} = 0.
Formula for derivatives allows us to write this as
s2 L {y} − sy(0) − y ′ (0) − 4L {y} = 0,
which is further simplified by the initial conditions to
s2 L {y} − s − 2 − 4L {y} = 0.
So
L{y} = (s + 2)/(s^2 − 4) = 1/(s − 2).
Table 2.1 can now be used to find the inverse transform of the right hand side. It follows that
y(t) = e^{2t}.
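The algebra of Example 2.51 can be replayed symbolically (sympy assumed available; Y stands for the unknown transform L{y}):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
Y = sp.symbols('Y')   # stands for L{y}(s)

# transformed Example 2.51: s**2*Y - s*y(0) - y'(0) - 4*Y = 0 with y(0)=1, y'(0)=2
Ysol = sp.solve(sp.Eq(s**2*Y - s - 2 - 4*Y, 0), Y)[0]
print(sp.simplify(Ysol))       # 1/(s - 2)

# inverting recovers the solution of the initial value problem
y = sp.inverse_laplace_transform(sp.simplify(Ysol), s, t)
print(y)                       # exp(2*t)
```

The same pattern (transform, solve the algebraic equation, invert) handles Examples 2.52 and 2.53.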
Example 2.52 (”J23a)”).
y ′ + y = sin(2t),
y(0) = −1
Example 2.53 (”J23b)”).
y′′ + 4y′ + 3y = 1,    y(0) = y′(0) = 0