
MATH 230 classnotes
4/29/08
Peter Dine-Jergens
Last subject: (warning: heavy notes day)
dx/dt = ax + by
dy/dt = cx + dy
in the beginning we looked for straight line solutions
look back at how this worked: what we did and why it works
why were straight line solutions important?
we know how they behave
clear connection between (x, y) and (dx/dt, dy/dt)
able to write down solution formula that stays on that line
reinterpretation of this into matrix-vector context:
\begin{pmatrix} dx/dt \\ dy/dt \end{pmatrix} = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix}
find solutions by looking for eigenvectors of the matrix: find the E-values λ of
\begin{pmatrix} a & b \\ c & d \end{pmatrix}
and the corresponding E-vector \begin{pmatrix} x \\ y \end{pmatrix} such that
\begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = λ \begin{pmatrix} x \\ y \end{pmatrix}
idea: if d/dt \begin{pmatrix} x \\ y \end{pmatrix} = A \begin{pmatrix} x \\ y \end{pmatrix} and if A \begin{pmatrix} x \\ y \end{pmatrix} = λ \begin{pmatrix} x \\ y \end{pmatrix}, then the d/dt operation must result in a multiplication by λ
so by transitivity (a = b and b = c imply a = c), finding the derivative is the same as multiplying by λ, which leads us back to e^{λt}
if [x; y] is an E-vector for E-value λ, then so is every scalar multiple of [x; y]: once you have an eigenvector, a scalar multiple is also an eigenvector, such as C [x; y] or e^{λt} [x; y] or C e^{λt} [x; y], and d/dt of it is the same as multiplying by λ, which is why this works
\begin{pmatrix} x(t) \\ y(t) \end{pmatrix} = C e^{λt} \begin{pmatrix} x \\ y \end{pmatrix} is a solution (built from the eigenvector) -- so find an eigenvector, then say this is a solution because we have established a solid chain of equalities
the point is that (1) differentiating [x(t); y(t)] causes multiplication by λ because of the e^{λt} term, and (2) the A [x; y] operation causes a multiplication by λ because [x(t); y(t)] is an E-vector of A for eigenvalue λ for all t.
this is why the straight line solutions were important: they showed us the way
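The chain of equalities can be checked numerically; a sketch, with a matrix and eigenpair of my own choosing (not from the notes):

```python
# Verify that x(t) = C e^{lambda t} v solves dx/dt = A x: the analytic
# derivative lambda * x(t) should equal A @ x(t) at every t.
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 0.0]])
lam = 3.0                      # eigenvalue of A
v = np.array([1.0, 1.0])       # eigenvector: A v = 3 v
C = 2.5                        # arbitrary scalar

for t in [0.0, 0.5, 1.0, 2.0]:
    x_t = C * np.exp(lam * t) * v       # the proposed solution
    dxdt = lam * x_t                    # derivative of C e^{lam t} v
    assert np.allclose(A @ x_t, dxdt)   # the chain of equalities holds
```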
what we really want is a general solution so we can write down a solution for an arbitrary initial condition
we need two λ so we can make a linear combination
to find a general solution we needed two straight line solutions (translating into two eigenvalues, each with its own eigenvector); both are solutions for the same reasons
\begin{pmatrix} x_1(t) \\ y_1(t) \end{pmatrix} = C e^{λ_1 t} \begin{pmatrix} x_1 \\ y_1 \end{pmatrix} and \begin{pmatrix} x_2(t) \\ y_2(t) \end{pmatrix} = D e^{λ_2 t} \begin{pmatrix} x_2 \\ y_2 \end{pmatrix}
how do we get a general solution from these parts? add them
we must ensure that when we add them then the new creature is also a solution
how? simple addition: when differentiating a sum, differentiate the pieces and add. matrix mappings are linear, and so is differentiation.
we need to know that \begin{pmatrix} x_1(t) \\ y_1(t) \end{pmatrix} + \begin{pmatrix} x_2(t) \\ y_2(t) \end{pmatrix} solves the system, and the eigenvectors must be linearly independent so that adding them gives a solution for every possible initial condition, not just those on the lines.
If we have two different eigenvectors with two different eigenvalues, they must be linearly independent. In that case, for a fixed t, we can reach any initial condition. Our general solution is:
\begin{pmatrix} x_1(t) \\ y_1(t) \end{pmatrix} = C e^{λ_1 t} \begin{pmatrix} x_1 \\ y_1 \end{pmatrix}
    +
\begin{pmatrix} x_2(t) \\ y_2(t) \end{pmatrix} = D e^{λ_2 t} \begin{pmatrix} x_2 \\ y_2 \end{pmatrix}
(note: I'm not really clear what he meant by writing it this way)
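How the two constants get pinned down by an initial condition can be sketched as a 2x2 linear solve (all numbers here are my own example, not from the notes):

```python
# Given x(0) = x0, solve C * v1 + D * v2 = x0 for C and D; the combination
# C e^{l1 t} v1 + D e^{l2 t} v2 then hits that initial condition.
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 0.0]])
l1, l2 = 3.0, -2.0
v1 = np.array([1.0, 1.0])       # A v1 = 3 v1
v2 = np.array([2.0, -3.0])      # A v2 = -2 v2

x0 = np.array([5.0, -1.0])      # arbitrary initial condition

# v1, v2 are linearly independent, so this 2x2 system always has a solution
V = np.column_stack([v1, v2])
C, D = np.linalg.solve(V, x0)

def x(t):
    return C * np.exp(l1 * t) * v1 + D * np.exp(l2 * t) * v2

assert np.allclose(x(0.0), x0)  # general solution passes through x0
```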
Now that we have a general solution, we turn to what happens if we have no straight line solution: that is, if there is no real λ
so where do things become different?
how did we find λ? finding a λ necessitates that
det \begin{pmatrix} a - λ & b \\ c & d - λ \end{pmatrix} = 0, so (a - λ)(d - λ) - bc = 0
which is a quadratic in λ, which means that we may not have a real λ and hence no straight line solution.
There are always 2 solutions if we allow complex lambdas (which opens up a whole new universe)
"b ± b 2 " 4ac
!=
(not the same abc as the abcd in the matrix above) so if the discriminant is positive then we get two
2a
real ! ; if zero then one solution; if negative then two complex ! .
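The three cases can be sketched as a small helper (my own code, not from the notes, using the fact that the characteristic quadratic expands to λ² − (a+d)λ + (ad − bc) = 0):

```python
# Classify the eigenvalues of a 2x2 matrix [[a, b], [c, d]] by the
# discriminant of its characteristic polynomial.
def eigenvalue_kind(a, b, c, d):
    trace = a + d
    det = a * d - b * c
    disc = trace ** 2 - 4 * det      # discriminant of the quadratic in lambda
    if disc > 0:
        return "two real"
    elif disc == 0:
        return "one repeated real"
    else:
        return "two complex"

assert eigenvalue_kind(1, 2, 3, 0) == "two real"        # disc = 1 + 24 > 0
assert eigenvalue_kind(-2, -3, 3, -2) == "two complex"  # the notes' example: disc = 16 - 52 < 0
```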
so λ = a + ib (this a and b are neither the previous a and b, nor related to any previous a and b)
we are now in the complex “universe” and have to deal with a potentially imaginary part
claim: everything works the same
writing down λ gets us everything just as before, whether λ has an imaginary part or not
so what we want is d/dt \begin{pmatrix} x \\ y \end{pmatrix} = A \begin{pmatrix} x \\ y \end{pmatrix}, and we have found a (possibly) complex λ = a + ib so that A \begin{pmatrix} x \\ y \end{pmatrix} = λ \begin{pmatrix} x \\ y \end{pmatrix} for some (possibly) complex eigenvector \begin{pmatrix} x \\ y \end{pmatrix}, ie x and y in the eigenvector may be complex
for this to be true, the eigenvector must have an imaginary part whenever λ has an imaginary part
now what?
we have a complex eigenvalue and eigenvector. everything works the same, even with complex numbers
so \begin{pmatrix} x(t) \\ y(t) \end{pmatrix} = C e^{λt} \begin{pmatrix} x \\ y \end{pmatrix} will still be a solution to the system
remember, \begin{pmatrix} x \\ y \end{pmatrix} is an eigenvector for the eigenvalue λ
Earlier we needed two solutions to cover every initial condition
however, in this case we only need one solution to cover all the initial conditions: we don't need the second one (and working with both conjugates is longer) -- there is another way
here’s what we’re going to do:
the vector \begin{pmatrix} x \\ y \end{pmatrix} can be written as \begin{pmatrix} x_r \\ y_r \end{pmatrix} + i \begin{pmatrix} x_i \\ y_i \end{pmatrix} (the imaginary term cannot be the zero vector)
(all the numbers in A are real, as are the x/y coordinates: we must have an imaginary part in both the eigenvector and λ)
here's the bonus: the real and imaginary components are linearly independent, and so we have access to a general solution through one λ
" x%
" xr %
" xi %
e!t $ ' = e!t $ ' + ie!t $ '
# y&
# yr &
# yi &
we will see that the real piece and the imaginary piece of e^{λt} \begin{pmatrix} x \\ y \end{pmatrix} are each solutions, and linearly independent
how do we prove this?
we want to show that these are solutions
example:
dx/dt = -2x - 3y
dy/dt = 3x - 2y
\begin{pmatrix} dx/dt \\ dy/dt \end{pmatrix} = \begin{pmatrix} -2 & -3 \\ 3 & -2 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix}
we are trying to find eigenvalue and eigenvector, so find
det \begin{pmatrix} -2 - λ & -3 \\ 3 & -2 - λ \end{pmatrix} = 0
(-2 - λ)(-2 - λ) - (-3)(3) = 0
λ^2 + 4λ + 13 = 0
applying the quadratic formula,
λ = \frac{-4 ± \sqrt{4^2 - 4(1)(13)}}{2(1)} = \frac{-4 ± \sqrt{16 - 52}}{2} = \frac{-4 ± \sqrt{-36}}{2} = -2 ± 3i
so we have two λ
using λ = -2 + 3i, find an eigenvector (note: only using the plus 3i, not both at once here)
" !2 !3% " x %
" x%
=
(!2
+
3i)
$# 3 !2 '& $# y '&
$# y '&
so
!2x ! 3y = (!2 + 3i)x
3x ! 2y = (!2 + 3i)y
simplifying gives
y = !ix
x = iy
are they compatible? if y = -ix then it is true that x = iy (the sign change is due to the i^2), so the two equations agree
so now
\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} i \\ 1 \end{pmatrix}
\begin{pmatrix} x(t) \\ y(t) \end{pmatrix} = e^{(-2 + 3i)t} \begin{pmatrix} i \\ 1 \end{pmatrix}
but we can't deal with complex numbers in the real world, so now we write this as a real piece plus i times another real piece; then we have two linearly independent solutions and we are done
now the task is to rewrite this as V(t) = Vr(t) + iVi(t) (V = vector shorthand)
how do we do this?
to do this, we can use the famous Euler's formula, which says that e^{α + iβ} = e^{α} e^{iβ} = e^{α}(\cos(β) + i \sin(β))
[implication: e^{iπ} + 1 = 0 : world's five most famous numbers in one equation and nothing else!]
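Euler's formula is easy to sanity-check with Python's cmath module (the sample α and β values are my own):

```python
# Verify e^{alpha + i beta} = e^{alpha} (cos(beta) + i sin(beta)).
import cmath
import math

alpha, beta = -2.0, 3.0   # arbitrary sample values (my choice)

lhs = cmath.exp(alpha + 1j * beta)
rhs = math.exp(alpha) * (math.cos(beta) + 1j * math.sin(beta))
assert cmath.isclose(lhs, rhs)

# the famous special case: e^{i pi} + 1 = 0
assert cmath.isclose(cmath.exp(1j * math.pi) + 1, 0, abs_tol=1e-12)
```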
so how do we use this identity to rewrite our formula in some useful way?
" i%
e(!2 + 3i )t $ '
# 1&
" i%
= e!2t + 3ti $ '
# 1&
" i%
= e!2t (cos(3t) + i sin(3t)) $ '
# 1&
" (e!2t (cos(3t) + i sin(3t)))(i)%
= $ !2t
# (e (cos(3t) + i sin(3t)))(1)'&
" e!2t cos(3t)i + e!2t i 2 sin(3t)%
= $ !2t
# e cos(3t) + e!2t i sin(3t) '&
" !e!2t sin(3t) + ie!2t cos(3t)%
= $ !2t
# e cos(3t) + ie!2t sin(3t) '&
" !e!2t sin(3t)% " e!2t cos(3t)%
= $ !2t
+i
# e cos(3t) '& $# e!2t sin(3t) '&
=we end up with a realvector + i*realvector
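As a final sanity check (a sketch, not part of the notes), both real pieces can be verified to solve the example system by comparing a numerical derivative against A applied to each piece:

```python
# Verify that the real piece and the imaginary piece each solve
# dx/dt = -2x - 3y, dy/dt = 3x - 2y.
import numpy as np

A = np.array([[-2.0, -3.0],
              [ 3.0, -2.0]])

def real_piece(t):
    return np.array([-np.exp(-2 * t) * np.sin(3 * t),
                      np.exp(-2 * t) * np.cos(3 * t)])

def imag_piece(t):
    return np.array([np.exp(-2 * t) * np.cos(3 * t),
                     np.exp(-2 * t) * np.sin(3 * t)])

def derivative(piece, t, h=1e-6):
    # centered finite difference, accurate enough to check the ODE
    return (piece(t + h) - piece(t - h)) / (2 * h)

for t in [0.0, 0.3, 1.0]:
    assert np.allclose(derivative(real_piece, t), A @ real_piece(t), atol=1e-4)
    assert np.allclose(derivative(imag_piece, t), A @ imag_piece(t), atol=1e-4)
```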
so complex lambda leads to complex vector leads to two independent pieces for the general solution
egad!