Lecture 16
Encodings, coordinates, and change of basis
Math 40, Introduction to Linear Algebra
Wednesday, February 22, 2012

Questions

• When is A diagonalizable?
• If A is diagonalizable, how do we find matrices P and D such that P⁻¹AP = D?
Diagonalizability

Given matrix A, suppose A has eigenvalues λ1, λ2, . . . , λn with corresponding eigenvectors x⃗1, x⃗2, . . . , x⃗n. Then

    A x⃗1 = λ1 x⃗1
    A x⃗2 = λ2 x⃗2
        ⋮
    A x⃗n = λn x⃗n

or, collecting the eigenvectors as the columns of a single matrix,

    A [ x⃗1  x⃗2  · · ·  x⃗n ] = [ λ1x⃗1  λ2x⃗2  · · ·  λnx⃗n ] = [ x⃗1  x⃗2  · · ·  x⃗n ] diag(λ1, λ2, . . . , λn)

    =⇒ AP = PD,

where P is the matrix with the eigenvectors as its columns and D is the diagonal matrix of eigenvalues.

When is P invertible?  =⇒ when the eigenvectors of A are linearly independent.

Characterization of diagonalizability

Ideally, given a matrix, we would like it to be similar to a diagonal matrix! (Then the eigenvalues would be precisely the diagonal entries of the diagonal matrix to which it is similar.) Down the road, you will learn that a representative matrix of each class is called the Jordan canonical form.

Definition. A matrix A is diagonalizable if it is similar to a diagonal matrix D, i.e.,

    P⁻¹AP = D

for some invertible matrix P and some diagonal matrix D.
Example. The matrix

    A = [ 4 −2 ]
        [ 1  1 ]

is diagonalizable since A ∼ D where

    D = [ 2 0 ]
        [ 0 3 ].

Check that

    [ 4 −2 ] [ 1 2 ]   [ 1 2 ] [ 2 0 ]
    [ 1  1 ] [ 1 1 ] = [ 1 1 ] [ 0 3 ].

Theorem. Let A be an n × n matrix. Then

    A is diagonalizable  ⇐⇒  A has n linearly independent eigenvectors.
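One way to sanity-check a diagonalization like this is numerically. Below is a minimal NumPy sketch (assuming NumPy is available); it takes P to be the matrix whose columns are the eigenvectors (1, 1)ᵀ and (2, 1)ᵀ used in the check above.

    import numpy as np

    # The 2x2 example: A is similar to D via P, i.e., AP = PD.
    A = np.array([[4.0, -2.0],
                  [1.0,  1.0]])
    P = np.array([[1.0, 2.0],
                  [1.0, 1.0]])   # columns are eigenvectors for eigenvalues 2 and 3
    D = np.diag([2.0, 3.0])

    print(np.allclose(A @ P, P @ D))                 # True: AP = PD
    print(np.allclose(np.linalg.inv(P) @ A @ P, D))  # True: P^{-1} A P = D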
If we know that A is diagonalizable, then P⁻¹AP = D (with P invertible, since det P ≠ 0), and

• columns of P are n linearly independent eigenvectors of A,
• and diagonal entries of D are eigenvalues of A.

Example. We claim that the matrix

    A = [ 0 1 ]
        [ 0 0 ]

is not diagonalizable. Suppose, by way of contradiction (BWOC), that A is diagonalizable, i.e., there exist an invertible matrix P and a diagonal matrix D such that P⁻¹AP = D. Then A = PDP⁻¹, and we have

    A² = (PDP⁻¹)² = PD²P⁻¹.

But A² = 0, so D² = P⁻¹A²P = 0; since D is diagonal, this forces D = 0, and then A = PDP⁻¹ = 0, a contradiction.
Example of diagonalization

Consider

    A = [  3  0  0 ]
        [ −2  4  2 ]
        [ −2  1  5 ].

• Find eigenvalues of A.

    0 = det(A − λI) = (3 − λ)[(4 − λ)(5 − λ) − 2]
                    = (3 − λ)(λ² − 9λ + 18)
                    = (3 − λ)(λ − 3)(λ − 6)

  Therefore, λ1 = 3 (alg. mult. 2) and λ2 = 6.

• Find eigenspaces of A.

    Solve (A − 3I)x⃗ = 0⃗.  =⇒  E3 = span{ (1, 2, 0)ᵀ, (1, 0, 1)ᵀ }
    Solve (A − 6I)x⃗ = 0⃗.  =⇒  E6 = span{ (0, 1, 1)ᵀ }

• Construct matrices P and D.

    D = [ 3 0 0 ]        P = [ 1 1 0 ]
        [ 0 3 0 ],           [ 2 0 1 ].
        [ 0 0 6 ]            [ 0 1 1 ]

Then AP = PD and P is invertible, so P⁻¹AP = D and A is diagonalizable.
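The same computation can be reproduced exactly in SymPy; the following is a minimal sketch (the eigenspace bases SymPy returns may differ from the hand-picked vectors above, but they span the same spaces):

    from sympy import Matrix, eye

    A = Matrix([[ 3, 0, 0],
                [-2, 4, 2],
                [-2, 1, 5]])

    print(A.eigenvals())                 # {3: 2, 6: 1} -- eigenvalue: algebraic multiplicity
    print((A - 3*eye(3)).nullspace())    # two vectors spanning E_3
    print((A - 6*eye(3)).nullspace())    # one vector spanning E_6

    P = Matrix([[1, 1, 0],
                [2, 0, 1],
                [0, 1, 1]])              # columns: the eigenvectors chosen above
    D = Matrix.diag(3, 3, 6)
    print(P.inv() * A * P == D)          # True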
Linearly independent eigenvectors

Theorem. If A is an n×n matrix with distinct eigenvalues λ1, . . . , λk, then the collection of all basis vectors for the eigenspaces Eλ1, Eλ2, . . . , Eλk is linearly independent.
Proof is based on the fact that distinct eigenvalues have eigenvectors that are linearly independent.
Corollary. If A is an n × n matrix with n distinct eigenvalues, then A is diagonalizable.
Remark. In our previous example, the 3×3 matrix A has two distinct eigenvalues: 3 (alg. mult. 2) and 6 (alg. mult. 1). We know that

    (basis for E3) ∪ (basis for E6)

is a linearly independent set; the basis for E3 can have at most two vectors, and the basis for E6 can only have one vector, since for any eigenvalue,

    algebraic mult. ≥ geometric mult.

Thus, in order for A to be diag'able, we needed the eigenspace E3 to be two-dimensional (its basis has two vectors), i.e.,

    geom. mult. of eigenvalue 3 = dim(E3) = 2 = alg. mult. of eigenvalue 3.

Full characterization of diagonalizability
Theorem.
Let Let
A beAanben an
× nn matrix
with distinct
eigenvalues
Theorem.
×nnmatrix
matrix
withdistinct
distinct
eigenvalues
Theorem.
Let
A
be
an
n
×
with
eigenvalues
Theorem.
Let
A
an following
n ×are
n matrix
with distinct eigenvalues
λ1 , λ
λ12, .λ.2.,,.λ.k.., λ
Then
thebefollowing
equivalent:
the
are equivalent:
k . Then
λλ1,, λλ2,, .. .. .. ,, λ
λkk.. Then
Then the
the following
following are
are equivalent:
equivalent:
1 2
(1)(1)
A isAdiagonalizable.
is diagonalizable.
(1)
(1) A
A is
is diagonalizable.
diagonalizable.
(2) Algebraic and geometric multiplicities are equal for each eigen(2) Algebraic
Algebraicand
and geometricmultiplicities
multiplicities areequal
equal for
each eigen(2)
(2)
Algebraic
geometric multiplicities are
are equal for
for each
each eigeneigenvalue
of A. and geometric
value of A.
A.
value
value of
of A.
(3) Union of bases of eigenspaces of A has n vectors.
Union of
of bases
bases ofofeigenspaces
eigenspacesofofA
Ahas
hasn
nvectors.
vectors.
(3)
(3) Union
Union of
bases of
eigenspaces of
A has
n vectors.
Invertibility and diagonalizability

Which of the following statements are true?

• If A is diag'able, then A is invertible.
• If A is invertible, then A is diag'able.
• If A is not diag'able and not invertible, then A is the matrix of all zeros (A = 0).

All are false!!

Venn diagram: invertible matrices and diagonalizable matrices form overlapping sets, with neither contained in the other and with matrices lying outside both. The matrices A, B, C, D below furnish an example in each region:
    A = [ 2 1 0 ]      B = [ 2 −1  2 ]      C = [ 1 4 6 ]      D = [ 0 0 0 ]
        [ 0 2 0 ]          [ 0  0 −2 ]          [ 0 2 5 ]          [ 0 1 1 ]
        [ 0 0 3 ]          [ 0  0  3 ]          [ 0 0 3 ]          [ 0 0 1 ]

Here A is invertible but not diagonalizable, B is diagonalizable but not invertible, C is both invertible and diagonalizable, and D is neither, yet D ≠ 0.
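A minimal SymPy sketch that sorts these four matrices into the regions of the Venn diagram by testing both properties directly:

    from sympy import Matrix

    examples = {
        "A": Matrix([[2, 1, 0], [0, 2, 0], [0, 0, 3]]),
        "B": Matrix([[2, -1, 2], [0, 0, -2], [0, 0, 3]]),
        "C": Matrix([[1, 4, 6], [0, 2, 5], [0, 0, 3]]),
        "D": Matrix([[0, 0, 0], [0, 1, 1], [0, 0, 1]]),
    }

    for name, M in examples.items():
        invertible = M.det() != 0
        diagonalizable = M.is_diagonalizable()
        print(name, "invertible:", invertible, "diagonalizable:", diagonalizable)

    # Expected: A invertible only, B diagonalizable only, C both, D neither (and D != 0).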
Differentiation as a linear transformation
P3 =⇒ set of all polynomials of degree at most 3 with real coefficients
P2 =⇒ set of all polynomials of degree at most 2 with real coefficients
(Both P3 and P2 are vector spaces.)
Consider the linear transformation T : P3 → P2 of differentiation:

    T(p) = p′   for polynomials p ∈ P3.

In other words, T(ax³ + bx² + cx + d) = 3ax² + 2bx + c. (Verify that this is a linear transformation.)
What is a basis for P3?   B = {x³, x², x, 1}
What is a basis for P2?   D = {x², x, 1}
Important basis results

Theorem. Given a basis B = {v⃗1, . . . , v⃗k} of subspace S, there is a unique way to express any v⃗ ∈ S as a linear combination of basis vectors v⃗1, . . . , v⃗k.

Proof sketch on Friday.

Theorem. The vectors {v⃗1, . . . , v⃗n} form a basis of Rⁿ if and only if rank(A) = n, where A is the matrix with columns v⃗1, . . . , v⃗n.

Proof sketch (⇒).

    rank(A) = n  ⇒  RREF of A is I  ⇒  A x⃗ = 0⃗ has only trivial sol'n x⃗ = 0⃗  ⇒  cols of A are linearly indep.
    rank(A) = n  ⇒  RREF of A is I  ⇒  A x⃗ = b⃗ is consistent for any b⃗ ∈ Rⁿ  ⇒  cols of A span Rⁿ

and together these give that the cols of A are a basis for Rⁿ. Same ideas can be used to prove the converse direction.

Fundamental Theorem of Invertible Matrices (extended)

Theorem. Let A be an n × n matrix. The following statements are equivalent:

• A is invertible.
• A x⃗ = b⃗ has a unique solution for all b⃗ ∈ Rⁿ.
• A x⃗ = 0⃗ has only the trivial solution x⃗ = 0⃗.
• The RREF of A is I.
• rank(A) = n.
• Columns of A form a basis for Rⁿ.
• A is the product of elementary matrices.

Encodings and coordinates

Recall the linear transformation T : P3 → P2 of differentiation, T(p) = p′ for polynomials p ∈ P3, with bases B = {x³, x², x, 1} and D = {x², x, 1}. Then

    p ∈ P3  =⇒  p = ax³ + bx² + cx + d, where a, b, c, d are scalars
            =⇒  [ p ]B = (a, b, c, d)ᵀ ∈ R⁴,

the encoding of p with respect to basis B (or the coordinates of p with respect to B). Likewise,

    q ∈ P2  =⇒  q = ax² + bx + c, where a, b, c are scalars
            =⇒  [ q ]D = (a, b, c)ᵀ ∈ R³,

the encoding of q with respect to basis D.
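The rank criterion above is also how one would test a proposed basis of Rⁿ in code. A minimal NumPy sketch, using as an (assumed) example the three eigenvectors from the earlier 3×3 diagonalization:

    import numpy as np

    # Do these three vectors form a basis of R^3?
    v1 = np.array([1.0, 2.0, 0.0])
    v2 = np.array([1.0, 0.0, 1.0])
    v3 = np.array([0.0, 1.0, 1.0])

    A = np.column_stack([v1, v2, v3])          # matrix with the vectors as columns
    print(np.linalg.matrix_rank(A) == 3)       # True  =>  {v1, v2, v3} is a basis of R^3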
Matrix of the linear transformation

Now that we have a way to encode the polynomials, we consider encoding our linear transformation T using a matrix.

    T : P3 → P2  =⇒  3 × 4 matrix of T,   with B = {x³, x², x, 1} and D = {x², x, 1}.

To find the matrix of T, we ask ourselves what the linear trans. T does to the basis vectors:

    T(x³) = 3x²      T(x²) = 2x      T(x) = 1      T(1) = 0

    [ 3x² ]D = (3, 0, 0)ᵀ    [ 2x ]D = (0, 2, 0)ᵀ    [ 1 ]D = (0, 0, 1)ᵀ    [ 0 ]D = (0, 0, 0)ᵀ

These encodings are the columns of the matrix of T with respect to bases B and D:

               [ 3 0 0 0 ]
    [ T ]D←B = [ 0 2 0 0 ]
               [ 0 0 1 0 ]
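The construction above translates directly into code: encode T of each B-basis vector with respect to D and take those encodings as columns. A minimal NumPy sketch (polynomials appear only through their coordinate vectors):

    import numpy as np

    # Encodings [T(b_j)]_D for the basis B = {x^3, x^2, x, 1}, with D = {x^2, x, 1}:
    T_x3 = np.array([3.0, 0.0, 0.0])   # T(x^3) = 3x^2
    T_x2 = np.array([0.0, 2.0, 0.0])   # T(x^2) = 2x
    T_x  = np.array([0.0, 0.0, 1.0])   # T(x)   = 1
    T_1  = np.array([0.0, 0.0, 0.0])   # T(1)   = 0

    T_DB = np.column_stack([T_x3, T_x2, T_x, T_1])   # the 3x4 matrix [T]_{D<-B}
    print(T_DB)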
Using the matrix of T

How do we find the image of a polynomial p under T? How do we use [ T ]D←B to find the derivative of p?

Theorem.   [ T(v) ]D = [ T ]D←B [ v ]B

Example: Consider p(x) = 5x³ − 3x + 2. We want to find T(p(x)). We have

    [ 3 0 0 0 ] [  5 ]   [ 15 ]
    [ 0 2 0 0 ] [  0 ] = [  0 ]
    [ 0 0 1 0 ] [ −3 ]   [ −3 ]
                [  2 ]

Decoding back to a polynomial,

    p′(x) = T(p(x)) = 15x² − 3.
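A minimal NumPy sketch of this computation, reusing the matrix from the previous block and cross-checking against np.polyder, which happens to use the same highest-degree-first coefficient convention as the basis B:

    import numpy as np

    T_DB = np.array([[3.0, 0.0, 0.0, 0.0],
                     [0.0, 2.0, 0.0, 0.0],
                     [0.0, 0.0, 1.0, 0.0]])      # [T]_{D<-B} from above

    p_B = np.array([5.0, 0.0, -3.0, 2.0])        # [p]_B for p(x) = 5x^3 - 3x + 2
    q_D = T_DB @ p_B                             # [T(p)]_D
    print(q_D)                                   # [15.  0. -3.]  =>  p'(x) = 15x^2 - 3

    print(np.allclose(q_D, np.polyder(p_B)))     # True: agrees with direct differentiation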
Diagram

            T
    P3 ──────────> P2
     │              ↑
     │ encoding     │ decoding
     │ using        │ using
     │ basis B      │ basis D
     ↓              │
    R⁴ ──────────> R³
        [ T ]D←B
Change of basis

In our last example, we chose the basis B = {x³, x², x, 1} for the vector space P3 (the set of all polynomials of degree at most 3 with real coefficients). What if we decided we wanted to use a different basis for this space? For example,

    C = {(x + 1)³, x², x + 1, x − 1}.

With these two bases,

    [ x³ ]B = (1, 0, 0, 0)ᵀ   and   [ x³ ]C = (1, −3, −2, −1)ᵀ.

Is it possible to find [ x³ ]C from [ x³ ]B using matrix multiplication? Yes!!
Change of basis

What is the relationship between encodings of polynomials with respect to B = {x³, x², x, 1} and those with respect to C = {(x + 1)³, x², x + 1, x − 1}? For example, how are

    [ x³ ]B = (1, 0, 0, 0)ᵀ   and   [ x³ ]C = (1, −3, −2, −1)ᵀ

related? The key is the identity transformation I : P3 → P3, viewed with basis B on the domain and basis C on the codomain.
Computing change-of-basis matrix

To find the matrix of I, we ask ourselves what the linear trans. I does to the basis vectors of B = {x³, x², x, 1}, encoding each image with respect to C = {(x + 1)³, x², x + 1, x − 1}:

    I(x³) = x³      I(x²) = x²      I(x) = x      I(1) = 1

    [ x³ ]C = (1, −3, −2, −1)ᵀ    [ x² ]C = (0, 1, 0, 0)ᵀ    [ x ]C = (0, 0, 1/2, 1/2)ᵀ    [ 1 ]C = (0, 0, 1/2, −1/2)ᵀ

These encodings are the columns of the change-of-basis matrix from B to C:

               [  1  0   0    0  ]
    [ I ]C←B = [ −3  1   0    0  ]
               [ −2  0  1/2  1/2 ]
               [ −1  0  1/2 −1/2 ]
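One way to compute [ I ]C←B in code: write each C-basis polynomial in the monomial basis B and stack those coordinate vectors as the columns of a matrix M (so M = [ I ]B←C); then [ I ]C←B = M⁻¹. A minimal SymPy sketch:

    from sympy import Matrix

    # Coordinates of the C-basis polynomials with respect to B = {x^3, x^2, x, 1}:
    #   (x+1)^3 = x^3 + 3x^2 + 3x + 1,   x^2,   x + 1,   x - 1
    M = Matrix([[1, 0, 0,  0],
                [3, 1, 0,  0],
                [3, 0, 1,  1],
                [1, 0, 1, -1]])          # this is [I]_{B<-C}

    I_CB = M.inv()                        # change-of-basis matrix [I]_{C<-B}
    print(I_CB)

    x3_B = Matrix([1, 0, 0, 0])           # [x^3]_B
    print(I_CB * x3_B)                    # (1, -3, -2, -1) = [x^3]_C, as above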