Eigenvalues and Eigenvectors
Definition 0.1. Let A ∈ Rn×n be an n × n (real) matrix. A number λ ∈ R is a (real)
eigenvalue of A if there exists a nonzero vector ~v ∈ Rn such that A~v = λ~v . The vector ~v
is called an eigenvector of A for λ.
Example 0.2. Let
A = \begin{pmatrix} 3 & 2 \\ -1 & 0 \end{pmatrix},
and ~v = (10, −5)T ∈ R2 . We compute
A~v = 2~v .
Thus 2 is an eigenvalue of A and ~v = (10, −5)T is an eigenvector of A belonging to 2.
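This computation is easy to check numerically. A minimal sketch, assuming NumPy is available (the notes themselves use no software):

```python
import numpy as np

A = np.array([[3.0, 2.0],
              [-1.0, 0.0]])
v = np.array([10.0, -5.0])

# A v = (20, -10)^T = 2 v, so 2 is an eigenvalue with eigenvector v
assert np.allclose(A @ v, 2 * v)
```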
Suppose that A ∈ Rn×n and λ ∈ R. Then
EA (λ) = {~v ∈ Rn | A~v = λ~v }
is a subspace of Rn , as
EA (λ) = N (λIn − A).
λ is an eigenvalue of A if and only if EA (λ) ≠ {~0}. If λ is an eigenvalue of A, then EA (λ)
is called an eigenspace of A.
Definition 0.3. Suppose that A ∈ Rn×n and t is an indeterminate. The characteristic
polynomial of A is
χA (t) = Det(tIn − A).
Expanding the determinant, we see that
χA (t) = tn + lower order terms in t
is a polynomial in t of degree n.
The following theorem gives a method of computing the eigenvalues of a matrix.
Theorem 0.4. Suppose that A ∈ Rn×n . Then λ ∈ R is a real eigenvalue of A if and only
if λ is a real root of χA (t) = 0.
Proof. λ ∈ R is a real eigenvalue of A if and only if λ ∈ R and
EA (λ) = N (λIn − A) ≠ {~0},
which holds if and only if λ ∈ R and χA (λ) = Det(λIn − A) = 0.
Corollary 0.5. A matrix A ∈ Rn×n has at most n distinct eigenvalues.
This follows from the fact that a polynomial of degree n has at most n distinct roots.
Since Det(A − tIn ) = (−1)n χA (t), eigenvalues may also be computed as the roots of
Det(A − tIn ) = 0, and eigenspaces as
EA (λ) = N (λIn − A) = N (A − λIn ).
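This recipe can be sketched numerically, assuming NumPy is available: `np.poly` returns the coefficients of the characteristic polynomial of a square matrix, and `np.roots` finds its roots.

```python
import numpy as np

A = np.array([[3.0, 2.0],
              [-1.0, 0.0]])

# Coefficients of Det(tI - A) = t^2 - 3t + 2, highest degree first
coeffs = np.poly(A)
assert np.allclose(coeffs, [1.0, -3.0, 2.0])

# The eigenvalues of A are the roots of the characteristic polynomial
assert np.allclose(sorted(np.roots(coeffs)), [1.0, 2.0])
```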
Example 0.6. Let
A = \begin{pmatrix} 3 & 2 \\ -1 & 0 \end{pmatrix}.
Compute the real eigenvalues, and a basis of each of the eigenspaces of A.
We compute
Det(tI_2 − A) = \begin{vmatrix} t-3 & -2 \\ 1 & t \end{vmatrix} = t^2 − 3t + 2 = (t − 1)(t − 2),
which has the real roots t = 1 and t = 2. Thus the real eigenvalues of A are λ = 1 and
λ = 2.
We now compute a basis of the eigenspace EA (1) of A. We have that EA (1) = N (I2 −A).
I_2 − A = \begin{pmatrix} -2 & -2 \\ 1 & 1 \end{pmatrix} → \begin{pmatrix} 1 & 1 \\ 0 & 0 \end{pmatrix}
is the RRE form of I2 − A. The standard form solution of the associated homogeneous
system is
x1 = −t
x2 = t
with t ∈ R. We write
(x_1, x_2)^T = t(−1, 1)^T
to see that
{(−1, 1)^T}
is a basis of EA (1).
We compute a basis of the eigenspace EA (2) of A. We have that EA (2) = N (2I2 − A).
2I_2 − A = \begin{pmatrix} -1 & -2 \\ 1 & 2 \end{pmatrix} → \begin{pmatrix} 1 & 2 \\ 0 & 0 \end{pmatrix}
is the RRE form of 2I2 − A. The standard form solution of the associated homogeneous
system is
x1 = −2t
x2 = t
with t ∈ R. We write
(x_1, x_2)^T = t(−2, 1)^T
to see that
{(−2, 1)^T}
is a basis of EA (2).
Going back to Example 0.2, we found from direct computation that ~v = (10, −5)T is an
eigenvector of A belonging to 2. We can also see this from the fact that the eigenvectors
of A belonging to 2 are the nonzero elements of the line
EA (2) = Span((−2, 1)T ) = {c(−2, 1)T | c ∈ R},
and so ~v = −5(−2, 1)T ∈ EA (2) is an eigenvector of A belonging to 2.
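The row-reduction computations above can be cross-checked with `np.linalg.eig` (NumPy assumed). Note that `eig` normalizes its eigenvectors to unit length, so its columns are scalar multiples of the basis vectors (−1, 1)^T and (−2, 1)^T found by hand.

```python
import numpy as np

A = np.array([[3.0, 2.0],
              [-1.0, 0.0]])

vals, vecs = np.linalg.eig(A)
assert np.allclose(sorted(vals), [1.0, 2.0])

# Each column of vecs is an eigenvector for the matching entry of vals
for lam, v in zip(vals, vecs.T):
    assert np.allclose(A @ v, lam * v)
```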
Definition 0.7. Suppose that V is a (real) vector space and L : V → V is a linear map.
A number λ ∈ R is a (real) eigenvalue of L if there exists a nonzero vector ~v ∈ V such
that L(~v ) = λ~v . The vector ~v is called an eigenvector of L for λ.
The eigenspace EL (λ) of an eigenvalue λ of L is
EL (λ) = {~v ∈ V | L(~v ) = λ~v }.
EL (λ) is a subspace of V .
Example 0.8. Let C^∞(R) be the infinitely differentiable functions on R. Let L : C^∞(R) →
C^∞(R) be differentiation: L(f) = df/dx for f ∈ C^∞(R). Then every real number is an
eigenvalue of L. For λ ∈ R, {e^{λx}} is a basis of EL (λ).
If A ∈ Rn×n , then the eigenvalues and eigenspaces of the matrix A and of the linear
map LA : Rn → Rn are the same.
Recall that a linear map L : V → W is determined by its values on a basis of V .
Example 0.9. Let L : P2 → P2 be the linear map defined by
L(1) = 3 − x
L(x) = 2.
1) Show that f = 6 − 3x is an eigenvector for L.
2) Determine the eigenvalues of L and find bases of the eigenspaces of L.
We first solve 1). We compute
L(f ) = L(6 − 3x) = 6L(1) − 3L(x) = 6(3 − x) − 3 × 2 = 12 − 6x = 2(6 − 3x) = 2f.
Thus f is an eigenvector for L belonging to the eigenvalue λ = 2.
We now solve 2). The standard basis of P2 is β ∗ = {1, x}. We compute
A = M_{β∗}^{β∗}(L) = ((L(1))_{β∗}, (L(x))_{β∗}) = \begin{pmatrix} 3 & 2 \\ -1 & 0 \end{pmatrix}.
This is the matrix A that we studied in Example 0.6. We found that the eigenvalues of A
are λ = 1 and λ = 2, a basis for EA (1) is {(−1, 1)T } and a basis of EA (2) is {(−2, 1)T }.
We have that
(−1 + x)_{β∗} = (−1, 1)^T and (−2 + x)_{β∗} = (−2, 1)^T.
Thus λ = 1 and λ = 2 are the eigenvalues of L, {−1 + x} is a basis of EL (1) and {−2 + x}
is a basis of EL (2).
Theorem 0.10. Suppose that L : V → V is a linear map and λ_1, . . . , λ_r are distinct
eigenvalues of L. Suppose that {v_1^i, v_2^i, . . . , v_{d_i}^i} are bases of the eigenspaces EL (λ_i) for
1 ≤ i ≤ r, with d_i = dim(EL (λ_i)). Then
{v_1^1, . . . , v_{d_1}^1, v_1^2, . . . , v_{d_2}^2, . . . , v_1^r, . . . , v_{d_r}^r}
are linearly independent.
We prove this in a special case: we suppose that λ_1 is an eigenvalue of L with eigenvector
v_1, λ_2 is an eigenvalue of L with eigenvector v_2, and λ_1 ≠ λ_2. We will show that {v_1, v_2}
are linearly independent.
Suppose that c_1, c_2 ∈ R and
(1)    c_1 v_1 + c_2 v_2 = ~0.
Then
(2)    ~0 = L(~0) = L(c_1 v_1 + c_2 v_2) = c_1 L(v_1) + c_2 L(v_2) = c_1 λ_1 v_1 + c_2 λ_2 v_2.
Subtracting equation (2) from λ_1 times equation (1), we obtain
~0 = λ1 (c1 v1 + c2 v2 ) − (c1 λ1 v1 + c2 λ2 v2 )
= c2 (λ1 − λ2 )v2 .
Since λ_1 − λ_2 ≠ 0 and v_2 ≠ ~0, we have that c_2 = 0. Now going back to (1), we see that
c_1 v_1 = ~0. Since v_1 ≠ ~0, we have that c_1 = 0. As c_1 = c_2 = 0 is the only solution to (1),
we have that {v_1, v_2} are linearly independent.
Definition 0.11. Suppose that L : V → V is a linear map of finite dimensional vector
spaces. L is diagonalizable if there exists a basis {v1 , . . . , vn } of V consisting of eigenvectors
of L.
The word “diagonalizable” in Definition 0.11 is explained by the following theorem.
Theorem 0.12. Suppose that L : V → V is a linear map, and there exists a basis
β = {v1 , . . . , vn }
of V consisting of eigenvectors of L. Then Mββ (L) is a diagonal matrix.
Proof. Let c1 , . . . , cn ∈ R be the eigenvalues of the vi , so that L(vi ) = ci vi for 1 ≤ i ≤ n.
We then have
M_β^β(L) = ((L(v_1))_β, (L(v_2))_β, . . . , (L(v_n))_β) = \begin{pmatrix} c_1 & 0 & \cdots & 0 \\ 0 & c_2 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & c_n \end{pmatrix}
is a diagonal matrix.
Recall that matrices A, B ∈ Rn×n are similar (over the reals) if there exists an invertible
matrix C ∈ Rn×n such that B = C −1 AC.
Definition 0.13. A matrix A is diagonalizable (over the reals) if A is similar (over the
reals) to a diagonal matrix D.
Theorem 0.14. The matrix A ∈ Rn×n is diagonalizable (over the reals) if and only if the
linear map LA : Rn → Rn is diagonalizable.
We will prove the most interesting direction, that LA diagonalizable implies A is diagonalizable.
Let β = {v_1, . . . , v_n} be a basis of R^n consisting of eigenvectors of A. Let c_1, . . . , c_n be
the corresponding eigenvalues, so that L_A(v_i) = Av_i = c_i v_i for 1 ≤ i ≤ n. Then
D = M_β^β(L_A) = ((L_A(v_1))_β, (L_A(v_2))_β, . . . , (L_A(v_n))_β) = \begin{pmatrix} c_1 & 0 & \cdots & 0 \\ 0 & c_2 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & c_n \end{pmatrix}
is a diagonal matrix.
Let β∗ be the standard basis of R^n, and let
C = M_β^{β∗} = (v_1, v_2, . . . , v_n).
We have
C^{−1}AC = M_{β∗}^β M_{β∗}^{β∗}(L_A) M_β^{β∗} = M_β^β(L_A) = D.
Thus A is similar to a diagonal matrix.
Theorem 0.14 gives an algorithm to determine if a matrix is diagonalizable, and if it is,
how to diagonalize it.
Suppose that A is a square matrix of size n (A ∈ Rn×n ). Let λ1 , . . . , λr be the distinct
(real) eigenvalues of A. Then
dim EA (λ1 ) + dim EA (λ2 ) + · · · + dim EA (λr ) ≤ n = size(A).
We have that A is diagonalizable (over the reals) if and only if
dim EA (λ1 ) + dim EA (λ2 ) + · · · + dim EA (λr ) = n = size(A).
Example 0.15. Let
A = \begin{pmatrix} 3 & 2 \\ -1 & 0 \end{pmatrix}.
1. Determine if A is diagonalizable (over the reals).
2. If A is diagonalizable, find an invertible matrix C and a diagonal matrix D such
that C −1 AC = D.
We calculated in Example 0.6 that the eigenvalues of A are λ1 = 1 and λ2 = 2, and
that a basis of EA (1) is {(−1, 1)T } and a basis of EA (2) is {(−2, 1)T }.
1. A is diagonalizable (over the reals) since 1 and 2 are the real eigenvalues of A and
dim EA (1) + dim EA (2) = 2 = size(A).
2. Set
C = \begin{pmatrix} -1 & -2 \\ 1 & 1 \end{pmatrix} and D = \begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix}.
Then C −1 AC = D (by the algorithm of Theorem 0.14).
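A quick numerical confirmation that this C and D satisfy C^{−1}AC = D (NumPy assumed):

```python
import numpy as np

A = np.array([[3.0, 2.0],
              [-1.0, 0.0]])
# Columns of C: the eigenvector bases of E_A(1) and E_A(2) found in Example 0.6
C = np.array([[-1.0, -2.0],
              [ 1.0,  1.0]])
D = np.diag([1.0, 2.0])

assert np.allclose(np.linalg.inv(C) @ A @ C, D)
```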
Example 0.16. Let
A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}.
1. Determine if A is diagonalizable (over the reals).
2. If A is diagonalizable, find an invertible matrix C and a diagonal matrix D such
that C −1 AC = D.
The characteristic polynomial of A is
χA (t) = Det(tI_2 − A) = \begin{vmatrix} t & -1 \\ 0 & t \end{vmatrix} = t^2 .
Thus the only eigenvalue of A is λ = 0. We compute a basis of the eigenspace
EA (0) = N (0I2 − A) = N (−A) = N (A).
A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}
is the RRE form of A. The standard form solution of the associated homogeneous system
is
x1 = t
x2 = 0
with t ∈ R. Writing (x1 , x2 )T = t(1, 0)T , we see that {(1, 0)T } is a basis of EA (0).
1. A is not diagonalizable (over the reals) since 0 is the only eigenvalue of A and
dim EA (0) = 1 < 2 = size(A).
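One can also see the failure numerically (NumPy assumed): `np.linalg.eig` reports the repeated eigenvalue 0, but the matrix whose columns are the reported eigenvectors is singular, so there is no basis of R^2 consisting of eigenvectors of A.

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])

vals, vecs = np.linalg.eig(A)
assert np.allclose(vals, [0.0, 0.0])

# The eigenvector columns are (numerically) parallel: no eigenvector basis exists
assert abs(np.linalg.det(vecs)) < 1e-8
```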
The complex numbers C are defined by adjoining the "imaginary" number i = √−1 to
R; that is, i^2 = −1 and
C = {a + bi | a, b ∈ R}.
For z_1 = a + bi, z_2 = c + di ∈ C with a, b, c, d ∈ R, we have the formulas
(3)    z_1 + z_2 = (a + bi) + (c + di) = (a + c) + (b + d)i,    z_1 z_2 = (a + bi)(c + di) = (ac − bd) + (ad + bc)i,
and if z_1 ≠ 0, then
(4)    1/z_1 = 1/(a + bi) = (a − bi)/((a + bi)(a − bi)) = a/(a^2 + b^2) − (b/(a^2 + b^2)) i.
We define C^n to be the n × 1 column vectors with complex coefficients, C_n to be the
1 × n row vectors with complex coefficients, and C^{m×n} to be the m × n matrices with
complex coefficients.
Almost everything that we have done in this class is valid if we replace R with C; for
instance we have real vector spaces (over the reals R) and complex vector spaces (over
the complex numbers C). The only exception is that an inner product must be defined
differently on a complex vector space.
Complex numbers are important for us because they have the important property that
they are “algebraically closed”.
Theorem 0.17 (Fundamental Theorem of Algebra). Suppose that f(t) = t^n + a_1 t^{n−1} +
· · · + a_n is a polynomial with complex coefficients, and n ≥ 1. Then f(t) = 0 has a complex
root α.
As a corollary, every complex polynomial factors into a product of linear factors.
Definition 0.18. Let A ∈ Cn×n be an n × n (complex) matrix. A number λ ∈ C is a
(complex) eigenvalue of A if there exists a nonzero vector ~v ∈ Cn such that A~v = λ~v . The
vector ~v is called an eigenvector of A for λ.
Suppose that A ∈ C^{n×n} and λ ∈ C. Then the (complex) eigenspace of λ,
E_A^C(λ) = {~v ∈ C^n | A~v = λ~v},
is a subspace of the complex vector space C^n. E_A^C(λ) is the complex null space
E_A^C(λ) = N^C(λI_n − A) = {~v ∈ C^n | (λI_n − A)~v = ~0}.
Since the reals are contained in the complex numbers, any real matrix is also a complex
matrix, any real eigenvalue is also a complex eigenvalue and any real eigenvector is a
complex eigenvector.
A complex matrix A ∈ Cn×n is diagonalizable (over C) if A is similar (over C) to a
(complex) diagonal matrix D; that is, there exists an invertible matrix C ∈ Cn×n such
that D = C −1 AC.
Example 0.19. Consider the matrix
A = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}.
1. A is not diagonalizable over the reals.
2. A is diagonalizable over the complex numbers.
We compute
χA (t) = Det(tI_2 − A) = \begin{vmatrix} t & -1 \\ 1 & t \end{vmatrix} = t^2 + 1 = (t − i)(t + i).
A has no real eigenvalues, so A is not diagonalizable over the reals.
However, A has two complex eigenvalues, λ_1 = i and λ_2 = −i. We find a (complex)
basis of E_A^C(i) = N^C(iI_2 − A).
iI_2 − A = \begin{pmatrix} i & -1 \\ 1 & i \end{pmatrix} → \begin{pmatrix} 1 & i \\ i & -1 \end{pmatrix} → \begin{pmatrix} 1 & i \\ 0 & 0 \end{pmatrix}
is the RRE form of iI_2 − A. The first operation is to interchange the two rows. The last
operation is the elementary row operation of adding −i times the first row to the second
row, as −i(1, i) = (−i, −i^2) = (−i, 1). The standard form solution of the associated
homogeneous system is
x1 = −it
x2 = t
with t ∈ C. Writing (x1 , x2 )T = t(−i, 1)T , we see that
{(−i, 1)^T}
is a basis of E_A^C(i).
Now we find a (complex) basis of E_A^C(−i) = N^C(−iI_2 − A).
−iI_2 − A = \begin{pmatrix} -i & -1 \\ 1 & -i \end{pmatrix} → \begin{pmatrix} 1 & -i \\ -i & -1 \end{pmatrix} → \begin{pmatrix} 1 & -i \\ 0 & 0 \end{pmatrix}
is the RRE form of −iI_2 − A. The first operation is to interchange the two rows. The last
operation is the elementary row operation of adding i times the first row to the second row,
as i(1, −i) = (i, −i^2) = (i, 1). The standard form solution of the associated homogeneous
system is
x1 = it
x2 = t
with t ∈ C. Writing (x1 , x2 )T = t(i, 1)T , we see that
{(i, 1)^T}
is a basis of E_A^C(−i).
i and −i are the eigenvalues of A and
dim E_A^C(i) + dim E_A^C(−i) = 2 = size(A),
so that A is diagonalizable (over the complex numbers).
Set
C = \begin{pmatrix} -i & i \\ 1 & 1 \end{pmatrix} and D = \begin{pmatrix} i & 0 \\ 0 & -i \end{pmatrix}.
Then C −1 AC = D (by the algorithm of Theorem 0.14).
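NumPy works over the complex numbers as well (again an assumption; the notes use no software), so the diagonalization can be verified directly:

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
# Columns of C: the complex eigenvectors (-i, 1)^T and (i, 1)^T found above
C = np.array([[-1j, 1j],
              [1.0, 1.0]])
D = np.diag([1j, -1j])

# Over C, C^{-1} A C = D
assert np.allclose(np.linalg.inv(C) @ A @ C, D)
```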
Going back to the matrix
A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix},
which we showed was not diagonalizable over R, we see that the only complex eigenvalue
of A is 0 (since χA (t) = t^2), and {(1, 0)^T} is a basis of the complex eigenspace E_A^C(0).
0 is the only complex eigenvalue of A and
dim E_A^C(0) = 1 < 2 = size(A),
so that A is not diagonalizable (over the complex numbers).
Diagonalization of Real Symmetric Matrices. Suppose that A ∈ Rn×n is a symmetric matrix. Then the spectral theorem tells us that all eigenvalues of A are real and that
Rn has a basis of eigenvectors of A. Further, eigenvectors with distinct eigenvalues are
orthogonal. Thus Rn has an orthonormal basis of eigenvectors. This means that we may
refine our diagonalization algorithm above, adding an extra step: use Gram-Schmidt to
obtain an ON basis u_{i,1}, . . . , u_{i,s_i} of E(λ_i) from the basis v_{i,1}, . . . , v_{i,s_i} of E(λ_i) which we
compute in that algorithm. Since eigenvectors with distinct eigenvalues are perpendicular,
we may put all of these ON sets of vectors together to obtain an ON basis
u_{1,1}, . . . , u_{1,s_1}, u_{2,1}, . . . , u_{2,s_2}, . . . , u_{r,1}, . . . , u_{r,s_r}
of R^n. Let Q = (u_{1,1}, . . . , u_{1,s_1}, u_{2,1}, . . . , u_{r,s_r}). Q is an orthogonal matrix (Lecture Note
8), so that Q^{−1} = Q^T. We have orthogonally diagonalized A:
Q^T AQ = D = diag(λ_1, . . . , λ_1, λ_2, . . . , λ_2, . . . , λ_r, . . . , λ_r),
where each eigenvalue λ_i appears s_i times on the diagonal, all nondiagonal entries of D are zero, and Q is an orthogonal matrix.
Example 0.20. Find an orthogonal matrix Q which orthogonally diagonalizes
A = \begin{pmatrix} 4 & 2 & 2 \\ 2 & 4 & 2 \\ 2 & 2 & 4 \end{pmatrix}.
Solution: From the equation
Det(tI_3 − A) = Det \begin{pmatrix} t-4 & -2 & -2 \\ -2 & t-4 & -2 \\ -2 & -2 & t-4 \end{pmatrix} = t^3 − 12t^2 + 36t − 32 = (t − 2)^2 (t − 8) = 0,
we see that the eigenvalues of A are λ = 2 and λ = 8.
To factor this polynomial, use the fact that the rational roots must be integers which
divide the constant term -32. Testing divisors of -32, we find that 2 and 8 are roots. Then
we divide t3 − 12t2 + 36t − 32 by (t − 2)(t − 8) = t2 − 10t + 16 to get t − 2 (with no
remainder).
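The factorization can be double-checked numerically (NumPy assumed):

```python
import numpy as np

# Roots of t^3 - 12 t^2 + 36 t - 32 = (t - 2)^2 (t - 8)
roots = np.roots([1.0, -12.0, 36.0, -32.0])
assert np.allclose(np.sort(roots.real), [2.0, 2.0, 8.0], atol=1e-4)
assert np.allclose(roots.imag, 0.0, atol=1e-4)
```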
The eigenspace EA (2) is the nullspace N (2I3 − A). We have that
2I_3 − A = \begin{pmatrix} -2 & -2 & -2 \\ -2 & -2 & -2 \\ -2 & -2 & -2 \end{pmatrix},
and the RRE form of this matrix is
\begin{pmatrix} 1 & 1 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}.
From the standard form solution
\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} -t_1 - t_2 \\ t_1 \\ t_2 \end{pmatrix} = t_1 \begin{pmatrix} -1 \\ 1 \\ 0 \end{pmatrix} + t_2 \begin{pmatrix} -1 \\ 0 \\ 1 \end{pmatrix}
with t_1, t_2 ∈ R, we deduce that
{(−1, 1, 0)^T, (−1, 0, 1)^T}
is a basis of EA (2).
The eigenspace EA (8) is the nullspace N (8I3 − A). We have that
8I_3 − A = \begin{pmatrix} 4 & -2 & -2 \\ -2 & 4 & -2 \\ -2 & -2 & 4 \end{pmatrix},
and the RRE form of this matrix is
\begin{pmatrix} 1 & 0 & -1 \\ 0 & 1 & -1 \\ 0 & 0 & 0 \end{pmatrix}.
From the standard form solution
\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} t \\ t \\ t \end{pmatrix} = t \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}
with t ∈ R, we deduce that
{(1, 1, 1)^T}
is a basis of EA (8).
We next find an orthogonal basis of EA (2), using Gram-Schmidt. We have
u_1 = (1/‖(−1, 1, 0)^T‖)(−1, 1, 0)^T = (−1/√2, 1/√2, 0)^T,
v_2 = (−1, 0, 1)^T − ⟨(−1, 0, 1), u_1⟩u_1 = (−1, 0, 1)^T − (1/√2)(−1/√2, 1/√2, 0)^T = (−1/2, −1/2, 1)^T,
and using the trick explained in Lecture Note 8 when we learned Gram-Schmidt (rescaling v_2 to the proportional vector (−1, −1, 2)^T before normalizing), we compute
u_2 = (1/‖v_2‖)v_2 = (1/‖(−1, −1, 2)^T‖)(−1, −1, 2)^T = (−1/√6, −1/√6, 2/√6)^T.
Thus
{(−1/√2, 1/√2, 0)^T, (−1/√6, −1/√6, 2/√6)^T}
is an ON basis of EA (2).
Applying Gram-Schmidt to our basis of EA (8), we compute
u = (1/‖(1, 1, 1)^T‖)(1, 1, 1)^T = (1/√3, 1/√3, 1/√3)^T,
so that
{(1/√3, 1/√3, 1/√3)^T}
is an ON basis of EA (8).
Finally, we have that
Q = \begin{pmatrix} -1/√2 & -1/√6 & 1/√3 \\ 1/√2 & -1/√6 & 1/√3 \\ 0 & 2/√6 & 1/√3 \end{pmatrix}
is an orthogonal matrix such that
Q^T AQ = D = \begin{pmatrix} 2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 8 \end{pmatrix}.
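A numerical check of the orthogonal diagonalization (NumPy assumed): Q^T Q = I and Q^T AQ = diag(2, 2, 8).

```python
import numpy as np

A = np.array([[4.0, 2.0, 2.0],
              [2.0, 4.0, 2.0],
              [2.0, 2.0, 4.0]])

s2, s3, s6 = np.sqrt(2.0), np.sqrt(3.0), np.sqrt(6.0)
# Columns of Q: the ON bases of E_A(2) (first two columns) and E_A(8) (last)
Q = np.array([[-1/s2, -1/s6, 1/s3],
              [ 1/s2, -1/s6, 1/s3],
              [ 0.0,   2/s6, 1/s3]])

assert np.allclose(Q.T @ Q, np.eye(3))          # Q is orthogonal
assert np.allclose(Q.T @ A @ Q, np.diag([2.0, 2.0, 8.0]))
```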
Some applications of these methods:
1) Solutions to systems of linear ordinary differential equations, Section 6.2 of Leon,
and most generally Chapter 3 of Braun, Differential equations and their applications.
2) The Page Rank Algorithm, pages 315 - 316 of Leon.