Finding Eigenvalues and Eigenvectors
What is really important?
Approaches
Find the characteristic polynomial
– Leverrier's Method
Find the largest or smallest eigenvalue
– Power Method
– Inverse Power Method
Find all the eigenvalues
– Jacobi's Method
– Householder's Method
– QR Method
– Danilevsky's Method
7/31/2017 DRAFT Copyright, Gene A. Tagliarini, PhD
Finding the Characteristic Polynomial
Reduces to finding the coefficients of the polynomial for the matrix A
Recall |λI − A| = λ^n + a_n λ^(n-1) + a_(n-1) λ^(n-2) + … + a_2 λ + a_1
Leverrier's Method
– Set B_n = A and a_n = −trace(B_n)
– For k = (n−1) down to 1 compute
  B_k = A (B_(k+1) + a_(k+1) I)
  a_k = −trace(B_k) / (n − k + 1)
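The recurrence above can be sketched directly in code. This is a minimal illustration, not part of the original slides; the array index convention (a[k-1] holds a_k) is my own choice.

```python
import numpy as np

def leverrier(A):
    """Coefficients a_n, ..., a_1 of |lambda I - A| = lambda^n + a_n lambda^(n-1)
    + ... + a_1, via Leverrier's recurrence; a[k-1] holds a_k."""
    n = A.shape[0]
    a = np.zeros(n)
    B = A.astype(float).copy()              # B_n = A
    a[n - 1] = -np.trace(B)                 # a_n = -trace(B_n)
    for k in range(n - 1, 0, -1):           # k = n-1 down to 1
        B = A @ (B + a[k] * np.eye(n))      # B_k = A (B_{k+1} + a_{k+1} I)
        a[k - 1] = -np.trace(B) / (n - k + 1)
    return a
```

For A = diag(2, 3) the characteristic polynomial is λ^2 − 5λ + 6, so the routine returns a_2 = −5 and a_1 = 6.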
Vectors that Span a Space and Linear Combinations of Vectors
Given a set of vectors v1, v2, …, vn
The vectors are said to span a space V if, given any vector x ∈ V, there exist constants c1, c2, …, cn so that
c1v1 + c2v2 + … + cnvn = x, and x is called a linear combination of the vi
Linear Independence and a Basis
Given a set of vectors v1, v2, …, vn and constants c1, c2, …, cn
The vectors are linearly independent if the only solution to c1v1 + c2v2 + … + cnvn = 0 (the zero vector) is c1 = c2 = … = cn = 0
A linearly independent, spanning set is called a basis
Example 1: The Standard Basis
Consider the vectors v1 = <1, 0, 0>, v2 = <0, 1, 0>, and v3 = <0, 0, 1>
Clearly, c1v1 + c2v2 + c3v3 = 0 implies c1 = c2 = c3 = 0
Any vector <x, y, z> can be written as a linear combination of v1, v2, and v3 as
<x, y, z> = x v1 + y v2 + z v3
The collection {v1, v2, v3} is a basis for R3; indeed, it is the standard basis and is usually denoted with the vector names i, j, and k, respectively.
Another Definition and Some Notation
Assume that the eigenvalues for an n x n matrix A can be ordered such that
|λ1| > |λ2| ≥ |λ3| ≥ … ≥ |λn-2| ≥ |λn-1| > |λn|
Then λ1 is the dominant eigenvalue and |λ1| is the spectral radius of A, denoted ρ(A)
The ith eigenvector will be denoted using superscripts as x^i, subscripts being reserved for the components of x
Power Methods: The Direct Method
Assume an n x n matrix A has n linearly independent eigenvectors e^1, e^2, …, e^n ordered by decreasing eigenvalues
|λ1| > |λ2| ≥ |λ3| ≥ … ≥ |λn-2| ≥ |λn-1| > |λn|
Given any vector y^0 ≠ 0, there exist constants ci, i = 1, …, n, such that
y^0 = c1e^1 + c2e^2 + … + cne^n
The Direct Method (continued)
If y^0 is not orthogonal to e^1, i.e., (y^0)^T e^1 ≠ 0,
y^1 = Ay^0 = A(c1e^1 + c2e^2 + … + cne^n)
= Ac1e^1 + Ac2e^2 + … + Acne^n
= c1Ae^1 + c2Ae^2 + … + cnAe^n
Can you simplify the previous line?
The Direct Method (continued)
If y^0 is not orthogonal to e^1, i.e., (y^0)^T e^1 ≠ 0,
y^1 = Ay^0 = A(c1e^1 + c2e^2 + … + cne^n)
= Ac1e^1 + Ac2e^2 + … + Acne^n
= c1Ae^1 + c2Ae^2 + … + cnAe^n
y^1 = c1λ1e^1 + c2λ2e^2 + … + cnλne^n
What is y^2 = Ay^1?
The Direct Method (continued)
y^2 = c1 λ1^2 e^1 + c2 λ2^2 e^2 + … + cn λn^2 e^n = Σ_{k=1}^{n} ck λk^2 e^k
In general,
y^i = c1 λ1^i e^1 + c2 λ2^i e^2 + … + cn λn^i e^n = Σ_{k=1}^{n} ck λk^i e^k
or
y^i = λ1^i ( c1 e^1 + Σ_{k=2}^{n} ck (λk/λ1)^i e^k )
The Direct Method (continued)
y^i = λ1^i ( c1 e^1 + Σ_{k=2}^{n} ck (λk/λ1)^i e^k )
So what?
Recall that |λk/λ1| < 1, so (λk/λ1)^i → 0 as i → ∞
The Direct Method (continued)
Since
y^i = λ1^i ( c1 e^1 + Σ_{k=2}^{n} ck (λk/λ1)^i e^k )
then
y^i → λ1^i c1 e^1 as i → ∞
The Direct Method (continued)
Note: any nonzero multiple of an eigenvector is also an eigenvector
Why?
Suppose e is an eigenvector of A, i.e., Ae = λe, and c ≠ 0 is a scalar such that x = ce
Ax = A(ce) = c(Ae) = c(λe) = λ(ce) = λx
The Direct Method (continued)
Since y^i → λ1^i c1 e^1 and e^1 is an eigenvector, y^i will become arbitrarily close to an eigenvector, and we can prevent y^i from growing inordinately by normalizing:
y^i = A y^(i-1) / ||A y^(i-1)||
Direct Method (continued)
Given an eigenvector e for the matrix A
We have Ae = λe and e ≠ 0, so e^T e ≠ 0 (a scalar)
Thus, e^T A e = e^T λ e = λ e^T e ≠ 0
So λ = (e^T A e) / (e^T e)
Direct Method (completed)
μ^i = ((y^i)^T A y^i) / ((y^i)^T y^i) approximates the dominant eigenvalue
r^i = (μ^i I − A) y^i is the residual error vector
and r^i = 0 when μ^i is an eigenvalue with eigenvector y^i
Direct Method Algorithm
1. Choose ε > 0, m > 0, and y ≠ 0.
   Set i = 0 and compute x = Ay and μ.
2. Do
     y = x / ||x||
     x = Ay
     μ = (y^T x) / (y^T y)
     r = μy − x
     i = i + 1
3. While (||r|| > ε) and (i < m)
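The algorithm above translates almost line for line into code. A minimal sketch, assuming the starting vector of ones is not orthogonal to the dominant eigenvector:

```python
import numpy as np

def power_method(A, eps=1e-10, m=10_000):
    """Direct (power) method: normalize, multiply, and estimate the dominant
    eigenvalue with the Rayleigh quotient mu = y^T x / y^T y."""
    y = np.ones(A.shape[0])            # y != 0, assumed not orthogonal to e^1
    x = A @ y
    mu = (y @ x) / (y @ y)
    for i in range(m):
        y = x / np.linalg.norm(x)      # keep y from growing inordinately
        x = A @ y
        mu = (y @ x) / (y @ y)         # approximates lambda_1
        r = mu * y - x                 # residual error vector (mu I - A) y
        if np.linalg.norm(r) < eps:
            break
    return mu, y
```

For A = diag(2, 1) the iteration converges to the dominant eigenvalue 2 with eigenvector aligned to <1, 0>.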
Jacobi's Method
Requires a symmetric matrix
May take numerous iterations to converge
Also requires repeated evaluation of the arctan function
Isn't there a better way?
Yes, but we need to build some tools.
What Householder's Method Does
Preprocesses a matrix A to produce an upper-Hessenberg form B
B is produced by similarity transformations, so the eigenvalues of B are the eigenvalues of A
Typically, the eigenvalues of B are easier to obtain because the transformation simplifies computation
Definition: Upper-Hessenberg Form
A matrix B is said to be in upper-Hessenberg form if it has the following structure (all entries below the first subdiagonal are zero):

B = [ b1,1  b1,2  b1,3  …  b1,n-1    b1,n
      b2,1  b2,2  b2,3  …  b2,n-1    b2,n
      0     b3,2  b3,3  …  b3,n-1    b3,n
      0     0     b4,3  …  …         …
      …                 …  bn-1,n-1  bn-1,n
      0     0     0     …  bn,n-1    bn,n ]
A Useful Matrix Construction
Assume an n x 1 vector u ≠ 0
Consider the matrix P(u) defined by
P(u) = I – 2(uu^T)/(u^T u)
Where
– I is the n x n identity matrix
– (uu^T) is an n x n matrix, the outer product of u with its transpose
– (u^T u) here denotes the trace of a 1 x 1 matrix and is the inner or dot product
Properties of P(u)
P^2(u) = I
– The notation here: P^2(u) = P(u) * P(u)
– Can you show that P^2(u) = I?
P^-1(u) = P(u)
– P(u) is its own inverse
– Why?
P^T(u) = P(u)
– P(u) is its own transpose
P(u) is an orthogonal matrix
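These three properties are easy to check numerically. A small sketch using the vector u = [0, 8, 4] that appears in the worked example later in the deck:

```python
import numpy as np

u = np.array([0., 8., 4.])                     # u from the worked example
P = np.eye(3) - 2 * np.outer(u, u) / (u @ u)   # P(u) = I - 2 uu^T / u^T u

assert np.allclose(P @ P, np.eye(3))           # P^2(u) = I
assert np.allclose(P, P.T)                     # P^T(u) = P(u)
assert np.allclose(P @ P.T, np.eye(3))         # hence P(u) is orthogonal
```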
Householder's Algorithm
Set Q = I, where I is an n x n identity matrix
For k = 1 to n-2
– α = sgn(A_k+1,k) sqrt((A_k+1,k)^2 + (A_k+2,k)^2 + … + (A_n,k)^2)
– u^T = [0, 0, …, A_k+1,k + α, A_k+2,k, …, A_n,k]
– P = I – 2(uu^T)/(u^T u)
– Q = QP
– A = PAP
Set B = A
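The loop above can be sketched as follows. This is an illustration with 0-based indexing, not the original slide code; it skips a column that is already zero below the subdiagonal to avoid dividing by zero.

```python
import numpy as np

def householder_hessenberg(A):
    """Reduce A to upper-Hessenberg B = Q^T A Q; sgn(0) is taken as +1."""
    A = A.astype(float).copy()
    n = A.shape[0]
    Q = np.eye(n)
    for k in range(n - 2):
        alpha = np.copysign(np.linalg.norm(A[k + 1:, k]), A[k + 1, k])
        u = np.zeros(n)
        u[k + 1] = A[k + 1, k] + alpha   # u = [0,...,0, A_{k+1,k}+alpha, ...]
        u[k + 2:] = A[k + 2:, k]
        if u @ u == 0:
            continue                     # column already in Hessenberg form
        P = np.eye(n) - 2 * np.outer(u, u) / (u @ u)
        Q = Q @ P
        A = P @ A @ P
    return A, Q
```

Running it on the 3 x 3 example from the next slides reproduces the matrix B computed there, and Q B Q^T recovers the original A.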
Example
Let A = [ 1 2 3 ; 3 5 6 ; 4 8 9 ] (3 x 3).
Clearly, n = 3 and since n − 2 = 1, k takes only the value k = 1.
Then α = sgn(a21) sqrt(a21^2 + a31^2) = sgn(3) sqrt(3^2 + 4^2) = 1 · 5 = 5
Example
u^T = [0, …, 0, a21 + α, a31, …, an1] = [0, a21 + α, a31] = [0, 3 + 5, 4] = [0, 8, 4]
P = I − 2(uu^T)/(u^T u) = [ 1 0 0 ; 0 1 0 ; 0 0 1 ] − (2/80) [ 0 ; 8 ; 4 ] [ 0 8 4 ] = ?
Example
P = I − 2(uu^T)/(u^T u) = [ 1 0 0 ; 0 1 0 ; 0 0 1 ] − (2/80) [ 0 0 0 ; 0 64 32 ; 0 32 16 ]
  = [ 1 0 0 ; 0 −3/5 −4/5 ; 0 −4/5 3/5 ]
Example
Initially, Q = I, so
Q = QP = [ 1 0 0 ; 0 1 0 ; 0 0 1 ] [ 1 0 0 ; 0 −3/5 −4/5 ; 0 −4/5 3/5 ] = [ 1 0 0 ; 0 −3/5 −4/5 ; 0 −4/5 3/5 ]
Example
Next, A = PAP, so
A = [ 1 0 0 ; 0 −3/5 −4/5 ; 0 −4/5 3/5 ] [ 1 2 3 ; 3 5 6 ; 4 8 9 ] [ 1 0 0 ; 0 −3/5 −4/5 ; 0 −4/5 3/5 ] = ?
Example
Hence,
A = [ 1 0 0 ; 0 −3/5 −4/5 ; 0 −4/5 3/5 ] [ 1 2 3 ; 3 5 6 ; 4 8 9 ] [ 1 0 0 ; 0 −3/5 −4/5 ; 0 −4/5 3/5 ]
  = [ 1 −18/5 1/5 ; −5 357/25 26/25 ; 0 −24/25 −7/25 ]
Example
Finally, since the loop only executes once,
B = A = [ 1 −18/5 1/5 ; −5 357/25 26/25 ; 0 −24/25 −7/25 ].
So what?
How Does It Work?
Householder's algorithm uses a sequence of similarity transformations
B = P(u^k) A P(u^k)
to create zeros below the first sub-diagonal, where
u^k = [0, 0, …, A_k+1,k + α, A_k+2,k, …, A_n,k]^T
α = sgn(A_k+1,k) sqrt((A_k+1,k)^2 + (A_k+2,k)^2 + … + (A_n,k)^2)
By definition,
– sgn(x) = 1, if x ≥ 0 and
– sgn(x) = −1, if x < 0
How Does It Work? (continued)
The matrix Q is orthogonal
– the matrices P are orthogonal
– Q is a product of the matrices P
– the product of orthogonal matrices is an orthogonal matrix
B = Q^T A Q, hence Q B = Q Q^T A Q = A Q
– Q Q^T = I (by the orthogonality of Q)
How Does It Work? (continued)
If e^k is an eigenvector of B with eigenvalue λk, then B e^k = λk e^k
Since Q B = A Q,
A (Q e^k) = Q (B e^k) = Q (λk e^k) = λk (Q e^k)
Note from this:
– λk is an eigenvalue of A
– Q e^k is the corresponding eigenvector of A
The QR Method: Start-up
Given a matrix A
Apply Householder's Algorithm to obtain a matrix B in upper-Hessenberg form
Select ε > 0 and m > 0
– ε is an acceptable proximity to zero for subdiagonal elements
– m is an iteration limit
The QR Method: Main Loop
Do {
    Set Q^T = I
    For k = 1 to n−1 {
        c = B_k,k / sqrt((B_k,k)^2 + (B_k+1,k)^2);  s = B_k+1,k / sqrt((B_k,k)^2 + (B_k+1,k)^2);
        Set P = I;  P_k,k = P_k+1,k+1 = c;  P_k+1,k = −s;  P_k,k+1 = s;
        B = PB;
        Q^T = P Q^T;
    }
    B = BQ;
    i = i + 1;
} While (B is not upper block triangular) and (i < m)
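One pass of the loop above can be sketched as follows. This is an illustration, not the slide's exact code; the sign convention on s is chosen so each rotation zeroes the subdiagonal entry B_k+1,k.

```python
import numpy as np

def qr_step(B):
    """One pass of the QR loop: Givens rotations zero the subdiagonal,
    giving R = Q^T B; the similar matrix R Q = B' is returned."""
    B = B.astype(float).copy()
    n = B.shape[0]
    QT = np.eye(n)
    for k in range(n - 1):
        d = np.hypot(B[k, k], B[k + 1, k])   # sqrt(B_kk^2 + B_{k+1,k}^2)
        if d == 0:
            continue
        c, s = B[k, k] / d, B[k + 1, k] / d
        P = np.eye(n)
        P[k, k] = P[k + 1, k + 1] = c
        P[k, k + 1] = s                      # signs chosen to zero B_{k+1,k}
        P[k + 1, k] = -s
        B = P @ B
        QT = P @ QT
    return B @ QT.T                          # B Q, similar to the input B
```

Repeating the step drives B toward upper block-triangular form; on the symmetric matrix [2 1; 1 2] the diagonal converges to the eigenvalues 3 and 1.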
The QR Method: Finding The λ's
Since B is upper block triangular, one may compute λk from the diagonal blocks of B. Specifically, the eigenvalues of B are the eigenvalues of its diagonal blocks B_k.
If a diagonal block B_k is 1x1, i.e., B_k = [a], then λk = a.
If a diagonal block B_k is 2x2, i.e., B_k = [ a b ; c d ], then
λ_k,k+1 = ( trace(B_k) ± sqrt( trace^2(B_k) − 4 det(B_k) ) ) / 2.
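The block formula above is a one-liner in code. A minimal sketch (the helper name is mine); the square root is taken over the complex numbers so conjugate eigenvalue pairs from 2x2 blocks are handled too:

```python
import numpy as np

def block_eigenvalues(Bk):
    """Eigenvalues of a 1x1 or 2x2 diagonal block via the quadratic formula."""
    if Bk.shape == (1, 1):
        return [Bk[0, 0]]
    t, d = np.trace(Bk), np.linalg.det(Bk)
    disc = np.sqrt(complex(t * t - 4 * d))   # complex for conjugate pairs
    return [(t + disc) / 2, (t - disc) / 2]
```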
Details Of The Eigenvalue Formulae
Suppose B_k = [ a b ; c d ].
λI − B_k = [ λ−a  −b ; −c  λ−d ]
|λI − B_k| = ?
Details Of The Eigenvalue Formulae
Given λI − B_k = [ λ−a  −b ; −c  λ−d ],
|λI − B_k| = (λ − a)(λ − d) − bc
= λ^2 − (a + d)λ + ad − bc
= λ^2 − trace(B_k)λ + det(B_k)
DRAFT Copyright, Gene A
Tagliarini, PhD
Finding Roots of Polynomials
Every n x n matrix has a characteristic polynomial
Every monic polynomial of degree n has a corresponding n x n matrix for which it is the characteristic polynomial
Thus, polynomial root finding is equivalent to finding eigenvalues
Example Please!?!?!?
Consider the monic polynomial of degree n
f(x) = a1 + a2 x + a3 x^2 + … + an x^(n-1) + x^n
and the companion matrix

A = [ −an  −an-1  …  −a2  −a1
       1    0     …   0    0
       0    1     …   0    0
       …               …
       0    0     …   1    0 ]
Find The Eigenvalues of the Companion Matrix

λI − A = [ λ+an  an-1  …  a2  a1
           −1    λ     …  0   0
            0    −1    …  0   0
            …             λ   0
            0    0     …  −1  λ ] = ?
Find The Eigenvalues of the Companion Matrix

λI − A = [ λ+an  an-1  …  a2  a1
           −1    λ     …  0   0
            0    −1    …  0   0
            …             λ   0
            0    0     …  −1  λ ]

Expanding along the first column,
|λI − A| = (λ + an)λ^(n-1) + (1) · | an-1 … a2 a1 ; −1 λ … 0 ; … ; 0 … −1 λ |
Continuing the expansion recursively gives
|λI − A| = λ^n + an λ^(n-1) + … + a2 λ + a1 = f(λ)
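So the roots of f can be computed as the eigenvalues of the companion matrix. A minimal sketch (the function name and the convention that a[k-1] holds a_k are mine); it hands the eigenvalue problem to a library solver for illustration:

```python
import numpy as np

def roots_via_companion(a):
    """Roots of f(x) = a[0] + a[1] x + ... + a[n-1] x^(n-1) + x^n
    as eigenvalues of the companion matrix from the slide."""
    n = len(a)
    C = np.zeros((n, n))
    C[0, :] = -np.asarray(a, dtype=float)[::-1]  # first row: -a_n, ..., -a_1
    C[1:, :-1] = np.eye(n - 1)                   # ones on the subdiagonal
    return np.linalg.eigvals(C)
```

For f(x) = 6 − 5x + x^2 = (x − 2)(x − 3), the companion matrix is [ 5 −6 ; 1 0 ] and its eigenvalues are the roots 2 and 3.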