
MATH 314 Fall 2011
Section 001
Practice Midterm 2 - Solutions
1. Compute the following:
(a) This is a triangular matrix, so the determinant is the product of the
diagonal entries:

    | 1  1  0   3 |
    | 0  2  0   0 |
    | 0  0  1   1 |  = 1 · 2 · 1 · (−3) = −6
    | 0  0  0  −3 |
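As a quick numerical sanity check (a NumPy sketch, not part of the original solution), the determinant of the triangular matrix equals the product of its diagonal entries:

```python
import numpy as np

# The upper-triangular matrix from part (a)
A = np.array([[1, 1, 0, 3],
              [0, 2, 0, 0],
              [0, 0, 1, 1],
              [0, 0, 0, -3]], dtype=float)

# det of a triangular matrix = product of the diagonal entries
print(np.linalg.det(A))       # approximately -6
print(np.prod(np.diag(A)))    # -6.0
```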
(b) Method 1: We use elementary row operations. Applying R2 = R2 − 2R1:

        | 1  1  0   3 |          | 1  1  0   3 |
    A = | 2  2  0   6 |   −→   B = | 0  0  0   0 |
        | 0  0  1   1 |          | 0  0  1   1 |
        | 0  0  0  −3 |          | 0  0  0  −3 |

so det(A) = det(B). Since B has a row of zero entries, det(B) = 0 and so
det(A) = 0.
Method 2: We use the cofactor expansion along the third column of the
4 × 4 determinant, followed by a cofactor expansion along the third row
of the 3 × 3 determinant:

    | 1  1  0   3 |
    | 2  2  0   6 |                     | 1  1   3 |
    | 0  0  1   1 |  = (−1)^(3+3) · 1 · | 2  2   6 |
    | 0  0  0  −3 |                     | 0  0  −3 |

                     = (−1)^(3+3) · (−3) · | 1  1 |  = (−3) · 0 = 0
                                           | 2  2 |
Solution notes: there are many other valid approaches.
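Both methods can be double-checked numerically (a NumPy sketch, not part of the original solution); the row replacement leaves the determinant unchanged, and the zero row forces it to vanish:

```python
import numpy as np

# Matrix A from part (b); its second row is twice its first row
A = np.array([[1, 1, 0, 3],
              [2, 2, 0, 6],
              [0, 0, 1, 1],
              [0, 0, 0, -3]], dtype=float)

# The row replacement R2 -> R2 - 2*R1 does not change the determinant
B = A.copy()
B[1] -= 2 * B[0]              # B now has a zero second row

print(np.linalg.det(A))       # approximately 0
print(np.linalg.det(B))       # approximately 0
```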
(c) We use a cofactor expansion along the first row of the matrix, then we
notice the smaller matrices are diagonal:

    | a  b  0 ... 0 |
    | c  d  0 ... 0 |                     | d  0 ... 0 |                     | c  0 ... 0 |
    | 0  0  1 ... 0 |  = (−1)^(1+1) · a · | 0  1 ... 0 |  + (−1)^(1+2) · b · | 0  1 ... 0 |
    | :  :  :  ⋱  : |                     | :  :  ⋱  : |                     | :  :  ⋱  : |
    | 0  0  0 ... 1 |                     | 0  0 ... 1 |                     | 0  0 ... 1 |

    = ad − bc.
Solution notes: there are other possible solutions, for example by
continuing to expand along the last row.
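The pattern can be spot-checked for a particular size (a NumPy sketch with arbitrarily chosen values of a, b, c, d, not part of the original solution); here n = 5:

```python
import numpy as np

# A 5x5 instance of the pattern in part (c): a 2x2 block [[a, b], [c, d]]
# in the top-left corner and an identity block in the bottom-right.
a, b, c, d = 2.0, 5.0, 1.0, 4.0
M = np.eye(5)
M[0, 0], M[0, 1] = a, b
M[1, 0], M[1, 1] = c, d

print(np.linalg.det(M))   # approximately a*d - b*c = 3
```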


(d) Given
        | 1  1  0 |
    A = | 1  0  1 |,  compute:
        | 2  1  1 |

(a) det(A)     = 0 + 2 + 0 − 1 − 1 − 0 = 0
    det(5A)    = 5^3 · det(A) = 125 · 0 = 0
    det(A^(−1))  does not exist
    det(A^10)  = det(A)^10 = 0^10 = 0.
(b) The minors of A (arranged in a matrix); m_ij is the minor obtained
by eliminating row i and column j of A:

        | −1  −1   1 |
    M = |  1   1  −1 |
        |  1   1  −1 |
(c) The cofactors of A (c_ij = (−1)^(i+j) · m_ij):

        | −1   1   1 |
    C = | −1   1   1 |
        |  1  −1  −1 |
(d) The adjoint matrix of A (adj(A) = C^T):

             | −1  −1   1 |
    adj(A) = |  1   1  −1 |
             |  1   1  −1 |
(e) The inverse matrix A^(−1) does not exist (since det(A) = 0, A is not
invertible).
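The cofactor computations above can be verified against the defining identity A · adj(A) = det(A) · I (a NumPy sketch, not part of the original solution); since det(A) = 0 here, the product should be the zero matrix:

```python
import numpy as np

A = np.array([[1, 1, 0],
              [1, 0, 1],
              [2, 1, 1]])

# Adjoint computed in part (d)
adjA = np.array([[-1, -1,  1],
                 [ 1,  1, -1],
                 [ 1,  1, -1]])

# A @ adj(A) = det(A) * I; det(A) = 0, so the product is the zero matrix
print(A @ adjA)
```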


2. Let
        | 2  1   2 |
    A = | 1  2  −2 |.
        | 0  0   3 |
(a) Find the characteristic equation of A.

                  | 2−λ   1     2  |
    det(A − λI) = |  1   2−λ   −2  |  = (3−λ) · | 2−λ   1  |
                  |  0    0   3−λ  |            |  1   2−λ |

    = (3−λ)[(2−λ)^2 − 1] = (3−λ)(λ^2 − 4λ + 3) = (3−λ)(λ−3)(λ−1) = −(λ−3)^2 (λ−1)

The characteristic equation is: −(λ − 3)^2 (λ − 1) = 0.
(b) Find the eigenvalues of A.
The eigenvalues are: λ1 = 3, λ2 = 1.
(c) Find the eigenspaces of A.

E_λ1 = Null(A − 3I):

    | −1   1   2 | 0 |       | 1  −1  −2 | 0 |        x1 = s + 2t
    |  1  −1  −2 | 0 |  −→   | 0   0   0 | 0 |  =⇒    x2 = s
    |  0   0   0 | 0 |       | 0   0   0 | 0 |        x3 = t

    E_λ1 = { (s + 2t, s, t) ; s, t ∈ R } = Span{ (1, 1, 0), (2, 0, 1) }
E_λ2 = Null(A − I):

    | 1  1   2 | 0 |       | 1  1  0 | 0 |        x1 = −t
    | 1  1  −2 | 0 |  −→   | 0  0  1 | 0 |  =⇒    x2 = t
    | 0  0   2 | 0 |       | 0  0  0 | 0 |        x3 = 0

    E_λ2 = { (−t, t, 0) ; t ∈ R } = Span{ (−1, 1, 0) }
(d) Find for each eigenvalue the algebraic and geometric multiplicity. Is A
diagonalizable?
For λ1 = 3: algebraic multiplicity 2, geometric multiplicity 2.
For λ2 = 1: algebraic multiplicity 1, geometric multiplicity 1.
Since for each of the two eigenvalues the algebraic multiplicity is equal
to the geometric multiplicity, A is diagonalizable.
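The eigenvalues and eigenspace bases found above can be confirmed numerically (a NumPy sketch, not part of the original solution):

```python
import numpy as np

A = np.array([[2, 1, 2],
              [1, 2, -2],
              [0, 0, 3]], dtype=float)

# Eigenvalues should be 3 (with multiplicity 2) and 1
evals = np.linalg.eigvals(A)
print(np.sort(evals.real))   # approximately [1, 3, 3]

# Check each basis vector of the eigenspaces: A v = lambda * v
for lam, v in [(3, [1, 1, 0]), (3, [2, 0, 1]), (1, [-1, 1, 0])]:
    v = np.array(v, dtype=float)
    print(np.allclose(A @ v, lam * v))   # True
```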
3. Let A be an unknown 3 × 3 matrix that has eigenvalues λ1 = 3 and λ2 = 1
and corresponding eigenspaces

    E_λ1 = Span{ (1, 1, 0), (2, 0, 1) },   E_λ2 = Span{ (3, 1, 0) }.
(a) What are the algebraic and geometric multiplicities of λ1, λ2? Explain.
A quick check reveals that (1, 1, 0) and (2, 0, 1) are linearly independent,
hence dim(E_λ1) = 2. So the geometric multiplicity of λ1 is 2 and the
geometric multiplicity of λ2 is dim(E_λ2) = 1. Since the algebraic
multiplicity of each eigenvalue is at least its geometric multiplicity, and
the algebraic multiplicities sum to at most 3, the algebraic multiplicities
of λ1 and λ2 are 2 and 1 respectively.
(b) Find an
1
P = 1
0
invertible
 P and

3
2 3
0 1 , D = 0
0
1 0
a diagonal
D such that P −1 AP = D.

0 0
3 0
0 1
(c) Compute A^100.

P^(−1)AP = D ⇒ (P^(−1)AP)^100 = D^100 ⇒ P^(−1) A^100 P = D^100 ⇒ A^100 = P D^100 P^(−1).

            | 3^100    0     0 |                  | −1   3   2 |
    D^100 = |   0    3^100   0 |,   P^(−1) = (1/2) |  0   0   2 |
            |   0      0     1 |                  |  1  −1  −2 |

            | 1  2  3 |   | 3^100    0     0 |         | −1   3   2 |
    A^100 = | 1  0  1 | · |   0    3^100   0 | · (1/2) |  0   0   2 |
            | 0  1  0 |   |   0      0     1 |         |  1  −1  −2 |

                  | 3 − 3^100   3^101 − 3   2(3^101 − 3) |
          = (1/2) | 1 − 3^100   3^101 − 1   2(3^100 − 1) |.
                  |     0           0         2 · 3^100  |
(d) Find a probability vector ~x such that A~x = ~x.

A probability vector satisfying A~x = ~x is an eigenvector for λ2 = 1, so
~x = c (3, 1, 0). Its entries must sum to 1: 3c + c = 1, hence c = 1/4 and
~x = (3/4, 1/4, 0).
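The fixed vector can be checked by reconstructing A = P D P^(−1) numerically (a NumPy sketch, not part of the original solution):

```python
import numpy as np

# Reconstruct A from part (b) and verify A x = x for the probability vector
P = np.array([[1, 2, 3], [1, 0, 1], [0, 1, 0]], dtype=float)
D = np.diag([3.0, 3.0, 1.0])
A = P @ D @ np.linalg.inv(P)

x = np.array([0.75, 0.25, 0.0])
print(np.allclose(A @ x, x))   # True: x is fixed by A
print(x.sum())                 # 1.0: entries sum to 1
```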
(e) Can A be a stochastic matrix?
A cannot be a stochastic matrix because the eigenvalues of a stochastic
matrix must obey |λ| ≤ 1 (1 is the dominant eigenvalue), whereas
|λ1| = 3 > 1.
4. (a) Give an example of three linearly independent vectors ~u, ~v, ~w in R³
such that ~u is orthogonal to ~v and ~v is orthogonal to ~w, but ~w is not
orthogonal to ~u. You will use these vectors for the rest of the problem.
Solution notes: your choice of ~u, ~v, ~w will probably be different.

Let ~u = (1, 0, 0) and ~w = (1, 2, 3), and check that ~u and ~w are not
orthogonal since ~u · ~w = 1 ≠ 0. Notice that ~v must be orthogonal to both
~u and ~w. A vector with this property is ~u × ~w, so set

                   | i  j  k |
    ~v = ~u × ~w = | 1  0  0 |  = −3j + 2k = (0, −3, 2).
                   | 1  2  3 |
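The cross product and the orthogonality claims can be checked directly (a NumPy sketch, not part of the original solution):

```python
import numpy as np

u = np.array([1, 0, 0])
w = np.array([1, 2, 3])

# v = u x w is orthogonal to both u and w
v = np.cross(u, w)
print(v)             # [ 0 -3  2]
print(np.dot(u, v))  # 0
print(np.dot(v, w))  # 0
print(np.dot(u, w))  # 1  (u and w are not orthogonal)
```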
(b) Let S = Span{~u, ~v}. Compute proj_S(~w) and perp_S(~w).

Note that {~u, ~v} is an orthogonal basis of S. Therefore

    proj_S(~w) = (~w · ~u / ~u · ~u) ~u + (~w · ~v / ~v · ~v) ~v = ~u + 0 · ~v = ~u = (1, 0, 0),

    perp_S(~w) = ~w − proj_S(~w) = (1, 2, 3) − (1, 0, 0) = (0, 2, 3).
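The orthogonal-basis projection formula used above translates directly into code (a NumPy sketch, not part of the original solution):

```python
import numpy as np

u = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, -3.0, 2.0])
w = np.array([1.0, 2.0, 3.0])

# Projection onto S = Span{u, v} via the orthogonal-basis formula
proj = (w @ u) / (u @ u) * u + (w @ v) / (v @ v) * v
perp = w - proj
print(proj)   # [1. 0. 0.]
print(perp)   # [0. 2. 3.]
```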
(c) Find an orthogonal basis B of R³ that contains the vectors ~u and ~v.

Since ~p = perp_S(~w) is orthogonal to both ~u and ~v, it is convenient to
pick

    B = {~u, ~v, ~p} = { (1, 0, 0), (0, −3, 2), (0, 2, 3) }.
(d) Find the coordinates of ~w with respect to the basis B described above.

    ~w = (~w · ~u / ~u · ~u) ~u + (~w · ~v / ~v · ~v) ~v + (~w · ~p / ~p · ~p) ~p = 1 · ~u + 0 · ~v + 1 · ~p

Therefore [~w]_B = (1, 0, 1).
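Because B is orthogonal, each coordinate is just a quotient of dot products; this can be verified numerically (a NumPy sketch, not part of the original solution):

```python
import numpy as np

u = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, -3.0, 2.0])
p = np.array([0.0, 2.0, 3.0])
w = np.array([1.0, 2.0, 3.0])

# Coordinates of w with respect to the orthogonal basis B = {u, v, p}
coords = np.array([(w @ b) / (b @ b) for b in (u, v, p)])
print(coords)   # [1. 0. 1.]
print(np.allclose(coords[0] * u + coords[1] * v + coords[2] * p, w))  # True
```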
5. The purpose of this problem is to prove that eigenspaces are subspaces.
(a) Let A be an n × n matrix. Define what an eigenvalue λ of A is and
what an eigenvector ~v corresponding to the eigenvalue λ is.
A scalar λ and a non-zero vector ~v are called an eigenvalue of A and an
eigenvector corresponding to the eigenvalue λ, respectively, if they
satisfy A~v = λ~v.
(b) Let ~v1 and ~v2 be eigenvectors corresponding to the same eigenvalue λ.
Prove that ~v1 + ~v2 is an eigenvector corresponding to the eigenvalue λ
as well.
    A~v1 = λ~v1 and A~v2 = λ~v2 =⇒ A~v1 + A~v2 = λ~v1 + λ~v2 =⇒ A(~v1 + ~v2) = λ(~v1 + ~v2)
=⇒ ~v1 + ~v2 is an eigenvector corresponding to the eigenvalue λ.
(c) Let ~v be an eigenvector corresponding to the eigenvalue λ and let
c be a scalar. Prove that c~v is an eigenvector corresponding to the
eigenvalue λ.
A~v = λ~v =⇒ cA~v = cλ~v =⇒ A(c~v) = λ(c~v)
=⇒ c~v is an eigenvector corresponding to the eigenvalue λ.
(d) Define the eigenspace Eλ and explain how the facts above prove it is a
subspace of Rn .
The eigenspace Eλ is the set of all eigenvectors corresponding to the
eigenvalue λ, together with the zero vector:
Eλ = {~v ; A~v = λ~v}.
Part (b) shows that Eλ is closed under vector addition and part (c)
shows that Eλ is closed under scalar multiplication, therefore Eλ is a
subspace of Rn .
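The closure facts in (b) and (c) can be illustrated numerically; as an example (an assumption of this sketch, not part of the proof), take the matrix from problem 2, whose eigenvalue 3 has eigenvectors (1, 1, 0) and (2, 0, 1):

```python
import numpy as np

# Illustration of parts (b) and (c) with a concrete matrix and eigenvalue
A = np.array([[2, 1, 2], [1, 2, -2], [0, 0, 3]], dtype=float)
v1 = np.array([1.0, 1.0, 0.0])
v2 = np.array([2.0, 0.0, 1.0])

s = v1 + v2
print(np.allclose(A @ s, 3 * s))                  # True: the sum stays in E_3
print(np.allclose(A @ (5 * v1), 3 * (5 * v1)))    # True: a scalar multiple stays in E_3
```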