
Homework # 11 (Written) Solutions
Math 152, Fall 2014
Instructor: Dr. Doreen De Leon
p. 293-4 (Section 5.4): 6, 10, 16, 22
6. Let T : P_2 → P_4 be the transformation that maps a polynomial p(t) into the polynomial p(t) + 2t^2 p(t).
(a) Find the image of p(t) = 3 − 2t + t^2.
Solution: We seek T(p(t)).
\[ T(p(t)) = p(t) + 2t^2 p(t) = (3 - 2t + t^2) + 2t^2(3 - 2t + t^2) = 3 - 2t + 7t^2 - 4t^3 + 2t^4. \]
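For a quick check of this expansion, here is a minimal SymPy sketch of my own (not part of the graded solution; it assumes SymPy is available):
\begin{verbatim}
# Sketch: expand T(p) = p + 2t^2 p for p = 3 - 2t + t^2.
import sympy as sp

t = sp.symbols('t')
p = 3 - 2*t + t**2
print(sp.expand(p + 2*t**2*p))   # expected: 2*t**4 - 4*t**3 + 7*t**2 - 2*t + 3
\end{verbatim}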
(b) Show that T is a linear transformation.
Let p, q ∈ P_2 and c be a scalar.
•
\[ T(p + q) = [p(t) + q(t)] + 2t^2[p(t) + q(t)] = [p(t) + 2t^2 p(t)] + [q(t) + 2t^2 q(t)] = T(p) + T(q). \]
•
\[ T(cp) = [cp(t)] + 2t^2[cp(t)] = c(p(t) + 2t^2 p(t)) = cT(p). \]
(c) Find the matrix for T relative to the bases {1, t, t^2} and {1, t, t^2, t^3, t^4}.
Solution: Let B = {1, t, t^2} and C = {1, t, t^2, t^3, t^4}. Then, we need to find the C-coordinate vectors for the transformation applied to the polynomials in basis B.
So, we have
\[ T(1) = 1 + 2t^2(1) = 1 + 2t^2, \text{ so } [T(1)]_C = \begin{bmatrix} 1 \\ 0 \\ 2 \\ 0 \\ 0 \end{bmatrix}; \]
\[ T(t) = t + 2t^2(t) = t + 2t^3, \text{ so } [T(t)]_C = \begin{bmatrix} 0 \\ 1 \\ 0 \\ 2 \\ 0 \end{bmatrix}; \]
\[ T(t^2) = t^2 + 2t^2(t^2) = t^2 + 2t^4, \text{ so } [T(t^2)]_C = \begin{bmatrix} 0 \\ 0 \\ 1 \\ 0 \\ 2 \end{bmatrix}. \]
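Before assembling the matrix, here is a minimal SymPy sketch (my own check, assuming SymPy is available) that applies T to each basis polynomial and reads off its C-coordinates; it should reproduce the three coordinate vectors above and the matrix M given below.
\begin{verbatim}
# Sketch: coordinate vectors of T(p) = p + 2t^2 p relative to C = {1, t, t^2, t^3, t^4}.
import sympy as sp

t = sp.symbols('t')
basis_B = [sp.Integer(1), t, t**2]                   # basis B of P_2
cols = []
for p in basis_B:
    Tp = sp.expand(p + 2*t**2*p)                     # apply T and expand
    cols.append([Tp.coeff(t, k) for k in range(5)])  # C-coordinates of T(p)

M = sp.Matrix(cols).T    # columns are [T(1)]_C, [T(t)]_C, [T(t^2)]_C
print(M)                 # expected columns: (1,0,2,0,0), (0,1,0,2,0), (0,0,1,0,2)
\end{verbatim}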
Therefore,
\[ M = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 2 & 0 & 1 \\ 0 & 2 & 0 \\ 0 & 0 & 2 \end{bmatrix}. \]
10. Define T : P_3 → R^4 by
\[ T(p) = \begin{bmatrix} p(-2) \\ p(3) \\ p(1) \\ p(0) \end{bmatrix}. \]
(a) Show that T is a linear transformation.
Solution: Let p, q ∈ P_3 and let c be a scalar. Then
•
\[ T(p + q) = \begin{bmatrix} (p+q)(-2) \\ (p+q)(3) \\ (p+q)(1) \\ (p+q)(0) \end{bmatrix} = \begin{bmatrix} p(-2) + q(-2) \\ p(3) + q(3) \\ p(1) + q(1) \\ p(0) + q(0) \end{bmatrix} = \begin{bmatrix} p(-2) \\ p(3) \\ p(1) \\ p(0) \end{bmatrix} + \begin{bmatrix} q(-2) \\ q(3) \\ q(1) \\ q(0) \end{bmatrix} = T(p) + T(q). \]
•
\[ T(cp) = \begin{bmatrix} (cp)(-2) \\ (cp)(3) \\ (cp)(1) \\ (cp)(0) \end{bmatrix} = \begin{bmatrix} cp(-2) \\ cp(3) \\ cp(1) \\ cp(0) \end{bmatrix} = c \begin{bmatrix} p(-2) \\ p(3) \\ p(1) \\ p(0) \end{bmatrix} = cT(p). \]
(b) Find the matrix for T relative to the basis {1, t, t^2, t^3} for P_3 and the standard basis for R^4.
Solution: Let B = {1, t, t^2, t^3} and E = {e_1, e_2, e_3, e_4} (the standard basis for R^4). We need to find the E-coordinate vectors for the transformation applied to the polynomials in basis B.
\[ T(1) = \begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \end{bmatrix}, \text{ so } [T(1)]_E = \begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \end{bmatrix}; \qquad T(t) = \begin{bmatrix} -2 \\ 3 \\ 1 \\ 0 \end{bmatrix}, \text{ so } [T(t)]_E = \begin{bmatrix} -2 \\ 3 \\ 1 \\ 0 \end{bmatrix}; \]
\[ T(t^2) = \begin{bmatrix} 4 \\ 9 \\ 1 \\ 0 \end{bmatrix}, \text{ so } [T(t^2)]_E = \begin{bmatrix} 4 \\ 9 \\ 1 \\ 0 \end{bmatrix}; \qquad T(t^3) = \begin{bmatrix} -8 \\ 27 \\ 1 \\ 0 \end{bmatrix}, \text{ so } [T(t^3)]_E = \begin{bmatrix} -8 \\ 27 \\ 1 \\ 0 \end{bmatrix}. \]
Therefore,
\[ M = \begin{bmatrix} 1 & -2 & 4 & -8 \\ 1 & 3 & 9 & 27 \\ 1 & 1 & 1 & 1 \\ 1 & 0 & 0 & 0 \end{bmatrix}. \]
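As an optional check (my own sketch, assuming SymPy is available), the same matrix can be produced by evaluating each basis polynomial at the four points −2, 3, 1, and 0:
\begin{verbatim}
# Sketch: matrix of the evaluation map T(p) = (p(-2), p(3), p(1), p(0))
# on the basis {1, t, t^2, t^3}.
import sympy as sp

t = sp.symbols('t')
basis_B = [sp.Integer(1), t, t**2, t**3]
points = [-2, 3, 1, 0]

# entry (i, j) is the j-th basis polynomial evaluated at the i-th point
M = sp.Matrix([[p.subs(t, a) for p in basis_B] for a in points])
print(M)   # expected rows: (1,-2,4,-8), (1,3,9,27), (1,1,1,1), (1,0,0,0)
\end{verbatim}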
16. Define T : R^2 → R^2 by T(x) = Ax. Find a basis B for R^2 with the property that [T]_B is diagonal.
\[ A = \begin{bmatrix} 4 & -2 \\ -1 & 5 \end{bmatrix}. \]
Solution: We must determine whether A is diagonalizable, that is, whether A = PDP^{-1} for some invertible P and diagonal D. If so, then B will consist of the columns of P.
\[ 0 = \det(A - \lambda I) = \begin{vmatrix} 4-\lambda & -2 \\ -1 & 5-\lambda \end{vmatrix} = (4-\lambda)(5-\lambda) - 2 = \lambda^2 - 9\lambda + 18 = (\lambda - 3)(\lambda - 6). \]
Therefore, λ = 3, 6.
λ = 3: Find a basis for Nul(A − 3I). We have that
\[ A - 3I = \begin{bmatrix} 1 & -2 \\ -1 & 2 \end{bmatrix}. \]
We next do Gaussian elimination to find an eigenvector:
\[ \left[\begin{array}{rr|r} 1 & -2 & 0 \\ -1 & 2 & 0 \end{array}\right] \xrightarrow{r_2 \to r_2 + r_1} \left[\begin{array}{rr|r} 1 & -2 & 0 \\ 0 & 0 & 0 \end{array}\right]. \]
We see that x_2 is a free variable and x_1 − 2x_2 = 0 ⟹ x_1 = 2x_2. So, let x_2 = r, r ∈ R. Then x_1 = 2r, so
\[ x = \begin{bmatrix} 2r \\ r \end{bmatrix} = r \begin{bmatrix} 2 \\ 1 \end{bmatrix}. \]
Therefore, Nul(A − 3I) is spanned by (2, 1)^T, and v_1 = (2, 1)^T is an eigenvector corresponding to λ = 3.
λ = 6: Find a basis for Nul(A − 6I). We have that
\[ A - 6I = \begin{bmatrix} -2 & -2 \\ -1 & -1 \end{bmatrix}. \]
We next do Gaussian elimination to find an eigenvector:
\[ \left[\begin{array}{rr|r} -2 & -2 & 0 \\ -1 & -1 & 0 \end{array}\right] \xrightarrow{r_2 \to r_2 - \frac{1}{2} r_1} \left[\begin{array}{rr|r} -2 & -2 & 0 \\ 0 & 0 & 0 \end{array}\right]. \]
We see that x_2 is a free variable and −x_1 − x_2 = 0 ⟹ x_1 = −x_2. So, let x_2 = r, r ∈ R. Then x_1 = −r, so
\[ x = \begin{bmatrix} -r \\ r \end{bmatrix} = r \begin{bmatrix} -1 \\ 1 \end{bmatrix}. \]
Therefore, Nul(A − 6I) is spanned by (−1, 1)^T, and v_2 = (−1, 1)^T is an eigenvector corresponding to λ = 6.
We see that A is diagonalizable, with
\[ P = \begin{bmatrix} 2 & -1 \\ 1 & 1 \end{bmatrix} \quad\text{and}\quad D = \begin{bmatrix} 3 & 0 \\ 0 & 6 \end{bmatrix}. \]
So,
\[ B = \left\{ \begin{bmatrix} 2 \\ 1 \end{bmatrix}, \begin{bmatrix} -1 \\ 1 \end{bmatrix} \right\}. \]
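A quick numerical check of this diagonalization (my own sketch, assuming NumPy is available):
\begin{verbatim}
# Sketch: verify that the P and D found above diagonalize A, i.e. A = P D P^{-1}.
import numpy as np

A = np.array([[4.0, -2.0],
              [-1.0, 5.0]])
P = np.array([[2.0, -1.0],
              [1.0,  1.0]])   # columns are v1 and v2
D = np.diag([3.0, 6.0])

print(sorted(np.linalg.eigvals(A)))              # expected: approximately [3.0, 6.0]
print(np.allclose(A, P @ D @ np.linalg.inv(P)))  # expected: True
\end{verbatim}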
22. Prove: If A is diagonalizable and B is similar to A, then B is also diagonalizable.
Solution: Since A is diagonalizable, A = PDP^{-1} for some invertible matrix P and diagonal matrix D. Since B is similar to A, B = QAQ^{-1} for some invertible matrix Q. Then we have the following:
\[ B = Q(PDP^{-1})Q^{-1} = QPDP^{-1}Q^{-1} = (QP)D(QP)^{-1}, \]
since (QP)^{-1} = P^{-1}Q^{-1}. If we let S = QP, which is invertible as a product of invertible matrices, then B = SDS^{-1}. Therefore, B is diagonalizable.
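A small numerical illustration of the argument (my own sketch, assuming NumPy; the matrices P, D, and Q below are arbitrary choices, not taken from the problem):
\begin{verbatim}
# Sketch: if A = P D P^{-1} and B = Q A Q^{-1}, then S = QP satisfies B = S D S^{-1}.
import numpy as np

P = np.array([[2.0, -1.0], [1.0, 1.0]])
D = np.diag([3.0, 6.0])
A = P @ D @ np.linalg.inv(P)              # a diagonalizable matrix
Q = np.array([[1.0, 2.0], [0.0, 1.0]])    # any invertible matrix
B = Q @ A @ np.linalg.inv(Q)              # B is similar to A

S = Q @ P
print(np.allclose(B, S @ D @ np.linalg.inv(S)))   # expected: True
\end{verbatim}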
p. 300 (Section 5.5): 22
22. Let A be a complex (or real) n × n matrix, and let x ∈ C^n be an eigenvector corresponding to an eigenvalue λ ∈ C. Show that for each nonzero complex scalar µ, the vector µx is an eigenvector of A.
Solution: We need to show that A(µx) = λ(µx).
A(µx) = µ(Ax)
= µλx
= λ(µx).
Since µ ≠ 0 and x ≠ 0, we also have µx ≠ 0. Therefore, µx is an eigenvector of A corresponding to λ.
Extra Problem
Find e^A for
\[ A = \begin{bmatrix} 1 & -3 \\ -4 & 5 \end{bmatrix}. \]
Solution: We must first diagonalize A, if possible.
\[ 0 = \det(A - \lambda I) = \begin{vmatrix} 1-\lambda & -3 \\ -4 & 5-\lambda \end{vmatrix} = (1-\lambda)(5-\lambda) - 12 = \lambda^2 - 6\lambda - 7 = (\lambda + 1)(\lambda - 7). \]
Therefore, λ = −1, 7.
λ = −1: Find a basis for Nul(A + I). We have that
\[ A + I = \begin{bmatrix} 2 & -3 \\ -4 & 6 \end{bmatrix}. \]
We next do Gaussian elimination to find an eigenvector:
\[ \left[\begin{array}{rr|r} 2 & -3 & 0 \\ -4 & 6 & 0 \end{array}\right] \xrightarrow{r_2 \to r_2 + 2r_1} \left[\begin{array}{rr|r} 2 & -3 & 0 \\ 0 & 0 & 0 \end{array}\right]. \]
We see that x_2 is a free variable and 2x_1 − 3x_2 = 0 ⟹ x_1 = \frac{3}{2} x_2. So, let x_2 = 2r, r ∈ R. Then x_1 = 3r, so
\[ x = \begin{bmatrix} 3r \\ 2r \end{bmatrix} = r \begin{bmatrix} 3 \\ 2 \end{bmatrix}. \]
Therefore, Nul(A + I) is spanned by (3, 2)^T, and v_1 = (3, 2)^T is an eigenvector corresponding to λ = −1.
λ = 7: Find a basis for Nul(A − 7I). We have that
\[ A - 7I = \begin{bmatrix} -6 & -3 \\ -4 & -2 \end{bmatrix}. \]
We next do Gaussian elimination to find an eigenvector:
\[ \left[\begin{array}{rr|r} -6 & -3 & 0 \\ -4 & -2 & 0 \end{array}\right] \xrightarrow{r_2 \to r_2 - \frac{2}{3} r_1} \left[\begin{array}{rr|r} -6 & -3 & 0 \\ 0 & 0 & 0 \end{array}\right]. \]
We see that x_2 is a free variable and −6x_1 − 3x_2 = 0 ⟹ x_1 = -\frac{1}{2} x_2. So, let x_2 = 2r, r ∈ R. Then x_1 = −r, so
\[ x = \begin{bmatrix} -r \\ 2r \end{bmatrix} = r \begin{bmatrix} -1 \\ 2 \end{bmatrix}. \]
Therefore, Nul(A − 7I) is spanned by (−1, 2)^T, and v_2 = (−1, 2)^T is an eigenvector corresponding to λ = 7.
Therefore, we have that A = PDP^{-1}, where
\[ P = \begin{bmatrix} 3 & -1 \\ 2 & 2 \end{bmatrix} \quad\text{and}\quad D = \begin{bmatrix} -1 & 0 \\ 0 & 7 \end{bmatrix}. \]
We calculate P^{-1}:
\[ P^{-1} = \frac{1}{\det P} \begin{bmatrix} 2 & 1 \\ -2 & 3 \end{bmatrix} = \frac{1}{8} \begin{bmatrix} 2 & 1 \\ -2 & 3 \end{bmatrix} = \begin{bmatrix} \frac{1}{4} & \frac{1}{8} \\ -\frac{1}{4} & \frac{3}{8} \end{bmatrix}. \]
Then, e^A = P e^D P^{-1}, or
\[ e^A = \begin{bmatrix} 3 & -1 \\ 2 & 2 \end{bmatrix} \begin{bmatrix} e^{-1} & 0 \\ 0 & e^{7} \end{bmatrix} \begin{bmatrix} \frac{1}{4} & \frac{1}{8} \\ -\frac{1}{4} & \frac{3}{8} \end{bmatrix} = \begin{bmatrix} 3 & -1 \\ 2 & 2 \end{bmatrix} \begin{bmatrix} \frac{1}{4}e^{-1} & \frac{1}{8}e^{-1} \\ -\frac{1}{4}e^{7} & \frac{3}{8}e^{7} \end{bmatrix}. \]
Finally, we have
\[ e^A = \begin{bmatrix} \frac{3}{4}e^{-1} + \frac{1}{4}e^{7} & \frac{3}{8}e^{-1} - \frac{3}{8}e^{7} \\[4pt] \frac{1}{2}e^{-1} - \frac{1}{2}e^{7} & \frac{1}{4}e^{-1} + \frac{3}{4}e^{7} \end{bmatrix}. \]
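To double-check the closed form numerically (my own sketch, assuming NumPy and SciPy are available), compare P e^D P^{-1} with SciPy's matrix exponential:
\begin{verbatim}
# Sketch: compare e^A = P e^D P^{-1} with scipy.linalg.expm(A).
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, -3.0],
              [-4.0, 5.0]])
P = np.array([[3.0, -1.0],
              [2.0,  2.0]])
D_diag = np.array([-1.0, 7.0])

eA = P @ np.diag(np.exp(D_diag)) @ np.linalg.inv(P)
print(np.allclose(eA, expm(A)))   # expected: True
print(eA)                         # matches the closed-form entries above
\end{verbatim}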
p. 309 (Section 5.6): 2
2. Suppose the eigenvalues of a 3 × 3 matrix A are 3, 4/5, and 3/5, with corresponding eigenvectors
\[ \begin{bmatrix} 1 \\ 0 \\ -3 \end{bmatrix}, \quad \begin{bmatrix} 2 \\ 1 \\ -5 \end{bmatrix}, \quad\text{and}\quad \begin{bmatrix} -3 \\ -3 \\ 7 \end{bmatrix}. \]
Let x_0 = (−2, −5, 3)^T. Find the solution of the equation x_{k+1} = Ax_k for the specified x_0, and describe what happens as k → ∞.
Solution: We know that x_k = c_1 λ_1^k v_1 + c_2 λ_2^k v_2 + c_3 λ_3^k v_3. Therefore,
\[ x_0 = c_1 \lambda_1^0 v_1 + c_2 \lambda_2^0 v_2 + c_3 \lambda_3^0 v_3 = c_1 v_1 + c_2 v_2 + c_3 v_3. \]
We now solve this equation for c_1, c_2, and c_3:
\[ \begin{bmatrix} -2 \\ -5 \\ 3 \end{bmatrix} = c_1 \begin{bmatrix} 1 \\ 0 \\ -3 \end{bmatrix} + c_2 \begin{bmatrix} 2 \\ 1 \\ -5 \end{bmatrix} + c_3 \begin{bmatrix} -3 \\ -3 \\ 7 \end{bmatrix}. \]
We now solve the equivalent system by Gaussian elimination:
\[ \left[\begin{array}{rrr|r} 1 & 2 & -3 & -2 \\ 0 & 1 & -3 & -5 \\ -3 & -5 & 7 & 3 \end{array}\right] \xrightarrow{r_3 \to r_3 + 3r_1} \left[\begin{array}{rrr|r} 1 & 2 & -3 & -2 \\ 0 & 1 & -3 & -5 \\ 0 & 1 & -2 & -3 \end{array}\right] \xrightarrow{r_3 \to r_3 - r_2} \left[\begin{array}{rrr|r} 1 & 2 & -3 & -2 \\ 0 & 1 & -3 & -5 \\ 0 & 0 & 1 & 2 \end{array}\right]. \]
Using back substitution, we have
c_3 = 2
c_2 − 3c_3 = −5 ⟹ c_2 = −5 + 3c_3 = 1
c_1 + 2c_2 − 3c_3 = −2 ⟹ c_1 = −2 − 2c_2 + 3c_3 = −2 − 2 + 6 = 2.
Therefore, the solution to the system is
\[ x_k = 2 \cdot 3^k \begin{bmatrix} 1 \\ 0 \\ -3 \end{bmatrix} + \left(\frac{4}{5}\right)^{k} \begin{bmatrix} 2 \\ 1 \\ -5 \end{bmatrix} + 2\left(\frac{3}{5}\right)^{k} \begin{bmatrix} -3 \\ -3 \\ 7 \end{bmatrix}. \]
As k → ∞, the terms involving (4/5)^k and (3/5)^k go to 0, so the solution behaves like
\[ x_k \approx 2 \cdot 3^k \begin{bmatrix} 1 \\ 0 \\ -3 \end{bmatrix}, \]
which grows in magnitude along the direction of v_1.
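A short numerical check of the coefficients and of the long-run behavior (my own sketch, assuming NumPy is available):
\begin{verbatim}
# Sketch: solve V c = x0 for the coefficients and confirm x_k is dominated by the 3^k term.
import numpy as np

V = np.array([[ 1.0,  2.0, -3.0],
              [ 0.0,  1.0, -3.0],
              [-3.0, -5.0,  7.0]])       # columns are v1, v2, v3
x0 = np.array([-2.0, -5.0, 3.0])

c = np.linalg.solve(V, x0)
print(c)                                 # expected: [2, 1, 2]

lams = np.array([3.0, 4.0/5.0, 3.0/5.0])
k = 40
xk = sum(c[i] * lams[i]**k * V[:, i] for i in range(3))
print(xk / (2.0 * 3.0**k))               # close to v1 = (1, 0, -3) for large k
\end{verbatim}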