Solutions to HW #1, Sections 5.1, 5.2
Batman
February 8, 2010
Hi kids! Here are some solutions for you from the first homework assignment. I can't
possibly type every solution out, but I have chosen some representative solutions to type
out. Generally, Corey will find someone (such as myself) to type out some solutions to
your homework. The solutions you find here will likely not repeat solutions Corey went
over in class; beyond that, he'll try to choose questions that he thinks you'll all benefit
from seeing solved. In addition, he will probably have solutions to the questions that he
grades from your assignments, so that he may refer you to these solutions as he grades
your homework. Of course, at any time, any of you may request a solution to any homework
question, and Corey will find a way to share it with you. So, here are some solutions to
the first homework assignment!
5.1.8 (a) Suppose T is invertible and 0 is an eigenvalue of T. Then there exists a nonzero
eigenvector v satisfying Tv = 0v = 0. Thus, ker T ≠ {0}, and it follows that T is not
one-to-one, contradicting the hypothesis of invertibility. Conversely, suppose 0 is not
an eigenvalue of T. Then there are no nonzero vectors v so that Tv = 0v = 0. Thus,
ker T = {0} and T is one-to-one, and since V is finite dimensional and T : V → V,
it follows that T is onto as well. So, T is invertible.
(b) Suppose T is invertible and λ is an eigenvalue of T. From the previous part,
λ ≠ 0. Then there exists a nonzero vector v so that Tv = λv. Applying T⁻¹ to both
sides and dividing the resulting equation by λ, we have (1/λ)v = T⁻¹v, and it follows
that 1/λ is an eigenvalue of T⁻¹. The converse is proven exactly the same way.
(c) The results above extend to matrices by replacing the phrase “linear operator”
with “square matrix”. The proofs of these results follow exactly the same way.
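These facts are easy to sanity-check with matrices, as in (c). Below is a minimal Python sketch (the 2×2 matrix and its eigenvectors are my own illustrative choices, not from the assignment), verifying that an eigenvector of A for λ is an eigenvector of A⁻¹ for 1/λ:

```python
from fractions import Fraction as F

def matvec(M, v):
    """Multiply a 2x2 matrix by a 2-vector."""
    return [M[0][0]*v[0] + M[0][1]*v[1],
            M[1][0]*v[0] + M[1][1]*v[1]]

def inverse2(M):
    """Invert a 2x2 matrix via the adjugate formula."""
    det = M[0][0]*M[1][1] - M[0][1]*M[1][0]
    return [[ M[1][1]/det, -M[0][1]/det],
            [-M[1][0]/det,  M[0][0]/det]]

# A is upper triangular with eigenvalues 2 and 3;
# (1, 0) and (1, 1) are corresponding eigenvectors.
A = [[F(2), F(1)], [F(0), F(3)]]
Ainv = inverse2(A)

for lam, v in [(F(2), [F(1), F(0)]), (F(3), [F(1), F(1)])]:
    assert matvec(A, v) == [lam*x for x in v]      # Av = lam*v
    assert matvec(Ainv, v) == [x/lam for x in v]   # (A^-1)v = (1/lam)v
print("eigenvalues of A^-1 are the reciprocals, with the same eigenvectors")
```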
5.1.9 We compute the roots of the characteristic polynomial when U is an upper triangular
matrix. Suppose the diagonal entries of U are λ₁, . . . , λₙ. Then U − tI is the upper
triangular matrix with the same entries as U, except that the diagonal entries are
now λ₁ − t, . . . , λₙ − t. Since the determinant of an upper triangular matrix is the
product of its diagonal entries (see page 222, number 23; this fact can be proved by
induction, expanding the determinant along the first column), we have

Char(U) = ∏ᵢ₌₁ⁿ (λᵢ − t).

Thus, the roots of this polynomial (the eigenvalues of U) are λ₁, . . . , λₙ.
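If you like, you can check this numerically. The sketch below (my own 3×3 example, not from the book) evaluates det(U − tI) at each diagonal entry and confirms these are exactly the roots:

```python
def det3(M):
    """3x3 determinant by cofactor expansion along the first row."""
    a, b, c = M[0]
    d, e, f = M[1]
    g, h, i = M[2]
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

def char_poly_at(U, t):
    """Evaluate det(U - tI) for a 3x3 matrix U at the scalar t."""
    return det3([[U[r][c] - (t if r == c else 0) for c in range(3)]
                 for r in range(3)])

U = [[2, 7, -1],
     [0, 5,  4],
     [0, 0, -3]]   # upper triangular; diagonal entries 2, 5, -3

for lam in (2, 5, -3):
    assert char_poly_at(U, lam) == 0   # each diagonal entry is an eigenvalue
assert char_poly_at(U, 1) != 0         # 1 is not an eigenvalue
print("the diagonal entries are exactly the eigenvalues of U")
```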
5.1.11 (a) Suppose A is similar to λI. Then there exists an invertible matrix Q so that
Q⁻¹AQ = λI. Then AQ = Q(λI) = λQ, and so A = λQQ⁻¹ = λI.
(b) Suppose A is diagonalizable, and A has only one eigenvalue λ. Then A is similar
to the diagonal matrix diag(λ, . . . , λ) = λI. By the previous result, A = λI.
(c) The given matrix has 1 as its only eigenvalue. If it were diagonalizable, then it
would have to equal 1I = I. It clearly does not. Therefore it is not diagonalizable.
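Here is a small Python illustration. The textbook's actual matrix is not reproduced above, so the matrix A below is a stand-in of my own choosing with the same property (only eigenvalue 1, yet not the identity):

```python
# Stand-in for the textbook's matrix: upper triangular with both diagonal
# entries 1, so det(A - tI) = (1 - t)^2 and 1 is the only eigenvalue.
A = [[1, 1],
     [0, 1]]
I = [[1, 0],
     [0, 1]]

# By 5.1.11(b), if A were diagonalizable it would equal 1*I = I.
assert A != I   # it doesn't, so A is not diagonalizable

# Confirm directly: A - I sends (x, y) to (y, 0), so its kernel (the
# eigenspace for 1) is {(x, 0)} -- only one-dimensional.
def apply_A_minus_I(v):
    x, y = v
    return (y, 0)

assert apply_A_minus_I((3, 0)) == (0, 0)   # (3, 0) is an eigenvector
assert apply_A_minus_I((0, 1)) != (0, 0)   # (0, 1) is not
print("one eigenvalue, one-dimensional eigenspace: not diagonalizable")
```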
5.2.3 (e) Suppose T(z, w) = (z + iw, iz + w). Choose the basis β = {(1, 0), (0, 1)} for C²
(remember, the field of scalars is C, so we regard 1 as a complex number in both of
the above vectors, not as the real part of some other complex number; seeing that
contextual difference is part of why Corey assigned this problem). Thus,

[T]β = [ 1 i ; i 1 ].
The characteristic polynomial of T is therefore Char(T) = (1 − t)² − i² = t² − 2t + 2,
which has (complex) roots 1 ± i. It follows that T is diagonalizable, since Char(T)
splits and all of its eigenvalues are distinct. A set of eigenvectors forming a basis
for C² is {(1, 1), (1, −1)}. If one doesn't see that right away, these basis vectors
can be found as follows. We have
[T]β − (1 + i)I = [ −i i ; i −i ] ∼ [ −i i ; 0 0 ] ∼ [ 1 −1 ; 0 0 ].
So (a, b) solves the corresponding homogeneous system if and only if a = b, so that
(a, b) = (a, a) = a(1, 1). Thus, (1, 1) is an eigenvector corresponding to the eigenvalue
1 + i.
For the eigenvalue 1 − i, we consider
[T]β − (1 − i)I = [ i i ; i i ] ∼ [ i i ; 0 0 ] ∼ [ 1 1 ; 0 0 ].
Thus (a, b) solves the corresponding homogeneous system if and only if a = −b, so
that (a, b) = (−b, b) = −b(1, −1). Thus, being a linearly independent collection of
two eigenvectors, {(1, 1), (1, −1)} is a basis diagonalizing T .
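A quick check of these eigenvector claims, using Python's built-in complex arithmetic (purely illustrative):

```python
def T(z, w):
    """The operator from 5.2.3(e): T(z, w) = (z + iw, iz + w)."""
    return (z + 1j*w, 1j*z + w)

# (1, 1) is an eigenvector for the eigenvalue 1 + i:
assert T(1, 1) == ((1+1j)*1, (1+1j)*1)

# (1, -1) is an eigenvector for the eigenvalue 1 - i:
assert T(1, -1) == ((1-1j)*1, (1-1j)*(-1))
print("both eigenvector equations check out")
```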
(f) Consider the basis for M2×2(R) given below:

e1 = [ 1 0 ; 0 0 ], e2 = [ 0 1 ; 0 0 ], e3 = [ 0 0 ; 1 0 ], e4 = [ 0 0 ; 0 1 ].

We have T(e1) = e1, T(e4) = e4, T(e2) = e3, and T(e3) = e2. So, with respect to the
ordered basis β = {e1, e2, e3, e4}, we have

[T]β = [ 1 0 0 0 ; 0 0 1 0 ; 0 1 0 0 ; 0 0 0 1 ].
Omitting the determinant calculations, we have Char(T) = (t − 1)³(t + 1), which has
roots 1, 1, 1, −1 when repeated according to multiplicity. For the eigenvalue −1, we
have

[T]β − (−1)I = [ 2 0 0 0 ; 0 1 1 0 ; 0 1 1 0 ; 0 0 0 2 ] ∼ [ 2 0 0 0 ; 0 1 1 0 ; 0 0 0 0 ; 0 0 0 2 ].
The corresponding solutions (x, y, z, w) of the homogeneous system satisfy
2x = 0,   y + z = 0,   2w = 0.
Thus, x = w = 0, and z = −y. So a solution is of the form (x, y, z, w) = (0, y, −y, 0) =
y(0, 1, −1, 0). So, the eigenvector corresponding to the eigenvalue −1 can be chosen
as (0, 1, −1, 0), which is the matrix
e2 − e3 = [ 0 1 ; −1 0 ].
(As a cool little observation, I note that math is saved:
T(e2 − e3) = [ 0 1 ; −1 0 ]ᵀ = [ 0 −1 ; 1 0 ] = −[ 0 1 ; −1 0 ] = −(e2 − e3),
illustrating the fact that e2 − e3 is an eigenvector corresponding to the eigenvalue
−1.)
As for the eigenvalue 1, we have

[T]β − I = [ 0 0 0 0 ; 0 −1 1 0 ; 0 1 −1 0 ; 0 0 0 0 ] ∼ [ 0 −1 1 0 ; 0 0 0 0 ; 0 0 0 0 ; 0 0 0 0 ].
The corresponding solutions (x, y, z, w) of the homogeneous system satisfy y = z. So,
any solution must be of the form
(x, y, z, w) = (x, y, y, w) = (x, 0, 0, 0)+(0, y, y, 0)+(0, 0, 0, w) = xe1 +y(e2 +e3 )+we4 .
Namely, e1, e2 + e3, and e4 form a basis for the eigenspace corresponding to the
eigenvalue 1. Concluding, the operator T is diagonalizable with ordered basis of
eigenvectors
B = {e2 − e3, e1, e2 + e3, e4}, and

[T]B = [ −1 0 0 0 ; 0 1 0 0 ; 0 0 1 0 ; 0 0 0 1 ].
(One could see ahead of time that the eigenvalues must be ±1, since T⁻¹ = T, and
then use the result of problem 5.1.8(b).)
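The eigenspace picture here is the familiar split of a matrix into symmetric and skew-symmetric parts. A small Python check, with 2×2 matrices encoded as tuples of rows (the encoding is my own, not from the text):

```python
def transpose(M):
    """T(A) = A^T for a 2x2 matrix stored as a tuple of rows."""
    (a, b), (c, d) = M
    return ((a, c), (b, d))

def add(M, N, sign=1):
    """Entrywise M + sign*N."""
    return tuple(tuple(m + sign*n for m, n in zip(rm, rn))
                 for rm, rn in zip(M, N))

e2 = ((0, 1), (0, 0))
e3 = ((0, 0), (1, 0))

sym  = add(e2, e3)        # e2 + e3, symmetric
skew = add(e2, e3, -1)    # e2 - e3, skew-symmetric

assert transpose(sym) == sym                                         # eigenvalue +1
assert transpose(skew) == tuple(tuple(-x for x in r) for r in skew)  # eigenvalue -1
print("symmetric part fixed, skew part negated, as the eigenbasis predicts")
```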
5.2.7 Computing the characteristic polynomial of A, one finds Char(A) = (1 − t)(3 − t) − 8 =
t² − 4t − 5 = (t − 5)(t + 1). So, the eigenvalues of A are 5 and −1, and it follows
that A is diagonalizable. In an attempt to find the corresponding eigenvectors and
diagonalize A, we study

A − 5I = [ −4 4 ; 2 −2 ] ∼ [ 1 −1 ; 0 0 ].
The corresponding solutions (x, y) of the homogeneous system satisfy x = y. So, any
solution must be of the form (x, y) = (x, x) = x(1, 1). So, (1, 1) is an eigenvector
corresponding to the eigenvalue 5.
For the eigenvalue −1, we study
A − (−1)I = [ 2 4 ; 2 4 ] ∼ [ 1 2 ; 0 0 ].
The corresponding solutions (x, y) of the homogeneous system satisfy x = −2y.
So, any solution must be of the form (x, y) = (−2y, y) = y(−2, 1). So, (−2, 1) is
an eigenvector corresponding to the eigenvalue −1. Thus, the transition matrix Q
changing us from the standard basis to our new ordered basis {(1, 1), (−2, 1)} is

Q = [ 1 −2 ; 1 1 ],  and so  Q⁻¹ = (1/3)[ 1 2 ; −1 1 ].
We have, as a result of our diagonalization efforts, diag(5, −1) = Q⁻¹AQ, and so
Q · diag(5, −1) · Q⁻¹ = A, and so (as illustrated in class),

Aⁿ = Q · diag(5ⁿ, (−1)ⁿ) · Q⁻¹
   = [ 1 −2 ; 1 1 ] · [ 5ⁿ 0 ; 0 (−1)ⁿ ] · (1/3)[ 1 2 ; −1 1 ]
   = (1/3)[ 5ⁿ + 2(−1)ⁿ   2 · 5ⁿ − 2(−1)ⁿ ; 5ⁿ − (−1)ⁿ   2 · 5ⁿ + (−1)ⁿ ].
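If you want to double-check the closed form for Aⁿ, here is a small Python sketch (my own verification, not part of the assignment) comparing it against brute-force matrix powers using exact rational arithmetic:

```python
from fractions import Fraction as F

def matmul(X, Y):
    """Multiply two 2x2 matrices given as lists of rows."""
    return [[sum(X[i][k]*Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[F(1), F(4)], [F(2), F(3)]]

def A_power_closed_form(n):
    """(1/3) [ 5^n + 2(-1)^n   2*5^n - 2(-1)^n ; 5^n - (-1)^n   2*5^n + (-1)^n ]."""
    p, q = F(5)**n, F(-1)**n
    third = F(1, 3)
    return [[third*(p + 2*q), third*(2*p - 2*q)],
            [third*(p - q),   third*(2*p + q)]]

# Compare against brute-force powers A, A^2, ..., A^6.
P = A
for n in range(1, 7):
    assert P == A_power_closed_form(n)
    P = matmul(P, A)
print("closed form matches repeated multiplication for n = 1..6")
```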
5.2.12 (a) Let E_{λ,T} be the eigenspace of V corresponding to the eigenvalue λ of T. We
have already shown that λ⁻¹ is an eigenvalue of T⁻¹. We must show that, if we denote
by E_{λ⁻¹,T⁻¹} the eigenspace corresponding to the eigenvalue λ⁻¹ of the operator T⁻¹,
then E_{λ,T} = E_{λ⁻¹,T⁻¹}. But notice Tv = λv if and only if T⁻¹v = λ⁻¹v, so
v ∈ E_{λ,T} if and only if v ∈ E_{λ⁻¹,T⁻¹}.
(b) Let λ₁, . . . , λₖ be the eigenvalues of the operator T. We have already shown
that λ₁⁻¹, . . . , λₖ⁻¹ are the eigenvalues of T⁻¹, and that E_{λᵢ,T} = E_{λᵢ⁻¹,T⁻¹}. Then if T
is diagonalizable, then

V = ⊕ᵢ₌₁ᵏ E_{λᵢ,T} = ⊕ᵢ₌₁ᵏ E_{λᵢ⁻¹,T⁻¹},

so that V is the direct sum of the eigenspaces of the operator T⁻¹. Thus, T⁻¹ is
diagonalizable.
Remark: There is a subtle point here worth mentioning in this approach. In the
above, we show that V is the direct sum of some eigenspaces for T⁻¹. Is V the direct
sum of all of the eigenspaces of T⁻¹? It is: the eigenspaces listed already sum directly
to all of V, so any additional eigenspace (for a new eigenvalue) would have to meet
that sum in a nonzero vector, which is impossible.
Also, one may approach this by comparing the multiplicities of the eigenvalues of
T⁻¹ to the multiplicities of the eigenvalues of T. Namely, one could argue that since
T is diagonalizable, Char(T) splits and the multiplicity of each eigenvalue equals
the dimension of the corresponding eigenspace. This approach can be complicated,
although Bethany did a good job on her homework exploring that option. There is
another, more direct route we give below.
Assume T is invertible and diagonalizable. Then there exists a diagonal matrix
D = diag(λ₁, . . . , λₙ) and a basis β so that [T]β = D. None of the diagonal entries
of D is zero, since these are the eigenvalues of an invertible operator (by a previous
homework problem). Therefore, D is invertible (since det(D) = λ₁ · · · λₙ ≠ 0), and
inverting the above, we have

[T⁻¹]β = ([T]β)⁻¹ = D⁻¹.

Now, D⁻¹ = diag(λ₁⁻¹, . . . , λₙ⁻¹) is diagonal, so that T⁻¹ is diagonalizable.
5.2.18 (a) Suppose T and U are simultaneously diagonalizable. Then using a basis β which
diagonalizes them simultaneously, we have
[T]β = diag(λ₁, . . . , λₙ),   [U]β = diag(η₁, . . . , ηₙ),
and so
[T U]β = [T]β [U]β
       = diag(λ₁, . . . , λₙ) diag(η₁, . . . , ηₙ)
       = diag(λ₁η₁, . . . , λₙηₙ)
       = diag(η₁λ₁, . . . , ηₙλₙ)
       = [U]β [T]β
       = [U T]β.
So, these operators commute since, when expressed with respect to a basis, they
commute. In fact, since these operators commute with respect to one basis, it follows
that they commute with respect to every basis: if Q is a transition matrix to the
basis γ, then
[T U]γ = Q⁻¹[T U]β Q = Q⁻¹[U T]β Q = [U T]γ.
(b) We show that A and B commute by first noting that, as in the above proof, if
D and D′ are diagonal matrices, then DD′ = D′D. If Q is the invertible matrix for
which D = Q⁻¹AQ and D′ = Q⁻¹BQ, then

DD′ = (Q⁻¹AQ)(Q⁻¹BQ) = Q⁻¹ABQ,

and

D′D = (Q⁻¹BQ)(Q⁻¹AQ) = Q⁻¹BAQ.

So since DD′ = D′D, we have Q⁻¹ABQ = Q⁻¹BAQ, and it follows that AB = BA.
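To see part (b) in action, here is a small Python sketch (the matrices Q, D, D′ below are my own hypothetical choices): two matrices built from the same eigenbasis necessarily commute:

```python
from fractions import Fraction as F

def matmul(X, Y):
    """Multiply two square matrices given as lists of rows."""
    n = len(X)
    return [[sum(X[i][k]*Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Q diagonalizes A and B simultaneously (hypothetical example).
Q    = [[1, 1], [1, -1]]
Qinv = [[F(1, 2), F(1, 2)], [F(1, 2), F(-1, 2)]]   # the inverse of Q

D      = [[2, 0], [0, 3]]   # [A] in the common eigenbasis
Dprime = [[5, 0], [0, 7]]   # [B] in the common eigenbasis

A = matmul(Q, matmul(D,      Qinv))
B = matmul(Q, matmul(Dprime, Qinv))

# Diagonal matrices commute, so A and B must commute as well.
assert matmul(D, Dprime) == matmul(Dprime, D)
assert matmul(A, B) == matmul(B, A)
print("simultaneously diagonalizable implies AB = BA")
```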