Solutions

MA 405 Introduction to Linear Algebra and Matrices
Final Exam
May 10, 2010
Instructions: Show all work and justify all your steps relevant to the solution of each problem. No texts,
notes, or other aids are permitted. Calculators are not permitted. Please turn off your cellphones. You have
180 minutes to complete this exam. There are seven problems which carry a total of 110 points. Good luck!
(16 pts) Problem 1. Define the matrix A as follows.

A = [ 1  2   0  1 ]
    [ 2  3  −1  1 ]
    [ 0  2   2  2 ]
(2p) a. Note that A ∈ M3×4 . Explain why there are no matrices in M3×4 which commute with A.
(*) There are no matrices in M3×4 that commute with A because neither product is defined: AB requires the number of columns of A (four) to equal the number of rows of B (three), and BA requires the number of columns of B (four) to equal the number of rows of A (three).
(2p) b. Explain why the columns are linearly dependent.
(*) The columns are elements of R3, which is a vector space of dimension 3, and any set of four vectors in a three-dimensional space is linearly dependent (too many vectors).
You may also have noticed that the first and last columns sum to give the second column.
(4p) c. Compute a basis for the column space and state its dimension.
(*) Row reduction yields


[ 1  0  −2  −1 ]
[ 0  1   1   1 ]
[ 0  0   0   0 ]
There are pivots in columns one and two. Hence, the first two columns of the original
matrix form a basis of the column space. Since there are two basis vectors, the dimension
is two.
(4p) d. Compute a basis for the null space and state its dimension.
(*) Since there are two pivots in columns one and two, columns three and four correspond
to free variables. (Note that the RHS of the linear system is the zero column). Let column
i correspond to the variable xi . Then let x3 = s and x4 = t. We then have x1 = 2s + t and
x2 = −s − t. The general form of the solution is (2s + t, −s − t, s, t) = s(2, −1, 1, 0) + t(1, −1, 0, 1). The solution space (hence, the nullspace) is
span{(2, −1, 1, 0), (1, −1, 0, 1)}
Since there are two basis vectors, the dimension is two.
(2p) e. Does the vector (3, 5, 2)t lie in the column space of A?
(*) Yes. This vector is the sum of the first two columns.
If you didn’t catch that by looking at the matrix, you can determine the answer by
checking the existence of scalars c1 and c2 so that (3, 5, 2) is a linear combination of the
basis vectors of the column space
c1 (1, 2, 0) + c2 (2, 3, 2) = (3, 5, 2)
Set up and solve the system of equations to get c1 = c2 = 1.
(2p) f. Does the vector (−1, 0, −1, 1)t lie in the null space of A?
(*) One approach: if (−1, 0, −1, 1) is in the null space of A, it can be written as a linear combination of the two basis vectors found above. This is the hard way. The easier way is to recall from the definition of the nullspace that the answer is yes precisely when A(−1, 0, −1, 1)t = 0.
Multiply A by the vector to get the zero vector and see that the answer is yes.
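For readers who want a machine check, all of Problem 1 can be verified in a few lines of SymPy. This is a verification sketch only (assuming the sympy package is available), not an exam technique:

import sympy as sp

A = sp.Matrix([[1, 2,  0, 1],
               [2, 3, -1, 1],
               [0, 2,  2, 2]])

print(A.rref())           # pivot columns (0, 1), i.e. the first two columns (part c)
print(A.columnspace())    # spanned by the first two columns of A
print(A.nullspace())      # spanned by (2, -1, 1, 0)t and (1, -1, 0, 1)t (part d)

# Part e: (3, 5, 2)t is in the column space iff appending it leaves the rank unchanged
print(A.row_join(sp.Matrix([3, 5, 2])).rank() == A.rank())   # True

# Part f: (-1, 0, -1, 1)t is in the null space iff A times it is the zero vector
print(A * sp.Matrix([-1, 0, -1, 1]))                         # zero vector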
(16 pts) Problem 2. Define the set S as
S = { p(x) = Ax³ + Bx² + Cx + D | A + B + C + D = 0 }
(8p) a. Prove that S is a subspace of P3 .
(*) Let p(x) = As x³ + Bs x² + Cs x + Ds and q(x) = Aq x³ + Bq x² + Cq x + Dq be two polynomials in S, and let α be a scalar. Then
p(x) + αq(x) = As x³ + Bs x² + Cs x + Ds + α(Aq x³ + Bq x² + Cq x + Dq)
= (As + αAq)x³ + (Bs + αBq)x² + (Cs + αCq)x + (Ds + αDq)
The result is in S because its coefficients sum to (As + Bs + Cs + Ds) + α(Aq + Bq + Cq + Dq) = 0 + α · 0 = 0. Since S also contains the zero polynomial, S is a subspace of P3.
(5p) b. Compute a basis for S.
(*) There are several ways to write a basis, and most quick solutions involve writing one of the four coefficients (A through D) as the negative of the sum of the remaining three. Using the more systematic approach we used for other vector spaces, the requirement A + B + C + D = 0 is a linear system whose augmented matrix is the single row
( 1  1  1  1 | 0 )
The first column has a pivot; the other three columns correspond to free variables. Hence, A = −B − C − D. The general form of the solution is
(−B − C − D, B, C, D)
which means a basis is
{ −x³ + x², −x³ + x, −x³ + 1 }
(3p) c. What is the dimension of S?
(*) There are three vectors in the basis. So the dimension is 3. A way to solve this problem
without answering part b is to simply notice that you are free to pick three of the four
coefficients. The fourth is then determined from your choices.
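As a quick machine check of part b (a sketch assuming SymPy is available): each claimed basis polynomial has coefficients summing to zero, and the three coefficient vectors are linearly independent.

import sympy as sp

# Coefficient vectors (A, B, C, D) of -x^3 + x^2, -x^3 + x, -x^3 + 1
coeffs = [sp.Matrix([-1, 1, 0, 0]),
          sp.Matrix([-1, 0, 1, 0]),
          sp.Matrix([-1, 0, 0, 1])]

print(all(sum(v) == 0 for v in coeffs))   # True: each polynomial lies in S
print(sp.Matrix.hstack(*coeffs).rank())   # 3: the vectors are linearly independent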
(16 pts) Problem 3. As discussed in class, the derivative is an example of a linear transformation. Consider
T : P3 → P3 defined as
T(p(x)) = p''(x)
i.e., the second derivative. Note that if you cannot answer part b, you can still answer parts c–f by thinking carefully about the definitions of the terms used in the questions.
(3p) a. Prove that T is a linear transformation. (Do not simply cite your Calculus text!)
(*) T(p(x) + αq(x)) = (p(x) + αq(x))'' = p''(x) + αq''(x) = T(p(x)) + αT(q(x))
(3p) b. Write the matrix for T with respect to the basis B = {1, x, x², x³}.
(*) Apply T to each of the four basis vectors and then write the coordinate vector for each
result. These four vectors give the columns of the matrix.
T(1) = 0    →  (0, 0, 0, 0)t
T(x) = 0    →  (0, 0, 0, 0)t
T(x²) = 2   →  (2, 0, 0, 0)t
T(x³) = 6x  →  (0, 6, 0, 0)t
The matrix is
[ 0  0  2  0 ]
[ 0  0  0  6 ]
[ 0  0  0  0 ]
[ 0  0  0  0 ]
(2p) c. Write a basis for the kernel of T . State its dimension.
(*) The kernel of T is the nullspace of the matrix from part b. Solve the system Ax = 0, where A is that matrix, to find that the first two variables (the coordinates for 1 and x) are free and the last two variables (the coordinates for x² and x³) must equal zero.
The alternative solution (which is probably the easiest) is to just note that the second derivative of any polynomial of degree zero or one is precisely zero. Hence, T maps any polynomial of the form Ax + B to zero. A basis is {1, x}. The dimension is 2.
(2p) d. Write a basis for the range of T . State its dimension.
(*) The range of T corresponds to the column space of the matrix from part b. Solving the exercise this way is more trouble than it's worth. Simply note that T(Ax³ + Bx² + Cx + D) = 6Ax + 2B. By applying T to the appropriate polynomial in P3, I can produce any polynomial of degree one or zero. So the range is the set of all polynomials of degree one or less. This time the range is equal to the nullspace. How about that. The basis is {1, x} and the dimension is 2.
(3p) e. State the eigenvalues for the matrix you wrote in part (b).
(*) The matrix is triangular and hence the eigenvalues are the diagonal entries (all zeros). This should make sense because no nonzero polynomial is a nonzero multiple of its own second derivative (the second derivative always lowers the degree). Hence, the alternative solution is to note that the only way to get T(p(x)) = αp(x) is if α = 0.
(3p) f. Is the matrix you wrote in part (b) diagonalizable? Why or why not?
(*) The dimension of the nullspace is 2. Hence, the dimension of the eigenspace for the eigenvalue 0 is 2. This is a problem because 0 is the only eigenvalue, so it must have multiplicity four. The dimension of the eigenspace (two) and the multiplicity of its eigenvalue (four) do not agree. Hence, the matrix is not diagonalizable.
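The whole problem can be checked mechanically (a verification sketch, assuming SymPy is available): build the matrix of T in the basis {1, x, x², x³} and ask for its eigenvalues and diagonalizability.

import sympy as sp

x = sp.symbols('x')
basis = [sp.S(1), x, x**2, x**3]

def coords(expr):
    # Coordinate vector of a polynomial with respect to {1, x, x^2, x^3}
    return [sp.expand(expr).coeff(x, k) for k in range(4)]

# The columns are the coordinate vectors of T applied to each basis polynomial
M = sp.Matrix([coords(sp.diff(b, x, 2)) for b in basis]).T
print(M)                       # the matrix from part b
print(M.eigenvals())           # {0: 4}: zero is the only eigenvalue (part e)
print(M.is_diagonalizable())   # False (part f)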
(16 pts) Problem 4. Let W be the subspace of P2 below
W = span{ 3/5 + (4/5)x, x² }
and define an inner product on P2 in the following way:
⟨a1 + b1 x + c1 x², a2 + b2 x + c2 x²⟩ = a1 a2 + b1 b2 + c1 c2
Also, let p = 1 + x + x².
(3p) a. Find all polynomials orthogonal to p.
(*) I want to find all polynomials a + bx + cx² such that ⟨a + bx + cx², 1 + x + x²⟩ = 0, that is, a + b + c = 0. I can pick b and c freely; then a is determined: a = −b − c. The general form of the polynomial is (−b − c) + bx + cx².
(3p) b. Explain why {3/5 + (4/5)x, x²} is a basis for W.
(*) The two vectors span W by definition, and they are nonzero and orthogonal to each other, hence linearly independent. You can also check linear independence directly by verifying that
c1 (3/5 + (4/5)x) + c2 x² = 0
if and only if c1 = c2 = 0.
(2p) c. Determine if p is in W. If so, write the coordinate vector for p with respect to the basis for W.
(*) You can look for values ci such that c1 (3/5 + (4/5)x) + c2 x² = p by setting up and solving the system of equations. You won't find any solutions: since any polynomial in W is a linear combination of 3/5 + (4/5)x and x², its constant coefficient (a) must be 3/4 the value of its x coefficient (b). For p, both coefficients equal 1, so p does not follow this rule and is not in W.
(3p) d. Compute a basis for W⊥.
(*) We want to find all polynomials orthogonal to W. Call such a polynomial f = a + bx + cx². To satisfy this criterion, we require that f be orthogonal to both basis vectors. Hence,
⟨f, 3/5 + (4/5)x⟩ = 0   and   ⟨f, x²⟩ = 0
This gives the system of equations
(3/5)a + (4/5)b = 0
c = 0
The system is easily solved: c = 0 and b = −(3/4)a. Hence, there's only one free variable. All polynomials in W⊥ are of the form
a − (3/4)ax
and so a basis is {1 − (3/4)x}.
(3p) e. Compute the projection u = projW p.
(*) Let w1 = 3/5 + (4/5)x and w2 = x². Then
u = (⟨p, w1⟩ / ⟨w1, w1⟩) w1 + (⟨p, w2⟩ / ⟨w2, w2⟩) w2 = (7/5) w1 + w2
since ⟨p, w1⟩ = 3/5 + 4/5 = 7/5, ⟨w1, w1⟩ = 9/25 + 16/25 = 1, and ⟨p, w2⟩ = ⟨w2, w2⟩ = 1.
(2p) f. Determine if u is in W. If so, write the coordinate vector for u with respect to the basis for W.
(*) Of course u is in W: in the solution above, u was written as a linear combination of w1 and w2. The coordinates are the values ⟨p, wi⟩/⟨wi, wi⟩, so the coordinate vector is (7/5, 1)t.
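Since the inner product is just the dot product of coefficient vectors (a, b, c), parts e and f can be checked with plain vectors (a sketch assuming SymPy is available):

import sympy as sp

p  = sp.Matrix([1, 1, 1])                                  # 1 + x + x^2
w1 = sp.Matrix([sp.Rational(3, 5), sp.Rational(4, 5), 0])  # 3/5 + (4/5)x
w2 = sp.Matrix([0, 0, 1])                                  # x^2

c1 = p.dot(w1) / w1.dot(w1)
c2 = p.dot(w2) / w2.dot(w2)
print(c1, c2)           # 7/5 and 1: the coordinates of u
print(c1*w1 + c2*w2)    # u = 21/25 + (28/25)x + x^2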
(14 pts) Problem 5. Give an example of each of the following. (2p each).
a. A two-dimensional subspace of P2 .
(*) span{1, x}.
b. An elementary matrix.
(*) Any matrix obtained from the identity matrix by a single elementary row operation works. For example, swapping the first two rows of the 3 × 3 identity gives
[ 0  1  0 ]
[ 1  0  0 ]
[ 0  0  1 ]
c. A linearly dependent spanning set of M2×2 .
(*) Take the four standard basis matrices together with one redundant extra, for example
[ 1  0 ]   [ 0  1 ]   [ 0  0 ]   [ 0  0 ]   [ 1  1 ]
[ 0  0 ] , [ 0  0 ] , [ 1  0 ] , [ 0  1 ] , [ 0  0 ]
The first four matrices already span M2×2, so the set is a spanning set; since it contains five matrices in a four-dimensional space, it is linearly dependent.
d. A linear transformation T : R2 → R2 such that the kernel and range both have dimension one.
(*) T(v) = Av, where
A = [ 0  1 ]
    [ 0  0 ]
Then T(x, y)t = (y, 0)t, so the kernel and the range are both span{(1, 0)t}, each of dimension one.
e. Three orthonormal vectors in R3, none of which is e1, e2, or e3.
(*) The hard part is writing three orthogonal vectors. Once you do that, normalize them
(divide each by its own norm) to get orthonormal vectors. Starting is easy. You can pick
just about any vector. The rest falls into place from that.
To start I’ll pick (1, 1, 1). This was chosen somewhat randomly. Next I need to pick
some vector that is orthogonal to this one. That is, (x, y, z) such that (1, 1, 1) · (x, y, z) = 0.
Any vector such that x + y + z = 0 will do. (2, −1, −1) was the first one I wrote, but again,
there are multiple choices.
Third, I pick (x, y, z) such that (1, 1, 1) · (x, y, z) = 0 and (2, −1, −1) · (x, y, z) = 0. This gives me a system of equations. Set up and solve the system to see that x must be zero and y = −z. Any vector with x = 0 and y = −z will do. I chose (0, 1, −1).
Normalize the three vectors to finish.
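A quick numerical check of the construction (a sketch assuming NumPy is available): normalize the three orthogonal vectors and confirm the resulting columns are orthonormal.

import numpy as np

vs = [np.array([1.0, 1.0, 1.0]),
      np.array([2.0, -1.0, -1.0]),
      np.array([0.0, 1.0, -1.0])]

# Divide each vector by its norm and stack them as columns
Q = np.column_stack([v / np.linalg.norm(v) for v in vs])
print(np.allclose(Q.T @ Q, np.eye(3)))   # True: the columns are orthonormal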
f. A vector in R3 that is orthogonal to (1, 1, 1)t (Using dot product as the inner product).
(*) See the solution to part e; for example, (2, −1, −1)t works, since 2 − 1 − 1 = 0.
g. Two non-trivial subspaces S, T of R3 such that any vector in R3 can be written as a sum of a vector
in S and a vector in T in precisely one way.
(*) In other words, I pick two subspaces whose direct sum is R3. The trick is to remember that the direct sum of any subspace W and its orthogonal complement W⊥ is the entire vector space that contains W.
Here's one solution. In the phrasing of part e I reminded you of e1, e2, e3, which are orthogonal to each other. Pick one or two of these three vectors as the spanning set for S, and the remaining vectors as the spanning set for T; for example, S = span{e1, e2} and T = span{e3}. Since these three vectors are orthogonal to each other, S and T are orthogonal complements of each other. So their direct sum must be R3.
(16 pts) Problem 6. Pick four out of five to answer. Clearly indicate which four you wish to be graded. (4p
each).
a. State all values of k, if any exist, for which the matrix A below is invertible.

A = [ 1  0  1 ]
    [ 0  k  2 ]
    [ 1  1  1 ]
(*) A is invertible if and only if det(A) ≠ 0. Expanding along the first row,
det(A) = 1 · (k − 2) − 0 + 1 · (0 − k) = −2
so the value of k has no bearing on the determinant; it is always −2. Hence, A is invertible for all values of k.
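One can confirm symbolically that k drops out of the determinant (a sketch assuming SymPy is available):

import sympy as sp

k = sp.symbols('k')
A = sp.Matrix([[1, 0, 1],
               [0, k, 2],
               [1, 1, 1]])
print(A.det())   # -2, independent of k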
b. Let A be a square matrix in the vector space V of n × n matrices. Show that the subset of all matrices in V which commute with A forms a subspace of V.
(*) The subset is nonempty (for example, the identity commutes with A). Let B and C be matrices that commute with A, and let k be a scalar. I want to show A(B + kC) = (B + kC)A:
A(B + kC) = AB + kAC = BA + kCA = (B + kC)A
c. Let

A = [ 1  −1  −1 ]
    [ 0   2   1 ]
    [ 0   0   1 ]
Show that A is diagonalizable and write A = PDP−1 , where D is a diagonal matrix and P is an
invertible matrix.
(*) A is triangular, so its eigenvalues are the diagonal entries: 1 (with multiplicity two) and 2.
The eigenvectors corresponding to the eigenvalue 1 are (0, −1, 1) and (1, 0, 0). The eigenvector corresponding to the eigenvalue 2 is (−1, 1, 0). There are three linearly independent eigenvectors, so A is diagonalizable: P is the matrix with these three eigenvectors as columns, and D has the values 1, 1, 2 along the diagonal (matching the order of the columns of P) and zeros elsewhere.
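A quick check of the factorization (a sketch assuming SymPy is available): with the eigenvectors as the columns of P and the matching eigenvalues on the diagonal of D, the product P D P⁻¹ recovers A.

import sympy as sp

A = sp.Matrix([[1, -1, -1],
               [0,  2,  1],
               [0,  0,  1]])
P = sp.Matrix([[ 0, 1, -1],
               [-1, 0,  1],
               [ 1, 0,  0]])   # columns: eigenvectors for eigenvalues 1, 1, 2
D = sp.diag(1, 1, 2)
print(P * D * P.inv() == A)    # True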
d. Let C = B−1 AB. Show that if v is an eigenvector of C corresponding to the eigenvalue λ, then the
vector Bv is an eigenvector of A corresponding to λ.
(*) Multiply both sides of C = B−1AB on the left by B to get BC = AB. Then BCv = ABv. Since v is an eigenvector of C corresponding to λ, we have Cv = λv, so BCv = B(λv) = λ(Bv). Combining, A(Bv) = λ(Bv). Finally, Bv ≠ 0 because B is invertible and v ≠ 0, so Bv is an eigenvector of A corresponding to λ.
e. Show that if A is a matrix such that At = A−1 , then det(A) = ±1.
(*) I hope this looks familiar! From the review session, the determinant of I is 1. But I = AA−1. Then
det(I) = 1
det(AA−1) = 1
det(A) det(A−1) = 1
det(A) det(At) = 1      (since A−1 = At)
det(A) det(A) = 1       (since det(At) = det(A))
det(A)² = 1
det(A) = ±1
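As a concrete instance (a sketch assuming SymPy is available): a rotation matrix satisfies At = A−1, and its determinant is 1; a reflection, such as diag(1, −1), gives −1.

import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[sp.cos(t), -sp.sin(t)],
               [sp.sin(t),  sp.cos(t)]])
print(sp.simplify(A.T - A.inv()))   # zero matrix: A^t = A^(-1)
print(sp.simplify(A.det()))         # 1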
(16 pts) Problem 7. Respond True or False. If true, give a short explanation as to why (proof, citing theorems
from class, etc). If false, provide a counter-example. (2p each).
Warning. An explanation is required!
a. det(kA) = k det(A) where k is a scalar.
(*) False. The correct statement is det(kA) = kⁿ det(A), where A is an n × n matrix. For a counterexample, take A to be the 2 × 2 identity I and k = 2: det(2I) = 4, while 2 det(I) = 2.
b. If B is an ordered basis for vector space V, then the coordinate vector for v ∈ V with respect to B
must be unique.
(*) True. To find the coordinate vector x of v, solve the system Bx = v, where the columns of B are the basis vectors. Since B is a basis, the columns are linearly independent, so this matrix is invertible. Hence, the solution, and therefore the coordinate vector, is unique.
c. Let T be a linear transformation from R4 → M2×3 . Then the dimension of the null space can never
exceed 2.
(*) False. By rank–nullity, the dimension of the null space is 4 minus the rank, so it can be anything from 0 to 4. For example, the zero transformation has null space all of R4, whose dimension is 4 > 2.
d. Let W be the subset of 2 × 2 invertible matrices. Then W is a subspace of M2×2 .
(*) False. The matrices
[ 1  0 ]     [ 0  1 ]
[ 0  1 ] and [ 1  0 ]
are both invertible, but their sum,
[ 1  1 ]
[ 1  1 ]
is not (its determinant is zero).
e. Consider P2 and define the map ⟨p(x), q(x)⟩ = ∫₀¹ (p(x) + q(x) + 1) dx. Then this map defines an inner product.
(*) False. ⟨0, 0⟩ = ∫₀¹ 1 dx = 1 ≠ 0, but an inner product must satisfy ⟨v, v⟩ = 0 when v = 0.
f. Let hu, vi denote the usual dot product on vectors in R3 . Let S be any subspace and let
W = {w ∈ R3 | ⟨w, s⟩ = 0 for all s ∈ S}
Then W is a subspace of R3.
(*) True. W is precisely the orthogonal complement S⊥ of S, which is always a subspace: it contains the zero vector, and if w1, w2 ∈ W and k is a scalar, then ⟨w1 + kw2, s⟩ = ⟨w1, s⟩ + k⟨w2, s⟩ = 0 for all s ∈ S.
g. A matrix M has orthonormal columns. Then M must be invertible.
(*) True, reading M as n × n (only square matrices can be invertible). The columns are linearly independent (they're orthonormal, hence nonzero and mutually orthogonal), so the matrix has full rank: M row-reduces to the identity, and every column has a pivot. Hence, the dimension of the null space is zero, and M is invertible.
h. An n × n matrix is diagonalizable if and only if its eigenvectors form a basis for Rn .
(*) True. An n × n matrix is diagonalizable if and only if it has n linearly independent eigenvectors, which is exactly the condition that its eigenvectors form a basis for Rn. This happens precisely when, for each eigenvalue, the dimension of its eigenspace matches its multiplicity.