
Name:
Math 308, Matrix Algebra
Proof HW #4
Due 12/7/15
Proof Homework 4
Note: Mathematical Induction uses the previous case to imply the next one, so if the first case holds, then
they all must hold. Consider the following proof to demonstrate. We will show that for all positive integers n,
1 + 2 + ··· + n = n(n + 1)/2.
First, we show it holds for n = 1. For n = 1, the above statement becomes 1 = 1(1 + 1)/2, which is true. Now, we
will show that the previous result, for n, implies the next one, for n + 1. To do this, we assume the result
holds for n. That is, we assume
1 + 2 + ··· + n = n(n + 1)/2.
This is called the Induction Hypothesis (IH). Next, we prove the next case, for n + 1, as follows:

1 + 2 + ··· + n + (n + 1) = (1 + 2 + ··· + n) + (n + 1) = n(n + 1)/2 + (n + 1) = (n² + n)/2 + (2n + 2)/2 = (n² + 3n + 2)/2 = (n + 1)(n + 2)/2.
Notice that we used the induction hypothesis in the first step. The rest is just algebra. Also, we recognize
1 + 2 + ··· + n + (n + 1) = (n + 1)(n + 2)/2 as the statement with n replaced by n + 1. So, because it holds for
n = 1, and each result implies the next one, it is true for all positive integers n. This is how induction works.
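As a quick numerical sanity check (an illustration, not a substitute for the induction proof above), the formula can be verified directly for small n; the helper name below is our own choice:

```python
# Numerically verify 1 + 2 + ... + n = n(n + 1)/2 for small n.
# The induction argument above is the proof; this just illustrates it.
def triangular_sum(n):
    return sum(range(1, n + 1))

for n in range(1, 51):
    assert triangular_sum(n) == n * (n + 1) // 2
```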
FOR PROBLEMS 1(a) AND 2(a), I RECOMMEND USING INDUCTION, BUT IT IS NOT REQUIRED.
1. Suppose that (λ, ~v ) is an eigenpair of matrix A.
(a) Show (λ^k, ~v) is an eigenpair of A^k, for all positive integers k.
(b) Assume A is invertible. Why does this mean λ ≠ 0? Then, show (λ^{-1}, ~v) is an eigenpair of A^{-1}.
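For intuition on 1(a), here is a small numerical illustration (not a proof) using a hand-picked 2 × 2 matrix; the example matrix, its eigenpair, and the helper name are all assumptions chosen for this sketch:

```python
# Illustration of 1(a): if A v = lam v, then A^k v = lam^k v.
# Hand-picked example (assumption): A = [[2, 1], [1, 2]] has
# eigenpair (lam, v) = (3, (1, 1)), since A(1,1) = (3,3) = 3(1,1).
def mat_vec(A, v):
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

A = [[2, 1], [1, 2]]
lam, v = 3, [1, 1]

w = v[:]
for k in range(1, 6):
    w = mat_vec(A, w)                    # after k multiplications, w = A^k v
    assert w == [lam**k * x for x in v]  # matches lam^k v
```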
2.
(a) Show that for all square matrices P, D, with P invertible, and all positive integers k,
(P D P^{-1})^k = P D^k P^{-1}.
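The identity in 2(a) can also be checked numerically for a particular case (an illustration, not a proof); the matrices P, D below and the helper functions are hand-picked assumptions for this sketch:

```python
# Illustration of 2(a): (P D P^{-1})^k = P D^k P^{-1}.
# Hand-picked 2x2 example (assumption): P, its inverse, and a diagonal D.
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][t] * B[t][j] for t in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(A, k):
    R = [[1, 0], [0, 1]]                 # 2x2 identity
    for _ in range(k):
        R = mat_mul(R, A)
    return R

P    = [[1, 1], [0, 1]]
Pinv = [[1, -1], [0, 1]]                 # inverse of P, checked by hand
D    = [[2, 0], [0, 3]]

M = mat_mul(mat_mul(P, D), Pinv)         # M = P D P^{-1}
for k in range(1, 6):
    assert mat_pow(M, k) == mat_mul(mat_mul(P, mat_pow(D, k)), Pinv)
```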
(b) Matrices A, B are called similar, denoted A ∼ B, if there exists an invertible matrix P so that
A = P B P^{-1}. Show that if A ∼ B and B ∼ C, then A ∼ C.
3. Suppose A is an n × n matrix whose distinct eigenvalues are λ_1, . . . , λ_k and whose corresponding eigenspaces are E_1, . . . , E_k. Note k will be less than n if the characteristic polynomial of A has repeated roots. For example, the identity matrix I_n has only one eigenvalue λ_1 = 1, and E_1 is all of R^n. This means each ~v ∈ E_1 is an eigenvector of A with corresponding eigenvalue λ_1, and thus satisfies A~v = λ_1 ~v, and similarly for the rest. Let
~v_{1,1}, . . . , ~v_{1,m_1} be a basis for E_1,
. . .
~v_{k,1}, . . . , ~v_{k,m_k} be a basis for E_k.
This means m_1 = dim(E_1), . . . , m_k = dim(E_k).
(a) Show that the list of all the basis vectors ~v_{1,1}, . . . , ~v_{1,m_1}, . . . , ~v_{k,1}, . . . , ~v_{k,m_k} is linearly independent.
Note: Use the theorem that eigenvectors with distinct eigenvalues are linearly independent, but notice that ~v_{1,1}, . . . , ~v_{1,m_1} all have the same eigenvalue λ_1, and similarly for the rest.
(b) Conclude that A is diagonalizable if and only if m_1 + · · · + m_k = n.