Math 416 Homework 3. Solutions.
1. Show that any set of vectors that contains the zero vector cannot be linearly independent.
Solution: Let us call this set of vectors S. Consider the linear combination of the vectors in S
where we choose a coefficient α ≠ 0 on the zero vector, and zero on all of the other vectors in
the set. This combination equals α · 0 = 0, so it is a linear combination with not all coefficients
equal to zero that gives the zero vector, and thus S is linearly dependent.
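As a quick numerical illustration (a minimal Python sketch, not part of the proof; the example
vectors are arbitrary):

    import numpy as np

    # A set containing the zero vector, plus two arbitrary other vectors.
    S = [np.zeros(3), np.array([1.0, 2.0, 3.0]), np.array([0.0, 1.0, 0.0])]

    # Coefficients (5, 0, 0) are not all zero, yet the combination is the
    # zero vector: exactly the dependence relation used above.
    coeffs = [5.0, 0.0, 0.0]
    combo = sum(c * v for c, v in zip(coeffs, S))
    print(np.allclose(combo, 0))  # True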
2. (q.v. Friedberg, Insel, Spence 1.5.3).
Solution: Let us denote these matrices as m1 , . . . , m5 .
We might notice that

                                 [ 1 1 ]
    m1 + m2 + m3 = m4 + m5 =     [ 1 1 ] .
                                 [ 1 1 ]
From this we know that
m1 + m2 + m3 − m4 − m5 = 0
and this is clearly a nontrivial linear combination of the mi .
If we did not make that observation, we can still proceed as follows. Let us write
    0 = x1 m1 + x2 m2 + x3 m3 + x4 m4 + x5 m5 ,
which, written out, becomes

    [ 0 0 ]   [ x1 + x4   x1 + x5 ]
    [ 0 0 ] = [ x2 + x4   x2 + x5 ] ,
    [ 0 0 ]   [ x3 + x4   x3 + x5 ]
giving the system of six equations in five variables:
    x1 + x4 = 0,
    x1 + x5 = 0,
    x2 + x4 = 0,
    x2 + x5 = 0,
    x3 + x4 = 0,
    x3 + x5 = 0.
We can solve this many ways. But notice that the first, third, and fifth equations tell us that
x1 = x2 = x3 = −x4 , and the remaining equations tell us that x1 = x2 = x3 = −x5 . Thus
x4 = x5 , and we see that we have one free variable in the solution of this system. Clearly the
solution is not unique and we have nontrivial solutions. More generally, we see that for any
x1 ∈ R, we have
x1 m1 + x1 m2 + x1 m3 − x1 m4 − x1 m5 = 0.
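We can also verify this relation numerically. A short numpy sketch, with the entries of
m1 , . . . , m5 read off from the expanded matrix equation above:

    import numpy as np

    # The five 3x2 matrices, as determined by the expanded system above.
    m1 = np.array([[1, 1], [0, 0], [0, 0]])
    m2 = np.array([[0, 0], [1, 1], [0, 0]])
    m3 = np.array([[0, 0], [0, 0], [1, 1]])
    m4 = np.array([[1, 0], [1, 0], [1, 0]])
    m5 = np.array([[0, 1], [0, 1], [0, 1]])

    # The nontrivial relation found above holds exactly.
    print(np.all(m1 + m2 + m3 - m4 - m5 == 0))  # True

    # Flattened to vectors in R^6, the five matrices have rank 4 < 5,
    # which again shows they are linearly dependent.
    A = np.column_stack([m.flatten() for m in (m1, m2, m3, m4, m5)])
    print(np.linalg.matrix_rank(A))  # 4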
3. Recall that M3×3 (R) is a vector space. Define the diagonal 3 × 3 matrices to be those
matrices where only the terms on the diagonal can be nonzero, i.e. A is diagonal if Aij = 0
whenever i ≠ j. Show that the set of diagonal 3 × 3 matrices is a vector space. Write down a
basis for this space.
Solution: Clearly the set of diagonal matrices is a subset of the set of matrices, so we can use
Theorem 1.3. Thus we need only check that the zero matrix is diagonal, and that any linear
combination of diagonal matrices is also diagonal. This follows from the definitions, since if
Aij = Bij = 0 for all i ≠ j, then

    (αA + βB)ij = αAij + βBij = 0

for all i ≠ j.
Now we need a basis for this space, and we claim that

    [ 1 0 0 ]   [ 0 0 0 ]   [ 0 0 0 ]
    [ 0 0 0 ] , [ 0 1 0 ] , [ 0 0 0 ]
    [ 0 0 0 ]   [ 0 0 0 ]   [ 0 0 1 ]
is such a basis.
To see that this set is independent, notice that if

       [ 1 0 0 ]      [ 0 0 0 ]      [ 0 0 0 ]   [ 0 0 0 ]
    x1 [ 0 0 0 ] + x2 [ 0 1 0 ] + x3 [ 0 0 0 ] = [ 0 0 0 ] ,
       [ 0 0 0 ]      [ 0 0 0 ]      [ 0 0 1 ]   [ 0 0 0 ]

then x1 = x2 = x3 = 0. Also, to see that they span, notice that every 3 × 3 diagonal matrix
can be written

    [ x 0 0 ]
    [ 0 y 0 ] ,
    [ 0 0 z ]

and clearly this is in the span of those three matrices.
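The independence computation can also be checked mechanically by flattening each matrix into
a vector in R^9; a brief numpy sketch:

    import numpy as np

    # The three claimed basis matrices E11, E22, E33.
    E = [np.zeros((3, 3)) for _ in range(3)]
    for i in range(3):
        E[i][i, i] = 1.0

    # Rank 3 of the flattened column stack confirms linear independence;
    # combined with the span argument above, this confirms a basis.
    A = np.column_stack([e.flatten() for e in E])
    print(np.linalg.matrix_rank(A))  # 3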
4. (q.v. Friedberg, Insel, Spence 1.5.20). Recall that we define F(R, R) as the set of
functions from the real numbers to the real numbers. We showed in class that this is a
vector space. Let f, g ∈ F(R, R) where
    f (t) = e^{st} ,    g(t) = e^{rt} .

Show that f and g are linearly independent if and only if r ≠ s.
Solution: If r = s, then f (t) = g(t), so 1 · f (t) + (−1) · g(t) = 0, and they are dependent. So
let us assume that r ≠ s, and look for x1 , x2 so that

    x1 e^{st} + x2 e^{rt} = 0,  for all t.

If x2 = 0, then x1 e^{st} = 0 forces x1 = 0 as well, since e^{st} is never zero; so suppose
x2 ≠ 0. Dividing through by x2 e^{st} , we have

    x1 / x2 = −e^{(r−s)t} .

This means that the function on the right-hand side must be constant, but it is not, since
r ≠ s. Thus x1 = x2 = 0 is the only solution, and f and g are independent.
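The same conclusion can be checked symbolically via the Wronskian at t = 0 (a complementary
test, not the argument used above); a sympy sketch:

    from sympy import symbols, exp, Matrix, simplify

    r, s, t = symbols('r s t')
    f = exp(s * t)
    g = exp(r * t)

    # If x1*f + x2*g = 0 for all t, then evaluating the identity and its
    # derivative at t = 0 gives a 2x2 system whose determinant is the
    # Wronskian W(0); it vanishes exactly when r = s.
    W = Matrix([[f.subs(t, 0), g.subs(t, 0)],
                [f.diff(t).subs(t, 0), g.diff(t).subs(t, 0)]])
    print(simplify(W.det()))  # r - s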
5. (q.v. Friedberg, Insel, Spence 1.6.2).
Solution: In all of these cases, we form the matrix whose columns are given by the three
vectors, and see if it is row-reducible to the identity.
(a)

    [  1  2  0 ]   [  1  2  0 ]   [ 1  2  0 ]   [ 1  2  0 ]   [ 1  2  0 ]
    [  0  5 −4 ] , [ −1  1  3 ] , [ 0  3  3 ] , [ 0  1  1 ] , [ 0  1  1 ] ,
    [ −1  1  3 ]   [  0  5 −4 ]   [ 0  5 −4 ]   [ 0  5 −4 ]   [ 0  0 −9 ]

    [ 1  2  0 ]   [ 1  2  0 ]   [ 1  0  0 ]
    [ 0  1  1 ] , [ 0  1  0 ] , [ 0  1  0 ] .
    [ 0  0  1 ]   [ 0  0  1 ]   [ 0  0  1 ]

So this is a basis!
(b)

    [  2  0  6 ]   [  1  0  3 ]   [ 1  0  3 ]   [ 1  0  3 ]
    [ −4  3  0 ] , [ −4  3  0 ] , [ 0  3 12 ] , [ 0  3 12 ] ,
    [  1 −1 −1 ]   [  1 −1 −1 ]   [ 1 −1 −1 ]   [ 0 −1 −4 ]

    [ 1  0  3 ]   [ 1  0  3 ]   [ 1  0  3 ]
    [ 0  3 12 ] , [ 0  1  4 ] , [ 0  1  4 ] ,
    [ 0  1  4 ]   [ 0  3 12 ]   [ 0  0  0 ]
so it is not a basis.
(c)

    [  1  1  2 ]   [  1  1  2 ]   [ 1  1  2 ]   [ 1  1  2 ]
    [  2  0  1 ] , [  0 −2 −3 ] , [ 0 −2 −3 ] , [ 0 −2 −3 ] ,
    [ −1  2  1 ]   [ −1  2  1 ]   [ 0  3  3 ]   [ 0  1  0 ]

    [ 1  1  2 ]   [ 1  1  2 ]   [ 1  1  2 ]   [ 1  0  0 ]
    [ 0  1  0 ] , [ 0  1  0 ] , [ 0  1  0 ] , [ 0  1  0 ] ,
    [ 0 −2 −3 ]   [ 0  0 −3 ]   [ 0  0  1 ]   [ 0  0  1 ]
so it is a basis.
(d)

    [ −1  2 −3 ]   [ 1 −2  3 ]   [ 1 −2  3 ]   [ 1 −2  3 ]   [ 1 −2  3 ]
    [  3 −4  8 ] , [ 3 −4  8 ] , [ 0  2 −1 ] , [ 0  2 −1 ] , [ 0 −1 −1 ] ,
    [  1 −3  2 ]   [ 1 −3  2 ]   [ 1 −3  2 ]   [ 0 −1 −1 ]   [ 0  2 −1 ]

    [ 1 −2  3 ]   [ 1 −2  3 ]   [ 1 −2  3 ]   [ 1 −2  0 ]   [ 1  0  0 ]
    [ 0  1  1 ] , [ 0  1  1 ] , [ 0  1  1 ] , [ 0  1  0 ] , [ 0  1  0 ] ,
    [ 0  2 −1 ]   [ 0  0 −3 ]   [ 0  0  1 ]   [ 0  0  1 ]   [ 0  0  1 ]
so it is a basis.
(e)

    [  1 −3  −2 ]   [ 1 −3  −2 ]   [ 1 −3 −2 ]   [ 1 −3 −2 ]
    [ −3  1 −10 ] , [ 0 −8 −16 ] , [ 0  1  2 ] , [ 0  1  2 ] ,
    [ −2  3  −2 ]   [ 0 −3  −6 ]   [ 0  1  2 ]   [ 0  0  0 ]
so it is not a basis.
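All five reductions can be double-checked with numpy: a 3 × 3 matrix row-reduces to the
identity exactly when it has rank 3. A sketch using the starting matrix from each part:

    import numpy as np

    cases = {
        'a': [[1, 2, 0], [0, 5, -4], [-1, 1, 3]],
        'b': [[2, 0, 6], [-4, 3, 0], [1, -1, -1]],
        'c': [[1, 1, 2], [2, 0, 1], [-1, 2, 1]],
        'd': [[-1, 2, -3], [3, -4, 8], [1, -3, 2]],
        'e': [[1, -3, -2], [-3, 1, -10], [-2, 3, -2]],
    }
    for name, A in cases.items():
        print(name, np.linalg.matrix_rank(np.array(A)) == 3)
    # Prints: a True, b False, c True, d True, e False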
6. (q.v. Friedberg, Insel, Spence 1.6.9) (ish). Show that the vectors v1 = (0, 0, 0, 1), v2 =
(0, 0, 1, 1), v3 = (0, 1, 1, 1), v4 = (1, 1, 1, 1) form a basis for R4 . Find the constants a1 , a2 , a3 , a4
so that the arbitrary vector (x, w, y, z) ∈ R4 can be written
    (x, w, y, z) = a1 v1 + a2 v2 + a3 v3 + a4 v4 .
Solution: The simplest way to solve this is to do the second part and show that the answer is
unique, and this will establish the first part. We have
a1 (0, 0, 0, 1) + a2 (0, 0, 1, 1) + a3 (0, 1, 1, 1) + a4 (1, 1, 1, 1) = (x, w, y, z),
or
    (a4 , a3 + a4 , a2 + a3 + a4 , a1 + a2 + a3 + a4 ) = (x, w, y, z).

This gives the equations
    a4 = x,
    a3 + a4 = w,
    a2 + a3 + a4 = y,
    a1 + a2 + a3 + a4 = z.

This gives an augmented matrix, which we row-reduce:

    [ 1 1 1 1 | z ]   [ 1 1 1 0 | z − x ]   [ 1 1 0 0 | z − w ]   [ 1 0 0 0 | z − y ]
    [ 0 1 1 1 | y ]   [ 0 1 1 0 | y − x ]   [ 0 1 0 0 | y − w ]   [ 0 1 0 0 | y − w ]
    [ 0 0 1 1 | w ] , [ 0 0 1 0 | w − x ] , [ 0 0 1 0 | w − x ] , [ 0 0 1 0 | w − x ] .
    [ 0 0 0 1 | x ]   [ 0 0 0 1 | x     ]   [ 0 0 0 1 | x     ]   [ 0 0 0 1 | x     ]

Thus we have

    a1 = z − y,    a2 = y − w,    a3 = w − x,    a4 = x.

Since there is a unique solution, the vectors form a basis.
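The closed-form coefficients can be confirmed numerically (a sketch; the test values of x, w,
y, z are arbitrary):

    import numpy as np

    v1, v2, v3, v4 = (np.array([0., 0., 0., 1.]), np.array([0., 0., 1., 1.]),
                      np.array([0., 1., 1., 1.]), np.array([1., 1., 1., 1.]))
    x, w, y, z = 2.0, 5.0, -1.0, 7.0

    # Solve a1*v1 + ... + a4*v4 = (x, w, y, z) and compare with the
    # closed form a = (z - y, y - w, w - x, x) derived above.
    V = np.column_stack([v1, v2, v3, v4])
    a = np.linalg.solve(V, np.array([x, w, y, z]))
    print(np.allclose(a, [z - y, y - w, w - x, x]))  # True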
7. (q.v. Friedberg, Insel, Spence 1.6.11) (ish). Let u, v be distinct vectors of a vector space
V . Show that if {u, v} is a basis for V , and α, β are nonzero scalars, then both {u + v, αu}
and {αu, βv} are bases for V . What happens if one or more of the scalars is zero?
Finally, show that {αu, βu} is not a basis for V .
Solution: The last part is easiest, so we start there. Notice that
(−β)(αu) + (α)(βu) = 0,
and this is a nontrivial linear combination.
Now we do the first part. Assume that {u, v} is a basis for V .
To show that {αu, βv} is a basis, we need to show that for any z ∈ V , there is a unique
solution x1 , x2 such that

    z = x1 (αu) + x2 (βv).
But we know that there is a unique solution y1 , y2 to the equation
z = y1 u + y2 v,
so if we choose x1 = y1 /α and x2 = y2 /β, this is a solution. To see that it must be unique,
we know that y1 = αx1 and y2 = βx2 , so if there were multiple solutions for x then there would
be multiple solutions for y.
Now we consider {u + v, αu}. If we have z ∈ V , then we need to solve the system
z = x1 (u + v) + x2 (αu) = (x1 + x2 α)u + x1 v.
We know that the equation
z = y1 u + y2 v
has a unique solution, and thus we have
x1 + x2 α = y1 , x1 = y2 .
From this we obtain x1 = y2 and x2 = (y1 − y2 )/α, so we can solve for the xi in terms of the
yi . Conversely, the yi are determined by the xi , so two different solutions (x1 , x2 ) would give
two different representations of z in the basis {u, v}, which is impossible. Thus the solution
is unique.
Finally, if one or more of the scalars is zero, the proposed set contains the zero vector, and
by Problem 1 it is linearly dependent, so it is not a basis.
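These computations can also be organized in coordinates relative to the basis {u, v} (this
reformulation is an extra observation, not the argument above): a pair of vectors is a basis
exactly when its 2 × 2 coordinate matrix has nonzero determinant. A sympy sketch:

    from sympy import symbols, Matrix

    alpha, beta = symbols('alpha beta', nonzero=True)

    # Coordinates w.r.t. {u, v}: u + v -> (1, 1), alpha*u -> (alpha, 0),
    # beta*v -> (0, beta), beta*u -> (beta, 0). Columns below are the
    # coordinate vectors of each proposed pair.
    print(Matrix([[1, alpha], [1, 0]]).det())      # -alpha, nonzero
    print(Matrix([[alpha, 0], [0, beta]]).det())   # alpha*beta, nonzero
    print(Matrix([[alpha, beta], [0, 0]]).det())   # 0, so {alpha*u, beta*u} is not a basis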
8. Using only theorems we know concerning solution sets of systems of equations, prove that if
we have v1 , v2 , . . . , vn ∈ Rm , and n > m, then the set {v1 , . . . , vn } is linearly dependent.
Solution: We first write the equation
    x1 v1 + · · · + xn vn = 0.
If we consider the matrix A whose columns are the vectors vi , then this is the same as the
equation Ax = 0.
However, notice the dimensions of A must be m × n: each of the vectors has length m, so the
columns are m units high, and there are n columns.
But we know that a homogeneous system with more unknowns than equations (here n > m)
always has infinitely many solutions. In particular, it has a nontrivial solution, and thus the
set {v1 , . . . , vn } is linearly dependent.
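The same phenomenon, illustrated with numpy on random vectors (the particular m = 3, n = 5
are arbitrary choices):

    import numpy as np

    rng = np.random.default_rng(0)
    m, n = 3, 5
    A = rng.standard_normal((m, n))  # columns are v1, ..., vn in R^m

    # With n > m the null space of A is nontrivial; the last right-singular
    # vector from the SVD gives a unit vector x with A @ x = 0.
    x = np.linalg.svd(A)[2][-1]
    print(np.allclose(A @ x, 0))  # True: a nontrivial dependence relation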
9. Let V be a vector space and S1 ⊆ S2 ⊆ V .
(a) Show that if S1 is linearly dependent, then S2 must be as well.
Hint: Note that a linear combination of vectors in S1 is also a linear combination of
vectors in S2 .
(b) Show that if S2 is linearly independent, then S1 must be as well.
Hint: There’s a hard way to do this, but also a short and clever way only using logical
calculus.
Solution:
(a) We assume that S1 is linearly dependent. This means that there is a linear combination of
vectors in S1 , with coefficients not all equal to zero, that gives the zero vector. But notice
that since each vector in S1 is a vector in S2 , we can extend this linear combination to a
linear combination of the vectors in S2 by just adding zero times each vector in S2 \ S1 .
Using equations: Let S1 = {v1 , . . . , vm } and S2 = {v1 , . . . , vn }, with n ≥ m. Choose
some αi , not all zero, so that
    α1 v1 + · · · + αm vm = 0.
Then
    α1 v1 + · · · + αm vm + 0 vm+1 + · · · + 0 vn = 0,
and not all of these coefficients are zero.
(b) Let P be the logical statement that S1 is linearly dependent, and Q the statement that S2
is linearly dependent. We proved P =⇒ Q. Notice that the second half of the problem
is to prove ¬Q =⇒ ¬P , but this is the contrapositive of the first statement, and is thus
equivalent.
A more direct proof is as follows: Let S1 = {v1 , . . . , vm } and S2 = {v1 , . . . , vn }, with
n ≥ m. Since S2 is independent, the only solution to

    α1 v1 + · · · + αn vn = 0

is the solution α1 = · · · = αn = 0. This implies that the only solution to

    β1 v1 + · · · + βm vm = 0

is β1 = · · · = βm = 0, since otherwise we could extend a nontrivial solution of the second
equation into a nontrivial solution of the first by appending zeros, contradicting the
independence of S2 .
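The extension-by-zeros step in part (a) can be seen concretely in a toy example (the vectors
here are arbitrary):

    import numpy as np

    # S1 = {v1, v2} is dependent: 2*v1 - v2 = 0. S2 adds one more vector.
    v1, v2, v3 = np.array([1., 0.]), np.array([2., 0.]), np.array([0., 1.])
    coeffs_S1 = [2.0, -1.0]
    coeffs_S2 = coeffs_S1 + [0.0]   # append a zero coefficient for v3

    combo = sum(c * v for c, v in zip(coeffs_S2, (v1, v2, v3)))
    print(np.allclose(combo, 0))    # True: still a nontrivial relation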