Coordinates, Criteria for a set of vectors to be a basis, and Transition matrices
Coordinate vectors.
Suppose that V is a vector space, with a basis β = {v1, . . . , vn}. Suppose that v ∈ V. Then there is a unique expansion
\[
v = c_1 v_1 + \dots + c_n v_n
\]
with ci ∈ R. The coordinate vector of v with respect to the basis β is
\[
(v)_\beta = \begin{pmatrix} c_1 \\ \vdots \\ c_n \end{pmatrix} \in \mathbb{R}^n. \tag{1}
\]
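For V = R^n, finding (v)_β amounts to solving the linear system Bc = v, where the columns of B are the basis vectors. The following short numerical sketch illustrates this (it is an aside, not part of the note's development; NumPy and the sample basis are our own choices):

```python
import numpy as np

def coordinate_vector(basis, v):
    """Return (v)_beta for a basis of R^n given as a list of vectors.

    Solves B c = v, where the columns of B are the basis vectors;
    the solution is unique because B is invertible for a basis.
    """
    B = np.column_stack(basis)
    return np.linalg.solve(B, v)

# Sample basis of R^2 (chosen for illustration): v1 = (1,1)^T, v2 = (1,-1)^T.
beta = [np.array([1.0, 1.0]), np.array([1.0, -1.0])]
v = np.array([3.0, 4.0])
print(coordinate_vector(beta, v))  # [ 3.5 -0.5], since v = 3.5 v1 - 0.5 v2
```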
Some standard bases.
1. The vector space R^n of n × 1 column vectors has the standard basis
\[
\beta^* = \left\{ e_1 = \begin{pmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix},\ e_2 = \begin{pmatrix} 0 \\ 1 \\ \vdots \\ 0 \end{pmatrix},\ \dots,\ e_n = \begin{pmatrix} 0 \\ \vdots \\ 0 \\ 1 \end{pmatrix} \right\}.
\]
2. The vector space R_n of 1 × n row vectors has the standard basis
\[
\beta^* = \{ e_1 = (1, 0, \dots, 0),\ e_2 = (0, 1, 0, \dots, 0),\ \dots,\ e_n = (0, \dots, 0, 1) \}.
\]
3. The vector space R^{m×n} of m × n matrices has the standard basis
\[
\beta^* = \{ e_{1,1}, e_{1,2}, \dots, e_{m,n} \}
\]
where e_{i,j} is the m × n matrix whose entry in the i-th row and j-th column is a 1, and all other entries are zero. As an example, we see that the standard basis of R^{2×3} is
\[
\beta^* = \left\{
e_{1,1} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix},\
e_{1,2} = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix},\
e_{1,3} = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix},\
e_{2,1} = \begin{pmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \end{pmatrix},\
e_{2,2} = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix},\
e_{2,3} = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}
\right\}.
\]
4. The vector space P_n of polynomials of degree < n has the standard basis
\[
\beta^* = \{ 1, x, x^2, \dots, x^{n-1} \}.
\]
Counting the number of elements in these bases, we see that
\[
\dim(\mathbb{R}^n) = n, \quad \dim(\mathbb{R}_n) = n, \quad \dim(\mathbb{R}^{m \times n}) = mn, \quad \dim(P_n) = n.
\]
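With respect to each of these standard bases, the coordinate vector of an element simply lists its entries (or coefficients) in the standard order. A small illustrative sketch for R^{2×3}, assuming NumPy (the code is an aside, not part of the note):

```python
import numpy as np

# Build the standard basis e_{i,j} of R^{2x3}: a 1 in position (i, j), zeros elsewhere.
basis = []
for i in range(2):
    for j in range(3):
        e = np.zeros((2, 3))
        e[i, j] = 1.0
        basis.append(e)

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])

# (A)_{beta*} lists the entries of A row by row; sum(A * e) extracts one entry.
coords = np.array([np.sum(A * e) for e in basis])
print(coords)      # [1. 2. 3. 4. 5. 6.]
print(len(basis))  # 6 = dim(R^{2x3}) = 2 * 3
```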
A problem on change of perspective.
In this section we consider the following problem. Suppose that we rotate the coordinate system
\[
e_1 = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \quad e_2 = \begin{pmatrix} 0 \\ 1 \end{pmatrix}
\]
in R^2 counterclockwise by 45 degrees, to obtain a new coordinate system {v1, v2}. The fact that this gives us a coordinate system is the statement that β = {v1, v2} is a basis of R^2. The coordinates of a vector v ∈ R^2, from the perspective of the new coordinate system {v1, v2}, are given by the coordinate vector (v)_β, as
\[
(v)_\beta = \begin{pmatrix} c_1 \\ c_2 \end{pmatrix}
\]
if v = c1 v1 + c2 v2.
The coordinate vector (v)_β represents the way the point v will be perceived by a person who turns counterclockwise through an angle of 45 degrees. Since the new coordinate vectors should have length 1, we have
\[
v_1 = \begin{pmatrix} \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} \end{pmatrix}, \quad v_2 = \begin{pmatrix} -\frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} \end{pmatrix}.
\]
The mathematical problem that we must solve is then:
a) Show that β = {v1, v2} is a basis of R^2.
b) Compute the coordinate vector (v)_β of an arbitrary element
\[
v = \begin{pmatrix} x \\ y \end{pmatrix} \in \mathbb{R}^2.
\]
We will now give a solution to this problem, working directly from the definition of a basis. Later in this note, we will develop some more techniques, which will allow us to give simpler (but less conceptual) solutions to a) and b).
We establish a) by verifying that β satisfies the definition of a basis. We first show that v1, v2 are linearly independent. We must show that c1 = c2 = 0 is the only solution to
\[
c_1 v_1 + c_2 v_2 = c_1 \begin{pmatrix} \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} \end{pmatrix} + c_2 \begin{pmatrix} -\frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.
\]
We first rewrite this equation as
\[
\begin{pmatrix} \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix},
\]
and then solve this system for c1, c2. We now calculate the RRE form of the matrix
\[
\begin{pmatrix} \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \end{pmatrix}
\]
and find that it is
\[
\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}.
\]
Thus c1 = 0, c2 = 0 is the only solution, and so v1, v2 are LI.
We will next show that span(v1, v2) = R^2. To do this, we must establish that any vector
\[
u = \begin{pmatrix} x \\ y \end{pmatrix} \in \mathbb{R}^2
\]
can be expressed as a linear combination
\[
u = c_1 v_1 + c_2 v_2
\]
for some c1, c2 ∈ R. That is, we must solve the equation
\[
\begin{pmatrix} \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = \begin{pmatrix} x \\ y \end{pmatrix} \tag{2}
\]
for c1, c2 (with fixed x, y). The augmented matrix of this system is
\[
\left( \begin{array}{cc|c} \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} & x \\ \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} & y \end{array} \right),
\]
which has the RRE form
\[
\left( \begin{array}{cc|c} 1 & 0 & \frac{\sqrt{2}}{2} x + \frac{\sqrt{2}}{2} y \\ 0 & 1 & -\frac{\sqrt{2}}{2} x + \frac{\sqrt{2}}{2} y \end{array} \right).
\]
The unique solution to (2) is thus
\[
c_1 = \frac{\sqrt{2}}{2} x + \frac{\sqrt{2}}{2} y, \quad c_2 = -\frac{\sqrt{2}}{2} x + \frac{\sqrt{2}}{2} y. \tag{3}
\]
This gives us a (unique) solution to u = c1 v1 + c2 v2, so span(v1, v2) = R^2, and thus β is a basis of R^2, establishing a).
Our last calculation has already given us a solution to part b). We have shown that for
\[
v = \begin{pmatrix} x \\ y \end{pmatrix} \in \mathbb{R}^2,
\]
the coordinate vector is
\[
(v)_\beta = \begin{pmatrix} \frac{\sqrt{2}}{2} x + \frac{\sqrt{2}}{2} y \\ -\frac{\sqrt{2}}{2} x + \frac{\sqrt{2}}{2} y \end{pmatrix},
\]
as follows from (3). We can rewrite the coordinate vector as a matrix multiplication:
\[
(v)_\beta = \begin{pmatrix} \frac{\sqrt{2}}{2} & \frac{\sqrt{2}}{2} \\ -\frac{\sqrt{2}}{2} & \frac{\sqrt{2}}{2} \end{pmatrix} v.
\]
We will learn later in this note that the 2 × 2 matrix appearing in this formula is the transition matrix from the standard basis β* = {e1, e2} of R^2 to the basis β = {v1, v2} of R^2, written as
\[
M_{\beta^*}^{\beta} = \begin{pmatrix} \frac{\sqrt{2}}{2} & \frac{\sqrt{2}}{2} \\ -\frac{\sqrt{2}}{2} & \frac{\sqrt{2}}{2} \end{pmatrix}. \tag{4}
\]
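The formula (4) is easy to test numerically: applying the matrix of (4) to v should give the same c1, c2 as solving v = c1 v1 + c2 v2 directly. A quick check (an aside, assuming NumPy):

```python
import numpy as np

s = 1.0 / np.sqrt(2.0)
v1 = np.array([s, s])    # e1 rotated counterclockwise by 45 degrees
v2 = np.array([-s, s])   # e2 rotated counterclockwise by 45 degrees
B = np.column_stack([v1, v2])

M = np.array([[ s, s],   # the matrix of (4); note s = sqrt(2)/2 = 1/sqrt(2)
              [-s, s]])

v = np.array([2.0, 3.0])      # an arbitrary test vector
print(np.linalg.solve(B, v))  # c1, c2 from the definition
print(M @ v)                  # the same numbers from (4)
```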
Theorem 0.1. Suppose that β = {v1, . . . , vn} is a set of vectors in a vector space V. Suppose that one of the following holds:
1) V = Span(v1, . . . , vn).
2) dim V = n, the number of vectors in β.
Then β is a basis of V if and only if v1, . . . , vn are linearly independent.

Proof. Case 1) follows from the definition of a basis, since we are given that v1, . . . , vn span V. Suppose that 2) holds. By 1), β is a basis of Span(v1, . . . , vn) if and only if v1, . . . , vn are LI, which holds if and only if dim Span(v1, . . . , vn) = n. The subspace Span(v1, . . . , vn) of V is then equal to V if and only if dim Span(v1, . . . , vn) = dim V, by 2) of Theorem 4.10.
A criterion for a set of n vectors to be a basis in an n-dimensional vector space.
Suppose that A = (A1, . . . , An) is an m × n matrix. From the identity (equation (1) of Lecture Note 2)
\[
A \begin{pmatrix} c_1 \\ \vdots \\ c_n \end{pmatrix} = c_1 A_1 + \dots + c_n A_n,
\]
we see that the nontrivial relations
\[
c_1 A_1 + \dots + c_n A_n = 0_m
\]
correspond to the nonzero solutions to
\[
A \begin{pmatrix} c_1 \\ \vdots \\ c_n \end{pmatrix} = 0_m.
\]
Thus A1, . . . , An ∈ R^m are linearly independent if and only if Ax = 0_m has only the trivial solution, where A is the m × n matrix A = (A1, . . . , An).
In the case when m = n, we have that Ax = 0_n has only the trivial solution if and only if Det(A) ≠ 0 (Theorem 6 of Lecture Note 3).
Now recall Theorem 4.10 from Lecture Note 4, which tells us that n vectors v1, . . . , vn in an n-dimensional vector space V span V if they are LI, and thus are a basis of V. This gives us the following criterion for a set of vectors in R^n to be a basis.
Theorem 0.2. Suppose that v1, . . . , vn are elements of R^n. Then {v1, . . . , vn} is a basis of R^n if and only if Det(v1, . . . , vn) ≠ 0.
We can rephrase this criterion in a form which is applicable to any finite-dimensional vector space whose dimension we know.
Theorem 0.3. Suppose that V is an n-dimensional vector space and v1, . . . , vn are elements of V. Suppose that β is a basis of V. Then {v1, . . . , vn} is a basis of V if and only if
\[
\operatorname{Det}((v_1)_\beta, \dots, (v_n)_\beta) \neq 0.
\]
Warning: The method of Theorem 0.3 cannot be used if we do not know the dimension of V. A necessary part of a solution using this theorem is to verify and state that the number n of given vectors {v1, . . . , vn} is equal to the dimension of the vector space V.
We now return to problem a) from the previous section. Theorem 0.2 gives us a simple method to show that {v1, v2} is a basis of R^2.
{v1, v2} is a basis of R^2 since dim R^2 = 2 and
\[
\operatorname{Det}(v_1, v_2) = \operatorname{Det} \begin{pmatrix} \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \end{pmatrix} = \frac{1}{2} + \frac{1}{2} = 1 \neq 0.
\]

Some Examples
Example. Determine if
\[
\beta = \left\{ v_1 = \begin{pmatrix} 2 \\ 1 \\ 3 \end{pmatrix},\ v_2 = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix},\ v_3 = \begin{pmatrix} 3 \\ 2 \\ 4 \end{pmatrix} \right\}
\]
is a basis of R^3.
Solution: Since we are given 3 = dim R^3 vectors v1, v2, v3, β is a basis of R^3 if and only if Det(v1, v2, v3) ≠ 0. We compute (showing work)
\[
\operatorname{Det} \begin{pmatrix} 2 & 1 & 3 \\ 1 & 1 & 2 \\ 3 & 1 & 4 \end{pmatrix} = 0.
\]
Thus β is not a basis of R^3.
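In fact v3 = v1 + v2, which explains the vanishing determinant. The determinant test is also easy to run by machine; a quick check (an aside, assuming NumPy; the determinant of an integer matrix computed in floating point is exact only up to rounding):

```python
import numpy as np

# Columns of A are v1, v2, v3; note v3 = v1 + v2, a visible linear dependence.
A = np.array([[2.0, 1.0, 3.0],
              [1.0, 1.0, 2.0],
              [3.0, 1.0, 4.0]])
print(np.linalg.det(A))  # 0.0 up to rounding, so beta is not a basis
```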
Example 0.4. Let
\[
\beta = \left\{ \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix},\ \begin{pmatrix} 1 \\ 2 \\ 0 \end{pmatrix},\ \begin{pmatrix} 3 \\ 1 \\ 1 \end{pmatrix} \right\}.
\]
a) Show that β is a basis of R^3.
b) Find the coordinate vector (v)_β if
\[
v = \begin{pmatrix} 3 \\ 4 \\ 1 \end{pmatrix}.
\]
c) Verify your answer to b) using the definition of the coordinate vector.
Solution to a): We have
\[
\dim \mathbb{R}^3 = 3 = \text{the number of elements of } \beta
\]
and
\[
\operatorname{Det} \begin{pmatrix} 1 & 1 & 3 \\ 0 & 2 & 1 \\ 0 & 0 & 1 \end{pmatrix} = 2 \neq 0,
\]
so β is a basis of R^3 (by the criterion of Theorem 0.2).
Solution to b): We must solve the equation
\[
\begin{pmatrix} 3 \\ 4 \\ 1 \end{pmatrix} = c_1 \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} + c_2 \begin{pmatrix} 1 \\ 2 \\ 0 \end{pmatrix} + c_3 \begin{pmatrix} 3 \\ 1 \\ 1 \end{pmatrix} = \begin{pmatrix} 1 & 1 & 3 \\ 0 & 2 & 1 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \\ c_3 \end{pmatrix}.
\]
The augmented matrix of this system is
\[
\left( \begin{array}{ccc|c} 1 & 1 & 3 & 3 \\ 0 & 2 & 1 & 4 \\ 0 & 0 & 1 & 1 \end{array} \right),
\]
which has the RRE form
\[
\left( \begin{array}{ccc|c} 1 & 0 & 0 & -\frac{3}{2} \\ 0 & 1 & 0 & \frac{3}{2} \\ 0 & 0 & 1 & 1 \end{array} \right).
\]
Thus
\[
c_1 = -\frac{3}{2}, \quad c_2 = \frac{3}{2}, \quad c_3 = 1
\]
and
\[
(v)_\beta = \begin{pmatrix} -\frac{3}{2} \\ \frac{3}{2} \\ 1 \end{pmatrix}.
\]
Solution to c): We check that
\[
-\frac{3}{2} \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} + \frac{3}{2} \begin{pmatrix} 1 \\ 2 \\ 0 \end{pmatrix} + \begin{pmatrix} 3 \\ 1 \\ 1 \end{pmatrix} = \begin{pmatrix} 3 \\ 4 \\ 1 \end{pmatrix} = v.
\]
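The row reduction in b) can be reproduced by solving the 3 × 3 system directly, and part c) becomes a one-line check. A sketch (an aside, assuming NumPy):

```python
import numpy as np

B = np.array([[1.0, 1.0, 3.0],   # columns are the basis vectors of beta
              [0.0, 2.0, 1.0],
              [0.0, 0.0, 1.0]])
v = np.array([3.0, 4.0, 1.0])

c = np.linalg.solve(B, v)
print(c)                      # [-1.5  1.5  1. ], i.e. (v)_beta = (-3/2, 3/2, 1)^T
print(np.allclose(B @ c, v))  # True: the verification of part c)
```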
Example 0.5. Let
\[
\beta = \{ 1,\ 1 + 2x,\ 3 + x + x^2 \}.
\]
a) Show that β is a basis of P_3.
b) Find the coordinate vector (f)_β if
\[
f = 3 + 4x + x^2.
\]
c) Verify your answer to b) using the definition of the coordinate vector.
Solution to a): Let β* = {1, x, x^2} be the standard basis of P_3. Then
\[
\dim P_3 = 3 = \text{the number of elements of } \beta
\]
and
\[
\operatorname{Det}((1)_{\beta^*}, (1 + 2x)_{\beta^*}, (3 + x + x^2)_{\beta^*}) = \operatorname{Det} \begin{pmatrix} 1 & 1 & 3 \\ 0 & 2 & 1 \\ 0 & 0 & 1 \end{pmatrix} = 2 \neq 0,
\]
so β is a basis of P_3 (by the criterion of Theorem 0.3).
Solution to b): We must solve the equation
\[
3 + 4x + x^2 = c_1 \cdot 1 + c_2 (1 + 2x) + c_3 (3 + x + x^2) = (c_1 + c_2 + 3c_3) + (2c_2 + c_3) x + c_3 x^2.
\]
That is, we must solve the system of equations
\[
\begin{aligned}
c_1 + c_2 + 3c_3 &= 3 \\
2c_2 + c_3 &= 4 \\
c_3 &= 1.
\end{aligned}
\]
The augmented matrix of this system is
\[
\left( \begin{array}{ccc|c} 1 & 1 & 3 & 3 \\ 0 & 2 & 1 & 4 \\ 0 & 0 & 1 & 1 \end{array} \right),
\]
which has the RRE form
\[
\left( \begin{array}{ccc|c} 1 & 0 & 0 & -\frac{3}{2} \\ 0 & 1 & 0 & \frac{3}{2} \\ 0 & 0 & 1 & 1 \end{array} \right).
\]
Thus
\[
c_1 = -\frac{3}{2}, \quad c_2 = \frac{3}{2}, \quad c_3 = 1
\]
and
\[
(f)_\beta = \begin{pmatrix} -\frac{3}{2} \\ \frac{3}{2} \\ 1 \end{pmatrix}.
\]
Solution to c): We check that
\[
-\frac{3}{2} \cdot 1 + \frac{3}{2} (1 + 2x) + (3 + x + x^2) = -\frac{3}{2} + \frac{3}{2} + 3x + 3 + x + x^2 = 3 + 4x + x^2 = f.
\]
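Passing to coordinate vectors with respect to β* turns this polynomial problem into exactly the matrix computation of Example 0.4. A sketch (an aside, assuming NumPy), storing each polynomial as its coefficient vector with the constant term first:

```python
import numpy as np

# Polynomials in P_3 as coefficient vectors (a0, a1, a2) for a0 + a1*x + a2*x^2.
f1 = np.array([1.0, 0.0, 0.0])  # 1
f2 = np.array([1.0, 2.0, 0.0])  # 1 + 2x
f3 = np.array([3.0, 1.0, 1.0])  # 3 + x + x^2
f  = np.array([3.0, 4.0, 1.0])  # 3 + 4x + x^2

B = np.column_stack([f1, f2, f3])
print(np.linalg.det(B))       # 2.0, nonzero, so beta is a basis of P_3
print(np.linalg.solve(B, f))  # [-1.5  1.5  1. ] = (f)_beta
```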
The transition matrix between bases. Suppose that V is a vector space, and β = {v1, . . . , vn}, β′ = {w1, . . . , wn} are bases of V. The transition matrix M_β^{β′} from the basis β to the basis β′ is the unique n × n matrix with the property that
\[
M_\beta^{\beta'} (v)_\beta = (v)_{\beta'}
\]
for all v ∈ V. It follows that
\[
M_\beta^{\beta'} = ((v_1)_{\beta'}, (v_2)_{\beta'}, \dots, (v_n)_{\beta'}) \tag{5}
\]
since the i-th column of M_β^{β′} is
\[
M_\beta^{\beta'} e_i = M_\beta^{\beta'} (v_i)_\beta = (v_i)_{\beta'}
\]
for 1 ≤ i ≤ n.
We have that M_β^{β′} is invertible, with inverse
\[
M_{\beta'}^{\beta} = (M_\beta^{\beta'})^{-1}, \tag{6}
\]
and if β″ is a third basis of V, then
\[
M_{\beta'}^{\beta''} M_\beta^{\beta'} = M_\beta^{\beta''}. \tag{7}
\]
An algorithm to compute a transition matrix. If β* is a (standard) basis of V, then the n × 2n matrix
\[
((w_1)_{\beta^*}, (w_2)_{\beta^*}, \dots, (w_n)_{\beta^*} \mid (v_1)_{\beta^*}, \dots, (v_n)_{\beta^*})
\]
is transformed by elementary row operations into the reduced row echelon form (I_n | M_β^{β′}). We can remember this as: the second basis β′ is written first, then the first basis β.
To prove this algorithm, observe that the product of elementary matrices C such that C((w_1)_{β*}, (w_2)_{β*}, . . . , (w_n)_{β*}) = I_n satisfies C = ((w_1)_{β*}, (w_2)_{β*}, . . . , (w_n)_{β*})^{-1}. Thus C = (M_{β′}^{β*})^{-1} = M_{β*}^{β′} by (5) and (6). We then have
\[
C((v_1)_{\beta^*}, \dots, (v_n)_{\beta^*}) = M_{\beta^*}^{\beta'} M_\beta^{\beta^*} = M_\beta^{\beta'}
\]
by (5) and (7).
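In coordinates, reducing ((w_1)_{β*}, . . . , (w_n)_{β*} | (v_1)_{β*}, . . . , (v_n)_{β*}) to (I_n | M) computes M = W^{-1}V, where W and V denote the matrices whose columns are the coordinate vectors of the new and old bases. A sketch of the algorithm in this form (an aside, assuming NumPy; np.linalg.solve stands in for the elementary row operations):

```python
import numpy as np

def transition_matrix(old_basis, new_basis):
    """Return M with M (v)_old = (v)_new for bases of R^n.

    Mirrors the row-reduction algorithm: reducing (W | V) to
    (I | W^{-1} V), where the columns of W and V are the new and
    old basis vectors respectively.
    """
    V = np.column_stack(old_basis)
    W = np.column_stack(new_basis)
    return np.linalg.solve(W, V)  # W^{-1} V

# Sanity check of (6): M from b1 to b2 inverts M from b2 to b1.
b1 = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
b2 = [np.array([1.0, 1.0]), np.array([1.0, -1.0])]
M12 = transition_matrix(b1, b2)
M21 = transition_matrix(b2, b1)
print(np.allclose(M12 @ M21, np.eye(2)))  # True
```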
Example 0.6. Use the Algorithm to compute a transition matrix to compute the transition matrix M_{β*}^{β} from the standard basis
\[
\beta^* = \left\{ e_1 = \begin{pmatrix} 1 \\ 0 \end{pmatrix},\ e_2 = \begin{pmatrix} 0 \\ 1 \end{pmatrix} \right\}
\]
of R^2 to the basis
\[
\beta = \left\{ u_1 = \begin{pmatrix} \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} \end{pmatrix},\ u_2 = \begin{pmatrix} -\frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} \end{pmatrix} \right\}
\]
of R^2. Use the transition matrix M_{β*}^{β} to determine (v)_β if
\[
v = \begin{pmatrix} 2 \\ 3 \end{pmatrix}.
\]
The example asks you to compute the transition matrix M_{β*}^{β} of (4).
Solution: First recall that (w)_{β*} = w for all w ∈ R^2 (this is only true for the column vectors R^n). Remembering to write the second basis first, then the first basis, we write the matrix
\[
((u_1)_{\beta^*}\ (u_2)_{\beta^*} \mid (e_1)_{\beta^*}\ (e_2)_{\beta^*}) = (u_1, u_2 \mid e_1, e_2) = \left( \begin{array}{cc|cc} \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} & 1 & 0 \\ \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} & 0 & 1 \end{array} \right),
\]
and compute (showing work!) that its RRE form is
\[
\left( \begin{array}{cc|cc} 1 & 0 & \frac{\sqrt{2}}{2} & \frac{\sqrt{2}}{2} \\ 0 & 1 & -\frac{\sqrt{2}}{2} & \frac{\sqrt{2}}{2} \end{array} \right).
\]
Thus
\[
M_{\beta^*}^{\beta} = \begin{pmatrix} \frac{\sqrt{2}}{2} & \frac{\sqrt{2}}{2} \\ -\frac{\sqrt{2}}{2} & \frac{\sqrt{2}}{2} \end{pmatrix}.
\]
Finally, we compute
\[
(v)_\beta = M_{\beta^*}^{\beta} \begin{pmatrix} 2 \\ 3 \end{pmatrix} = \begin{pmatrix} \frac{\sqrt{2}}{2} & \frac{\sqrt{2}}{2} \\ -\frac{\sqrt{2}}{2} & \frac{\sqrt{2}}{2} \end{pmatrix} \begin{pmatrix} 2 \\ 3 \end{pmatrix} = \begin{pmatrix} \frac{5\sqrt{2}}{2} \\ \frac{\sqrt{2}}{2} \end{pmatrix}.
\]
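The arithmetic can be confirmed numerically, with 5√2/2 ≈ 3.5355 and √2/2 ≈ 0.7071. A quick check (an aside, assuming NumPy):

```python
import numpy as np

s = 1.0 / np.sqrt(2.0)
U = np.column_stack([np.array([s, s]),    # u1
                     np.array([-s, s])])  # u2

# Row-reducing (u1 u2 | I_2) to (I_2 | M) amounts to inverting U.
M = np.linalg.inv(U)
print(M)      # [[ 0.7071  0.7071]
              #  [-0.7071  0.7071]] = M_{beta*}^{beta}

v = np.array([2.0, 3.0])
print(M @ v)  # [3.5355 0.7071] = (5*sqrt(2)/2, sqrt(2)/2)^T
```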
Example 0.7. Use the Algorithm to compute a transition matrix to compute the transition matrix M_{β*}^{β} from the standard basis
\[
\beta^* = \{ 1, x, x^2 \}
\]
of P_3 to the basis
\[
\beta = \{ f_1 = 1,\ f_2 = 1 + 2x,\ f_3 = 3 + x + x^2 \}
\]
of P_3. Compute the coordinate vector (f)_β of f = 3 + 4x + x^2.
The example asks you to compute the transition matrix M_{β*}^{β} of the bases in Example 0.5. We know from Example 0.5 that β is a basis of P_3.
Solution: We write the matrix (second basis first, then first basis)
\[
((1)_{\beta^*}\ (1 + 2x)_{\beta^*}\ (3 + x + x^2)_{\beta^*} \mid (1)_{\beta^*}\ (x)_{\beta^*}\ (x^2)_{\beta^*}) = \left( \begin{array}{ccc|ccc} 1 & 1 & 3 & 1 & 0 & 0 \\ 0 & 2 & 1 & 0 & 1 & 0 \\ 0 & 0 & 1 & 0 & 0 & 1 \end{array} \right),
\]
and compute (showing work) that its RRE form is
\[
\left( \begin{array}{ccc|ccc} 1 & 0 & 0 & 1 & -\frac{1}{2} & -\frac{5}{2} \\ 0 & 1 & 0 & 0 & \frac{1}{2} & -\frac{1}{2} \\ 0 & 0 & 1 & 0 & 0 & 1 \end{array} \right).
\]
Thus
\[
M_{\beta^*}^{\beta} = \begin{pmatrix} 1 & -\frac{1}{2} & -\frac{5}{2} \\ 0 & \frac{1}{2} & -\frac{1}{2} \\ 0 & 0 & 1 \end{pmatrix}
\]
and
\[
(f)_\beta = M_{\beta^*}^{\beta} (f)_{\beta^*} = \begin{pmatrix} 1 & -\frac{1}{2} & -\frac{5}{2} \\ 0 & \frac{1}{2} & -\frac{1}{2} \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 3 \\ 4 \\ 1 \end{pmatrix} = \begin{pmatrix} -\frac{3}{2} \\ \frac{3}{2} \\ 1 \end{pmatrix}.
\]
This agrees with our calculation of (f)_β in Example 0.5.
The Wronskian method to determine linear independence of differentiable functions. Recall that C^{n−1}[a, b] is the vector space of (n − 1)-times continuously differentiable functions on the closed interval [a, b]. Suppose that f1(x), . . . , fn(x) ∈ C^{n−1}[a, b]. Define the Wronskian of f1, . . . , fn to be the function
\[
W(f_1, \dots, f_n)(x) = \operatorname{Det} \begin{pmatrix} f_1 & f_2 & \cdots & f_n \\ \frac{df_1}{dx} & \frac{df_2}{dx} & \cdots & \frac{df_n}{dx} \\ \vdots & \vdots & & \vdots \\ \frac{d^{n-1} f_1}{dx^{n-1}} & \frac{d^{n-1} f_2}{dx^{n-1}} & \cdots & \frac{d^{n-1} f_n}{dx^{n-1}} \end{pmatrix}.
\]
Theorem 0.8. Suppose that f1(x), . . . , fn(x) ∈ C^{n−1}[a, b], and there exists x0 ∈ [a, b] such that W(f1, . . . , fn)(x0) ≠ 0. Then f1, . . . , fn are linearly independent.
Proof. Suppose that we have an expression
\[
c_1 f_1 + c_2 f_2 + \dots + c_n f_n = 0 \tag{8}
\]
for some c1, . . . , cn ∈ R. Here 0 means the zero function in C^{n−1}[a, b]; that is, for every point t ∈ [a, b],
\[
c_1 f_1(t) + c_2 f_2(t) + \dots + c_n f_n(t) = 0, \tag{9}
\]
where this 0 is the zero element 0_R of R. In other words, the linear combination in (8) is "identically zero" as a function. Taking derivatives, we have the system of equations
\[
\begin{aligned}
c_1 f_1 + c_2 f_2 + \dots + c_n f_n &= 0 \\
c_1 \frac{df_1}{dx} + c_2 \frac{df_2}{dx} + \dots + c_n \frac{df_n}{dx} &= 0 \\
c_1 \frac{d^2 f_1}{dx^2} + c_2 \frac{d^2 f_2}{dx^2} + \dots + c_n \frac{d^2 f_n}{dx^2} &= 0 \\
&\vdots \\
c_1 \frac{d^{n-1} f_1}{dx^{n-1}} + c_2 \frac{d^{n-1} f_2}{dx^{n-1}} + \dots + c_n \frac{d^{n-1} f_n}{dx^{n-1}} &= 0.
\end{aligned}
\]
Evaluating at x0, we have that
\[
\begin{aligned}
c_1 f_1(x_0) + c_2 f_2(x_0) + \dots + c_n f_n(x_0) &= 0 \\
c_1 \frac{df_1}{dx}(x_0) + c_2 \frac{df_2}{dx}(x_0) + \dots + c_n \frac{df_n}{dx}(x_0) &= 0 \\
c_1 \frac{d^2 f_1}{dx^2}(x_0) + c_2 \frac{d^2 f_2}{dx^2}(x_0) + \dots + c_n \frac{d^2 f_n}{dx^2}(x_0) &= 0 \\
&\vdots \\
c_1 \frac{d^{n-1} f_1}{dx^{n-1}}(x_0) + c_2 \frac{d^{n-1} f_2}{dx^{n-1}}(x_0) + \dots + c_n \frac{d^{n-1} f_n}{dx^{n-1}}(x_0) &= 0.
\end{aligned} \tag{10}
\]
Since we are given that
\[
W(f_1, \dots, f_n)(x_0) = \operatorname{Det} \begin{pmatrix} f_1(x_0) & f_2(x_0) & \cdots & f_n(x_0) \\ \frac{df_1}{dx}(x_0) & \frac{df_2}{dx}(x_0) & \cdots & \frac{df_n}{dx}(x_0) \\ \vdots & \vdots & & \vdots \\ \frac{d^{n-1} f_1}{dx^{n-1}}(x_0) & \frac{d^{n-1} f_2}{dx^{n-1}}(x_0) & \cdots & \frac{d^{n-1} f_n}{dx^{n-1}}(x_0) \end{pmatrix} \neq 0,
\]
the only solution to (10) is the trivial solution c1 = c2 = · · · = cn = 0, and thus c1 = c2 = · · · = cn = 0 is the only solution to (9), from which it follows that f1, . . . , fn are linearly independent.
Example 0.9. Let U = span(cos(x), sin(x)) ⊂ C^∞(R), where C^∞(R) is the vector space of infinitely differentiable functions. Show that β = {cos(x), sin(x)} is a basis of U.
Solution: We compute
\[
W(\cos(x), \sin(x)) = \operatorname{Det} \begin{pmatrix} \cos(x) & \sin(x) \\ \frac{d \cos(x)}{dx} & \frac{d \sin(x)}{dx} \end{pmatrix} = \operatorname{Det} \begin{pmatrix} \cos(x) & \sin(x) \\ -\sin(x) & \cos(x) \end{pmatrix} = \cos^2(x) + \sin^2(x) = 1.
\]
Thus cos(x), sin(x) are linearly independent by the Wronskian criterion. Since cos(x), sin(x) span U (by definition of U), we have that β = {cos(x), sin(x)} is a basis of U.
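The Wronskian is also convenient to compute symbolically. A sketch with SymPy (an aside; SymPy also provides a built-in wronskian helper, which we use as a cross-check):

```python
import sympy as sp

x = sp.symbols('x')
f1, f2 = sp.cos(x), sp.sin(x)

# Wronskian matrix: the functions in the first row, their derivatives below.
W = sp.Matrix([[f1, f2],
               [sp.diff(f1, x), sp.diff(f2, x)]])
print(sp.simplify(W.det()))                    # 1, nonzero, so cos(x), sin(x) are LI

# Cross-check with SymPy's built-in helper.
print(sp.simplify(sp.wronskian([f1, f2], x)))  # 1
```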