MATH10212 Linear Algebra B • Proof Problems

05 June 2016
Each problem requests a proof of a simple statement.
Problems placed lower in the list may use the results of previous ones.
Matrices and determinants
1 If a, b ∈ Rn, the matrix

C = ab^T

is an n × n matrix. Prove that if n > 1 then det C = 0.

Proof: Let

a = \begin{pmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{pmatrix}.

It is easy to see that the rows of C are proportional to b^T with coefficients a_1, ..., a_n, that is,

C = \begin{pmatrix} a_1 b^T \\ a_2 b^T \\ \vdots \\ a_n b^T \end{pmatrix}.

Therefore det C = 0 by properties of the determinant.

2 Let A, B ∈ Mm×n be m × n matrices with m > n. Set

C = AB^T;

it is an m × m matrix. Prove that det C = 0.

Proof: B^T is an n × m matrix where m > n. Therefore the system of homogeneous equations

B^T x = 0

has more variables than equations and has a non-trivial solution x. But then

Cx = AB^T x = A(B^T x) = A · 0 = 0.

Hence C is a degenerate matrix and det C = 0.

3 In this problem we are working with compound 2n × 2n matrices of the form

\begin{pmatrix} A & B \\ C & D \end{pmatrix},

where A, B, C, D ∈ Mn×n are n × n matrices.

(a) Prove that

det \begin{pmatrix} O_n & I_n \\ I_n & O_n \end{pmatrix} = (-1)^n;

here O_n and I_n are the zero matrix and the identity matrix, correspondingly, of size n × n.

(b) Express

det \begin{pmatrix} O_n & B \\ A & O_n \end{pmatrix}

in terms of det A and det B.
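The claims of Problems 1 and 2 are easy to sanity-check numerically. Below is a small pure-Python sketch, added here for illustration (the helper names `det` and `outer` are ad hoc, not from the text): it builds C = ab^T for sample vectors and confirms that its determinant vanishes.

```python
# Sanity check for Problems 1-2 (illustrative, not part of the
# original problem set): the outer product a b^T is degenerate.

def det(M):
    """Determinant by Laplace expansion along the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def outer(a, b):
    """The n x n matrix C = a b^T built from two vectors."""
    return [[ai * bj for bj in b] for ai in a]

a, b = [1, 2, 3], [4, 5, 6]
C = outer(a, b)
print(det(C))  # 0: every row of C is proportional to b^T
```

The same `det` function works for any square matrix, so it can be reused to test the later block-matrix problems as well.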
Proof: (a) We know that if we swap two different rows in a determinant, the determinant changes its sign. Our matrix

\begin{pmatrix} O_n & I_n \\ I_n & O_n \end{pmatrix}

is obtained from the identity matrix

I_{2n} = \begin{pmatrix} I_n & O_n \\ O_n & I_n \end{pmatrix}

by moving each of the first n rows n positions downwards. Obviously, this requires n² swaps of rows, and

det \begin{pmatrix} O_n & I_n \\ I_n & O_n \end{pmatrix} = (-1)^{n^2} det \begin{pmatrix} I_n & O_n \\ O_n & I_n \end{pmatrix} = (-1)^{n^2} · 1 = (-1)^n,

since n² and n have the same parity.

(b) Denote

J = \begin{pmatrix} O_n & I_n \\ I_n & O_n \end{pmatrix};

then

\begin{pmatrix} O_n & B \\ A & O_n \end{pmatrix} = J · \begin{pmatrix} A & O_n \\ O_n & B \end{pmatrix}

and

det \begin{pmatrix} O_n & B \\ A & O_n \end{pmatrix} = det J · det \begin{pmatrix} A & O_n \\ O_n & B \end{pmatrix} = (-1)^n · det A · det B

by part (a) and properties of determinants of block-diagonal matrices.

4 Let A be a 2 × 2 matrix which is similar only to itself and not to any other matrix. Prove that A is a scalar matrix.

Proof: Since A is similar only to itself, P^{-1}AP = A for all invertible matrices P. This is equivalent to AP = PA. Denote

A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}.

Take

P = \begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix};

then the system of linear equations

\begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix} \begin{pmatrix} a & b \\ c & d \end{pmatrix}

yields b = c = 0. Take

P = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix};

then the system

\begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} a & b \\ c & d \end{pmatrix}

yields a = d. Hence A is a scalar matrix.
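Problem 3(a) can likewise be checked for small n. The sketch below is illustrative only (the helper `swap_blocks` is hypothetical, written for this note): it builds the 2n × 2n matrix with identity blocks off the diagonal and compares its determinant with (-1)^n.

```python
# Sanity check for Problem 3(a): det [[O, I], [I, O]] = (-1)^n.

def det(M):
    """Determinant by Laplace expansion along the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def swap_blocks(n):
    """The 2n x 2n matrix with I_n blocks off the diagonal."""
    return [[1 if abs(i - j) == n else 0 for j in range(2 * n)]
            for i in range(2 * n)]

for n in (1, 2, 3):
    print(n, det(swap_blocks(n)), (-1) ** n)  # det equals (-1)^n each time
```

The matrix `swap_blocks(n)` is exactly the permutation matrix of n disjoint row transpositions, which is why its determinant is (-1)^n.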
Linear transformations

Let

T : V −→ V

be a linear transformation of a finite dimensional vector space V.

5 Prove that if U < V is a vector subspace then

T(U) = { T(u) : u ∈ U }

is a vector subspace of V.

6 Prove that if T is onto, it sends linearly independent sets of vectors to linearly independent sets.

7 Prove that if T is invertible and U ≤ V then

dim T(U) = dim U.

8 Let A be an n × n matrix and assume that det A ≠ 0. Prove that the map

Mn×n −→ Mn×n

defined by

X ↦ XA

is a linear transformation.

9 For given matrices A, B ∈ M2×2 we define the following transformation

R : M2×2 −→ M2×2,    R(X) = AXB.

Check that R is a linear transformation. Prove that if R is one-to-one, then both matrices A and B are invertible.

Proof: Checking the linearity of R is a direct inspection:

R(X + Y) = A(X + Y)B = (AX + AY)B = AXB + AYB = R(X) + R(Y);

R(cX) = A(cX)B = c(AXB) = cR(X).

Assume that one of the matrices A or B is not invertible. Then it has determinant 0 and the product AXB has determinant 0 regardless of the choice of X. Therefore all matrices in the image of the transformation R have determinant 0 and R cannot be onto. But R is a linear transformation, and for linear transformations on finite dimensional vector spaces being onto and being one-to-one are equivalent properties. Hence R cannot be one-to-one, a contradiction.
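The determinant argument above can be illustrated concretely: if det A = 0, every value of R(X) = AXB is singular. A minimal pure-Python check, added for illustration (the particular matrices are arbitrary examples, not from the text):

```python
# Illustration of Problem 9: if A is singular, every matrix in the
# image of R(X) = A X B is singular, so R cannot be onto.

def mul(X, Y):
    """2 x 2 matrix product."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

A = [[1, 2], [2, 4]]   # det A = 0: A is not invertible
B = [[0, 1], [1, 1]]   # det B = -1

for X in ([[1, 0], [0, 1]], [[3, 1], [4, 1]], [[5, 9], [2, 6]]):
    print(det2(mul(mul(A, X), B)))  # 0 every time
```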
Another solution: Assume that R is one-to-one; as shown above, R is a linear transformation of a finite-dimensional vector space, so this is equivalent to assuming that R is onto. Hence there is a matrix X such that AXB = I. Then A is invertible with inverse XB, and B is invertible with inverse AX (for square matrices a one-sided inverse is automatically a two-sided inverse).

Eigenvalues and eigenvectors
10 Prove that if a 2 × 2 matrix M has eigenvalues 0 and 1 then M can be written in the form

M = \begin{pmatrix} a \\ b \end{pmatrix} \begin{pmatrix} c & d \end{pmatrix}

for some numbers a, b, c, d. Hint: Use a matrix which conjugates M to a diagonal matrix.

Proof: Since M has two distinct eigenvalues 0 and 1, it is conjugate to the diagonal matrix

D = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix},

say,

M = P^{-1} D P.

Denote

P = \begin{pmatrix} p & q \\ r & s \end{pmatrix};

then

P^{-1} = \frac{1}{\det P} \begin{pmatrix} s & -q \\ -r & p \end{pmatrix}

and

M = P^{-1} D P
  = \frac{1}{\det P} \begin{pmatrix} s & -q \\ -r & p \end{pmatrix} \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} p & q \\ r & s \end{pmatrix}
  = \frac{1}{\det P} \begin{pmatrix} -qr & -qs \\ pr & ps \end{pmatrix}
  = \left( \frac{1}{\det P} \begin{pmatrix} -q \\ p \end{pmatrix} \right) \begin{pmatrix} r & s \end{pmatrix},

which is in the desired form.

Another solution (shorter, but more difficult): We start by showing that the claim is true for the most natural example of a matrix with eigenvalues 0 and 1, that is, for

D = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}.

Indeed,

D = \begin{pmatrix} 0 \\ 1 \end{pmatrix} \begin{pmatrix} 0 & 1 \end{pmatrix}.

Any other such matrix M is similar to D, so is equal to P^{-1}DP, hence

M = P^{-1} \begin{pmatrix} 0 \\ 1 \end{pmatrix} \begin{pmatrix} 0 & 1 \end{pmatrix} P.

It remains to denote the column P^{-1} \begin{pmatrix} 0 \\ 1 \end{pmatrix} by \begin{pmatrix} a \\ b \end{pmatrix} and the row \begin{pmatrix} 0 & 1 \end{pmatrix} P by \begin{pmatrix} c & d \end{pmatrix}.

11 Problem 10 can be stated in a much more general form and solved without involvement of eigenvalues and eigenvectors:

Prove that if a 2 × 2 matrix M is degenerate then M can be written in the form

M = \begin{pmatrix} a \\ b \end{pmatrix} \begin{pmatrix} c & d \end{pmatrix}

for some numbers a, b, c, d.

Prove this statement.
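A computational companion to Problems 10 and 11 (an added sketch, not part of the original): a degenerate 2 × 2 matrix can be factored as a column times a row by noting which row is proportional to which. The helper `rank_one_factor` is hypothetical, written for this note.

```python
# Factoring a degenerate 2 x 2 matrix M as (column) * (row).

def rank_one_factor(M):
    """Return (col, row) with M = col * row, assuming det M = 0."""
    (a, b), (c, d) = M
    if (a, b) != (0, 0):
        # det M = 0 forces the second row to be t times the first
        t = c / a if a != 0 else d / b
        return [1, t], [a, b]
    # first row is zero: a zero/one column against the second row works
    return [0, 1], [c, d]

col, row = rank_one_factor([[2, 4], [1, 2]])
print(col, row)
print([[col[0] * row[0], col[0] * row[1]],
       [col[1] * row[0], col[1] * row[1]]])  # the product equals M again
```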
Symmetric and orthogonal matrices

12 Recall that if A and B are n × n matrices then

A^B = B^{-1}AB.

Prove that if A is a symmetric matrix and B is orthogonal then A^B is symmetric.

Proof: By definition of symmetric and orthogonal matrices,

A^T = A and B^{-1} = B^T.

Therefore

(A^B)^T = (B^{-1}AB)^T = (B^T AB)^T = B^T A^T (B^T)^T = B^T A^T B = B^T A B = B^{-1}AB = A^B.

Hence A^B is a symmetric matrix.

13 An n × n matrix A is antisymmetric if

A^T = -A.

Prove that if A is antisymmetric and B is an n × n orthogonal matrix then A^B is also antisymmetric.

Proof: By definition of antisymmetric and orthogonal matrices,

A^T = -A and B^{-1} = B^T.

Therefore

(A^B)^T = (B^{-1}AB)^T = (B^T AB)^T = B^T A^T B = B^T (-A) B = -B^T AB = -B^{-1}AB = -A^B.

Hence A^B is an antisymmetric matrix.

14 Write several 3 × 3 antisymmetric matrices and compute their determinants.

(a) Make a conjecture about determinants of antisymmetric 3 × 3 matrices and prove it.

(b) Can you generalise your observation to matrices of larger size and prove it?

Proof: (a) From a few simple examples such as

\begin{pmatrix} 0 & 1 & 0 \\ -1 & 0 & 1 \\ 0 & -1 & 0 \end{pmatrix}, \quad \begin{pmatrix} 0 & 1 & 1 \\ -1 & 0 & 1 \\ -1 & -1 & 0 \end{pmatrix}, \quad \begin{pmatrix} 0 & 1 & 2 \\ -1 & 0 & 1 \\ -2 & -1 & 0 \end{pmatrix}

we can conjecture that the determinant of an antisymmetric 3 × 3 matrix equals 0. A proof is simple: we know that

det A = det A^T,

and for an antisymmetric matrix A this becomes

det A = det(-A) = (-1)³ det A

(we take out the scalar factor -1 from each row of -A, that is, three times!), therefore

det A = -det A

and

det A = 0.

(b) It is clear that what matters in the proof above is that 3 is an odd number. So we wish to prove that

if n is odd and A is an antisymmetric n × n matrix then det A = 0.

It is easy:

det A = det A^T,

and for an antisymmetric n × n matrix A this becomes

det A = det(-A) = (-1)^n det A

(we take out the scalar factor -1 from each of the n rows of -A, that is, n times!). Since n is odd, (-1)^n = -1, hence

det A = -det A

and

det A = 0.
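The conjecture of Problem 14(a) is also easy to test over parametrised examples: a 3 × 3 antisymmetric matrix is determined by its three above-diagonal entries, and its determinant is always 0. A quick pure-Python check, added for illustration (the helpers `det3` and `antisym` are ad hoc names):

```python
# Every antisymmetric 3 x 3 matrix has determinant 0.

def det3(M):
    """3 x 3 determinant by the explicit cofactor formula."""
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def antisym(p, q, r):
    """The general antisymmetric 3 x 3 matrix with upper entries p, q, r."""
    return [[0, p, q], [-p, 0, r], [-q, -r, 0]]

for entries in ((1, 0, 1), (1, 1, 1), (1, 2, 1)):
    print(det3(antisym(*entries)))  # 0 every time
```

Expanding `det3(antisym(p, q, r))` symbolically gives -p(qr) + q(pr) = 0, which matches the parity argument in the text.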
15 Prove that if a matrix is simultaneously triangular and orthogonal then it is diagonal.

Proof: Assume that A is an upper triangular matrix of size n × n, that is, all its entries below the diagonal equal 0 (for lower triangular matrices, the proof is the same with a slight change of words). The diagonal entries of A are its eigenvalues; we know that orthogonal matrices are invertible; since A is also orthogonal, A is invertible and its eigenvalues are not equal to 0. We conclude that the diagonal entries

a_{11}, a_{22}, ..., a_{nn}

are all non-zero. Since A is orthogonal, its columns are orthogonal to each other. The first column of A is

a_1 = \begin{pmatrix} a_{11} \\ 0 \\ \vdots \\ 0 \end{pmatrix}.

The columns a_2, ..., a_n can be orthogonal to a_1 only if their topmost entries a_{12}, ..., a_{1n} equal 0. Repeating the same argument for a_2, we see that the entries a_{23}, ..., a_{2n} also equal 0. Repeating this process further, we see that all elements to the right of the diagonal equal 0. Hence A is diagonal.

16 Prove that the matrix

A = \begin{pmatrix} 1 & 2 & 3 & 4 & 5 \\ 0 & 1 & 2 & 3 & 4 \\ 0 & 0 & 1 & 2 & 3 \\ 0 & 0 & 0 & 1 & 2 \\ 0 & 0 & 0 & 0 & 1 \end{pmatrix}

is not similar to a symmetric matrix. Can A be similar to an antisymmetric matrix, that is, a matrix B with the property B^T = -B?

Proof: Assume the contrary: let A be similar to a symmetric matrix S. We know that every symmetric matrix is similar (or conjugate) to a diagonal matrix; hence A is similar to some diagonal matrix D. Since A is triangular, all eigenvalues of A are its diagonal entries and equal 1. Hence D is a diagonal matrix with all diagonal entries equal to 1, that is, the identity matrix. But the identity matrix is similar only to itself; that means that A is the identity matrix, an obvious contradiction.

A cannot be similar to an antisymmetric matrix, because det A = 1, but antisymmetric matrices of odd size have zero determinant.
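As a closing numerical check of Problem 16 (added, not in the original text): computing det A directly confirms det A = 1, which already rules out similarity to an odd-size antisymmetric matrix, since similar matrices have equal determinants.

```python
# det A = 1 for the 5 x 5 matrix of Problem 16.

def det(M):
    """Determinant by Laplace expansion along the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

A = [[1, 2, 3, 4, 5],
     [0, 1, 2, 3, 4],
     [0, 0, 1, 2, 3],
     [0, 0, 0, 1, 2],
     [0, 0, 0, 0, 1]]

print(det(A))  # 1: the product of the diagonal entries
```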