CS1260 Mathematics for Computer Science

Unit 9 -- More on Matrix Algebra
Some Properties of Inverse Matrices and Determinants
If A and B are invertible matrices of the same size then AB is also invertible and
(AB)–1 = B–1 A–1
Note the change in order! This follows since
(B–1 A–1) (A B) = B–1 (A–1 A) B = B–1 I B = B–1 B = I
If A is invertible, then so is AT and
(AT)–1 = (A–1)T
Thus the transpose of the inverse is the same as the inverse of the transpose. This follows
since
(A–1)T AT = (A A–1)T = IT = I
If A and B are square matrices of the same size then
det(A B) = det(A) det(B).
In particular, from this we may deduce that, if A is invertible, then
det(A–1) = 1 / det(A)
This follows since
det(A) det(A–1) = det(A A–1) = det(I) = 1.
If A is a square matrix then
det(AT) = det(A).
This follows if we expand det(A) by (say) the first row and det(AT )by the first column; the
same cofactor expansion is obtained.
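
As a quick numerical illustration of these identities, here is a minimal sketch assuming a Python environment with NumPy; the two matrices are arbitrary invertible examples.

    import numpy as np

    A = np.array([[1.0, 2.0], [3.0, 5.0]])   # det(A) = -1, so A is invertible
    B = np.array([[2.0, 1.0], [7.0, 4.0]])   # det(B) = 1,  so B is invertible

    # (AB)^-1 = B^-1 A^-1  (note the change in order)
    assert np.allclose(np.linalg.inv(A @ B), np.linalg.inv(B) @ np.linalg.inv(A))

    # (A^T)^-1 = (A^-1)^T
    assert np.allclose(np.linalg.inv(A.T), np.linalg.inv(A).T)

    # det(AB) = det(A) det(B),  det(A^-1) = 1/det(A),  det(A^T) = det(A)
    assert np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B))
    assert np.isclose(np.linalg.det(np.linalg.inv(A)), 1.0 / np.linalg.det(A))
    assert np.isclose(np.linalg.det(A.T), np.linalg.det(A))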
Elementary Row Operations
Although we can calculate the inverse of an n×n matrix by calculating its adjoint and then
dividing by the determinant, this method becomes very inefficient if n is large, since we must
calculate n² cofactors (each of which is a determinant of order n–1) and then divide each by
the determinant, and the number of arithmetic operations needed to calculate a determinant
rises very rapidly as the size of the determinant increases. Using this method manually to
find the inverse of even a 5×5 matrix becomes very time-consuming. Even using a computer
the method becomes impracticable for matrices larger than about 10×10. A more efficient
method must be found. This is provided by the method of elementary row operations.
There are three types of elementary row operation:
1. Interchange two rows of a matrix.
2. Multiply one row through by any non-zero number.
3. Add a multiple of one row to another row.
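
Each of these is a one-line update on a matrix stored as an array; the snippet below is a minimal sketch, assuming NumPy and an arbitrary example matrix.

    import numpy as np

    M = np.array([[1.0, 2.0, 3.0],
                  [4.0, 5.0, 6.0],
                  [7.0, 8.0, 8.0]])

    M[[0, 2]] = M[[2, 0]]      # 1. interchange two rows (here rows 1 and 3)
    M[1] = 0.5 * M[1]          # 2. multiply one row through by a non-zero number
    M[2] = M[2] + 3.0 * M[0]   # 3. add a multiple of one row to another row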
To find the inverse of a matrix A using elementary row operations we proceed as follows:
a) apply a sequence of row operations to A to reduce it to the identity matrix I;
b) apply the same sequence of row operations to I to yield A–1;
c) if at any stage of (a) a complete row of zeroes is obtained, the matrix A is singular.
Example 1
Find the inverse of the matrix:
1 2 3


A  4 5 6


7 8 8
The simplest way of applying the same row operations to A and I is to juxtapose them to
form a 3×6 matrix and apply the row operations to that matrix.
    [ 1   2   3 |  1  0  0 ]
    [ 4   5   6 |  0  1  0 ]   R2 → R2 – 4·R1
    [ 7   8   8 |  0  0  1 ]   R3 → R3 – 7·R1

    [ 1   2   3 |  1   0  0 ]
    [ 0  –3  –6 | –4   1  0 ]   R2 → –(1/3)·R2
    [ 0  –6 –13 | –7   0  1 ]

    [ 1   2   3 |  1     0   0 ]   R1 → R1 – 2·R2
    [ 0   1   2 |  4/3 –1/3  0 ]
    [ 0  –6 –13 | –7     0   1 ]   R3 → R3 + 6·R2

    [ 1   0  –1 | –5/3  2/3  0 ]
    [ 0   1   2 |  4/3 –1/3  0 ]
    [ 0   0  –1 |  1    –2   1 ]   R3 → –R3

    [ 1   0  –1 | –5/3  2/3  0 ]   R1 → R1 + R3
    [ 0   1   2 |  4/3 –1/3  0 ]   R2 → R2 – 2·R3
    [ 0   0   1 | –1     2  –1 ]

    [ 1   0   0 | –8/3   8/3 –1 ]
    [ 0   1   0 | 10/3 –13/3  2 ]
    [ 0   0   1 | –1      2  –1 ]

Thus

          [ –8/3   8/3  –1 ]
    A–1 = [ 10/3 –13/3   2 ]
          [ –1      2   –1 ]

Check

    [ –8/3   8/3  –1 ] [ 1  2  3 ]   [ 1  0  0 ]
    [ 10/3 –13/3   2 ] [ 4  5  6 ] = [ 0  1  0 ]
    [ –1      2   –1 ] [ 7  8  8 ]   [ 0  0  1 ]
Example 2
Show that the following matrix is singular.
1 2 3


A  4 5 6


7 8 9
The simplest way of applying the same row operations to A and I is to juxtapose them to
form a 3×6 matrix and apply the row operations to that matrix.
    [ 1   2   3 |  1  0  0 ]
    [ 4   5   6 |  0  1  0 ]   R2 → R2 – 4·R1
    [ 7   8   9 |  0  0  1 ]   R3 → R3 – 7·R1

    [ 1   2   3 |  1   0  0 ]
    [ 0  –3  –6 | –4   1  0 ]   R2 → –(1/3)·R2
    [ 0  –6 –12 | –7   0  1 ]

    [ 1   2   3 |  1     0   0 ]   R1 → R1 – 2·R2
    [ 0   1   2 |  4/3 –1/3  0 ]
    [ 0  –6 –12 | –7     0   1 ]   R3 → R3 + 6·R2

    [ 1   0  –1 | –5/3  2/3  0 ]
    [ 0   1   2 |  4/3 –1/3  0 ]
    [ 0   0   0 |  1    –2   1 ]
Since the third row of the left-hand block is now entirely zero, the matrix cannot be reduced
to I by further row operations, and thus the original matrix A is singular and has no inverse.
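
Using the illustrative invert sketch given after Example 1, this case is detected automatically, since no non-zero pivot can be found for the third column:

    print(invert([[1, 2, 3], [4, 5, 6], [7, 8, 9]]))   # prints None: A has no inverse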
Solution of Simultaneous Equations via Row Operations
Suppose we want to solve the simultaneous equations below to find x, y and z:
x + y + z = 5
2x + 3y + z = 10
2x – y + 3z = 6
In matrix form these equations are

    [ 1   1  1 ] [ x ]   [  5 ]
    [ 2   3  1 ] [ y ] = [ 10 ]
    [ 2  –1  3 ] [ z ]   [  6 ]

and thus

    [ x ]   [ 1   1  1 ]–1 [  5 ]
    [ y ] = [ 2   3  1 ]   [ 10 ]
    [ z ]   [ 2  –1  3 ]   [  6 ]
We could find the inverse of the 3×3 matrix using row operations as in the example above
and then multiply as indicated to find the required solution. However there is a more
efficient way. To find the solution we proceed as follows:
a) apply a sequence of row operations to A to reduce it to the identity matrix I;
b) apply the same sequence of row operations to B to yield the required solution A–1B;
c) if at any stage of (a) a complete row of zeroes is obtained, the equations
either have no solutions if the corresponding entry in B is non-zero,
or infinitely many solutions if the corresponding entry in B is zero.
The simplest way of applying the same row operations to A and B is to juxtapose them to
form a 3×4 augmented matrix and apply the row operations to that matrix.
    [ 1   1  1 |  5 ]
    [ 2   3  1 | 10 ]   R2 → R2 – 2·R1
    [ 2  –1  3 |  6 ]   R3 → R3 – 2·R1

    [ 1   1  1 |  5 ]   R1 → R1 – R2
    [ 0   1 –1 |  0 ]
    [ 0  –3  1 | –4 ]   R3 → R3 + 3·R2

    [ 1   0  2 |  5 ]
    [ 0   1 –1 |  0 ]
    [ 0   0 –2 | –4 ]   R3 → –(1/2)·R3

    [ 1   0  2 |  5 ]   R1 → R1 – 2·R3
    [ 0   1 –1 |  0 ]   R2 → R2 + R3
    [ 0   0  1 |  2 ]

    [ 1   0  0 |  1 ]
    [ 0   1  0 |  2 ]
    [ 0   0  1 |  2 ]
Thus

    [ x ]   [ 1 ]
    [ y ] = [ 2 ]
    [ z ]   [ 2 ]
and so the required solution is x = 1, y = 2, z = 2.
This method of solution is known as Gaussian Elimination (after the famous German
mathematician Gauss).
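
As a minimal sketch of this procedure in Python (the function name solve is illustrative; exact arithmetic via fractions.Fraction), the augmented matrix [A | B] is reduced until the A-block becomes I, and the solution is read off from the final column:

    from fractions import Fraction

    def solve(A, B):
        """Solve A x = B by reducing the augmented matrix [A | B] so that A becomes I."""
        n = len(A)
        M = [[Fraction(x) for x in A[i]] + [Fraction(B[i])] for i in range(n)]
        for col in range(n):
            pivot = next((r for r in range(col, n) if M[r][col] != 0), None)
            if pivot is None:
                raise ValueError("no unique solution: zero row in the coefficient block")
            M[col], M[pivot] = M[pivot], M[col]          # interchange rows if necessary
            p = M[col][col]
            M[col] = [x / p for x in M[col]]             # scale the pivot row
            for r in range(n):
                f = M[r][col]
                if r != col and f != 0:                  # clear the rest of the column
                    M[r] = [a - f * b for a, b in zip(M[r], M[col])]
        return [row[n] for row in M]                     # the final column is A^-1 B

    print(solve([[1, 1, 1], [2, 3, 1], [2, -1, 3]], [5, 10, 6]))   # x = 1, y = 2, z = 2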
An alternative to full Gaussian elimination, in which the matrix is reduced to the identity
matrix, is to use row operations to reduce the matrix A to upper triangular form (that is,
all elements below the main diagonal are zero). The equations can then be solved by back
substitution. This method, known as partial Gaussian elimination with back substitution,
is slightly more efficient than full Gaussian elimination.
Example
We will solve the same system as above, but this time only using partial Gaussian elimination
followed by back substitution.
x + y + z = 5
2x + 3y + z = 10
2x – y + 3z = 6
The 3×4 augmented matrix is as above:
    [ 1   1  1 |  5 ]
    [ 2   3  1 | 10 ]   R2 → R2 – 2·R1
    [ 2  –1  3 |  6 ]   R3 → R3 – 2·R1

    [ 1   1  1 |  5 ]
    [ 0   1 –1 |  0 ]
    [ 0  –3  1 | –4 ]   R3 → R3 + 3·R2

    [ 1   1  1 |  5 ]
    [ 0   1 –1 |  0 ]
    [ 0   0 –2 | –4 ]
Thus the original set of equations has been reduced to the form:

    x + y + z = 5
        y – z = 0
         –2z = –4
We now solve the equations in reverse order by 'back substitution'. From the third equation it
follows that z = 2. Then from the second equation it follows that
y – 2 = 0
and thus y = 2. Finally from the first equation it follows that
x = 5 – y – z = 5 – 2 – 2 = 1,
and so the required solution is x = 1, y = 2, z = 2.
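
A corresponding Python sketch of partial Gaussian elimination with back substitution (again illustrative, with exact arithmetic): only the rows below each pivot are cleared, and the triangular system is then solved from the bottom up.

    from fractions import Fraction

    def solve_triangular(A, B):
        """Solve A x = B by reduction to upper triangular form and back substitution."""
        n = len(A)
        M = [[Fraction(x) for x in A[i]] + [Fraction(B[i])] for i in range(n)]
        # forward elimination: clear the entries below the main diagonal
        for col in range(n):
            pivot = next((r for r in range(col, n) if M[r][col] != 0), None)
            if pivot is None:
                raise ValueError("no unique solution")
            M[col], M[pivot] = M[pivot], M[col]
            for r in range(col + 1, n):
                f = M[r][col] / M[col][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
        # back substitution: solve for the unknowns in reverse order
        x = [Fraction(0)] * n
        for i in range(n - 1, -1, -1):
            x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
        return x

    print(solve_triangular([[1, 1, 1], [2, 3, 1], [2, -1, 3]], [5, 10, 6]))   # 1, 2, 2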
Determinants and Row Operations
Use of row operations also provides an efficient means of calculating determinants of large
matrices.
As we have seen there are three types of elementary row operation. These have the following
effects on the determinant:
Interchanging two rows of a matrix changes the sign of the determinant
Multiplying one row through by any non-zero number n multiplies the determinant by n
Adding a multiple of one row to another row has no effect on the determinant.
To simplify the calculation of a determinant we use row operations to reduce the determinant
to upper triangular form as follows:
    | 0  3  4 |
    | 4  1  5 |   (R1 ↔ R3)
    | 2  3  6 |

        | 2  3  6 |
    = – | 4  1  5 |   (R2 → R2 – 2·R1)
        | 0  3  4 |

        | 2   3   6 |
    = – | 0  –5  –7 |   (R3 → R3 + (3/5)·R2)
        | 0   3   4 |

        | 2   3    6  |
    = – | 0  –5   –7  |  =  –(2)·(–5)·(–1/5)  =  –2
        | 0   0  –1/5 |
Note that when a matrix is in upper triangular form, the determinant is simply the product of
the elements on the main diagonal. To see this simply expand the determinant (and the
resulting cofactors) by the first column.
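
A minimal Python sketch of this determinant calculation (illustrative only; its choice of row interchanges may differ from the hand working above, but the value obtained is the same): reduce to upper triangular form, flip the sign once per row interchange, and multiply the diagonal entries.

    from fractions import Fraction

    def det(A):
        """Determinant via reduction to upper triangular form."""
        n = len(A)
        M = [[Fraction(x) for x in row] for row in A]
        sign = 1
        for col in range(n):
            pivot = next((r for r in range(col, n) if M[r][col] != 0), None)
            if pivot is None:
                return Fraction(0)                   # no pivot available: determinant is 0
            if pivot != col:
                M[col], M[pivot] = M[pivot], M[col]  # interchanging rows flips the sign
                sign = -sign
            for r in range(col + 1, n):              # adding a multiple of a row: no effect
                f = M[r][col] / M[col][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
        result = Fraction(sign)
        for i in range(n):
            result *= M[i][i]                        # product of the diagonal entries
        return result

    print(det([[0, 3, 4], [4, 1, 5], [2, 3, 6]]))    # -2, as in the example above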
© A Barnes 2006