MATHEMATICS: MODULE 11122
CHAPTER 2: INTRODUCTION TO LINEAR ALGEBRA
2.0 INTRODUCTION
WHY LINEAR ALGEBRA (LA)?
(a) LA allows expression of complex systems of equations in a simple way.
(b) LA provides a shorthand method to determine whether a solution exists before it is attempted.
(c) LA gives us the techniques to solve systems of equations.
LA can only be applied to linear relationships BUT many economic relationships can
be (i) approximated by linear equations or (ii) converted into linear relationships, so
LA has wide applicability in both economic theory and econometrics.
Question : Why do relationships occur in systems?
Answer : Simultaneity
Example 2.1 Equilibrium levels in a National Income Model
National expenditure or income, Y, comprises consumer expenditure C, investment I and government expenditure G0, such that

Y = C + I + G0                (1)

In addition, consumer expenditure depends on income, Y, and the interest rate, r:

C = α + βr + γY               (2)

and investment also depends on income, Y, and the interest rate, r:

I = δ + εr + ζY               (3)

Equations (1), (2), (3) comprise a system of simultaneous equations.
Note that G0, α, β, γ, δ, ε and ζ are constants.
Example 2.2 Price equilibrium
Suppose p1, p2, p3 are the prices of three interdependent commodities and the following system of equations applies when there is equilibrium:

2p1 + 4p2 + p3 = 77
4p1 + 3p2 + 7p3 = 114
2p1 + p2 + 3p3 = 48
Find the equilibrium prices of the three products.
Using LA we can
(a) Express these systems in the simple form A x = b
(b) See if a unique solution exists by determining whether A⁻¹ is defined.
(c) Solve the system using LA techniques: x = A⁻¹ b

• Key point is that LA techniques work for any size of system of equations.
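For illustration only (the module's own computing exercises use a package such as Excel or Stata), the system in Example 2.2 can be set up and solved along exactly these lines in Python with NumPy; the plus signs in the equations are taken as printed above, which is an assumption.

import numpy as np

# Coefficient matrix and right-hand side for Example 2.2 (signs as printed)
A = np.array([[2.0, 4.0, 1.0],
              [4.0, 3.0, 7.0],
              [2.0, 1.0, 3.0]])
b = np.array([77.0, 114.0, 48.0])

print(np.linalg.det(A))       # non-zero, so A is non-singular and A^(-1) exists
print(np.linalg.solve(A, b))  # the equilibrium prices p1, p2, p3
print(np.linalg.inv(A) @ b)   # the same answer, written as x = A^(-1) b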
2.1 Some Definitions
2.1.1 A scalar is a single real number such as 2 or −3.
2.1.2 A matrix is an (m x n) rectangular array of scalars, where m = no. of rows and n = no. of columns:

A = [aij] = [ a11  a12  ...  a1n ]
            [ a21  a22  ...  a2n ]
            [  .    .          . ]
            [ am1  am2  ...  amn ]

for i = 1, 2, ..., m and j = 1, 2, ..., n.

aij is the element in the ith row and jth column of A.
2.1.3 A vector is an array of n scalars, or elements, arranged in either a row or
a column and denoted
x = [x1, x2, ..., xn] for a (1 x n) row vector, and

x = [ x1 ]
    [ x2 ]
    [ .. ]
    [ xn ]

for an (n x 1) column vector.
2.2 Matrices and Matrix Operations
2.2.1 Suppose that A and B are two (m x n) matrices,
A = [ a11  ...  a1n ]        B = [ b11  ...  b1n ]
    [  .          . ]            [  .          . ]
    [ am1  ...  amn ]            [ bm1  ...  bmn ]

then the sum of the matrices is defined as C = A + B, where cij = aij + bij for i = 1, ..., m and j = 1, ..., n.

Note.   A + B = B + A                  (commutative law)
        (A + B) + D = A + (B + D)      (associative law)

Example 2.3

If A = [ 3  4 ]   and B = [  4  7 ]
       [ 1  0 ]           [ 13  3 ]
       [ 0  5 ]           [  5  0 ]

find A + B.
2.2.2 Scalar multiplication of a matrix
Let λ be a scalar and A an (m x n) matrix. We define

C = λA, so that cij = λ aij     for i = 1, ..., m and j = 1, ..., n.

λ(A + B) = λA + λB              (distributive law)
(λ + μ)A = λA + μA

for scalars λ and μ and matrix A.
Example 2.4
If A = [ 3  4 ]   and B = [  4  7 ]
       [ 1  0 ]           [ 13  3 ]
       [ 0  5 ]           [  5  0 ]

find 2A − 5B.
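As a quick check of Examples 2.3 and 2.4, here is a NumPy sketch (for illustration only; the entries of A and B are taken exactly as printed above, which is an assumption about the original):

import numpy as np

A = np.array([[3, 4], [1, 0], [0, 5]])
B = np.array([[4, 7], [13, 3], [5, 0]])

print(A + B)       # Example 2.3: the sum is taken element by element
print(2*A - 5*B)   # Example 2.4: scalar multiplication combined with subtraction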
2.2.3 Matrix Product
Let A be an (m x n) matrix and B an (n x p) matrix.
Then their product C = AB is an (m x p) matrix with elements:
cij = ai1 b1j + ai2 b2j + ... + ain bnj     for i = 1, ..., m and j = 1, ..., p
Matrices are said to be conformable for multiplication if the number of columns in the
first matrix A equals the number of rows in the second matrix B.
Note (i)   (AB)C = A(BC) for matrices A, B and C.
     (ii)  A(B + C) = AB + AC
     (iii) λ(AB) = (λA)B = A(λB) for scalar λ.
It is important to note that BA ≠ AB except in special cases.
Example 2.5
3 2
Let A = 
 and B =
1 2
4 2 
1 3


0 1
Calculate where possible the following : (i) AB (ii) BA
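Conformability can be checked mechanically. In this NumPy sketch (illustration only), AB is not defined because A has 2 columns while B has 3 rows, but BA is defined:

import numpy as np

A = np.array([[3, 2], [1, 2]])          # (2 x 2)
B = np.array([[4, 2], [1, 3], [0, 1]])  # (3 x 2)

print(B @ A)   # BA: (3 x 2)(2 x 2) -> (3 x 2), so the product is defined
# A @ B would raise an error because (2 x 2)(3 x 2) is not conformable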
2.2.4 The transpose of a matrix A, denoted A′, is derived by interchanging the rows and columns of the matrix A such that row i of A becomes column i of A′, i.e.:

If A = [ a11  ...  a1n ]   (m x n)   then   A′ = [ a11  ...  am1 ]   (n x m)
       [  .          . ]                         [  .          . ]
       [ am1  ...  amn ]                         [ a1n  ...  amn ]

so that if A is (m x n) then A′ is (n x m).

Useful results:
(i)   (A′)′ = A
(ii)  (A + B)′ = A′ + B′
(iii) (AB)′ = B′A′
(iv)  (ABC)′ = C′B′A′
2.2.5 The square matrix A is said to be symmetric if A′ = A. Symmetry is only possible for square matrices.
2.2.6 A square matrix A is said to be idempotent if A = AA = A².
Example 2.6 For the matrices A and B in Example 2.5:
(a) Calculate where possible the following: (i) A′B′ (ii) A + B′B (iii) B + A′A (iv) A²
(b) Verify that (BA)′ = A′B′.
(c) Is A a symmetric or idempotent matrix?
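A NumPy sketch for checking Example 2.6 (illustration only; the placement of the transposes in parts (a)(ii) and (a)(iii) above is an assumption about the original wording):

import numpy as np

A = np.array([[3, 2], [1, 2]])
B = np.array([[4, 2], [1, 3], [0, 1]])

print(A.T @ B.T)                 # (a)(i): A'B' is (2 x 2)(2 x 3) -> (2 x 3)
print(A + B.T @ B)               # (a)(ii): B'B is (2 x 2), so the sum is defined
# (a)(iii): B + A'A is not defined, since B is (3 x 2) but A'A is (2 x 2)
print(A @ A)                     # (a)(iv): A squared
print(np.array_equal((B @ A).T, A.T @ B.T))   # (b): True
print(np.array_equal(A, A.T))                 # (c): symmetric?  False
print(np.array_equal(A @ A, A))               # (c): idempotent? False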
2.2.7 The null matrix O is a matrix whose elements are all zeroes, i.e.

O = [ 0  0  ...  0 ]
    [ 0  0  ...  0 ]
    [ .  .       . ]
    [ 0  0  ...  0 ]

Note that if AB = O, the null matrix, it does not necessarily imply that either A = O or B = O.
2.2.8 The identity matrix is written as

I n = [ 1  0  0  ...  0 ]
      [ 0  1  0  ...  0 ]
      [ 0  0  1  ...  0 ]
      [ .  .  .       . ]
      [ 0  0  0  ...  1 ]

and must be square (i.e. m = n). So the identity matrix of order three is:

I 3 = [ 1  0  0 ]
      [ 0  1  0 ]
      [ 0  0  1 ]

(The subscript of I denotes the number of rows or columns.)

N.B: If A is (m x n) then I m A = A I n = A.
2.2.9 Matrices A and B are said to be equal if they have the same number of rows and columns and aij = bij for all i and j.
2.2.10 A diagonal matrix is a square matrix which has entries only on the main
(or leading) diagonal and zeroes elsewhere, i.e.:
D = [ d11  0    0    ...  0   ]
    [ 0    d22  0    ...  0   ]
    [ 0    0    d33  ...  0   ]
    [ .    .    .         .   ]
    [ 0    0    0    ...  dnn ]
2.2.11 The trace of a square matrix is the sum of the elements on the main diagonal, i.e. if A is (n x n) then

trace(A) = a11 + a22 + ... + ann.

It follows that:
(i)   trace(A + B) = trace(A) + trace(B)
(ii)  trace(A′) = trace(A)
(iii) trace(λA) = λ trace(A), for scalar λ
(iv)  trace(AB) = trace(BA) if the matrices are conformable for multiplication in the order AB and BA.
3 2
1 3
Example 2.7 Given matrices A = 
and B = 

 , verify the above
1 2
 0 4
results using the two matrices.
Solution

(i) A + B = [ 4  5 ]   so trace(A + B) = 4 + 6 = 10.
            [ 1  6 ]

trace(A) = 3 + 2 = 5 and trace(B) = 1 + 4 = 5, so trace(A) + trace(B) = 10,
i.e. trace(A + B) = trace(A) + trace(B).

(ii) Now A′ = [ 3  1 ]   and trace(A′) = 3 + 2 = 5,
              [ 2  2 ]
so trace(A′) = trace(A).

(iii) λA = [ 3λ  2λ ]   so trace(λA) = 3λ + 2λ = 5λ.
           [  λ  2λ ]
λ trace(A) = 5λ, so trace(λA) = λ trace(A).

(iv) AB = [ 3  2 ] [ 1  3 ] = [ 3×1 + 2×0   3×3 + 2×4 ] = [ 3  17 ]
          [ 1  2 ] [ 0  4 ]   [ 1×1 + 2×0   1×3 + 2×4 ]   [ 1  11 ]

BA = [ 1  3 ] [ 3  2 ] = [ 1×3 + 3×1   1×2 + 3×2 ] = [ 6  8 ]
     [ 0  4 ] [ 1  2 ]   [ 0×3 + 4×1   0×2 + 4×2 ]   [ 4  8 ]

(Notice that AB ≠ BA, which is usually the case.)

trace(AB) = 3 + 11 = 14 and trace(BA) = 6 + 8 = 14, so
trace(AB) = trace(BA)
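The same checks can be run mechanically. A minimal NumPy sketch (illustration only), with λ replaced by the arbitrary value 2:

import numpy as np

A = np.array([[3, 2], [1, 2]])
B = np.array([[1, 3], [0, 4]])
lam = 2  # an arbitrary scalar standing in for lambda

print(np.trace(A + B) == np.trace(A) + np.trace(B))  # (i)   True
print(np.trace(A.T) == np.trace(A))                  # (ii)  True
print(np.trace(lam * A) == lam * np.trace(A))        # (iii) True
print(np.trace(A @ B) == np.trace(B @ A))            # (iv)  True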
2.2.12 A system of linear equations can be written as
Ax = b
where A is an (m x n) matrix, x an (n x 1) column vector and b an (m x 1)
column vector.
Example 2.8 Write equations (1), (2) and (3) in Example 2.1 in the form A x = b, given you are solving the equations for Y, C and I, i.e. Y, C and I are endogenous variables (see Chapter 3 later for more on endogenous variables).
2.3 Determinants and the inverse of a matrix
(only defined for square matrices)
2.3.1 The determinant of a square matrix A, written as det(A) or |A|, is a unique scalar associated with that matrix.
2 x 2 matrices

If A = [ a11  a12 ]   then   det(A) = |A| = a11 a22 − a12 a21.
       [ a21  a22 ]
3 x 3 matrices

Suppose A = [ a11  a12  a13 ]
            [ a21  a22  a23 ]
            [ a31  a32  a33 ]

then det(A) can be evaluated by expanding along any row or column and is best illustrated by an example.
If there are any zeroes in any row or column of the determinant, then it is easier and hence quicker to expand the determinant about that particular row or column.
1 0 3
Example 2.9 Suppose A = 2 5 4 find det(A)
 3 1 2 
(i) expanding along row 1
(ii) expanding along column 3.
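A sketch of the two cofactor expansions for Example 2.9 (Python with NumPy, for illustration only; the entries are taken exactly as printed above, and the helper function minor is hypothetical rather than part of any library):

import numpy as np

A = np.array([[1.0, 0.0, 3.0],
              [2.0, 5.0, 4.0],
              [3.0, 1.0, 2.0]])

def minor(M, i, j):
    """Return M with row i and column j deleted."""
    return np.delete(np.delete(M, i, axis=0), j, axis=1)

# (i) expand along row 1 (index 0): det(A) = sum over j of (-1)^(0+j) a_0j det(minor)
det_row1 = sum((-1) ** j * A[0, j] * np.linalg.det(minor(A, 0, j)) for j in range(3))

# (ii) expand along column 3 (index 2): det(A) = sum over i of (-1)^(i+2) a_i2 det(minor)
det_col3 = sum((-1) ** (i + 2) * A[i, 2] * np.linalg.det(minor(A, i, 2)) for i in range(3))

print(round(det_row1, 6), round(det_col3, 6), round(np.linalg.det(A), 6))  # all three agree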
2.3.2 A matrix A is said to be non-singular if and only if det(A) ≠ 0.
If det(A) = 0, then A is said to be a singular matrix.
2.3.3 If A is an (n x n) non-singular matrix, there exists a unique matrix, denoted A⁻¹ and called the inverse of A, such that A⁻¹A = AA⁻¹ = I n, where I n is the identity matrix (see 2.2.8).
2 x 2 matrices

If A = [ a11  a12 ]   then   A⁻¹ = (1 / |A|) [  a22  −a12 ]
       [ a21  a22 ]                          [ −a21   a11 ]
3 x 3 and larger matrices
It is easy to invert certain matrices such as a diagonal matrix or the identity
matrix whatever their size.
Example (a) Suppose D is an (n x n) diagonal matrix, i.e.

D = [ d11  0    0    ...  0   ]
    [ 0    d22  0    ...  0   ]
    [ 0    0    d33  ...  0   ]
    [ .    .    .         .   ]
    [ 0    0    0    ...  dnn ]

then

D⁻¹ = [ 1/d11   0       0       ...  0     ]
      [ 0       1/d22   0       ...  0     ]
      [ 0       0       1/d33   ...  0     ]
      [ .       .       .            .     ]
      [ 0       0       0       ...  1/dnn ]
(b) If we invert the identity matrix I n, the inverse matrix is the identity matrix, i.e. the identity matrix is unchanged by inversion: I n⁻¹ = I n.

Generally though it is not so easy to invert matrices of a higher order than 2 x 2 without using a computer package. You can use the cofactor method, but these days most people use a computer package such as Microsoft Excel or Stata to invert a matrix bigger than 2 x 2. You will cover this in the first of two computing exercises for this module.
3 2
Example 2.10 (i) Invert the matrix A = 
.
1 2
 7 3 3
0  is the inverse of matrix
(ii) Verify that  1 1
 1 0
1 
1 3 3
1 4 3 .


1 3 4
2.4. Solution of systems of equations
2.4.1 Inverse matrix method.
From section 2.0 we know that we can use linear algebra to
(a) Express these systems in the simple form A x = b
(b) See if a unique solution exists by determining whether A⁻¹ is defined.
(c) Solve the system using LA techniques: x = A⁻¹ b

If A is a singular matrix (i.e. det(A) = 0) then there is not a unique solution to the system of equations, but if A is non-singular, then A⁻¹ exists and the unique solution is given by x = A⁻¹ b.
Example 2.11 Determine the equilibrium prices of three interdependent
commodities which satisfy
p1 + 3p2 + 3p3 = 32
p1 + 4p2 + 3p3 = 37
p1 + 3p2 + 4p3 = 35
using the inverse matrix method.
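The coefficient matrix here is exactly the matrix whose inverse was verified in Example 2.10(ii), so the inverse matrix method can be applied directly. A NumPy sketch (illustration only):

import numpy as np

A = np.array([[1.0, 3.0, 3.0], [1.0, 4.0, 3.0], [1.0, 3.0, 4.0]])
b = np.array([32.0, 37.0, 35.0])
Ainv = np.array([[7.0, -3.0, -3.0], [-1.0, 1.0, 0.0], [-1.0, 0.0, 1.0]])  # from Example 2.10(ii)

print(Ainv @ b)               # p = A^(-1) b gives p1 = 8, p2 = 5, p3 = 3
print(np.linalg.solve(A, b))  # the same prices, computed without forming the inverse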
2.4.2 Cramer’s rule
Example 2.12 Solve the system of equations in Example 2.8, given by

Y − C − I = G0
−γY + C = α + βr
−ζY + I = δ + εr

using Cramer's rule.
Cramer's rule states that the elements of the solution vector x can be found by

xi = det(Ai) / det(A),    where x′ = [x1, x2, ..., xn]

and Ai (i = 1, 2, ..., n) is the matrix in which the ith column of A is replaced by the vector b.

Solution

First write the system in the form A x = b with x′ = [Y, C, I], so that b′ = [G0, α + βr, δ + εr].
Then find det(A). Now expanding along row 3,

det(A) = | 1   −1   −1 |
         | −γ   1    0 |  =
         | −ζ   0    1 |

To find Y or x1:

x1 = Y = det(A1) / det(A)

Expanding the determinant A1 along row 2,

det(A1) = | G0        −1   −1 |
          | α + βr     1    0 |  =
          | δ + εr     0    1 |

Notice that the first column of A has been replaced by the b vector.

So x1 = Y = det(A1) / det(A) =

To find C or x2:

x2 = C = det(A2) / det(A)

det(A2) = | 1    G0        −1 |
          | −γ   α + βr     0 |  =
          | −ζ   δ + εr     1 |

expanding the determinant A2 along column 3.

Notice that the second column of A has been replaced by the b vector.

So x2 = C = det(A2) / det(A) =

Finally, to find I or x3:

x3 = I = det(A3) / det(A)

Expanding the determinant A3 along row 3,

det(A3) = | 1    −1   G0     |
          | −γ    1   α + βr |  =
          | −ζ    0   δ + εr |

Notice that the third column of A has been replaced by the b vector.

So x3 = I = det(A3) / det(A) =
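Cramer's rule is straightforward to implement for a purely numerical system. A sketch (Python with NumPy, for illustration only; the function cramer is a hypothetical helper, applied here to the price system of Example 2.11):

import numpy as np

def cramer(A, b):
    """Solve Ax = b by Cramer's rule: x_i = det(A_i) / det(A),
    where A_i is A with its i-th column replaced by b."""
    detA = np.linalg.det(A)
    if np.isclose(detA, 0.0):
        raise ValueError("det(A) = 0: the system has no unique solution")
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b                        # replace column i of A by the b vector
        x[i] = np.linalg.det(Ai) / detA
    return x

A = np.array([[1.0, 3.0, 3.0], [1.0, 4.0, 3.0], [1.0, 3.0, 4.0]])
b = np.array([32.0, 37.0, 35.0])
print(cramer(A, b))                         # agrees with the inverse matrix method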
2.5 Principal minors and positive and negative definiteness
Suppose we have a 2 x 2 matrix

A = [ a11  a12 ]
    [ a21  a22 ]

Then the first and second principal minors of matrix A are a11 and det(A) respectively.

Suppose we have a 3 x 3 matrix

A = [ a11  a12  a13 ]
    [ a21  a22  a23 ]
    [ a31  a32  a33 ]

then the first, second and third principal minors of matrix A are

a11,    | a11  a12 |    and det(A)
        | a21  a22 |

respectively.
Positive definiteness A symmetric square matrix A is positive definite if all its
principal minors are positive.
Negative definiteness A symmetric square matrix A is negative definite if its
principal minors oscillate in sign with the first principal minor being negative.
For example if A is 3 x 3 then the first principal minor is negative, the second is
positive and the third is negative.
 1 1 0.5
Example 2.13 If A =  1 2 0  show that A is positive definite.
0.5 0 1 
2.6 Useful properties of determinants and inverse matrices (for information only)
2.6.1 Properties of det(A)
(a) If any single row (column) is multiplied by a scalar λ, then the determinant is also multiplied by λ.
(b) det(A′) = det(A).
(c) Interchanging any two rows ( or two columns) reverses the sign of the determinant
but does not change its numerical value.
(d) If B is an ( n x n) matrix, then if C = AB, det (C) = det (A) . det(B) .
(e) If λ is a scalar, then det(λA) = λⁿ det(A).
i n
(f) If A is a diagonal matrix (see 2.2.10), then det (A) =
a
ii
i.e. det (A) is the
i 1
product of the diagonal elements of A.
A particular example of this is the identity matrix I n ( see 2.2.8)
so det ( I n ) = 1. 1. 1.......... 1 = 1n = 1.
2.6.2 Properties of the inverse matrix A⁻¹

(a) If A is (n x n) then A⁻¹ is also (n x n).
(b) (A⁻¹)⁻¹ = A.
(c) If B is also non-singular (i.e. det(B) ≠ 0) and (n x n), and if C = AB, then
    C⁻¹ = (AB)⁻¹ = B⁻¹ A⁻¹.
(d) (A′)⁻¹ = (A⁻¹)′, i.e. the operations of transposing a matrix and inverting a matrix are interchangeable.
(e) Using the results in 2.6.1 parts (d) and (f) above,
    det(A A⁻¹) = det(A) . det(A⁻¹) = det(I n) = 1, so det(A⁻¹) = 1 / det(A).
(f) If A is also diagonal, then A⁻¹ is also diagonal, with its diagonal elements being the reciprocals of the elements of A.
Example

Suppose A = [ 2  0  0 ]   then   A⁻¹ = [ 1/2  0    0   ]
            [ 0  4  0 ]                [ 0    1/4  0   ]
            [ 0  0  3 ]                [ 0    0    1/3 ]

Associated reading

Bradley and Patton, Chapter 9, sections 9.2 - 9.5.
C. Osborne October 2001