
NICE-LOOKING MATRICES
By now it should be clear that matrices are the backbone (computationally, at least) of the attempt to solve linear systems, or, even more precisely, of the attempt to decide which one of the three possibilities outlined previously obtains. Recall them:
A linear system may
1. have no solutions at all,
2. have exactly one solution, or
3. have infinitely many solutions.
(By the way, the textbook says that if 2 or 3 obtains the system is said to be consistent; if 1 obtains the system is called (duh!) inconsistent.)
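A quick illustration of the three possibilities (tiny examples of my own, not from the textbook):
1. x + y = 1 and x + y = 2 have no common solution (the left sides agree, so the right sides would have to agree as well).
2. x + y = 2 and x - y = 0 have exactly one solution, x = 1, y = 1.
3. x + y = 1 and 2x + 2y = 2 have infinitely many solutions (the second equation says nothing new: pick any x and set y = 1 - x).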
Let’s see how far our three simple elementary
row operations can take us.
What I will do is use a program I wrote some time
ago and show you (here) screenshots of the run,
but in real life I will show you the run.
I will in fact reproduce the algorithm shown on
pp. 15-17 of the textbook.
First, however, we need to learn the technical (in
context) meaning of a few words. We consider a
matrix
[ •  •  •  •  ....  • ]
[ •  •  •  •  ....  • ]
[ .................... ]
[ •  •  •  •  ....  • ]
(Each bullet • is an entry.) For any row, we define the leading term to mean the leftmost non-zero entry in that row.
So, the leading term of the 3rd row of the matrix
[ 2  0  -1   4  0 ]
[ 0  0   3  -2  2 ]
[ 4  5   0   0  0 ]
is 4, while the 2nd row has leading term 3.
Challenge: What does it mean to say a row has no
leading term?
Right, the row consists entirely of zeros; we call such an anomaly a zero row. (They do happen.)
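If you like to see definitions as code, here is a minimal sketch in Python (my own throwaway code and names, not the textbook's notation and not the program mentioned above) that finds the leading term of a row stored as a list of numbers, and reports None for a zero row.

def leading_term(row):
    # Sketch only (not the course program): return (column, value) of the
    # leftmost non-zero entry, or None for a zero row.
    for column, entry in enumerate(row):
        if entry != 0:
            return column, entry
    return None

# The 3-by-5 matrix from the example above (columns counted from 0 here):
M = [[2, 0, -1,  4, 0],
     [0, 0,  3, -2, 2],
     [4, 5,  0,  0, 0]]
print(leading_term(M[2]))          # (0, 4): the 3rd row has leading term 4
print(leading_term(M[1]))          # (2, 3): the 2nd row has leading term 3
print(leading_term([0, 0, 0, 0]))  # None: a zero row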
One more technical word.
A matrix M is said to be “right-on” (no need to throw a highly intimidating word at you; first we understand the concept, then the Sunday word) if it obeys the following conditions:
1 Every non-zero row is above every zero row.
2 The leading term of every row R is strictly to
the right of every leading term of any row
above R.
Your textbook has a third condition, namely
3. All entries in a column below a leading term
are zeros.
For extra credit (1 out of 100 at the end of the
semester) give me an argument that shows that
conditions 1 and 2 imply 3.
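In the same spirit, here is a minimal Python sketch (again my own names, not from the textbook; it reuses leading_term from the earlier sketch and assumes the matrix is stored as a list of rows) that checks conditions 1 and 2 directly:

def is_right_on(matrix):
    # Sketch only. Condition 1: every non-zero row is above every zero row.
    # Condition 2: leading terms move strictly to the right as we go down.
    previous_column = -1
    seen_zero_row = False
    for row in matrix:
        lead = leading_term(row)
        if lead is None:
            seen_zero_row = True      # a zero row; only zero rows may follow
        else:
            if seen_zero_row:
                return False          # a non-zero row below a zero row violates 1
            column, _ = lead
            if column <= previous_column:
                return False          # leading term not strictly to the right violates 2
            previous_column = column
    return True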
Here are two matrices; one is “right-on”, the other is not. Which is which? Why?
blue
[ 2  0  -1   4  0 ]
[ 0  0   3  -2  2 ]
[ 0  5   0   0  6 ]
[ 0  0   1   2  3 ]
or green?
[ 2  1  -1   4  0 ]
[ 0  3   3  -2  2 ]
[ 0  0   0   5  6 ]
[ 0  0   0   0  3 ]
One last word! A matrix M is said to be “really right-on” if it is right-on and also
3. Every entry in the same column and above a leading term is zero.
Both matrices shown below are right-on, but only one is really right-on. Which one? Why?
red
[ 2  1  0  0  0 ]
[ 0  3  3  0  0 ]
[ 0  0  0  5  0 ]
[ 0  0  0  0  3 ]
or green?
[ 2  0  5  0  0 ]
[ 0  3  3  0  0 ]
[ 0  0  0  5  0 ]
[ 0  0  0  0  3 ]
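And the extra condition as code (same caveats as before: a Python sketch of my own, building on the two snippets above):

def is_really_right_on(matrix):
    # Sketch only: right-on, plus every entry above a leading term is zero.
    if not is_right_on(matrix):
        return False
    for i, row in enumerate(matrix):
        lead = leading_term(row)
        if lead is None:
            continue
        column, _ = lead
        if any(matrix[k][column] != 0 for k in range(i)):
            return False              # found a non-zero entry above a leading term
    return True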
For another extra credit point, rephrase 1, 2, 3 for an alien up in space who (that? do aliens have gender?) knows numbers but has no idea what you mean by “above” or “strictly to the right”.
Extra credit is due Monday, 1/23.
Time to translate common English into impressive English.
Common              Impressive
right-on            in (row) echelon form
really right-on     in reduced (row) echelon form
If the augmented matrix of a linear system is in
reduced echelon form, then the solutions are easy
to read off (we’ll do many examples). The beauty
of our row operations is in the following
Theorem. Let A denote the augmented matrix of
a linear system. Then
A. Elementary row operations do not change the
solution set of the system.
B. An appropriate sequence of elementary row
operations produces a matrix NewA that is
in reduced echelon form.
(Note that B. says you have solved the system!)
We will provide a “hands-on” proof of the
theorem by providing the steps needed to achieve
B. for any augmented matrix.
We need one last word (for today, promise!)
A position (i, j) in the matrix A is called a pivot if it is a leading term in the reduced echelon form NewA of A.
Returning to the theorem: the proof of A. is trivial; we did it when we defined each elementary row operation.
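For instance (a tiny example of my own): the system x + y = 2, x - y = 0 has the single solution x = 1, y = 1; replacing the second equation by (second) - (first) gives x + y = 2, -2y = -2, which has exactly the same solution. Each row operation can be undone by another row operation of the same kind, which is why the solution set never changes.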
To prove B. we take an augmented matrix such as
the one exhibited in the textbook on p. 15 and
follow the steps as shown in the textbook. The
program I am using will be available to you online
soon. Here we go.
Start: [screenshot]
Next: [screenshot]
Now use (1,1) as a pivot: replace Row2 with (-1)×Row1 + Row2. You will get the following display: [screenshot]
Next use (2,2) as a pivot and (aiming for reduced echelon form) get: [screenshot]
Writing things as a linear system we get: [screenshot]
which tells us that all the solutions are given by: [screenshot]
Here x3 and x4 are called free variables and can have any value.
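For the curious, here is a rough sketch in Python of this kind of row reduction (Gauss-Jordan elimination). It is not the program used for the screenshots, just an illustration of my own; it uses exact fractions to avoid round-off and, like the textbook's reduced echelon form, it also scales each pivot to 1.

from fractions import Fraction

def reduced_echelon_form(matrix):
    # Sketch only (not the course program): return a new matrix in reduced
    # (row) echelon form, using the three elementary row operations:
    # interchange, scaling, and replacement.
    A = [[Fraction(x) for x in row] for row in matrix]
    rows, cols = len(A), len(A[0])
    pivot_row = 0
    for col in range(cols):
        # Look for a non-zero entry in this column, at or below pivot_row.
        candidates = [r for r in range(pivot_row, rows) if A[r][col] != 0]
        if not candidates:
            continue                          # no pivot in this column
        A[pivot_row], A[candidates[0]] = A[candidates[0]], A[pivot_row]   # interchange
        pivot = A[pivot_row][col]
        A[pivot_row] = [entry / pivot for entry in A[pivot_row]]          # scale the pivot to 1
        for r in range(rows):                 # replacement: clear the rest of the column
            if r != pivot_row and A[r][col] != 0:
                factor = A[r][col]
                A[r] = [a - factor * b for a, b in zip(A[r], A[pivot_row])]
        pivot_row += 1
        if pivot_row == rows:
            break
    return A

# A quick sanity check on a tiny augmented matrix of my own:
#   x + 2y = 3
#  2x + 5y = 4
print(reduced_echelon_form([[1, 2, 3],
                            [2, 5, 4]]))
# rows become [1, 0, 7] and [0, 1, -2] (printed as Fractions): x = 7, y = -2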
Study the previous example carefully; it contains all the aspects of solving linear systems but one: what happens if the last column in the augmented matrix ends up with a non-zero leading term?
This means that you have a zero row of coefficients set equal to a non-zero number, like the row [ 0 0 ... 0 | 5 ], i.e. the impossible equation 0 = 5.
Conclusion?
Right, inconsistent system.
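To see one concretely (my own tiny example, not the book's): the system x + y = 1, x + y = 2 has augmented matrix with rows [1 1 1] and [1 1 2]. Feeding it to the sketch above:

print(reduced_echelon_form([[1, 1, 1],
                            [1, 1, 2]]))
# rows become [1, 1, 0] and [0, 0, 1] (printed as Fractions)

The last row says 0x + 0y = 1, that is, 0 = 1: impossible, so the system has no solutions.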
Your book summarizes all this beautifully as Theorem 2, p. 21. Learn it!
We will do one more example from the book: Exercise 11, p. 22.