Matrix Multiplication and Elimination

Learning Goals: The matrix form of elimination and row swaps is developed.

One of our pictures of matrix multiplication was to find AB by taking linear combinations of the rows of B weighted by the entries in the rows of A. This leads to a great idea. Say we have the augmented matrix for a system:

\[ \begin{bmatrix} 2 & 1 & 1 & 1 \\ 4 & 1 & 0 & -2 \\ -2 & 2 & 1 & 7 \end{bmatrix}. \]

The first elimination step is to subtract two times the first row from the second. But that is really just a linear combination of the rows of this matrix, so it can be carried out by a matrix multiplication:

\[ \begin{bmatrix} 1 & 0 & 0 \\ -2 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 2 & 1 & 1 & 1 \\ 4 & 1 & 0 & -2 \\ -2 & 2 & 1 & 7 \end{bmatrix} = \begin{bmatrix} 2 & 1 & 1 & 1 \\ 0 & -1 & -2 & -4 \\ -2 & 2 & 1 & 7 \end{bmatrix}, \]

which is just the matrix we get after applying the first elimination step. The matrix on the left will be called E21: "E" for elementary (or elimination), and the subscript because row 2 is changed using row 1 (or because the only difference between this matrix and I is the nonzero entry in the 21-position). In fact, any elimination step can be carried out by an appropriate E: just alter I by putting the multiplier in the appropriate spot. Alternatively, if we need to know what matrix will cause some particular effect, just apply that effect to I, for EI = E.

Instead of thinking about the augmented matrix, we could have worked with the original matrix equation Ax = b. If we multiply both sides by E, we get EAx = Eb. If x makes the first equation true, then the second is automatically true as well. But we don't create any new solutions, either, because if x makes the new system true, we could multiply by E⁻¹, which is just E with the sign of its off-diagonal entry changed. This adds back the row we just subtracted, undoing E and returning us to our original system Ax = b, so any new solution is also an old solution. We will have more on inverse matrices later.

What if we wanted to do the other basic operations, swapping rows or multiplying a row by a constant? Just as we did for the E's, if we want to create some effect, we just do that effect to I. Let's find the matrix P23 that swaps rows 2 and 3. Since P23 I = P23, P23 is just I with its second and third rows swapped:

\[ P_{23} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{bmatrix}. \]

Similarly, to multiply a row through by a constant, do the same to I. For instance, the matrix that multiplies the third row by 7 is

\[ \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 7 \end{bmatrix}. \]

Since we can also think of matrix multiplication as taking linear combinations of the columns of the matrix on the left weighted by entries in the matrix on the right, we can also use our elementary matrices to add a multiple of one column to another, swap two columns, or multiply a single column through by a number. This time, though, we multiply by the elementary matrix on the right instead of on the left. Why we might want to do this will have to wait. (Short numerical sketches of these row and column operations appear after the problem list below.)

There is a cute example in the text about elimination and block multiplication, showing how entire columns can be eliminated at once.

Reading: 2.3, 2.4
Problems: 2.3: 1 – 5, 8, 12, 13, 14, 17, 18, 19 – 23, 25, 26, 28, 30 (5, 14, 17, 22)
2.4: 1, 2, 5, 6, 7, 9, 10, 11, 13, 14, 17, 18, 19, 20 – 24, 26, 27, 32, 33, 34, 37
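For readers who want to check the arithmetic, here is a minimal NumPy sketch of the elimination step above and its inverse. It is not from the text; the names M, E21, and E21_inv are labels chosen here for the sketch.

```python
import numpy as np

# Augmented matrix [A | b] from the example in this section.
M = np.array([[ 2., 1., 1.,  1.],
              [ 4., 1., 0., -2.],
              [-2., 2., 1.,  7.]])

# E21: the identity with the multiplier -2 placed in the 21-position
# (row index 1, column index 0 when counting from zero).  Multiplying
# on the left subtracts 2 times row 1 from row 2.
E21 = np.eye(3)
E21[1, 0] = -2.0
print(E21 @ M)        # second row becomes [0, -1, -2, -4]

# The inverse just flips the sign of the off-diagonal entry: it adds
# the row back, undoing the elimination step.
E21_inv = np.eye(3)
E21_inv[1, 0] = 2.0
print(np.allclose(E21_inv @ (E21 @ M), M))        # True
print(np.allclose(E21_inv, np.linalg.inv(E21)))   # True
```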
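A similar sketch, again with names of my own choosing, builds the row-swap matrix P23 and the matrix that multiplies the third row by 7; both act by left multiplication.

```python
import numpy as np

M = np.array([[ 2., 1., 1.,  1.],
              [ 4., 1., 0., -2.],
              [-2., 2., 1.,  7.]])

# P23: the identity with rows 2 and 3 swapped.  Left-multiplying by it
# swaps rows 2 and 3 of M.
P23 = np.array([[1., 0., 0.],
                [0., 0., 1.],
                [0., 1., 0.]])
print(P23 @ M)

# Diagonal matrix that multiplies the third row of M by 7.
D7 = np.diag([1., 1., 7.])
print(D7 @ M)
```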
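Finally, a sketch of the column operations. Multiplying on the right acts on columns, and since the example matrix has four columns the right-hand factors must be 4 × 4; note also that the multiplier now sits in the 12-position to change column 2 using column 1, the transpose of where it sat in the row case. These details are assumptions spelled out for the sketch, not quoted from the text.

```python
import numpy as np

M = np.array([[ 2., 1., 1.,  1.],
              [ 4., 1., 0., -2.],
              [-2., 2., 1.,  7.]])

n = M.shape[1]   # 4 columns, so the right-hand factors are 4 x 4

# Add 3 times column 1 to column 2: identity with a 3 in the
# 12-position (row index 0, column index 1 when counting from zero).
E = np.eye(n)
E[0, 1] = 3.0
print(M @ E)     # column 2 becomes (column 2) + 3*(column 1)

# Swap columns 2 and 3: identity with those columns swapped.
Q = np.eye(n)
Q[:, [1, 2]] = Q[:, [2, 1]]
print(M @ Q)

# Multiply column 3 by 7.
print(M @ np.diag([1., 1., 7., 1.]))
```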