# Row Reduction

Row reduction (or Gaussian elimination) is the process of using row operations to reduce a matrix to row-reduced echelon form. The procedure is used to solve systems of linear equations, invert matrices, compute determinants, and much more.

There are three kinds of row operations. (Actually, there is some redundancy here --- you can get away with two of them.)

(a) You may swap two rows.

Here is a swap of rows 2 and 3. I'll denote it by $r_2 \leftrightarrow r_3$.

(b) You may multiply (or divide) a row by a nonzero number.

Here is row 2 multiplied by a nonzero number $c$. I'll denote this operation by $r_2 \to c\, r_2$.

(c) You may add a multiple of a row to another row.

I'll subtract 4 times row 1 from row 2. Notation: $r_2 \to r_2 - 4 r_1$.

Notice that row 1 was not affected by this operation. Likewise, if you add a multiple of row 31 to row 17, row 17 changes and row 31 does not.
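The three operations are easy to sketch in code. Here is a quick illustration on a made-up $3 \times 3$ matrix (the matrix and the helper names are mine, chosen for the example):

```python
def swap_rows(A, i, j):
    """Swap rows i and j (0-indexed)."""
    A[i], A[j] = A[j], A[i]

def scale_row(A, i, c):
    """Multiply row i by the nonzero number c."""
    assert c != 0, "the multiplier must be nonzero"
    A[i] = [c * x for x in A[i]]

def add_multiple(A, i, j, c):
    """Add c times row j to row i; row j is unchanged."""
    A[i] = [x + c * y for x, y in zip(A[i], A[j])]

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]

swap_rows(A, 1, 2)           # swap rows 2 and 3
scale_row(A, 1, 2)           # multiply the (new) row 2 by 2
add_multiple(A, 1, 0, -4)    # subtract 4 times row 1 from row 2

print(A[1])   # → [10, 8, 6]; rows 1 and 3 are untouched
```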

Example. In each case, tell whether the operation is a valid row operation. If it is, say what it does (in words).

(a) $r_5 \leftrightarrow r_3$.

This operation swaps row 5 and row 3.

(b) Adding a number to each element of a row.

This isn't a valid row operation. You can't add a number to (or subtract a number from) the elements of a row.

(c) $r_3 \to r_3 + c\, r_{17}$, where $c$ is a number.

This adds $c$ times row 17 to row 3 (and replaces row 3 with the result). Row 17 is not changed.

(d) $r_6 \to 5 r_6 + 11 r_2$.

This isn't a valid row operation, though you could accomplish it using two row operations: First, multiply row 6 by 5; next, add 11 times row 2 to the new row 6.

(e) Performing two row operations, such as a scaling and an addition, as a single step.

This is two row operations, not one. The only row operation that changes two rows at once is swapping two rows.
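The two-step workaround described in part (d) is easy to check numerically. Here is a sketch on a made-up $2 \times 2$ matrix, using rows 1 and 2 in place of rows 2 and 6, so the combined move is $r_2 \to 5 r_2 + 11 r_1$:

```python
A = [[1, 0],
     [2, 3]]

A[1] = [5 * x for x in A[1]]                      # multiply row 2 by 5
A[1] = [x + 11 * y for x, y in zip(A[1], A[0])]   # add 11 times row 1 to row 2

print(A[1])   # → [21, 15], which is 5*(old row 2) + 11*(row 1)
```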

Matrices can be used to represent systems of linear equations. Row operations are intended to mimic the algebraic operations you use to solve a system. Row-reduced echelon form corresponds to the "solved form" of a system.

A matrix is in row-reduced echelon form if the following conditions are satisfied:

(a) The first nonzero element in each row (if any) is a "1" (a leading coefficient).

(b) Each leading coefficient is the only nonzero element in its column.

(c) All the all-zero rows (if any) are at the bottom of the matrix.

(d) The leading coefficients form a "stairstep pattern" from northwest to southeast:

In this matrix, the leading coefficients are in positions $(1, j_1), (2, j_2), (3, j_3), \ldots$, where $j_1 < j_2 < j_3 < \cdots$.

Here are some more matrices in row-reduced echelon form. Notice the "stairstep pattern" made by the leading coefficients. The *'s indicate that the numbers in those positions can be anything.
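The four conditions translate directly into a checker. Here is a sketch (the function name `is_rref` is mine, and rows are represented as lists of numbers):

```python
def is_rref(A):
    """Check the four row-reduced echelon form conditions for a matrix
    given as a list of rows."""
    last_pivot_col = -1
    seen_zero_row = False
    for row in A:
        nonzero = [j for j, x in enumerate(row) if x != 0]
        if not nonzero:
            seen_zero_row = True          # (c) zero rows must stay at the bottom
            continue
        if seen_zero_row:
            return False                  # a nonzero row below an all-zero row
        j = nonzero[0]
        if row[j] != 1:
            return False                  # (a) the leading entry must be a 1
        if j <= last_pivot_col:
            return False                  # (d) the "stairstep" pattern
        last_pivot_col = j
        column = [A[i][j] for i in range(len(A))]
        if sum(1 for x in column if x != 0) != 1:
            return False                  # (b) pivot alone in its column
    return True
```

For example, `is_rref([[1, 2, 0], [0, 0, 1], [0, 0, 0]])` is true, while a matrix with a leading 7, or with a zero row on top, fails the check.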

Example. (Row-reduced echelon form) This matrix is not in row-reduced echelon form:

The first nonzero element in row 2 is a "7", rather than a "1".

This matrix is not in row-reduced echelon form:

The leading coefficient in row 3 is not the only nonzero element in its column.

This matrix is not in row-reduced echelon form:

There is an all-zero row above a nonzero row.

This matrix is not in row-reduced echelon form:

The leading coefficient in row 2 is not the only nonzero element in its column.

This matrix is not in row-reduced echelon form:

The leading coefficients do not form a "stairstep pattern" from northwest to southeast.

The following matrices are in row-reduced echelon form. You should go through the definition and check that all the properties are satisfied.

The last one is called the identity matrix, because it acts like an identity element for matrix multiplication.

Row reduction is the process of using row operations to transform a matrix into a row-reduced echelon matrix. As the algorithm proceeds, you move in stairstep fashion through different positions in the matrix. In the description below, when I say that the current position is $(i, j)$, I mean that your current location is in row $i$ and column $j$. The current position refers to a location, not the element at that location (which I'll sometimes call the current element). The current row means the row of the matrix containing the current position, and the current column means the column of the matrix containing the current position.

Some notes:

1. There are many ways to arrange the algorithm. For instance, another approach gives the $LU$-decomposition of a matrix.

2. Trying to learn to row reduce by following the steps below is pretty tedious, and most people will want to learn by doing examples. The steps are there so that, as you're learning to do this, you have some idea of what to do if you get stuck.

3. There are shortcuts you can take which don't follow the steps below. But you can get very confused if you focus on shortcuts before you've really absorbed the sense of the algorithm. The test is whether you can reliably and accurately row reduce a matrix!

4. There's no point in doing row reductions by hand forever, and for larger matrices (as would occur in real-world applications) it's impractical. At some point, you'll use a computer. However, I think it's important to do enough examples by hand that you understand the algorithm.

Algorithm: Row Reducing a Matrix

Step 1. Start with the current position at $(1, 1)$ (row 1, column 1).

Step 2. Test the element at the current position. If it's nonzero, go to Step 2(a); if it's 0, go to Step 2(b).

Step 2(a). If the element at the current position is nonzero, then:

(i) Divide all the elements in the current row by the current element. This makes the current element 1.

(ii) Add or subtract multiples of the current row from the other rows in the matrix so that all the elements in the current column (except for the current element) are 0.

(iii) Move the current position to the next row (down) and the next column (right). If doing either of these things would take you out of the matrix, then stop: The matrix is in row-reduced echelon form. Otherwise, return to the beginning of Step 2.

Step 2(b). If the element at the current position is 0, then look at the elements in the current column below the current element. There are two possibilities.

(i) If all the elements below the current element are 0, then move the current position to the next column (in the same row). If doing this would take you out of the matrix, then stop: The matrix is in row-reduced echelon form. Otherwise, return to the beginning of Step 2.

(ii) If some element below the current element is nonzero, then swap the current row and the row containing the nonzero element. Then return to the beginning of Step 2.
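Here is a sketch of the algorithm in code, following the steps above literally (the function name `rref` is mine; exact arithmetic via `Fraction` avoids roundoff, and indices are 0-based, so the current position starts at $(0, 0)$ rather than $(1, 1)$):

```python
from fractions import Fraction

def rref(A):
    """Row reduce A (a list of rows) to row-reduced echelon form."""
    A = [[Fraction(x) for x in row] for row in A]
    m, n = len(A), len(A[0])
    i = j = 0                      # Step 1: current position
    while i < m and j < n:         # Step 2: test the current element
        if A[i][j] != 0:
            # Step 2(a)(i): divide the current row by the current element.
            pivot = A[i][j]
            A[i] = [x / pivot for x in A[i]]
            # Step 2(a)(ii): clear the rest of the current column.
            for k in range(m):
                if k != i and A[k][j] != 0:
                    c = A[k][j]
                    A[k] = [x - c * y for x, y in zip(A[k], A[i])]
            # Step 2(a)(iii): move down one row and right one column.
            i, j = i + 1, j + 1
        else:
            rows = [k for k in range(i + 1, m) if A[k][j] != 0]
            if rows:
                # Step 2(b)(ii): swap in a row with a nonzero entry below.
                A[i], A[rows[0]] = A[rows[0]], A[i]
            else:
                # Step 2(b)(i): everything below is 0; move right.
                j += 1
    return A
```

For example, `rref([[2, 4], [3, 7]])` returns the identity matrix, while `rref([[1, 2], [2, 4]])` returns `[[1, 2], [0, 0]]`.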

Example. In each of the following cases, assume that the current position is $(1, 1)$.

(a)

The element in the current position is nonzero. So I divide the first row by 2:

Next, I subtract 3 times row 1 from row 2, and I add row 1 to row 3. This makes the other elements in the first column equal to 0.

Finally, I move the current position to the next row and the next column and return to the start of Step 2:

(b)

The element in the current position is 0. I look below it and see a nonzero element in the same column in row 3. So I swap row 1 and row 3; the current position remains the same, and I return to the start of Step 2.

(c)

The element in the current position is 0. There are no nonzero elements below it in the same column. I don't perform any row operations; I just move the current position to the next column (in the same row) and return to the start of Step 2:

There are two questions which arise with this algorithm:

• Why does the algorithm terminate? (Could you ever get stuck and never finish?)
• When the algorithm does terminate, why is the final matrix in row-reduced echelon form?

The first question is easy to answer. As you execute the algorithm, the current position moves through the matrix in "stairstep" fashion:

The cases in Step 2 cover all the possibilities, and in each case, you perform a finite number of row operations (no larger than the number of rows in the matrix, plus one) before you move the current position. Since you're always moving the current position to the right or to the right and down, and since the matrix has only finitely many rows and columns, you must eventually reach the edge of the matrix and the algorithm will terminate.

As for the second question, I'll give an informal argument using the matrix with the "stairstep" path pictured above.

First, if you moved the current position down and to the right, the previous current element was a 1, and every other element in its column must be 0. In the matrix with the "stairstep" path I gave above, this means that each spot where a curved arrow starts must be a 1, and all the other elements in the column with a 1 must be 0. Hence, the matrix must look like this:

(The *'s stand for elements which I don't know.)

Next, notice that if you moved the current position to the right (but not down), then the previous current element and everything below it must have been 0. In terms of the picture, every spot where a right arrow starts must be a 0, and all the elements below it must be 0. Now I know that the matrix looks like this:

Notice that this matrix is in row-reduced echelon form.

Row reduction is a key algorithm in linear algebra, and you should work through enough examples so that you understand how it works. Once you understand the algorithm, for more complex problems you may use a computer to carry out the row reduction.

Example. Row reduce the following matrix with real entries:

Notice that the first operation --- swapping the first two rows --- isn't really in accord with the algorithm. So I cheated a little, but not too much: It accomplished the aim of creating a "1" in the $(1, 1)$ position, and following the algorithm by dividing the given first row by 3 would have created a lot of ugly fractions.

Example. (Row reduction over $\mathbb{Z}_n$) Row reduce the following matrix over $\mathbb{Z}_n$:

In the computation that follows, remember that all the arithmetic is done in $\mathbb{Z}_n$, that is, modulo $n$.

Example. Row reduction is used to solve systems of linear equations. Consider the following system of linear equations over $\mathbb{R}$:

Form the auxiliary matrix of coefficients and row reduce:

The final matrix yields the equations

Thus, the system has a unique solution, which you read off from the final matrix.
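To make the procedure concrete, here is the same computation carried out on a small made-up system, $x + 2y = 5$ and $3x + 4y = 11$:

```python
from fractions import Fraction

# Augmented matrix [A | b] for the made-up system
#   x + 2y = 5
#   3x + 4y = 11
M = [[Fraction(1), Fraction(2), Fraction(5)],
     [Fraction(3), Fraction(4), Fraction(11)]]

M[1] = [x - 3 * y for x, y in zip(M[1], M[0])]   # r2 -> r2 - 3 r1
M[1] = [x / M[1][1] for x in M[1]]               # r2 -> r2 / (-2)
M[0] = [x - 2 * y for x, y in zip(M[0], M[1])]   # r1 -> r1 - 2 r2

# The final matrix is [[1, 0, 1], [0, 1, 2]], i.e. x = 1 and y = 2.
x, y = M[0][2], M[1][2]
```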

Example. It's possible for a system of linear equations to have no solutions. Such a system is said to be inconsistent. Consider the following system of equations over $\mathbb{R}$:

Form the coefficient matrix and row reduce:

The corresponding equations are

The last equation says $0 = 1$, which is impossible. Hence, the system has no solutions.

Example. Consider the case of a system over the real numbers with matrix

Suppose the variables are a, b, c, d, and e. The corresponding equations are

a and c correspond to leading coefficients. b, d, and e are called free variables. In this case, you get a parametrized solution:

Each assignment of numbers to the parameters b, d, and e produces a solution. Since you can assign any real number to each of b, d, and e, there are infinitely many solutions.
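To make the free-variable idea concrete, here is a hypothetical row-reduced matrix (the coefficients are made up) whose pivots fall in the $a$ and $c$ columns, so the equations are $a + 2b + 3d - e = 5$ and $c + 4d + 2e = 7$. The leading variables are solved for in terms of the free ones:

```python
# Hypothetical equations from a made-up row-reduced matrix:
#   a + 2b      + 3d -  e = 5
#             c + 4d + 2e = 7

def solution(b, d, e):
    """Return (a, b, c, d, e): the leading variables a and c are
    determined by the free variables b, d, e."""
    a = 5 - 2 * b - 3 * d + e
    c = 7 - 4 * d - 2 * e
    return (a, b, c, d, e)

# Every choice of the parameters gives a solution; check one of them:
a, b, c, d, e = solution(1, 0, 2)
assert a + 2 * b + 3 * d - e == 5
assert c + 4 * d + 2 * e == 7
```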

Example. (Solving a system of equations over $\mathbb{Z}_n$) Solve the following system over $\mathbb{Z}_n$:

The corresponding equations are

Set $z$ equal to a parameter and solve for w, x, and y (remembering that all the arithmetic is done in $\mathbb{Z}_n$):

Remark. A system of linear equations over the real numbers can have no solutions, a unique solution, or infinitely many solutions. (That is, such a system can't have exactly 3 solutions.)

On the other hand, if $n$ is prime, a system of linear equations over $\mathbb{Z}_n$ will have $n^k$ solutions, for some $k = 0, 1, 2, \ldots$, or no solutions. For example, a system over $\mathbb{Z}_5$ can have no solutions, one solution, 5 solutions, 25 solutions, ...
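You can see this power-of-$p$ behavior by brute force. Here is a sketch over $\mathbb{Z}_5$ with three made-up $2 \times 2$ systems (the function name `count_solutions` is mine):

```python
from itertools import product

def count_solutions(a, b, c, d, e, f, p=5):
    """Count pairs (x, y) in Z_p x Z_p satisfying
    a*x + b*y = c and d*x + e*y = f (mod p)."""
    return sum(1 for x, y in product(range(p), repeat=2)
               if (a * x + b * y - c) % p == 0
               and (d * x + e * y - f) % p == 0)

print(count_solutions(1, 2, 3, 2, 4, 1))   # dependent equations: 5 solutions
print(count_solutions(1, 2, 3, 2, 4, 2))   # inconsistent: 0 solutions
print(count_solutions(1, 2, 3, 1, 3, 0))   # independent: 1 solution
```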

Example. (Inverting a matrix) I'll discuss matrix inversion in more detail later. However, it's easy to describe how row reduction provides a systematic way to find the inverse of a matrix.

To invert a matrix, adjoin a copy of the identity matrix and row reduce the augmented matrix. When the block corresponding to the original matrix becomes the identity, the block corresponding to the identity will have become the inverse.

For example, suppose you want to invert

Adjoin a copy of the identity matrix, then do the following row reduction:

The inverse is

Of course, there is a formula for inverting a $2 \times 2$ matrix. But this procedure works with (square) matrices of any size. To explain why this algorithm works, I'll need to examine the relationship between row operations and inverses more closely.
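Here is a sketch of the $[A \mid I]$ procedure in code (the function name `invert` is mine; it uses exact arithmetic and assumes the input matrix is actually invertible, so a nonzero pivot can always be found by a row swap):

```python
from fractions import Fraction

def invert(A):
    """Invert a square matrix by row reducing [A | I]."""
    n = len(A)
    # Adjoin a copy of the identity matrix on the right.
    M = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    for j in range(n):
        # Find a row at or below position j with a nonzero entry in
        # column j, and swap it into place.
        k = next(i for i in range(j, n) if M[i][j] != 0)
        M[j], M[k] = M[k], M[j]
        # Make the pivot a 1, then clear the rest of the column.
        M[j] = [x / M[j][j] for x in M[j]]
        for i in range(n):
            if i != j and M[i][j] != 0:
                c = M[i][j]
                M[i] = [x - c * y for x, y in zip(M[i], M[j])]
    # The left block is now the identity; the right block is the inverse.
    return [row[n:] for row in M]
```

For example, `invert([[2, 1], [1, 1]])` returns the matrix `[[1, -1], [-1, 2]]` (as fractions).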

Example. Find the line of intersection of the planes

Think of the equations as a system. Write down the coefficient matrix and row reduce:

Reading the solution off the row-reduced matrix and setting the free variable equal to a parameter, the parametric solution is

These are the parametric equations of the line of intersection.
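As a concrete sketch with made-up planes $x + y + z = 2$ and $x - y + z = 0$: the line's direction is perpendicular to both normal vectors (their cross product), and a point on the line comes from solving the system with $z = 0$:

```python
def cross(u, v):
    """Cross product of two 3-vectors."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

# Normal vectors of the two made-up planes x + y + z = 2 and x - y + z = 0.
n1, n2 = (1, 1, 1), (1, -1, 1)

# The line of intersection points along the cross product of the normals.
direction = cross(n1, n2)   # (2, 0, -2)

# Setting z = 0 gives x + y = 2 and x - y = 0, so x = y = 1.
point = (1, 1, 0)

def line(t):
    """Parametric equations of the line of intersection."""
    return tuple(p + t * d for p, d in zip(point, direction))
```

Every point `line(t)` lies on both planes, which you can check by substituting back into the two equations.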

The solution of simultaneous linear equations dates back 2000 years to the *Jiuzhang Suanshu*, a collection of problems compiled in China. The systematic algebraic process of eliminating variables seems to be due to Isaac Newton. Matrix notation developed in the second half of the 1800s; what we call Gaussian elimination, applied to matrices, was developed in the first half of the 1900s by various mathematicians. Grcar [1] contains a nice historical account.

[1] Joseph Grcar, Mathematics of Gaussian elimination, *Notices of the American Mathematical Society*, 58(6) (2011), 782--792.