* Definition.* Let A be an $m \times n$ matrix over a field $F$.

(a) The * row vectors* of A are the vectors in $F^n$ corresponding to the rows of A. The * row space* of A is the subspace of $F^n$ spanned by the row vectors of A.

(b) The * column vectors* of A are the vectors in $F^m$ corresponding to the columns of A. The * column space* of A is the subspace of $F^m$ spanned by the column vectors of A.

* Example.* Consider the real matrix

The row vectors are the three rows of the matrix, regarded as vectors in $\mathbb{R}^2$. The row space is the subspace of $\mathbb{R}^2$ spanned by these vectors. Since the first two row vectors are the standard basis vectors for $\mathbb{R}^2$, the row space is all of $\mathbb{R}^2$.

The column vectors are the two columns of the matrix, regarded as vectors in $\mathbb{R}^3$. The column space is the subspace of $\mathbb{R}^3$ spanned by these vectors. Thus, the column space consists of all linear combinations of the columns:

* Lemma.* If E is an elementary row operation and A is an $m \times n$ matrix, then $E(A)$ has the same row space as A.

* Proof.* If E is an operation of the form $r_i \leftrightarrow r_j$, then $E(A)$ and A have the same rows (except for order), so it's clear that their row vectors have the same span.

If E is an operation of the form $r_i \to a r_i$ (where $a \ne 0$), then A and $E(A)$ agree except in the i-th row. Since

$$c_1 r_1 + \cdots + c_i r_i + \cdots + c_m r_m = c_1 r_1 + \cdots + \frac{c_i}{a}\,(a r_i) + \cdots + c_m r_m,$$

any element of the row space of A is in the row space of $E(A)$, and vice versa.

Finally, suppose E is a row operation of the form $r_i \to r_i + a r_j$ (where $i \ne j$). Then

$$c_1 r_1 + \cdots + c_i r_i + \cdots + c_j r_j + \cdots + c_m r_m = c_1 r_1 + \cdots + c_i (r_i + a r_j) + \cdots + (c_j - a c_i) r_j + \cdots + c_m r_m,$$

which shows that the row space of A is contained in the row space of $E(A)$.

Conversely,

$$c_1 r_1 + \cdots + c_i (r_i + a r_j) + \cdots + c_j r_j + \cdots + c_m r_m = c_1 r_1 + \cdots + c_i r_i + \cdots + (c_j + a c_i) r_j + \cdots + c_m r_m,$$

so the row space of $E(A)$ is contained in the row space of A.

* Definition.* Two matrices are * row equivalent* if one can be obtained from the
other via elementary row operations.

Since row operations preserve row space, row equivalent matrices have the same row space. In particular, a matrix and its row reduced echelon form have the same row space.

* Lemma.* Let R be a row reduced echelon matrix with nonzero rows $r_1$, ..., $r_p$. Suppose the leading coefficients of R occur at columns $j_1 < j_2 < \cdots < j_p$.

If $1 \le k \le p$ and

$$v = c_1 r_1 + c_2 r_2 + \cdots + c_p r_p,$$

then $v_{j_k} = c_k$.

In other words, for a vector in the row space of R, the component in a leading-coefficient column $j_k$ is exactly the coefficient of $r_k$ in its expression as a combination of the rows.

* Proof.* The $j_k$-th component of v is

$$v_{j_k} = c_1 (r_1)_{j_k} + c_2 (r_2)_{j_k} + \cdots + c_p (r_p)_{j_k}.$$

But the only nonzero element in column $j_k$ of R is the leading coefficient $(r_k)_{j_k} = 1$. Therefore, the only nonzero term in the sum is $c_k (r_k)_{j_k} = c_k$.

* Example.* Here is a row reduced echelon
matrix over :

Note that , , and . Consider the following element of the row space:

Then

* Corollary.* The nonzero rows of a row reduced
echelon matrix are independent.

* Proof.* Suppose R is a row reduced echelon matrix with nonzero rows $r_1$, ..., $r_p$. Suppose the leading coefficients of R occur at columns $j_1, \ldots, j_p$, where $j_1 < j_2 < \cdots < j_p$. If

$$c_1 r_1 + c_2 r_2 + \cdots + c_p r_p = 0,$$

then the lemma implies that $c_k = 0$ for all k. Therefore, the $r_k$ are independent.

* Corollary.* The nonzero rows of a row reduced
echelon matrix form a basis for the row space of the matrix.

* Proof.* The nonzero rows span the row space,
and are independent, by the preceding corollary.

* Algorithm.* Let V be a finite-dimensional vector space, and let $v_1$, ..., $v_m$ be vectors in V. The object is to find a basis for $W$, the subspace spanned by the $v_i$.

Let M be the matrix whose i-th row is $v_i$. The row space of M is W. Let R be a row reduced echelon matrix which is row equivalent to M. Then R and M have the same row space W, and the nonzero rows of R form a basis for W.
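The algorithm is easy to carry out with software. Here is a minimal sketch using Python's sympy library, with some made-up vectors for illustration: row reduce the matrix whose rows are the given vectors, and keep the nonzero rows of the result.

```python
from sympy import Matrix

# Made-up vectors spanning a subspace W; v2 = 2 * v1, so it is redundant.
v1 = [1, 2, 3]
v2 = [2, 4, 6]
v3 = [0, 1, 1]

M = Matrix([v1, v2, v3])        # the i-th row of M is v_i
R, pivot_cols = M.rref()        # R is the row reduced echelon form of M

# The nonzero rows of R form a basis for the row space W.
basis = [list(R.row(i)) for i in range(R.rows) if any(R.row(i))]
print(basis)                    # prints [[1, 0, 1], [0, 1, 1]]
```

Here the redundant vector drops out: the span of the three vectors is 2-dimensional, and the two nonzero rows of R form a basis for it.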

* Example.* Consider the vectors , , and
in . I'll find a basis for the subspace spanned by the vectors. Construct a matrix
with the as its rows and row reduce:

The vectors and form a basis for .

* Example.* Determine the dimension of the
subspace of spanned by , , and .

The subspace has dimension 3, since the row reduced echelon matrix has 3 nonzero rows.

* Definition.* The * rank*
of a matrix is the dimension of its row space.

* Example.* Consider the following matrix over
:

Row reduce it:

The row reduced echelon matrix has 2 nonzero rows. Therefore, the original matrix has rank 2.

I'll need the following fact about matrix multiplication for the proof of the next lemma. Consider the following multiplication, in which $r_1$, $r_2$, ..., $r_m$ are the rows of the matrix on the right:

$$\begin{bmatrix} a_1 & a_2 & \cdots & a_m \end{bmatrix} \begin{bmatrix} r_1 \\ r_2 \\ \vdots \\ r_m \end{bmatrix}$$

In doing the multiplication, each $a_i$ multiplies the corresponding row $r_i$. Therefore,

$$\begin{bmatrix} a_1 & a_2 & \cdots & a_m \end{bmatrix} \begin{bmatrix} r_1 \\ r_2 \\ \vdots \\ r_m \end{bmatrix} = a_1 r_1 + a_2 r_2 + \cdots + a_m r_m.$$

If instead of a single row vector on the left I have an entire matrix, the same reasoning applies to each row of the left-hand matrix separately.

Here's the point: * The rows of the product are linear combinations of the rows $r_1$, $r_2$, ..., $r_m$.*

* Lemma.* Let M and N be matrices over a field F which are compatible for multiplication. Then

$$\text{rank}(MN) \le \text{rank}(N).$$
* Proof.* The preceding discussion shows that the rows of $MN$ are linear combinations of the rows of N. Therefore, the rows of $MN$ are all contained in the row space of N.

The row space of N is a subspace, so it's closed under taking linear combinations of vectors. Hence, any linear combination of the rows of $MN$ is in the row space of N. Therefore, the row space of $MN$ is contained in the row space of N.

From this, it follows that the dimension of the row space of $MN$ is less than or equal to the dimension of the row space of N --- that is, $\text{rank}(MN) \le \text{rank}(N)$.
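The inequality is easy to spot-check numerically. A quick sympy sketch, with arbitrarily chosen matrices:

```python
from sympy import Matrix

# Arbitrary matrices compatible for multiplication.
M = Matrix([[1, 2], [3, 4], [5, 6]])     # 3 x 2
N = Matrix([[1, 0, 1], [2, 1, 0]])       # 2 x 3

# Each row of M*N is a linear combination of the rows of N,
# so the row space of M*N sits inside the row space of N.
assert (M * N).rank() <= N.rank()
print((M * N).rank(), N.rank())          # prints 2 2
```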

I already have one algorithm for testing whether a set of vectors in
$F^n$ is independent. That algorithm involves constructing
a matrix with the vectors as the * columns*, then
row reducing. The algorithm will also produce a linear combination of
the vectors which adds up to the zero vector if the set is dependent.

If all you care about is whether or not a set of vectors in $F^n$ is independent --- i.e. you don't care about a
possible dependence relation --- the results on rank can be used to
give an alternative algorithm. In this approach, you construct a
matrix with the given vectors as the * rows*.

* Algorithm.* Let V be a finite-dimensional vector space, and let $v_1$, ..., $v_m$ be vectors in V. The object is to determine whether the set $\{v_1, \ldots, v_m\}$ is independent.

Let M be the matrix whose i-th row is $v_i$. Let R be a row reduced echelon matrix which is row equivalent to M. If R has m nonzero rows, then $\{v_1, \ldots, v_m\}$ is independent. Otherwise, the set is dependent.

If R has p nonzero rows, then R and M have rank p. (They have the same rank, because they have the same row space.) Suppose $p = m$. Since $\{v_1, \ldots, v_m\}$ spans the row space of M, some subset of $\{v_1, \ldots, v_m\}$ is a basis for it. However, a basis for the row space must contain $p = m$ elements. Therefore, $\{v_1, \ldots, v_m\}$ must be independent.

Any independent subset of the row space must contain at most p elements. Hence, if $p < m$, then $\{v_1, \ldots, v_m\}$ must be dependent.
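Here is a sketch of this independence test in Python with sympy; the helper name `independent` and the sample vectors are made up for illustration.

```python
from sympy import Matrix

def independent(vectors):
    """Test independence: build the matrix whose rows are the given
    vectors and compare its rank with the number of vectors."""
    M = Matrix(vectors)
    return M.rank() == len(vectors)

print(independent([[1, 0, 1], [0, 1, 1]]))   # True: rank 2, two vectors
print(independent([[1, 2], [2, 4]]))         # False: second row is twice the first
```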

* Example.* The vectors , , and
in are dependent:

The row reduced echelon matrix has only two nonzero rows.

I already know that every matrix can be row reduced to a row reduced
echelon matrix. The next result completes the discussion by showing
that the row reduced echelon form is * unique*.

* Proposition.* Every matrix can be row reduced
to a unique row reduced echelon matrix.

* Proof.* Suppose M row reduces to R, a row reduced echelon matrix with nonzero rows $r_1$, ..., $r_p$. Suppose the leading coefficients of R occur at columns $j_1, \ldots, j_p$, where $j_1 < j_2 < \cdots < j_p$.

Let W be the row space of R and let

$$v = c_1 r_1 + c_2 r_2 + \cdots + c_p r_p$$

be an element of W. In component form, $v = (v_1, v_2, \ldots, v_n)$.

* Claim:* If $v \ne 0$, the first nonzero component of v must occur in column $j_k$, for some $k$ with $1 \le k \le p$.

Suppose $c_k$ is the first of the $c_i$ which is nonzero. The sum looks like

$$v = c_k r_k + c_{k+1} r_{k+1} + \cdots + c_p r_p.$$

The first nonzero element of $r_k$ is a 1 in column $j_k$. The first nonzero element of $r_i$ for $i > k$ lies to the right of column $j_k$. Thus, $v_j = 0$ for $j < j_k$, and $v_{j_k} = c_k \ne 0$. Evidently, this is the first nonzero component of v. This proves the claim.

This establishes that if a row reduced echelon matrix $R'$ is row equivalent to M, its leading coefficients must lie in the same columns as those of R. For the rows of $R'$ are elements of W, and the claim applies.

Next, I'll show that the nonzero rows of $R'$ are the same as the nonzero rows of R.

Consider, for instance, the first nonzero rows $r_1$ and $r_1'$ of R and $R'$. Their first nonzero components are 1's lying in column $j_1$. Moreover, both $r_1$ and $r_1'$ have zeros in columns $j_2$, $j_3$, ..., $j_p$.

Suppose $r_1 \ne r_1'$. Then $r_1 - r_1'$ is a nonzero vector in W whose first nonzero component is not in column $j_1$, $j_2$, ..., or $j_p$, which is a contradiction.

The same argument applies to show that $r_k = r_k'$ for all k. Therefore, $R = R'$.
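Uniqueness can be illustrated computationally: applying any sequence of elementary row operations to a matrix leaves its row reduced echelon form unchanged. A small sympy sketch, with the matrix and the row operations chosen arbitrarily:

```python
from sympy import Matrix

M = Matrix([[1, 2, 3], [2, 4, 6], [0, 1, 1]])

# Apply some arbitrarily chosen elementary row operations to M.
N = M.copy()
N[0, :] = N[0, :] + 3 * N[2, :]          # r1 -> r1 + 3 r3
N[1, :], N[2, :] = N[2, :], N[1, :]      # swap r2 and r3
N[2, :] = 5 * N[2, :]                    # r3 -> 5 r3

# M and N are row equivalent, so they row reduce to the same
# (unique) row reduced echelon matrix.
assert M.rref() == N.rref()
```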

I showed earlier that you can *add vectors to an independent set
to make a basis*. Here's how it works in a particular case.

* Example.* Add vectors to the following set to
make a basis for $\mathbb{R}^5$:

If I make a matrix with these vectors as rows and row reduce, the row reduced echelon form will have the same row space (i.e. the same span) as the original set of vectors:

Since there are four nonzero rows, and since these rows are clearly independent vectors, the original set of vectors is independent.

By examining the row reduced echelon form, I see that a standard basis vector with a "1" in a position not occupied by a leading coefficient will not be a linear combination of the others. Therefore, I can add it to the set and get a new independent set:

There are 5 independent vectors in this set, so it is a basis for $\mathbb{R}^5$.
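This extension procedure can be sketched in sympy. The idea, as in the example above, is to add a standard basis vector for each column that does not contain a leading coefficient; the starting vectors below are made up for illustration.

```python
from sympy import Matrix, eye

# A made-up independent set in R^5.
vectors = [[1, 1, 0, 0, 0],
           [0, 0, 1, 1, 0]]

M = Matrix(vectors)
R, pivots = M.rref()
n = M.cols

# For each column with no leading coefficient, the corresponding
# standard basis vector is not a combination of the rows, so it
# can be added without breaking independence.
extended = vectors + [list(eye(n).row(j)) for j in range(n) if j not in pivots]

assert Matrix(extended).rank() == n      # n independent vectors: a basis for R^5
print(len(extended))                     # prints 5
```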

I also showed earlier that you can *remove vectors from a spanning
set to get a basis*. You can do this using the same algorithm
that gives a basis for the column space of a matrix.

First, here's a reminder about matrix multiplication. If A is an $m \times n$ matrix and $x \in F^n$, then the product $Ax$ is a linear combination of the columns of A.

In fact, if $c_i$ is the i-th column of A and $x = (x_1, x_2, \ldots, x_n)$, then

$$Ax = x_1 c_1 + x_2 c_2 + \cdots + x_n c_n.$$

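A quick numerical check of this fact, with a made-up matrix and vector:

```python
from sympy import Matrix

# Made-up matrix and vector.
A = Matrix([[1, 0, 2],
            [3, 1, 1]])
x = Matrix([2, -1, 1])

# A*x equals the combination of the columns of A weighted by the entries of x.
combo = 2 * A.col(0) + (-1) * A.col(1) + 1 * A.col(2)
assert A * x == combo
print(list(A * x))   # prints [4, 6]
```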
* Lemma.* Let A be a matrix, and let R be the row reduced echelon matrix which is row equivalent to A. Suppose the leading coefficients of R occur in columns $j_1, \ldots, j_p$, where $j_1 < j_2 < \cdots < j_p$, and let $c_i$ denote the i-th column of A. Then $\{c_{j_1}, c_{j_2}, \ldots, c_{j_p}\}$ is independent.

* Proof.* Suppose that

$$a_1 c_{j_1} + a_2 c_{j_2} + \cdots + a_p c_{j_p} = 0.$$

Form the vector $v = (v_1, \ldots, v_n)$, where

$$v_i = \begin{cases} a_k & \text{if } i = j_k \text{ for some } k \\ 0 & \text{otherwise} \end{cases}$$

The equation above implies that $Av = 0$.

It follows that v is in the solution space of the system $Ax = 0$. Since $Rx = 0$ has the same solution space, $Rv = 0$. Let $c_i'$ denote the i-th column of R. Then

$$Rv = a_1 c_{j_1}' + a_2 c_{j_2}' + \cdots + a_p c_{j_p}' = 0.$$

However, since R is in row reduced echelon form, $c_{j_k}'$ is a vector with 1 in the k-th row and 0's elsewhere. Hence, $\{c_{j_1}', \ldots, c_{j_p}'\}$ is independent, and $a_1 = a_2 = \cdots = a_p = 0$.

* Example.* Consider the real matrix

It row reduces to

The leading coefficients occur in the first three columns. Hence, the first three columns of A are independent:

In fact, they form a basis for the column space of A.

* Example.* Find a subset of the following set
of vectors which forms a basis for .

Make a matrix with the vectors as columns and row reduce:

The leading coefficients occur in columns 1, 2, and 4. Therefore, the corresponding columns of the original matrix are independent, and form a basis for :

* Proposition.* Let A be a matrix. Then

$$\text{row rank}(A) = \text{column rank}(A).$$

* Proof.* Let R be the row reduced echelon matrix which is row equivalent to A. Suppose the leading coefficients of R occur in columns $j_1, \ldots, j_p$, where $j_1 < j_2 < \cdots < j_p$, and let $c_i$ denote the i-th column of A. By the preceding lemma, $\{c_{j_1}, \ldots, c_{j_p}\}$ is independent. There is one vector in this set for each leading coefficient, and the number of leading coefficients equals the row rank. Therefore,

$$\text{column rank}(A) \ge \text{row rank}(A).$$

Now consider $A^T$. This is A with the rows and columns swapped, so

$$\text{row rank}(A^T) = \text{column rank}(A) \quad\text{and}\quad \text{column rank}(A^T) = \text{row rank}(A).$$

Applying the first part of the proof to $A^T$,

$$\text{column rank}(A^T) \ge \text{row rank}(A^T), \quad\text{that is,}\quad \text{row rank}(A) \ge \text{column rank}(A).$$

Therefore,

$$\text{row rank}(A) = \text{column rank}(A).$$
* Remark.* The proof provides an algorithm for finding a basis for the column space of a matrix. Specifically, row reduce the matrix A to a row reduced echelon matrix R. If the leading coefficients of R occur in columns $j_1, \ldots, j_p$, then consider the columns $c_{j_1}, c_{j_2}, \ldots, c_{j_p}$ of A. These columns form a basis for the column space of A.

* Example.* Consider the real matrix

It row reduces to

The leading coefficients occur in columns 1 and 2. Therefore, and form a basis for the column space of A.

* Example.* If A and B are row equivalent, they
don't necessarily have the same column space. For example,

However, all the elements of the column space of the second matrix have their second component equal to 0; this is obviously not true of elements of the column space of the first matrix.

* Proposition.* Let A, B, P and Q be matrices, where P and Q are invertible. Suppose $B = PAQ$. Then

$$\text{rank}(B) = \text{rank}(A).$$

* Proof.* I showed earlier that $\text{rank}(MN) \le \text{rank}(N)$. This was row rank; a similar proof shows that

$$\text{column rank}(MN) \le \text{column rank}(M).$$

Since row rank and column rank are the same, $\text{rank}(MN) \le \text{rank}(M)$.

Now

$$\text{rank}(B) = \text{rank}(PAQ) \le \text{rank}(AQ) \le \text{rank}(A).$$

But $A = P^{-1} B Q^{-1}$, so repeating the computation gives $\text{rank}(A) \le \text{rank}(B)$. Therefore, $\text{rank}(A) = \text{rank}(B)$.
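A numerical spot check of the proposition, with arbitrarily chosen invertible P and Q:

```python
from sympy import Matrix

# A made-up matrix A (rank 2) and invertible P and Q.
A = Matrix([[1, 2, 3], [2, 4, 6], [0, 1, 1]])
P = Matrix([[1, 1, 0], [0, 1, 0], [0, 0, 1]])
Q = Matrix([[2, 0, 0], [0, 1, 1], [0, 0, 1]])

assert P.det() != 0 and Q.det() != 0     # P and Q are invertible

# Multiplying by invertible matrices does not change the rank.
assert (P * A * Q).rank() == A.rank()
print(A.rank())                          # prints 2
```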

* Definition.* The * null space* (or * kernel*) of an $m \times n$ matrix A is the set of vectors $x \in F^n$ such that $Ax = 0$. The dimension of the null space of A is called the * nullity* of A, and is denoted $\text{nullity}(A)$.

* Remark.* The null space is the same as the solution space of the system of equations $Ax = 0$. I showed earlier that if A is an $m \times n$ matrix, then the solution space is a subspace of $F^n$. Thus, the null space of an $m \times n$ matrix is a subspace of $F^n$.

* Example.* Consider the real matrix

The vector is in the null space of A, since

The vector is not in the null space of A, since

* Algorithm.* Let A be an $m \times n$ matrix. The object is to find a basis for the null space of A.

Let $x = (x_1, x_2, \ldots, x_n)$. Solve the following system by row reducing A to row reduced echelon form:

$$Ax = 0.$$

In the row reduced echelon form, suppose that $x_{i_1}, \ldots, x_{i_p}$ are the variables corresponding to the leading coefficients, and suppose that $x_{j_1}, \ldots, x_{j_q}$ are the free variables. Note for future reference that $p + q = n$.

As usual, put the solution in parametric form, writing $x_{i_1}, \ldots, x_{i_p}$ in terms of the free variables $x_{j_1}, \ldots, x_{j_q}$.

Plug the expressions for $x_{i_1}, \ldots, x_{i_p}$ into the general solution vector $x = (x_1, \ldots, x_n)$, expressing it in terms of $x_{j_1}, \ldots, x_{j_q}$. Schematically, the result looks like this:

$$x = x_{j_1} v_1 + x_{j_2} v_2 + \cdots + x_{j_q} v_q.$$

In the last expression, the vectors $v_1, v_2, \ldots, v_q$ which are being multiplied by $x_{j_1}$, $x_{j_2}$, ..., $x_{j_q}$ form a basis for the null space.

The vectors span the null space, because the equation above has expressed an arbitrary vector in the null space as a linear combination of the vectors.

The vectors are independent, because (ignoring the other components) the 1's and 0's in the $j_1$-th, $j_2$-th, ..., $j_q$-th components form a q-dimensional standard basis.

This description is probably hard to understand with all the subscripts flying around, but you'll see in the examples below that it's very easy to implement. Before giving an example, here's an important result that comes out of this discussion. Notice that p, the number of leading coefficient variables, is the rank of A. Notice also that q, the number of free variables, is the same as the number of vectors in the basis for the null space. That is, $q = \text{nullity}(A)$. Finally, I observed earlier that $p + q = n$.

* Theorem.* Let A be an $m \times n$ matrix. Then

$$\text{rank}(A) + \text{nullity}(A) = n.$$
* Example.* Find the nullity and a basis for
the null space of the real matrix

Row reduce the matrix to row-reduced echelon form:

I'll use w, x, y, and z as my solution variables. Thinking of the last matrix as representing equations for a homogeneous system, I have

Thus,

is a basis for the null space. The nullity is 2.

Notice that the rank is 2, the number of columns is 4, and $\text{rank} + \text{nullity} = 2 + 2 = 4$, as the theorem asserts.

* Example.* Consider the following matrix over
:

Find bases for the row space, column space, and null space.

Row reduce the matrix:

is a basis for the row space.

The leading coefficients occur in columns 1 and 3. Taking the first and third columns of the original matrix, I find that is a basis for the column space.

Using a, b, c, and d as variables, I find that the row reduced matrix says

Thus,

Therefore, is a basis for the null space.

Send comments about this page to: Bruce.Ikenaga@millersville.edu.

Copyright 2013 by Bruce Ikenaga