The Null Space of a Matrix

Definition. The null space (or kernel) of an $m \times n$ matrix A over a field F is the set of all vectors $x \in F^n$ such that $A x = 0$. The dimension of the null space of A is called the nullity of A, and is denoted $\nullity(A)$.

The null space is the same as the solution space of the homogeneous system $A x = 0$. I showed earlier that if A is an $m \times n$ matrix, then the solution space is a subspace of $F^n$. Thus, the null space of a matrix is a subspace of $F^n$.

Example. Consider the real matrix

$$A = \left[\matrix{3 & -1 & 1 \cr -1 & 1 & 1 \cr}\right].$$

(a) Is $(1, 2, -1)$ in the null space of A?

(b) Is $(1, 1, 1)$ in the null space of A?

(a)

$$\left[\matrix{3 & -1 & 1 \cr -1 & 1 & 1 \cr}\right] \left[\matrix{1 \cr 2 \cr -1 \cr}\right] = \left[\matrix{0 \cr 0 \cr}\right].$$

Hence, the vector $(1, 2, -1)$ is in the null space of A.

(b)

$$\left[\matrix{3 & -1 & 1 \cr -1 & 1 & 1 \cr}\right] \left[\matrix{1 \cr 1 \cr 1 \cr}\right] = \left[\matrix{3 \cr 1 \cr}\right] \ne \left[\matrix{0 \cr 0 \cr}\right].$$

Hence, the vector $(1, 1, 1)$ is not in the null space of A.
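
If you like to check this sort of thing by computer, here is a minimal sketch of the same membership test in Python using numpy (the choice of library is mine, not part of the original computation):

    import numpy as np

    A = np.array([[3, -1, 1],
                  [-1, 1, 1]])

    # A vector x is in the null space exactly when A x is the zero vector.
    for x in [np.array([1, 2, -1]), np.array([1, 1, 1])]:
        print(x, "in null space:", not np.any(A @ x))

This prints True for $(1, 2, -1)$ and False for $(1, 1, 1)$, matching the computations above.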

Algorithm. Let A be an $m \times n$ matrix. Find a basis for the null space of A.

Let $x = (x_1, x_2, \ldots, x_n)$. Solve the following system by row reducing A to row reduced echelon form:

$$A x = 0.$$

In the row reduced echelon form, suppose that $\{x_{i_1}, x_{i_2}, \ldots, x_{i_p}\}$ are the variables corresponding to the leading entries, and suppose that $\{x_{j_1}, x_{j_2}, \ldots, x_{j_q}\}$ are the free variables. Note that $p + q = n$, since every variable is either a leading entry variable or a free variable.

Put the solution in parametric form, writing the leading entry variables $\{x_{i_1}, x_{i_2}, \ldots, x_{i_p}\}$ in terms of the free variables (parameters) $\{x_{j_1}, x_{j_2}, \ldots, x_{j_q}\}$:

$$\eqalign{ x_{i_1} & = f_{i_1} (x_{j_1}, x_{j_2}, \ldots, x_{j_q}) \cr x_{i_2} & = f_{i_2} (x_{j_1}, x_{j_2}, \ldots, x_{j_q}) \cr & \vdots \cr x_{i_p} & = f_{i_p} (x_{j_1}, x_{j_2}, \ldots, x_{j_q}) \cr}$$

Plug these expressions into the general solution vector $x = (x_1, x_2, \ldots, x_n)$: replace $x_{i_1}$ with $f_{i_1} (x_{j_1}, x_{j_2}, \ldots, x_{j_q})$, replace $x_{i_2}$ with $f_{i_2} (x_{j_1}, x_{j_2}, \ldots, x_{j_q})$, and so on. Leave the free variable components $x_{j_1}, x_{j_2}, \ldots, x_{j_q}$ alone. Schematically, the result looks like this:

$$\left[\matrix{ x_1 \cr x_2 \cr \vdots \cr x_n \cr}\right] = \left[\matrix{ (f_{i_k}\,'s) \cr x_{j_1} \cr (f_{i_k}\,'s) \cr x_{j_2} \cr (f_{i_k}\,'s) \cr \vdots \cr (f_{i_k}\,'s) \cr x_{j_q} \cr (f_{i_k}\,'s) \cr}\right] = x_{j_1} \left[\matrix{ \ast \cr 1 \cr \ast \cr 0 \cr \ast \cr \vdots \cr \ast \cr 0 \cr \ast \cr}\right] + x_{j_2} \left[\matrix{ \ast \cr 0 \cr \ast \cr 1 \cr \ast \cr \vdots \cr \ast \cr 0 \cr \ast \cr}\right] + \cdots + x_{j_q} \left[\matrix{ \ast \cr 0 \cr \ast \cr 0 \cr \ast \cr \vdots \cr \ast \cr 1 \cr \ast \cr}\right].$$

The $\ast$'s represent the stuff that's left after factoring $x_{j_1}$, $x_{j_2}$, ... out of the f-terms.

In the last expression, the vectors multiplied by $x_{j_1}$, $x_{j_2}$, ..., $x_{j_q}$ form a basis for the null space.

First, the vectors span the null space, because the equation above has expressed an arbitrary vector in the null space as a linear combination of the vectors.

Second, the vectors are independent. Suppose the linear combination above is equal to the zero vector $(0, 0, \ldots, 0)$:

$$x_{j_1} \left[\matrix{ \ast \cr 1 \cr \ast \cr 0 \cr \ast \cr \vdots \cr \ast \cr 0 \cr \ast \cr}\right] + x_{j_2} \left[\matrix{ \ast \cr 0 \cr \ast \cr 1 \cr \ast \cr \vdots \cr \ast \cr 0 \cr \ast \cr}\right] + \cdots + x_{j_q} \left[\matrix{ \ast \cr 0 \cr \ast \cr 0 \cr \ast \cr \vdots \cr \ast \cr 1 \cr \ast \cr}\right] = \left[\matrix{ 0 \cr 0 \cr \vdots \cr 0 \cr}\right].$$

Then

$$\left[\matrix{ (f_{i_k}\,'s) \cr x_{j_1} \cr (f_{i_k}\,'s) \cr x_{j_2} \cr (f_{i_k}\,'s) \cr \vdots \cr (f_{i_k}\,'s) \cr x_{j_q} \cr (f_{i_k}\,'s) \cr}\right] = \left[\matrix{ 0 \cr 0 \cr \vdots \cr 0 \cr}\right].$$

Comparing the components in the free variable positions, we see that $x_{j_1} = x_{j_2} = \cdots = x_{j_q} = 0$. Hence, the vectors are independent.

This description is probably hard to understand with all the subscripts flying around, but I think the examples which follow will make it clear.
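
If you want to experiment, here is one way to turn the algorithm into code. This is a sketch in Python using the sympy library (both the tool and the function name null_space_basis are my choices, not part of the original discussion). sympy's rref() returns the row reduced echelon form together with the pivot columns, and the loop builds one basis vector per free variable, just as in the schematic above:

    from sympy import Matrix

    def null_space_basis(A):
        """Return a basis for the null space of A, following the algorithm above."""
        m, n = A.shape
        R, pivots = A.rref()       # row reduced echelon form, leading entry columns
        free = [j for j in range(n) if j not in pivots]
        basis = []
        for f in free:
            v = [0] * n
            v[f] = 1               # this free variable is 1, the other free variables are 0
            for row, p in enumerate(pivots):
                v[p] = -R[row, f]  # fill in the leading entry variables
            basis.append(Matrix(v))
        return basis

    # The matrix from the example above; the single basis vector is a
    # multiple of (1, 2, -1), which we checked was in the null space.
    A = Matrix([[3, -1, 1], [-1, 1, 1]])
    print(null_space_basis(A))     # [Matrix([[-1], [-2], [1]])]

sympy's built-in A.nullspace() carries out the same computation and can serve as a check.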

Before giving an example, here's an important result that comes out of the algorithm.

Theorem. Let A be an $m \times n$ matrix. Then

$$n = \rank A + \nullity A.$$

Proof. In the algorithm above, p, the number of leading entry variables, is the rank of A. And q, the number of free variables, is the same as the number of vectors in the basis for the null space. That is, $q = \nullity(A)$. Finally, I observed earlier that $p + q = n$. Thus, $n = \rank A + \nullity A$.

This theorem is a special case of the First Isomorphism Theorem, which you'd see in a course in abstract algebra.
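
Here is a quick sanity check of the theorem on the matrix from the first example, again in Python with sympy (my choice of tool):

    from sympy import Matrix

    A = Matrix([[3, -1, 1], [-1, 1, 1]])
    n = A.shape[1]                  # number of columns: 3
    rank = A.rank()                 # p = 2
    nullity = len(A.nullspace())    # q = 1, one basis vector for the null space
    print(rank + nullity == n)      # True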

Example. Find the nullity and a basis for the null space of the real matrix

$$\left[\matrix{ 1 & 2 & 0 & 3 \cr 1 & 2 & 1 & -2 \cr 2 & 4 & 1 & 1 \cr}\right].$$

Let's follow the steps in the algorithm. First, row reduce the matrix to row reduced echelon form:

$$\left[\matrix{ 1 & 2 & 0 & 3 \cr 1 & 2 & 1 & -2 \cr 2 & 4 & 1 & 1 \cr}\right] \quad \to \quad \left[\matrix{ 1 & 2 & 0 & 3 \cr 0 & 0 & 1 & -5 \cr 0 & 0 & 0 & 0 \cr}\right]$$

I'll use w, x, y, and z as my solution variables. Thinking of the last matrix as representing equations for a homogeneous system, I have

$$w + 2 x + 3 z = 0, \quad\hbox{or}\quad w = -2 x - 3 z,$$

$$y - 5 z = 0, \quad\hbox{or}\quad y = 5 z.$$

I've expressed the leading entry variables in terms of the free variables. Now I substitute for w and y in the general solution vector $(w, x, y, z)$:

$$\left[\matrix{w \cr x \cr y \cr z \cr}\right] = \left[\matrix{-2 x - 3 z \cr x \cr 5 z \cr z \cr}\right] = x \cdot \left[\matrix{-2 \cr 1 \cr 0 \cr 0 \cr}\right] + z \cdot \left[\matrix{-3 \cr 0 \cr 5 \cr 1 \cr}\right].$$

After substituting, I broke the resulting vector up into pieces corresponding to each of the free variables x and z.

The equation above shows that every vector $(w, x, y, z)$ in the null space can be written as a linear combination of $(-2, 1, 0, 0)$ and $(-3, 0, 5, 1)$. Thus, these two vectors span the null space. They're also independent: Suppose

$$x \cdot \left[\matrix{-2 \cr 1 \cr 0 \cr 0 \cr}\right] + z \cdot \left[\matrix{-3 \cr 0 \cr 5 \cr 1 \cr}\right] = \left[\matrix{0 \cr 0 \cr 0 \cr 0 \cr}\right].$$

Then

$$\left[\matrix{-2 x - 3 z \cr x \cr 5 z \cr z \cr}\right] = \left[\matrix{0 \cr 0 \cr 0 \cr 0 \cr}\right].$$

Looking at the second and fourth components, you can see that $x = 0$ and $z = 0$.

Hence, $\{(-2, 1, 0, 0), (-3, 0, 5, 1)\}$ is a basis for the null space. The nullity is 2.

Notice also that the rank is 2, the number of columns is 4, and $4 = 2 + 2$, which confirms the preceding theorem.
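
Running sympy's nullspace() on this matrix (or the null_space_basis sketch from the algorithm section) reproduces the same basis:

    from sympy import Matrix

    A = Matrix([[1, 2, 0, 3], [1, 2, 1, -2], [2, 4, 1, 1]])
    for v in A.nullspace():
        print(v.T)    # Matrix([[-2, 1, 0, 0]]), then Matrix([[-3, 0, 5, 1]])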

Example. Consider the following matrix over $\integer_3$:

$$A = \left[\matrix{ 1 & 1 & 0 & 2 \cr 2 & 2 & 1 & 2 \cr 1 & 1 & 1 & 0 \cr}\right].$$

Find bases for the row space, column space, and null space.

Row reduce the matrix:

$$\left[\matrix{ 1 & 1 & 0 & 2 \cr 2 & 2 & 1 & 2 \cr 1 & 1 & 1 & 0 \cr}\right] \quad \to \quad \left[\matrix{ 1 & 1 & 0 & 2 \cr 0 & 0 & 1 & 1 \cr 0 & 0 & 0 & 0 \cr}\right]$$

$\{(1, 1, 0, 2), (0, 0, 1, 1)\}$ is a basis for the row space.

The leading entries occur in columns 1 and 3. Taking the first and third columns of the original matrix, I find that $\{(1, 2, 1), (0, 1, 1)\}$ is a basis for the column space.

Using a, b, c, and d as variables, I find that the row reduced matrix gives the equations

$$a + b + 2 d = 0, \quad\hbox{or}\quad a = 2 b + d,$$

$$c + d = 0, \quad\hbox{or}\quad c = 2 d.$$

Thus,

$$\left[\matrix{a \cr b \cr c \cr d \cr}\right] = \left[\matrix{2 b + d \cr b \cr 2 d \cr d \cr}\right] = b \cdot \left[\matrix{2 \cr 1 \cr 0 \cr 0 \cr}\right] + d \cdot \left[\matrix{1 \cr 0 \cr 2 \cr 1 \cr}\right].$$

Therefore, $\{(2, 1, 0, 0), (1, 0, 2, 1)\}$ is a basis for the null space.
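
Arithmetic in $\integer_3$ is integer arithmetic reduced mod 3, so you can at least verify membership with numpy by reducing each product mod 3. (This is my choice of check; sympy's nullspace() works over the rationals, so it doesn't apply directly here.)

    import numpy as np

    A = np.array([[1, 1, 0, 2],
                  [2, 2, 1, 2],
                  [1, 1, 1, 0]])

    # Both products should reduce to the zero vector mod 3.
    for v in [np.array([2, 1, 0, 0]), np.array([1, 0, 2, 1])]:
        print(v, "->", (A @ v) % 3)    # [0 0 0] both times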



Copyright 2023 by Bruce Ikenaga