In order to show that there's only one determinant function on $M(n, F)$, I'm going to derive another formula for the determinant. It involves *permutations* of the rows, so I'll give a brief introduction to permutations first.

*Definition.* A *permutation* of the set $\{1, 2, \ldots, n\}$ is an invertible function $\sigma: \{1, 2, \ldots, n\} \to \{1, 2, \ldots, n\}$. The set of all permutations of $\{1, 2, \ldots, n\}$ is denoted $S_n$ and is called the *symmetric group on n letters*.

A *transposition* is a permutation which swaps two elements and fixes every other element.

Recall that a function has an inverse if and only if it is *bijective*: Different inputs produce different outputs (*injective*), and everything in the target set is an output of the function (*surjective*). So a permutation of $\{1, 2, \ldots, n\}$ is just a way of "scrambling" the numbers from 1 to n.

*Example.* There are 6 permutations of $\{1, 2, 3\}$:

$$\begin{pmatrix} 1 & 2 & 3 \\ 1 & 2 & 3 \end{pmatrix}, \quad
\begin{pmatrix} 1 & 2 & 3 \\ 2 & 1 & 3 \end{pmatrix}, \quad
\begin{pmatrix} 1 & 2 & 3 \\ 3 & 2 & 1 \end{pmatrix}$$

$$\begin{pmatrix} 1 & 2 & 3 \\ 1 & 3 & 2 \end{pmatrix}, \quad
\begin{pmatrix} 1 & 2 & 3 \\ 2 & 3 & 1 \end{pmatrix}, \quad
\begin{pmatrix} 1 & 2 & 3 \\ 3 & 1 & 2 \end{pmatrix}$$

(Each symbol lists the inputs on top and the corresponding outputs below.) The last two in the first row and the first one in the second row are transpositions. In each case, two elements are swapped, and the other element doesn't move.

For example, the one in the bottom right corner of the diagram is the function $\sigma$ with $\sigma(1) = 3$, $\sigma(2) = 1$, $\sigma(3) = 2$.
How many permutations are there of $\{1, 2, \ldots, n\}$?

There are n places that 1 can be mapped to. Once you know where 1 goes, there are $n - 1$ places remaining where 2 can be mapped to. (1 and 2 can't go to the same place, because permutations are injective.) Continuing in this way, you find that the total number of possibilities is
$$n (n - 1) (n - 2) \cdots 2 \cdot 1 = n!.$$

Therefore, there are $n!$ permutations of $\{1, 2, \ldots, n\}$.
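The count can be checked by brute force for small n; here is a quick sketch in Python (the code is mine, not from the text):

```python
from itertools import permutations
from math import factorial

# Enumerate the permutations of {1, 2, ..., n} and compare
# the count with n!.
for n in range(1, 7):
    count = sum(1 for _ in permutations(range(1, n + 1)))
    assert count == factorial(n)
    print(n, count)
```

For $n = 3$ this lists $3! = 6$ permutations, matching the example above.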

Permutations are *multiplied* by composing them as functions, i.e. by doing one permutation followed by another. The following result says that every permutation can be built out of transpositions.

*Theorem.* (a) Every permutation can be written as a product of transpositions.

(b) No permutation can be written as a product of both an even and an odd number of transpositions.

You'll probably see these results proved in a course in abstract algebra. You can see that the first part is reasonable: It says that any way of "scrambling" the elements of $\{1, 2, \ldots, n\}$ could be accomplished by swapping two elements at a time, one swap after another.

The second part of the theorem is important, because it justifies the following definition.

*Definition.* A permutation is *even* if it can be written as a product of an even number of transpositions. A permutation is *odd* if it can be written as a product of an odd number of transpositions.

If $\sigma$ is a permutation, the *sign* of $\sigma$ is
$$\operatorname{sgn}(\sigma) = \begin{cases} 1 & \text{if $\sigma$ is even} \\ -1 & \text{if $\sigma$ is odd} \end{cases}$$
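One concrete way to find the sign is to sort the permuted list back into order using swaps (transpositions) and count them; a sketch in Python (the routine and its name are my own):

```python
def sign(perm):
    """Sign of a permutation given as a tuple of 1..n: sort it
    with explicit swaps (transpositions) and count the swaps."""
    p = list(perm)
    swaps = 0
    for i in range(len(p)):
        while p[i] != i + 1:
            j = p[i] - 1
            p[i], p[j] = p[j], p[i]   # one transposition
            swaps += 1
    return 1 if swaps % 2 == 0 else -1

print(sign((1, 2, 3)))   # identity: even, sign +1
print(sign((2, 1, 3)))   # a transposition: odd, sign -1
print(sign((2, 3, 1)))   # a 3-cycle: even, sign +1
```

Part (b) of the theorem is what guarantees that the answer does not depend on which sequence of swaps you happen to use.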

*Theorem.* If $D: M(n, F) \to F$ is linear in each row and is 0 on matrices with equal rows, then
$$D(A) = \left( \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma)\, a_{1 \sigma(1)} a_{2 \sigma(2)} \cdots a_{n \sigma(n)} \right) D(I).$$

*Proof.* Write
$$A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix}.$$

Observe that
$$(a_{11}, a_{12}, \ldots, a_{1n}) = a_{11} e_1 + a_{12} e_2 + \cdots + a_{1n} e_n = \sum_{j_1 = 1}^n a_{1 j_1} e_{j_1},$$

where $e_j$ is the j-th standard basis vector (equivalently, the j-th row of the identity matrix).

By linearity,
$$D(A) = \sum_{j_1 = 1}^n a_{1 j_1} D \begin{pmatrix} e_{j_1} \\ r_2 \\ \vdots \\ r_n \end{pmatrix},$$

where $r_2, \ldots, r_n$ denote rows 2 through n of A. Now do the same with row 2:
$$D(A) = \sum_{j_1 = 1}^n \sum_{j_2 = 1}^n a_{1 j_1} a_{2 j_2} D \begin{pmatrix} e_{j_1} \\ e_{j_2} \\ r_3 \\ \vdots \\ r_n \end{pmatrix}.$$

Do this for all n rows, using $j_1, j_2, \ldots, j_n$ as the summation indices:
$$D(A) = \sum_{j_1 = 1}^n \cdots \sum_{j_n = 1}^n a_{1 j_1} a_{2 j_2} \cdots a_{n j_n} D \begin{pmatrix} e_{j_1} \\ e_{j_2} \\ \vdots \\ e_{j_n} \end{pmatrix}.$$

Since D is 0 on matrices with equal rows, I only need to consider terms where all the rows are different --- i.e. terms where $j_1, j_2, \ldots, j_n$ are distinct. This means that $i \mapsto j_i$ is a permutation $\sigma$ of $\{1, 2, \ldots, n\}$, and I'm summing over $S_n$:
$$D(A) = \sum_{\sigma \in S_n} a_{1 \sigma(1)} a_{2 \sigma(2)} \cdots a_{n \sigma(n)} D \begin{pmatrix} e_{\sigma(1)} \\ e_{\sigma(2)} \\ \vdots \\ e_{\sigma(n)} \end{pmatrix}.$$

Now
$$\begin{pmatrix} e_{\sigma(1)} \\ e_{\sigma(2)} \\ \vdots \\ e_{\sigma(n)} \end{pmatrix}$$

is just the identity matrix with its rows permuted by $\sigma$. Every permutation is a product of transpositions. By an earlier lemma, since D is linear in the rows and is 0 for matrices with equal rows, a transposition (i.e. a row swap) multiplies the value of D by -1. Hence,
$$D \begin{pmatrix} e_{\sigma(1)} \\ e_{\sigma(2)} \\ \vdots \\ e_{\sigma(n)} \end{pmatrix} = \operatorname{sgn}(\sigma)\, D(I).$$

Therefore,
$$D(A) = \left( \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma)\, a_{1 \sigma(1)} a_{2 \sigma(2)} \cdots a_{n \sigma(n)} \right) D(I). \quad \blacksquare$$

Since the determinant function defined by expansion by cofactors is alternating and linear in each row, it satisfies the conditions of the theorem. Since $\det I = 1$, the formula in the theorem becomes
$$\det A = \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma)\, a_{1 \sigma(1)} a_{2 \sigma(2)} \cdots a_{n \sigma(n)}.$$

I'll call this the *permutation formula for the determinant*.
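The permutation formula translates directly into code. Here is a sketch in Python (the implementation is mine; it computes the sign by counting inversions, whose parity agrees with the parity of any transposition decomposition):

```python
from itertools import permutations

def sign(perm):
    # Count inversions; the sign is (-1)^(number of inversions).
    n = len(perm)
    inv = sum(1 for i in range(n) for j in range(i + 1, n)
              if perm[i] > perm[j])
    return -1 if inv % 2 else 1

def det(A):
    """Determinant by the permutation formula:
    sum over sigma of sgn(sigma) * a[0][sigma(0)] * ... * a[n-1][sigma(n-1)],
    with rows and columns indexed from 0."""
    n = len(A)
    total = 0
    for sigma in permutations(range(n)):
        term = sign(sigma)
        for i in range(n):
            term *= A[i][sigma[i]]
        total += term
    return total

print(det([[2, 1], [3, 4]]))   # 2*4 - 1*3 = 5
```

Note that the sum has $n!$ terms, which is why this formula is mostly of theoretical interest for large n.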

*Example.* Compute the following real determinant
$$\begin{vmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{vmatrix}$$

using the permutation formula.

I have to choose 3 entries from the matrix at a time, in such a way that each choice involves exactly one element from each row and each column. For each such choice, I take the product of the three elements and multiply by the sign of the permutation of the elements, which I'll describe below. Finally, I add up the results.

In order to do this systematically, focus on the first column. I can choose 1, 4, or 7 from column 1.

If I choose 1 from column 1, I can either choose 5 from column 2 and 9 from column 3, or 8 from column 2 and 6 from column 3. (Remember that I can't have two elements from the same row or column.)

If I choose 4 from column 1, I can either choose 2 from column 2 and 9 from column 3, or 8 from column 2 and 3 from column 3.

Finally, if I choose 7 from column 1, I can either choose 2 from column 2 and 6 from column 3, or 5 from column 2 and 3 from column 3.

This gives me 6 products:
$$1 \cdot 5 \cdot 9, \quad 1 \cdot 8 \cdot 6, \quad 4 \cdot 2 \cdot 9, \quad 4 \cdot 8 \cdot 3, \quad 7 \cdot 2 \cdot 6, \quad 7 \cdot 5 \cdot 3.$$

Next, I have to attach a sign to each product. To do this, I count the number of row swaps I need to move the 1's in the identity matrix into the same positions as the numbers in the product. I'll illustrate with two examples.

$$\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \to \begin{pmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{pmatrix} \to \begin{pmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}$$

It took 2 row swaps to move the 1's into the same positions as 4, 8, and 3. Since 2 is even, the sign of $4 \cdot 8 \cdot 3$ is $+1$.

$$\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \to \begin{pmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{pmatrix}$$

It took 1 row swap to move the 1's into the same positions as 7, 5, and 3. Since 1 is odd, the sign of $7 \cdot 5 \cdot 3$ is $-1$.

Continuing in this fashion, I get
$$\begin{vmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{vmatrix} = 1 \cdot 5 \cdot 9 - 1 \cdot 8 \cdot 6 - 4 \cdot 2 \cdot 9 + 4 \cdot 8 \cdot 3 + 7 \cdot 2 \cdot 6 - 7 \cdot 5 \cdot 3 = 45 - 48 - 72 + 96 + 84 - 105 = 0.$$

Notice how ugly the computation was! While the permutation formula can be used for computations, it's easier to use row or column operations or expansion by cofactors.
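As a check on the bookkeeping, the six signed products from the entries listed above can be totaled in a few lines (a quick sketch):

```python
# One entry from each row and column, with the signs found by
# counting row swaps as described above:
terms = [
    +1 * 1 * 5 * 9,   # identity: even
    -1 * 1 * 8 * 6,   # odd
    -1 * 4 * 2 * 9,   # odd
    +1 * 4 * 8 * 3,   # even (2 swaps)
    +1 * 7 * 2 * 6,   # even (2 swaps)
    -1 * 7 * 5 * 3,   # odd (1 swap)
]
print(sum(terms))   # 45 - 48 - 72 + 96 + 84 - 105 = 0
```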

*Corollary.* There is a unique determinant function $\det: M(n, F) \to F$.

*Proof.* A determinant function D is linear in each row, is 0 on matrices with equal rows, and satisfies $D(I) = 1$. The theorem then shows that $D(A)$ is completely determined by A. $\quad\blacksquare$

In other words, the determinant function I defined by cofactor expansion is *the only* determinant function on $n \times n$ matrices.

*Remark.* Another way to express the proof of the Corollary is: If $D: M(n, F) \to F$ is alternating and linear in each row, then
$$D(A) = \det(A) \cdot D(I),$$

where $\det$ denotes the determinant function on $M(n, F)$.

*Corollary.* If $A \in M(n, F)$, then $\det A^T = \det A$.

*Proof.*
$$\det A^T = \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma)\, a^T_{1 \sigma(1)} \cdots a^T_{n \sigma(n)} = \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma)\, a_{\sigma(1) 1} \cdots a_{\sigma(n) n} = \sum_{\tau \in S_n} \operatorname{sgn}(\tau)\, a_{1 \tau(1)} \cdots a_{n \tau(n)} = \det A.$$

I got the next-to-the-last equality by letting $\tau = \sigma^{-1}$ and using the fact that $\operatorname{sgn}(\sigma^{-1}) = \operatorname{sgn}(\sigma)$. $\quad\blacksquare$

*Remark.* Since the rows of A are the columns of $A^T$ and vice versa, the Corollary implies that you can
also use column operations to compute determinants. The allowable
operations are swapping two columns, multiplying a column by a
number, and adding a multiple of a column to another column. They
have the same effects on the determinant as the corresponding row
operations.

As I noted earlier, this result also means that you can compute
determinants using cofactors of *rows* as well as columns.

The uniqueness of determinant functions is useful in deriving properties of determinants.

*Theorem.* Let $A, B \in M(n, F)$. Then $\det(AB) = \det A \cdot \det B$.

*Proof.* Fix B, and define
$$D(A) = \det(AB).$$

Let $r_i$ denote the i-th row of A. Then the i-th row of AB is $r_i B$, so
$$D(A) = \det \begin{pmatrix} r_1 B \\ r_2 B \\ \vdots \\ r_n B \end{pmatrix}.$$

Now $\det$ is alternating, so interchanging two rows $r_i$ and $r_j$ of A interchanges the rows $r_i B$ and $r_j B$ in the determinant above, which multiplies $\det(AB)$ by -1. Hence, D is alternating.

Next, I'll show that D is linear:
$$D \begin{pmatrix} \vdots \\ k r_i + r_i' \\ \vdots \end{pmatrix} = \det \begin{pmatrix} \vdots \\ (k r_i + r_i') B \\ \vdots \end{pmatrix} = \det \begin{pmatrix} \vdots \\ k r_i B + r_i' B \\ \vdots \end{pmatrix} = k \det \begin{pmatrix} \vdots \\ r_i B \\ \vdots \end{pmatrix} + \det \begin{pmatrix} \vdots \\ r_i' B \\ \vdots \end{pmatrix} = k D \begin{pmatrix} \vdots \\ r_i \\ \vdots \end{pmatrix} + D \begin{pmatrix} \vdots \\ r_i' \\ \vdots \end{pmatrix}.$$

This proves that D is linear in each row.

By the remark following the uniqueness theorem,
$$\det(AB) = D(A) = \det(A) \cdot D(I) = \det(A) \cdot \det(IB) = \det A \cdot \det B. \quad \blacksquare$$

In words, this important result says that the determinant of a
product is the product of the determinants. Note that the same thing
does *not* work for sums.

*Example.* It's not true that $\det(A + B) = \det A + \det B$. For example, in $M(2, \mathbb{R})$,
$$\det \left( \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} + \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \right) = \det \begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix} = 4.$$

But
$$\det \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} + \det \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = 1 + 1 = 2.$$
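Both facts are easy to test numerically; a small sketch in Python (the sample matrices are my own choices):

```python
def det2(M):
    # 2x2 determinant: ad - bc.
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def matmul2(A, B):
    # 2x2 matrix product.
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [5, 2]]
AB = matmul2(A, B)
print(det2(AB), det2(A) * det2(B))    # 10 10 : det(AB) = det A * det B
S = [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]
print(det2(S), det2(A) + det2(B))     # -18 -7 : sums do NOT match
```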

*Theorem.* Let F be a field, and let $A \in M(n, F)$. A is invertible if and only if $\det A \neq 0$.

*Proof.* If A is invertible, then
$$\det(A) \det(A^{-1}) = \det(A A^{-1}) = \det I = 1.$$

This equation implies that $\det A \neq 0$ (since $\det A = 0$ would yield "$0 = 1$").

Conversely, suppose that $\det A \neq 0$. Suppose that A row reduces to the row reduced echelon matrix R, and consider the effect of elementary row operations on the determinant. Swapping two rows multiplies the determinant by -1. Adding a multiple of a row to another row leaves the determinant unchanged. And multiplying a row by a *nonzero* number multiplies the determinant by that *nonzero* number. Clearly, no row operation will make the determinant 0 if it was nonzero to begin with. Since $\det A \neq 0$, it follows that $\det R \neq 0$.

Since R is a row reduced echelon matrix with nonzero determinant, it can't have any all-zero rows. An $n \times n$ row reduced echelon matrix with no all-zero rows must be the identity, so $R = I$. Since A row reduces to the identity, A is invertible. $\quad\blacksquare$

*Corollary.* Let F be a field, and let $A \in M(n, F)$. If A is invertible, then
$$\det(A^{-1}) = \frac{1}{\det A}.$$

*Proof.* I showed in proving the theorem that $\det(A) \det(A^{-1}) = 1$, so $\det(A^{-1}) = \dfrac{1}{\det A}$. $\quad\blacksquare$
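A quick numerical check of the corollary with exact rational arithmetic (a sketch; the matrix is my own):

```python
from fractions import Fraction

# For A = [[a, b], [c, d]] with det A = ad - bc != 0, the inverse
# is (1/det A) [[d, -b], [-c, a]].  Check det(A^{-1}) = 1/det A.
a, b, c, d = 2, 5, 1, 3
detA = a * d - b * c                       # 2*3 - 5*1 = 1
inv = [[Fraction(d, detA), Fraction(-b, detA)],
       [Fraction(-c, detA), Fraction(a, detA)]]
det_inv = inv[0][0] * inv[1][1] - inv[0][1] * inv[1][0]
print(det_inv, Fraction(1, detA))   # both equal 1/det A
```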

Here's an easy application of this result.

*Corollary.* Let F be a field, and let $A, P \in M(n, F)$. If P is invertible, then $\det(P A P^{-1}) = \det A$.

*Proof.*
$$\det(P A P^{-1}) = \det(P) \det(A) \det(P^{-1}) = \det(P) \det(A) \frac{1}{\det P} = \det A. \quad \blacksquare$$

(The matrices $P A P^{-1}$ and A are said to be *similar*. This will come up later when I talk about changing coordinates in vector spaces.)
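To see the corollary in action, here is a small 2×2 conjugation check (a sketch; the matrices are my own, with P chosen to have an integer inverse):

```python
def det2(M):
    # 2x2 determinant.
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def mul2(A, B):
    # 2x2 matrix product.
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

P    = [[1, 2], [1, 3]]      # det P = 1, so P^{-1} has integer entries
Pinv = [[3, -2], [-1, 1]]
A    = [[4, 7], [0, 2]]
conj = mul2(mul2(P, A), Pinv)   # P A P^{-1}
print(det2(conj), det2(A))      # 8 8 : similar matrices, equal determinants
```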

*Definition.* Let R be a commutative ring with identity, and let $A \in M(n, R)$. The *classical adjoint* $\operatorname{adj} A$ is the matrix whose $(i, j)$-th entry is
$$(\operatorname{adj} A)_{ij} = (-1)^{i + j} \det A(j \mid i),$$

where $A(j \mid i)$ denotes the matrix obtained from A by deleting its j-th row and i-th column.

In other words, $\operatorname{adj} A$ is the transpose of the matrix of cofactors.

*Example.* Compute the adjoint of

First, I'll compute the cofactors. The first line shows the cofactors of the first row, the second line the cofactors of the second row, and the third line the cofactors of the third row.

The adjoint is the *transpose* of the matrix of cofactors:
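The same two-step recipe (cofactors, then transpose) is easy to mechanize; a sketch in Python (the sample matrix and the function names are mine):

```python
def det(M):
    # Determinant by recursive cofactor expansion along the first row.
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] *
               det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def adjoint(M):
    """Transpose of the matrix of cofactors."""
    n = len(M)
    cof = [[(-1) ** (i + j) *
            det([row[:j] + row[j + 1:] for k, row in enumerate(M) if k != i])
            for j in range(n)] for i in range(n)]
    return [[cof[j][i] for j in range(n)] for i in range(n)]   # transpose

A = [[1, 2, 0], [0, 1, 3], [1, 0, 1]]
print(adjoint(A))   # [[1, -2, 6], [3, 1, -3], [-1, 2, 1]]
```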

I'll need the following little lemma later on.

*Lemma.* Let R be a commutative ring with identity, and let $A \in M(n, R)$. Then
$$\operatorname{adj}(A^T) = (\operatorname{adj} A)^T.$$

*Proof.* Consider the $(i, j)$-th elements of the matrices on the two sides of the equation.

The signs $(-1)^{i + j}$ and $(-1)^{j + i}$ are the same; what about the other terms? The $(i, j)$-th cofactor of A involves the determinant of the matrix formed by deleting the i-th row and the j-th column from A. The $(j, i)$-th cofactor of $A^T$ involves the determinant of the matrix formed by deleting the j-th row and i-th column from $A^T$. But the j-th row of $A^T$ is the j-th column of A, and the i-th column of $A^T$ is the i-th row of A. So the two matrices that remain after these deletions are transposes of one another, and hence they have the same determinant. Thus, the $(j, i)$-th cofactor of $A^T$ equals the $(i, j)$-th cofactor of A. Hence, $\operatorname{adj}(A^T) = (\operatorname{adj} A)^T$. $\quad\blacksquare$

*Theorem.* Let R be a commutative ring with identity, and let $A \in M(n, R)$. Then
$$A \cdot \operatorname{adj} A = (\det A) I.$$

*Proof.* Expand $\det A$ by cofactors of row i:
$$\det A = \sum_{j = 1}^n (-1)^{i + j} a_{ij} \det A(i \mid j).$$

Take $k \neq i$, and construct a new matrix by replacing row k of A with row i. Since the new matrix has two equal rows, its determinant is 0. On the other hand, expanding by cofactors of row k,
$$0 = \sum_{j = 1}^n (-1)^{k + j} a_{ij} \det A(k \mid j).$$

In words, this means that if you expand by cofactors of a row using the cofactors from *another* row, you get 0. You only get $\det A$ if you use the "right" cofactors --- i.e. the cofactors for the given row. In terms of the Kronecker delta function,
$$\sum_{j = 1}^n (-1)^{k + j} a_{ij} \det A(k \mid j) = \delta_{ik} \det A.$$

Interpret this equation as a matrix equation, where the two sides represent the i-k-th entries of their respective matrices. What *are* the respective matrices? The right side is obviously the i-k-th entry of $(\det A) I$. For the left side,
$$\sum_{j = 1}^n a_{ij} \left[ (-1)^{j + k} \det A(k \mid j) \right] = \sum_{j = 1}^n a_{ij} (\operatorname{adj} A)_{jk} = (A \cdot \operatorname{adj} A)_{ik}.$$

Therefore,
$$A \cdot \operatorname{adj} A = (\det A) I. \quad \blacksquare$$

I can use the theorem to obtain an important corollary. I already
know that a matrix over a *field* is invertible if and only if
its determinant is nonzero. The next result explains what happens
over a commutative ring with identity, and also provides a formula
for the inverse of a matrix.

*Corollary.* Let R be a commutative ring with identity, and let $A \in M(n, R)$. A is invertible if and only if $\det A$ is invertible in R, in which case
$$A^{-1} = (\det A)^{-1} \operatorname{adj} A.$$

*Proof.* First, suppose A is invertible. Then $A A^{-1} = I$, so
$$\det(A) \det(A^{-1}) = \det(A A^{-1}) = \det I = 1.$$

Therefore, $\det A$ is invertible in R.

Conversely, suppose $\det A$ is invertible in R. I have
$$A \cdot \operatorname{adj} A = (\det A) I. \quad (1)$$

Take the last equation and replace A with $A^T$:
$$A^T \cdot \operatorname{adj}(A^T) = (\det A^T) I.$$

Use the facts that $\det A^T = \det A$ and (from the earlier lemma) that $\operatorname{adj}(A^T) = (\operatorname{adj} A)^T$:
$$A^T (\operatorname{adj} A)^T = (\det A) I.$$

Use the transpose rule $C^T B^T = (B C)^T$:
$$(\operatorname{adj} A \cdot A)^T = (\det A) I.$$

Finally, transpose both sides, using the transpose rules $(B^T)^T = B$ and $I^T = I$:
$$\operatorname{adj} A \cdot A = (\det A) I. \quad (2)$$

Equations (1) and (2) show that $(\det A)^{-1} \operatorname{adj} A$ multiplies A to I on both the left and right. Therefore, A is invertible, and
$$A^{-1} = (\det A)^{-1} \operatorname{adj} A. \quad \blacksquare$$

The last result can be used to find the inverse of a matrix. It's not very good for big matrices from a computational point of view: The usual row reduction algorithm uses fewer steps. However, it's not too bad for small matrices --- say $3 \times 3$ or smaller.
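The adjoint formula for the inverse can be coded directly, using exact rational arithmetic so the division by $\det A$ is exact (a sketch; the function names and sample matrix are mine):

```python
from fractions import Fraction

def det(M):
    # Determinant by recursive cofactor expansion along the first row.
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] *
               det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def minor(M, i, j):
    # M with row i and column j deleted.
    return [row[:j] + row[j + 1:] for k, row in enumerate(M) if k != i]

def inverse(M):
    """A^{-1} = (1/det A) adj A, where (adj A)[i][j] is the
    (j, i) cofactor of A (transpose of the cofactor matrix)."""
    n = len(M)
    d = Fraction(det(M))
    return [[(-1) ** (i + j) * det(minor(M, j, i)) / d
             for j in range(n)] for i in range(n)]

A = [[2, 1], [5, 3]]        # det A = 1
inv = inverse(A)
print(inv[0][0], inv[0][1], inv[1][0], inv[1][1])   # 3 -1 -5 2
```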

*Example.* Compute the inverse of the following real matrix using cofactors:

First, I'll compute the cofactors. The first line shows the cofactors of the first row, the second line the cofactors of the second row, and the third line the cofactors of the third row.

The adjoint is the *transpose* of the matrix of cofactors:

Since , I have

Another consequence of the adjoint formula is *Cramer's rule*, which gives a formula for the solution of a system of linear equations. As with the adjoint formula for the inverse of a matrix, Cramer's rule is not computationally efficient: It's better to use row reduction to solve systems.

*Corollary.* (*Cramer's rule*) If A is an invertible $n \times n$ matrix, the unique solution to $A x = y$ is given by
$$x_i = \frac{\det A_i}{\det A},$$

where $A_i$ is the matrix obtained from A by replacing its i-th column by y.

*Proof.* Since A is invertible,
$$x = A^{-1} y = \frac{1}{\det A} (\operatorname{adj} A)\, y, \quad\text{so}\quad x_i = \frac{1}{\det A} \sum_{j = 1}^n (\operatorname{adj} A)_{ij}\, y_j = \frac{1}{\det A} \sum_{j = 1}^n (-1)^{i + j} \det A(j \mid i)\, y_j.$$

What is the last sum? It is a cofactor expansion of A along column i, where instead of the elements of A's column i I'm using the components of y. This is exactly $\det A_i$. Hence, $x_i = \dfrac{\det A_i}{\det A}$. $\quad\blacksquare$

*Example.* Use Cramer's Rule to solve the following system over $\mathbb{R}$:

In matrix form, this is

I replace the successive columns of the coefficient matrix with y, in each case computing the determinant of the resulting matrix and dividing by the determinant of the coefficient matrix:

This looks pretty simple, doesn't it? But notice that you need to
compute *four* determinants to do this. It
becomes more expensive to solve systems this way as the matrices get
larger.
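Cramer's rule is a one-liner once you have a determinant routine; here is a sketch with a made-up 2×2 system (the system and names are my own):

```python
from fractions import Fraction

def det2(M):
    # 2x2 determinant.
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

# Solve A x = y by Cramer's rule: x_i = det(A_i) / det(A), where
# A_i is A with column i replaced by y.
A = [[2, 1], [1, 3]]      # 2x + y  = 5
y = [5, 10]               #  x + 3y = 10
d = det2(A)               # det A = 5
x = []
for i in range(2):
    Ai = [row[:] for row in A]
    for r in range(2):
        Ai[r][i] = y[r]   # replace column i by y
    x.append(Fraction(det2(Ai), d))
print(x)   # [Fraction(1, 1), Fraction(3, 1)], i.e. x = 1, y = 3
```

Note that this computes three determinants for a 2×2 system; for an $n \times n$ system it is $n + 1$ determinants, which is where the cost comes from.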

Copyright 2008 by Bruce Ikenaga