*Determinants* are functions which take matrices
as inputs and produce numbers. They are of enormous importance in
linear algebra, but perhaps you've also seen them in other courses.
They're used to define the *cross product* of
two 3-dimensional vectors. They appear in *Jacobians*,
which occur in the *change-of-variables formula for multiple integrals*.

In this section, I'll define determinants as functions satisfying
three axioms. I'll show that there is at least one determinant
function --- namely, one defined by *cofactor expansion*. I'll show
how you can use row operations and cofactor expansion to compute
determinants.

Later, I'll give a second formula for determinants based on
permutations. This will allow me to show that there is *only
one* determinant function satisfying the axioms. I'll also prove
many important properties of determinants, such as the multiplication
rule.

Determinants take as inputs square matrices with entries in R, where R is a commutative ring with identity. The set of $n \times n$ matrices with entries in R is denoted $M(n, R)$.

*Definition.* A *determinant function* is a function $D: M(n, R) \to R$ such that:

- D is a linear function in each row. That is, if x and y are row vectors and $s, t \in R$, then
$$D(r_1, \ldots, s x + t y, \ldots, r_n) = s\,D(r_1, \ldots, x, \ldots, r_n) + t\,D(r_1, \ldots, y, \ldots, r_n),$$
where only the row shown changes and all the other rows stay the same.

- A matrix with two equal rows has determinant 0: if $r_i = r_j$ for some $i \ne j$, then $D(A) = 0$.

- $D(I) = 1$, where I is the identity matrix.

*Example.* (*Linearity*)
The first axiom allows you to add or subtract, or move constants in
and out, *in a single row, assuming that all the other rows stay
the same*. Here's an addition example, with all the
action taking place in row 3:

Here's a subtraction example, with all the action taking place in row 2:

Here's a multiplication example, with all the action taking place in row 1:

Here's an example where you "take apart" a determinant using linearity. Notice that the first two rows are the same in all the matrices; all the action takes place in the third row:

Finally, here's a numerical example:

You might suspect that a matrix with an all-zero row has determinant 0, and it's easy to prove using linearity:
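In symbols, writing the zero row as 0 and using linearity to pull the scalar 0 out of that row:

$$D(r_1, \ldots, \vec{0}, \ldots, r_n) = D(r_1, \ldots, 0 \cdot \vec{0}, \ldots, r_n) = 0 \cdot D(r_1, \ldots, \vec{0}, \ldots, r_n) = 0.$$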

*Example.* (*Equal rows*)
The second axiom says that if a matrix has two equal rows, then its
determinant is 0. Here's an example:

*Lemma.* If $D: M(n, R) \to R$ is a
function which is linear in the rows (Axiom 1) and is 0 when a matrix
has equal rows (Axiom 2), then swapping two rows multiplies the value
of D by -1.

* Proof.* The proof will use the first and
second axioms repeatedly. The idea is to swap rows i and j by adding
or subtracting rows.

In the diagrams below, all the rows except the i-th and the j-th remain unchanged.

Notice that in each addition or subtraction step, only one row changes at a time.
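Schematically, writing only rows i and j, one such chain is the following. Each step before the last adds one row to another, which leaves D unchanged (expand by linearity; the extra term has two equal rows, so Axiom 2 kills it), and the last step pulls -1 out of a row by linearity:

$$D(\ldots, r_i, \ldots, r_j, \ldots) = D(\ldots, r_i + r_j, \ldots, r_j, \ldots) \quad\hbox{(add row j to row i)}$$
$$= D(\ldots, r_i + r_j, \ldots, -r_i, \ldots) \quad\hbox{(subtract row i from row j)}$$
$$= D(\ldots, r_j, \ldots, -r_i, \ldots) \quad\hbox{(add row j to row i)}$$
$$= -D(\ldots, r_j, \ldots, r_i, \ldots) \quad\hbox{(factor -1 out of row j).}$$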

*Remarks.* (a) I'll show later that it's
enough to assume (instead of Axiom 2) that D(A) vanishes whenever
two *adjacent* rows of A are equal.

(b) Suppose that $D: M(n, R) \to R$ is a function satisfying Axioms 1
and 3, and suppose that swapping two rows multiplies the value of D
by -1. Must D satisfy Axiom 2? In other words, is "swapping two rows
multiplies the value by -1" *equivalent* to "equal
rows means determinant 0"?

Assuming that swapping two rows multiplies the value of D by -1, I have
$$D(\ldots, x, \ldots, x, \ldots) = -D(\ldots, x, \ldots, x, \ldots).$$

(I swapped the two equal x-rows, which is why the matrix didn't change. But by assumption, this useless swap multiplies D by -1.)

Hence, $2\,D(A) = 0$, where A denotes the matrix with the two equal rows.

If R is $\mathbb{Z}$, $\mathbb{Q}$, $\mathbb{R}$, or $\mathbb{Z}_n$ for n prime and not equal to 2, then $2x = 0$ implies $x = 0$, so $D(A) = 0$. However, if $R = \mathbb{Z}_2$, then $x = -x$ for all x. Hence, $D(A) = -D(A)$ holds no matter what $D(A)$ is. Therefore, Axiom 2 need not hold. You can see, however, that it will hold if R is a field of characteristic other than 2.
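Here's a quick numerical illustration of the characteristic-2 phenomenon (the function below is my own, just for this check): in $\mathbb{Z}_2$ every element is its own negative, so the equation $D(A) = -D(A)$ puts no constraint at all on $D(A)$.

```python
# In Z_n, the negative of x is (-x) % n.
def negatives(n):
    """Return the pairs (x, -x mod n) for all x in Z_n."""
    return [(x, (-x) % n) for x in range(n)]

# In Z_2, every element equals its own negative, so x = -x always:
print(negatives(2))   # [(0, 0), (1, 1)]

# In Z_5 (5 is prime and not 2), only x = 0 satisfies x = -x:
print(negatives(5))   # [(0, 0), (1, 4), (2, 3), (3, 2), (4, 1)]
```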

Fortunately, since I take "equal rows means determinant 0"
as an *axiom* for determinants, and since the lemma shows that
this implies that "swapping rows multiplies the determinant by
-1", I know that *both* of these properties will hold for
determinant functions.

*Example.* (*Computing determinants using the axioms*) Suppose that D is a determinant function and

Compute

I want to construct a determinant function for $n \times n$ matrices. I'll begin by constructing one for $2 \times 2$ matrices; you may have seen this in (say) a multivariable calculus course.

*Proposition.* Define $D: M(2, R) \to R$ by
$$D\begin{bmatrix} a & b \\ c & d \end{bmatrix} = ad - bc.$$

Then D is a determinant function.

*Proof.* First,
$$D\begin{bmatrix} a & b \\ a & b \end{bmatrix} = ab - ba = 0.$$

Therefore, Axiom 2 holds.

Next, here is linearity in the first row (the second row is similar): for $s, t \in R$,
$$D\begin{bmatrix} s a + t a' & s b + t b' \\ c & d \end{bmatrix} = (s a + t a') d - (s b + t b') c = s (a d - b c) + t (a' d - b' c) = s\, D\begin{bmatrix} a & b \\ c & d \end{bmatrix} + t\, D\begin{bmatrix} a' & b' \\ c & d \end{bmatrix}.$$

Finally,
$$D\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = 1 \cdot 1 - 0 \cdot 0 = 1.$$
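The axioms for this formula can also be spot-checked numerically on sample values (a sanity check, not a proof; the numbers below are arbitrary):

```python
def det2(a, b, c, d):
    """Determinant of the 2x2 matrix [[a, b], [c, d]]."""
    return a * d - b * c

a, b, ap, bp, c, d, k = 2, 3, 5, 7, 11, 13, 4

# Axiom 1 (linearity in the first row), on sample values:
assert det2(a + ap, b + bp, c, d) == det2(a, b, c, d) + det2(ap, bp, c, d)
assert det2(k * a, k * b, c, d) == k * det2(a, b, c, d)

# Axiom 2 (equal rows give 0):
assert det2(a, b, a, b) == 0

# Axiom 3 (the identity has determinant 1):
assert det2(1, 0, 0, 1) == 1
```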

*Example.* On ,

*Example.* (*Determinants and elementary row operations*) How are determinants affected by
elementary row operations?

The lemma I proved earlier shows that swapping two rows multiplies the determinant by -1. By linearity, multiplying a row by a multiplies the determinant by a.

Consider the operation of adding a multiple of one row to another row. Suppose, for example, I'm performing the operation $r_2 \to r_2 + a r_1$. Then by linearity and Axiom 2,
$$D(r_1, r_2 + a r_1, r_3, \ldots) = D(r_1, r_2, r_3, \ldots) + a\,D(r_1, r_1, r_3, \ldots) = D(r_1, r_2, r_3, \ldots).$$

Therefore, this kind of row operation leaves the determinant unchanged.

For example,

If I swap the rows, I get

The determinant was multiplied by -1.

If I multiply the second row of the original matrix by 5, I get

This is 5 times the original determinant.

Finally, suppose I subtract 3 times row 1 from row 2. I get

The determinant was unchanged.
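The same three effects can be checked in code with the $2 \times 2$ formula $ad - bc$; the sample matrix below is an arbitrary one of my own:

```python
def det2(m):
    """Determinant of a 2x2 matrix given as [[a, b], [c, d]]."""
    (a, b), (c, d) = m
    return a * d - b * c

A = [[1, 2], [3, 4]]           # an arbitrary sample matrix
d = det2(A)                     # 1*4 - 2*3 = -2

# Swapping the rows multiplies the determinant by -1:
assert det2([A[1], A[0]]) == -d

# Multiplying the second row by 5 multiplies the determinant by 5:
assert det2([A[0], [5 * x for x in A[1]]]) == 5 * d

# Subtracting 3 times row 1 from row 2 leaves the determinant unchanged:
assert det2([A[0], [x - 3 * y for x, y in zip(A[1], A[0])]]) == d
```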

*Example.* (*Computing a determinant using row operations*) Show that

I'll prove the result by performing elementary row operations. Recall that adding a multiple of a row to another row doesn't change the determinant. So

When you define a mathematical object by axioms, two questions arise immediately:

1. (*Existence*) Are there any objects which
satisfy the axioms?

2. (*Uniqueness*) Is there more than one object
which satisfies the axioms?

I defined above a determinant function on $2 \times 2$ matrices. This
solves the *existence* problem for determinant functions on
$M(2, R)$, but I want to show that there is a determinant
function on $M(n, R)$ for every n. I'll do this inductively, using *expansion by cofactors*.

Later, I'll show that the determinant function defined in this way is
the *only* determinant function on $M(n, R)$. This solves the
*uniqueness* problem for determinants on $M(n, R)$.

Mathematicians often proceed in this way: Define an object by
identifying *properties* which characterize it, rather than
simply writing down a formula. I could have defined the determinant
by cofactor expansion, but it says more about what determinants
*really* are to *characterize* them by the axioms given
above.

Moreover, the axiomatic approach will provide a clean proof of properties such as multiplicativity. Try proving that from the cofactor expansion!

Earlier I discussed the connection between "swapping two rows multiplies the determinant by -1" and "when two rows are equal the determinant is 0". The next lemma is another piece of this picture. It says that for a function which is linear in the rows, if "when two *adjacent* rows are equal the determinant is 0", then "swapping two rows multiplies the determinant by -1". (Axiom 2 does not require that the two equal rows be adjacent.)

*Lemma.* Let $f: M(n, R) \to R$
be a function which is linear in each row and satisfies $f(A) = 0$ whenever two *adjacent* rows of A are equal. Then
swapping (any) two rows multiplies the value of f by -1.

*Proof.* First, I'll show that swapping two
adjacent rows multiplies the value of f by -1. I'll show the required
manipulations in schematic form:

To complete the proof, I must show that swapping two non-adjacent rows multiplies the value of f by -1. Since swapping adjacent rows multiplies the determinant by -1, it will suffice to show that swapping any two rows can be accomplished by an odd number of adjacent row swaps.

Without loss of generality, then, I may suppose the rows to be swapped are rows 1 and n. I'll indicate how to do the swaps by just displaying the row numbers. First, I do adjacent row swaps to move row 1 into the n-th position:

Now I do more swaps to move (the old) row n into the first position:

I've done a total of $(n - 1) + (n - 2) = 2n - 3$ swaps, an odd number. This proves the result.
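The swap count can be checked in code: the function below (my own illustration) bubbles row 1 down to position n with $n - 1$ adjacent swaps, then bubbles the old row n up to position 1 with $n - 2$ more, for $2n - 3$ swaps in all, which is odd.

```python
def swap_rows_1_and_n(n):
    """Swap positions 1 and n using only adjacent swaps; return (rows, count)."""
    rows = list(range(1, n + 1))   # row labels 1..n
    count = 0
    # Bubble row 1 down to the bottom (n - 1 adjacent swaps):
    for i in range(n - 1):
        rows[i], rows[i + 1] = rows[i + 1], rows[i]
        count += 1
    # The old row n is now one position above the bottom;
    # bubble it up to the top (n - 2 adjacent swaps):
    for i in range(n - 2, 0, -1):
        rows[i], rows[i - 1] = rows[i - 1], rows[i]
        count += 1
    return rows, count

for n in range(2, 8):
    rows, count = swap_rows_1_and_n(n)
    # Rows 1 and n traded places; rows 2..n-1 are back where they started.
    assert rows == [n] + list(range(2, n)) + [1]
    assert count == 2 * n - 3 and count % 2 == 1
```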

*Definition.* Let $A \in M(n, R)$. Let $A(i \mid j)$ be the matrix obtained
by deleting the i-th row and j-th column of A. If D is a determinant
function, then $D(A(i \mid j))$ is called the *(i, j)-th minor* of A.

The *(i, j)-th cofactor* of A is $(-1)^{i+j}$
times the minor, i.e. $(-1)^{i+j}\,D(A(i \mid j))$.

*Example.* Consider the real matrix

To find the (i, j)-th minor, strike out the i-th row and the j-th column (i.e. the row and column containing the (i, j)-th element):

Take the determinant of what's left:

To get the cofactor, multiply this by $(-1)^{i+j}$.

The easy way to remember whether to multiply by +1 or -1 is to make a checkerboard pattern of +'s and -'s:

Use the sign in the (i, j)-th position; it agrees with the sign computed using $(-1)^{i+j}$.
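In code, striking out a row and column and applying the $2 \times 2$ formula looks like this. The $3 \times 3$ sample matrix is my own, and the indices are 0-based, which leaves the checkerboard sign $(-1)^{i+j}$ unchanged (shifting both i and j by 1 doesn't change the parity of i + j):

```python
def minor_matrix(A, i, j):
    """Delete row i and column j (0-indexed) from the matrix A."""
    return [row[:j] + row[j + 1:] for k, row in enumerate(A) if k != i]

def det2(m):
    """Determinant of a 2x2 matrix [[a, b], [c, d]]."""
    (a, b), (c, d) = m
    return a * d - b * c

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 10]]   # an arbitrary sample matrix

# The (0, 1) minor: delete row 0 and column 1, then take the determinant.
M01 = det2(minor_matrix(A, 0, 1))   # det [[4, 6], [7, 10]] = 40 - 42 = -2
# The (0, 1) cofactor carries the checkerboard sign (-1)^(0+1) = -1:
C01 = (-1) ** (0 + 1) * M01         # = 2
print(M01, C01)                     # -2 2
```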

The next result says that I can use cofactors to extend a determinant function on $(n - 1) \times (n - 1)$ matrices to a determinant function on $n \times n$ matrices.

*Theorem.* Let C be a determinant function on
$M(n - 1, R)$. Fix a column j. For any $A \in M(n, R)$, define
$$D(A) = \sum_{i=1}^n (-1)^{i+j}\, a_{ij}\, C(A(i \mid j)).$$

Then D is a determinant function on $M(n, R)$.

Notice that the summation is on i, which is the row index. That means
you're moving down a column as you sum. Consequently, this is a *cofactor expansion by columns*.

*Proof.* I need to show that D is linear in
each row, that D is alternating (a matrix with two equal rows goes to 0), and that $D(I) = 1$.

*Linearity:* I'll prove linearity in row k; I'll write out the proof for sums, and the argument for moving scalars in and out of row k is similar. Let x and y be row vectors. I want to prove that
$$D(r_1, \ldots, x + y, \ldots, r_n) = D(r_1, \ldots, x, \ldots, r_n) + D(r_1, \ldots, y, \ldots, r_n).$$

(All the action is taking place in the k-th row --- the one with the x's and y's --- and the other rows are the same in the three matrices).

Label the three matrices above P (the one on the left), Q, and R (the two on the right).

Expand each of the D's in the equation using the cofactor summation. Since C is a determinant function, C is linear in each row. For $i \ne k$, then, the terms of the summation on the two sides clearly agree: the rows other than row k are the same in all three matrices, and C is linear in the row of each minor that came from row k.

Consider the terms generated on both sides when $i = k$. On the left, I have
$$(-1)^{k+j}\,(x_j + y_j)\, C(P(k \mid j)).$$

On the right, I have
$$(-1)^{k+j}\, x_j\, C(Q(k \mid j)) + (-1)^{k+j}\, y_j\, C(R(k \mid j)).$$

However,
$$C(P(k \mid j)) = C(Q(k \mid j)) = C(R(k \mid j)),$$

because P, Q, and R only differ in row k, which is being deleted.

Therefore, the terms on the left and right are the same for all i, and D is linear.

*Alternating:* I have to show that if two rows
of A are the same, then $D(A) = 0$. First, I'll show that if two
*adjacent* rows of A are the same, then $D(A) = 0$.

Suppose without loss of generality that rows 1 and 2 are equal. ("Without loss of generality" means that the same argument will work for any two adjacent rows.) All the terms in the cofactor expansion are 0 except the first and second, since in all other cases the determinant function C will be applied to a matrix with two equal rows.

The first and second terms are
$$(-1)^{1+j}\, a_{1j}\, C(A(1 \mid j)) \quad\hbox{and}\quad (-1)^{2+j}\, a_{2j}\, C(A(2 \mid j)).$$

Now $a_{1j} = a_{2j}$, since the first and second rows are equal. Likewise, since the rows are equal, I get the same matrix whether I delete the first row or the second. Therefore, $C(A(1 \mid j)) = C(A(2 \mid j))$. The only things left are the signs, and $(-1)^{1+j}$ and $(-1)^{2+j}$ have opposite signs. Therefore, the two terms cancel, the cofactor expansion is equal to 0, and $D(A) = 0$, as I wished to prove.

By applying the preceding lemma, I know that swapping two rows multiplies the value of D by -1.

Next, suppose two rows of A (which are not necessarily adjacent) are
equal. I can swap rows until I get a matrix B which has two equal
rows that *are* adjacent --- let's say that k swaps are needed
to do this. Then
$$D(A) = (-1)^k\, D(B) = (-1)^k \cdot 0 = 0.$$

This completes the proof that $D(A) = 0$ if A has two equal rows.

*The identity has determinant 1:* Suppose $A = I$. Notice that $a_{ij} = 0$ unless $i = j$. Therefore, the cofactor expansion of $D(I)$ down column j reduces to the single term
$$D(I) = (-1)^{j+j} \cdot 1 \cdot C(I(j \mid j)) = C(I(j \mid j)) = 1,$$

since $I(j \mid j)$ is the $(n - 1) \times (n - 1)$ identity matrix and C is a determinant function.

*Notation.* If A is an $n \times n$ matrix, the determinant of A (as defined by the
cofactor expansion) will be denoted $\det A$ or $|A|$.

*Example.* (*Computing a determinant by cofactors*) You can use cofactor expansion to
compute determinants. I'll show later on that $\det A^T = \det A$. Since transposing sends rows to columns, this means
that *you can expand by cofactors of rows as well as columns*.

You can expand along any row or column you want, but it's usually good to pick one with lots of 0's. For example, consider the following real matrix. Expanding along the second column, I get
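As a general check on hand computations like these, here is a minimal recursive implementation of cofactor expansion down the first column (the base case is a $1 \times 1$ matrix); the test matrices are my own samples.

```python
def det(A):
    """Determinant by cofactor expansion down the first column."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for i in range(n):
        # Delete row i and column 1 to get the minor's matrix A(i|1).
        sub = [row[1:] for k, row in enumerate(A) if k != i]
        # With 0-indexed i, (-1)**i matches the 1-indexed sign (-1)^(i+1).
        total += (-1) ** i * A[i][0] * det(sub)
    return total

assert det([[1, 0], [0, 1]]) == 1                       # D(I) = 1
assert det([[1, 2], [3, 4]]) == -2
assert det([[2, 0, 0], [0, 3, 0], [0, 0, 4]]) == 24     # diagonal: product
assert det([[1, 2, 3], [1, 2, 3], [4, 5, 6]]) == 0      # equal rows give 0
```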

*Example.* (*Computing a determinant using row operations and cofactors*) You can use row
operations to simplify a matrix before expanding by cofactors. For
example, I'll compute the determinant of the following matrix:

Again anticipating the later result that $\det A^T = \det A$, it
follows that *you can perform elementary column operations on a
matrix when computing its determinant*. But be careful! Column
operations are *not* permitted in most other situations.

Copyright 2008 by Bruce Ikenaga