# Determinants

The determinant of a square matrix is a number computed from the entries of the matrix. If A is a square matrix, the determinant of A is denoted by $\det A$ or $|A|$.

For $2 \times 2$ matrices, the formula is

$$\begin{vmatrix} a & b \\ c & d \end{vmatrix} = a d - b c$$

To remember this, multiply down the main diagonal to get $a d$, then subtract the product $b c$ along the other diagonal.

Example. Compute the determinant of .
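The $2 \times 2$ formula can be sketched in a couple of lines of Python. (The matrix from the example isn't reproduced here, so this uses made-up entries.)

```python
def det2(a, b, c, d):
    """Determinant of the 2x2 matrix [[a, b], [c, d]]: ad - bc."""
    return a * d - b * c

# A sample 2x2 matrix [[3, 1], [4, 2]]:
print(det2(3, 1, 4, 2))  # 3*2 - 1*4 = 2
```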

For matrices which are $3 \times 3$ or larger, you can compute the determinant using a recursive algorithm called expansion by cofactors.

Example. Compute the determinant of

First, pick any row or column. It's usually good to pick a row or column with lots of zeros. I'll use column 2.

Go down column 2 one element at a time. For each element:

1. Cross out the row and column containing the element to leave a $2 \times 2$ matrix.

2. Find the product of the element, the determinant of the $2 \times 2$ matrix, and a plus or minus sign. The sign is determined by a "checkerboard pattern":

$$\begin{pmatrix} + & - & + \\ - & + & - \\ + & - & + \end{pmatrix}$$

The determinant is the sum of the products.
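The steps above can be sketched as a short recursive function. This version expands along the first row rather than column 2, and the matrices in the usage lines are made-up examples, not the one from the text:

```python
def det(m):
    """Determinant by cofactor expansion along the first row.

    m is a list of rows; works for any square size."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for j in range(n):
        # Cross out row 0 and column j to get the smaller matrix.
        minor = [row[:j] + row[j+1:] for row in m[1:]]
        # Checkerboard sign: + for even column indices, - for odd.
        sign = (-1) ** j
        total += sign * m[0][j] * det(minor)
    return total

print(det([[1, 2], [3, 4]]))                    # 1*4 - 2*3 = -2
print(det([[2, 0, 1], [1, 3, -1], [0, 5, 4]]))  # 39
```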

I'll work through the steps one element at a time. Cross out the row and column containing -1:

Compute the determinant:

Multiply the element, the determinant, and a minus sign:

Cross out the row and column containing 5:

Compute the determinant:

Multiply the element, the determinant, and a plus sign:

Cross out the row and column containing 0:

Compute the determinant:

Multiply the element, the determinant, and a minus sign:

The total is .

As an exercise, try expanding the determinant of this matrix using another row or column and see if you get the same answer.
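You can check this kind of thing numerically. The full matrix from the example isn't reproduced above, so the sketch below uses a stand-in whose second column happens to have the entries $-1$, 5, 0; it expands once along row 1 and once along column 2 and compares:

```python
def det2(m):
    """Determinant of a 2x2 matrix given as a list of rows."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def minor(m, i, j):
    """Delete row i and column j."""
    return [row[:j] + row[j+1:] for k, row in enumerate(m) if k != i]

M = [[2, -1, 3], [1, 5, 0], [4, 0, -2]]  # a made-up 3x3 matrix

# Expansion along the first row (signs +, -, +):
by_row1 = sum((-1) ** j * M[0][j] * det2(minor(M, 0, j)) for j in range(3))
# Expansion along the second column (signs -, +, -):
by_col2 = sum((-1) ** (i + 1) * M[i][1] * det2(minor(M, i, 1)) for i in range(3))

print(by_row1, by_col2)  # -82 -82 -- the same value either way
```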

Example. Compute .

I'll expand by cofactors of the first row:

Notice that expansion by cofactors reduces the computation of an $n \times n$ determinant to the computation of $n$ determinants of $(n - 1) \times (n - 1)$ matrices.

There are other approaches you can use to compute determinants. For instance, in linear algebra you learn about row reduction, which provides a fairly efficient way to compute determinants for larger matrices.
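A minimal sketch of the row-reduction approach, using Python's `fractions` module for exact arithmetic: reduce the matrix to upper-triangular form, then the determinant is $(-1)^{\text{swaps}}$ times the product of the diagonal entries.

```python
from fractions import Fraction

def det_by_row_reduction(m):
    """Determinant via Gaussian elimination, tracking row swaps."""
    a = [[Fraction(x) for x in row] for row in m]
    n = len(a)
    sign = 1
    for col in range(n):
        # Find a nonzero pivot in this column at or below the diagonal.
        pivot = next((r for r in range(col, n) if a[r][col] != 0), None)
        if pivot is None:
            return Fraction(0)      # no pivot => determinant is 0
        if pivot != col:
            a[col], a[pivot] = a[pivot], a[col]
            sign = -sign            # a row swap flips the sign
        for r in range(col + 1, n):
            factor = a[r][col] / a[col][col]
            # Adding a multiple of one row to another leaves |A| unchanged.
            a[r] = [x - factor * y for x, y in zip(a[r], a[col])]
    result = Fraction(sign)
    for i in range(n):
        result *= a[i][i]
    return result

print(det_by_row_reduction([[2, 0, 1], [1, 3, -1], [0, 5, 4]]))  # 39
```

For large matrices this takes roughly $n^3$ arithmetic operations, versus roughly $n!$ for cofactor expansion, which is why row reduction is preferred in practice.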

Here are some properties of determinants.

Proposition. (a) If A has two equal rows, then $|A| = 0$.

(b) If two rows of A are swapped, the value of the determinant is multiplied by $-1$.

(c) The determinant of a sum is the sum of the determinants, one row at a time:

$$\begin{vmatrix} \text{FOO} \\ u + v \\ \text{BAR} \end{vmatrix} = \begin{vmatrix} \text{FOO} \\ u \\ \text{BAR} \end{vmatrix} + \begin{vmatrix} \text{FOO} \\ v \\ \text{BAR} \end{vmatrix}$$

(The parts of the matrices labelled "FOO" and "BAR" are the same in all 3 matrices: They don't change. The sum occurs in a single row.)

(d) A number may be factored out of one row of a determinant at a time:

$$\begin{vmatrix} \text{FOO} \\ k \cdot u \\ \text{BAR} \end{vmatrix} = k \begin{vmatrix} \text{FOO} \\ u \\ \text{BAR} \end{vmatrix}$$

(e) The determinant of a product is the product of the determinants:

$$|A B| = |A| \, |B|$$

(f) The determinant of the identity matrix is 1:

$$|I| = 1$$

Proof. The proofs that these properties hold for arbitrary matrices are fairly involved; you'd see them in a course in linear algebra.

I'll verify that a couple of the properties hold in some special cases.

As an example of (a), here's the determinant of a matrix with two equal rows, which I'm computing by expanding by cofactors of row 1:

Here's an example of (e) with specific matrices.
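The properties can also be spot-checked numerically. Here's a sketch using made-up $2 \times 2$ matrices:

```python
def det2(m):
    """Determinant of a 2x2 matrix given as a list of rows."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def matmul2(a, b):
    """Product of two 2x2 matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [3, 5]]
B = [[2, -1], [4, 7]]

# (a) two equal rows give determinant 0
assert det2([[1, 2], [1, 2]]) == 0
# (b) swapping the rows of A multiplies the determinant by -1
assert det2([[3, 5], [1, 2]]) == -det2(A)
# (d) factoring 4 out of the first row of A
assert det2([[4 * 1, 4 * 2], [3, 5]]) == 4 * det2(A)
# (e) the determinant of a product is the product of the determinants
assert det2(matmul2(A, B)) == det2(A) * det2(B)
# (f) the determinant of the identity matrix is 1
assert det2([[1, 0], [0, 1]]) == 1

print("all properties check out")
```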

Some of these properties are illustrated in the following examples.

Example. Suppose that

Compute:

(a) .

(b) .

(c) .

(a) Swapping two rows multiplies the determinant by -1:

(b) Factor 3 out of the first row, then factor 2 out of the second row:

(c) Break up the determinant using a sum of and in the first row. Factor 4 out of the first determinant; the second determinant is 0 because the matrix has two equal rows.

Example. Give specific matrices A and B for which $|A + B| \neq |A| + |B|$.
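Assuming the intended point is that $|A + B| \neq |A| + |B|$ in general (the determinant of a sum of matrices is *not* the sum of the determinants), one concrete choice is $A = B = I$:

```python
def det2(m):
    """Determinant of a 2x2 matrix given as a list of rows."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

A = [[1, 0], [0, 1]]  # the 2x2 identity matrix
B = [[1, 0], [0, 1]]
A_plus_B = [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

print(det2(A) + det2(B))  # 1 + 1 = 2
print(det2(A_plus_B))     # |2I| = 4, not 2
```

Note that this does not contradict property (c), which allows a sum in only a single row at a time.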
