Determinants - Properties

In this section, we'll derive some properties of determinants. Two key results: The determinant of a matrix is equal to the determinant of its transpose, and the determinant of a product of two matrices is equal to the product of their determinants.

We'll also derive a formula involving the adjugate of a matrix. We'll use it to give a formula for the inverse of a matrix, and to derive Cramer's rule, a method for solving some systems of linear equations.

The first result is a corollary of the permutation formula for determinants which we derived earlier.

Corollary. Let R be a commutative ring with identity, and let $A \in
   M(n, R)$ . Then $|A| = |A^T|$ .

Proof. We'll use the permutation formula for the determinant, beginning with the determinant of $A^T$ .

$$|A^T| = \sum_{\sigma \in S_n} \sgn(\sigma) \prod_{i = 1}^n A_{i \sigma(i)}^T = \sum_{\sigma \in S_n} \sgn(\sigma) \prod_{i = 1}^n A_{\sigma(i)i} =$$

$$\sum_{\sigma \in S_n} \sgn(\sigma) \prod_{j=1}^n A_{j \sigma^{-1}(j)} = \sum_{\sigma \in S_n} \sgn\left(\sigma^{-1}\right) \prod_{j = 1}^n A_{j \sigma^{-1}(j)} =$$

$$\sum_{\sigma^{-1} \in S_n} \sgn\left(\sigma^{-1}\right) \prod_{j = 1}^n A_{j \sigma^{-1}(j)} = \sum_{\tau \in S_n} \sgn\left(\tau\right) \prod_{j = 1}^n A_{j \tau(j)} = |A|.$$

In going from the product of the $A_{\sigma(i) i}$ to the product of the $A_{j \sigma^{-1}(j)}$, I reindexed the product by setting $j = \sigma(i)$, so that $i = \sigma^{-1}(j)$; as i runs through $1, \ldots, n$, so does j. In the next equality I replaced $\sgn(\sigma)$ with $\sgn\left(\sigma^{-1}\right)$, which is valid because a permutation and its inverse have the same sign. After that, I went from summing over $\sigma$ in $S_n$ to summing over $\sigma^{-1}$ in $S_n$. This is valid because permutations are bijective functions, so they have inverse functions which are also permutations. So summing over all permutations in $S_n$ is the same as summing over all their inverses in $S_n$ --- you will get the same terms in the sum, just in a different order.

I got the next-to-the-last equality by letting $\tau = \sigma^{-1}$ . This just makes it easier to recognize the next-to-last expression as the permutation formula for $|A|$ .
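If you want to see the permutation formula in action, here's a small Python sketch --- my own addition, not part of these notes; the helper names sgn and det_permutation_formula are made up for this example --- that evaluates the formula directly and confirms that $|A| = |A^T|$ for a sample matrix.

```python
from itertools import permutations

def sgn(perm):
    """Sign of a permutation (given as a tuple), computed by counting inversions."""
    inversions = sum(1 for i in range(len(perm))
                       for j in range(i + 1, len(perm))
                       if perm[i] > perm[j])
    return -1 if inversions % 2 else 1

def det_permutation_formula(A):
    """det A = sum over sigma of sgn(sigma) * prod_i A[i][sigma(i)]."""
    n = len(A)
    total = 0
    for sigma in permutations(range(n)):
        term = sgn(sigma)
        for i in range(n):
            term *= A[i][sigma[i]]
        total += term
    return total

A = [[1, 2, 4],
     [1, -1, 0],
     [2, 3, -2]]                       # the matrix used in a later example
AT = [list(row) for row in zip(*A)]    # transpose of A

print(det_permutation_formula(A))      # 26
print(det_permutation_formula(AT))     # 26, agreeing with |A| = |A^T|
```

Of course, summing over all $n!$ permutations is hopeless for large n; the point is only to see the formula and the transpose property concretely.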

Remark. We've used row operations as an aid to computing determinants. Since the rows of A are the columns of $A^T$ and vice versa, the Corollary implies that you can also use column operations to compute determinants. The allowable operations are swapping two columns, multiplying a column by a number, and adding a multiple of a column to another column. They have the same effects on the determinant as the corresponding row operations.

This also means that you can compute determinants using cofactors of rows as well as columns.

In proving the uniqueness of determinant functions, we showed that if D is a function on $n
   \times n$ matrices which is alternating and linear on the rows, then $D(M) = (\det M) D(I)$ . We will use this to prove the product rule for determinants.

Theorem. Let R be a commutative ring with identity, and let $A, B
   \in M(n, R)$ . Then $|A B| = |A| |B|$ .

Proof. Fix B, and define

$$D(A) = |A B|.$$

I will show that D is alternating and linear, then apply a result I derived in showing uniqueness of determinant functions.

Let $r_i$ denote the i-th row of A. Then

$$D(A) = \left|\matrix{ \leftarrow & r_1B & \rightarrow \cr \leftarrow & r_2B & \rightarrow \cr & \vdots & \cr \leftarrow & r_nB & \rightarrow \cr}\right|.$$

Now $|\cdot|$ is alternating, so interchanging two rows in the determinant above multiplies $D(A)$ by -1. Hence, D is alternating.

Next, I'll show that D is linear:

$$D\left[\matrix{ & \vdots & \cr \leftarrow & k x + y & \rightarrow \cr & \vdots & \cr}\right] = \left|\matrix{ & \vdots & \cr \leftarrow & (k x + y) B & \rightarrow \cr & \vdots & \cr}\right| =$$

$$k \cdot \left|\matrix{ & \vdots & \cr \leftarrow & x B & \rightarrow \cr & \vdots & \cr}\right| + \left|\matrix{& \vdots & \cr \leftarrow & yB & \rightarrow \cr & \vdots & \cr}\right| = k \cdot D\left[\matrix{ & \vdots & \cr \leftarrow & x & \rightarrow \cr & \vdots & \cr}\right] + D\left[\matrix{ & \vdots & \cr \leftarrow & y & \rightarrow \cr & \vdots & \cr}\right].$$

In the second equality, I used $(k x + y) B = k (x B) + y B$ together with the linearity of $|\cdot|$ in each row. This proves that D is linear in each row.

Since D is a function on $M(n, R)$ which is alternating and linear in the rows, the result I mentioned earlier shows

$$D(A) = |A| D(I).$$

But $D(A) = |A B|$ and $D(I) = |I B| = |B|$ , so we get

$$|A B| = D(A) = |A| D(I) = |A| |B|.\quad\halmos$$

In other words, the determinant of a product is the product of the determinants. A similar result holds for powers.

Corollary. Let R be a commutative ring with identity, and let $A
   \in M(n, R)$ . Then for every $m \ge 0$ ,

$$|A^m| = |A|^m.$$

Proof. This follows from the previous result using induction. The result is obvious for $m = 0$ and $m
   = 1$ (note that $A^0 = I$ , the identity matrix), and the case $m = 2$ follows from the previous result if we take $B = A$ .

Suppose the result is true for m, so $|A^m| = |A|^m$ . We need to show that the result holds for $m + 1$ . We have

$$|A^{m + 1}| = |A^m A| = |A^m| |A| = |A|^m |A| = |A|^{m + 1}.$$

We used the case $m =
   2$ to get the second equality, and the induction assumption was used to get the third equality. This proves the result for $m + 1$ , so it holds for all $m \ge 0$ by induction.
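Here's a quick numerical check of the product and power rules --- my own addition, not part of the notes; the two matrices are arbitrary choices:

```python
import numpy as np

A = np.array([[1., 2., 4.],
              [1., -1., 0.],
              [2., 3., -2.]])
B = np.array([[2., 0., 1.],
              [1., 1., 0.],
              [0., 3., 1.]])

# |AB| = |A| |B|
print(np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B)))  # True

# |A^3| = |A|^3
print(np.isclose(np.linalg.det(np.linalg.matrix_power(A, 3)),
                 np.linalg.det(A) ** 3))                                      # True
```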

While the determinant of a product is the product of the determinants, the determinant of a sum is not necessarily the sum of the determinants.

Example. Give a specific example of $2 \times 2$ real matrices A and B for which $\det (A + B) \ne \det A +
   \det B$ .

Take $A = I$ and $B = -I$. Then

$$\det \left[\matrix{1 & 0 \cr 0 & 1 \cr}\right] = 1 \quad\hbox{and}\quad \det \left[\matrix{-1 & 0 \cr 0 & -1 \cr}\right] = 1, \quad\hbox{so}\quad \det A + \det B = 2.$$

But

$$\det\left(\left[\matrix{1 & 0 \cr 0 & 1 \cr}\right] + \left[\matrix{-1 & 0 \cr 0 & -1 \cr}\right]\right) = \det \left[\matrix{0 & 0 \cr 0 & 0 \cr}\right] = 0.\quad\halmos$$

The rule for products gives us an easy criterion for the invertibility of a matrix. First, I'll prove the result in the special case where the entries of the matrix are elements of a field.

Theorem. Let F be a field, and let $A \in M(n, F)$ .

A is invertible if and only if $|A| \ne 0$ .

Proof. If A is invertible, then

$$|A| |A^{-1}| = |A A^{-1}| = |I| = 1.$$

This equation implies that $|A| \ne 0$ (since $|A| = 0$ would yield "$0 = 1$ ").

Conversely, suppose that $|A| \ne 0$ . Suppose that A row reduces to the row reduced echelon matrix R, and consider the effect of elementary row operations on $|A|$ . Swapping two rows multiplies the determinant by -1. Adding a multiple of a row to another row leaves the determinant unchanged. And multiplying a row by a nonzero number multiplies the determinant by that nonzero number. Clearly, no row operation will make the determinant 0 if it was nonzero to begin with. Since $|A|
   \ne 0$ , it follows that $|R| \ne 0$ .

Since R is a row reduced echelon matrix with nonzero determinant, it can't have any all-zero rows. An $n \times n$ row reduced echelon matrix with no all-zero rows must be the identity, so $R = I$ . Since A row reduces to the identity, A is invertible.
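For matrices over the reals, you can see this criterion in action with NumPy. The following sketch is my own addition, not part of the notes; it builds a matrix whose second row is a multiple of the first, so its determinant is 0, and NumPy refuses to invert it:

```python
import numpy as np

A = np.array([[1., 2.],
              [2., 4.]])     # second row is twice the first, so |A| = 0

print(np.linalg.det(A))      # 0.0 (up to roundoff)

try:
    np.linalg.inv(A)
except np.linalg.LinAlgError as e:
    print("not invertible:", e)   # NumPy refuses to invert a singular matrix
```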

Corollary. Let F be a field, and let $A \in M(n, F)$ . If A is invertible, then

$$|A^{-1}| = |A|^{-1}.$$

Proof. I showed in proving the theorem that $|A| |A^{-1}| = 1$ , so $|A^{-1}| = |A|^{-1}$ .

We'll see below what happens if we have a commutative ring with identity instead of a field.

The next example uses the determinant properties we've derived.

Example. Suppose A, B, and C are $n \times n$ matrices over $\real$ , and

$$|A| = 18, \quad |B| = 5, \quad |C| = 3.$$

Compute $|A^T B^2
   C^{-1}|$ .

We have $|A^T| = |A| =
   18$ and $|C^{-1}| = \dfrac{1}{|C|} =
   \dfrac{1}{3}$ . Using the product rule for determinants,

$$|A^T B^2 C^{-1}| = |A^T| |B|^2 |C^{-1}| = 18 \cdot 5^2 \cdot \dfrac{1}{3} = 150.\quad\halmos$$

Definition. Let R be a commutative ring with identity. Matrices $A,
   B \in M(n, R)$ are similar if there is an invertible matrix $P \in M(n, R)$ such that $P A P^{-1} = B$ .

Similar matrices come up in many places, for instance in changing bases for vector spaces.

Corollary. Let R be a commutative ring with identity. Similar matrices in $M(n, R)$ have equal determinants.

Proof. Suppose A and B are similar, so $P A P^{-1} = B$ for some invertible matrix P. Then

$$|B| = |P A P^{-1}| = |P| |A| |P^{-1}| = |P| |P^{-1}| |A| = |P P^{-1}| |A| = |I| |A| = |A|.$$

In the third equality, I used the fact that $|P^{-1}|$ and $|A|$ are numbers --- elements of the ring R --- and multiplication in R is commutative. That allows me to commute $|P^{-1}|$ and $|A|$ .
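A quick numerical illustration --- my own addition; the matrices A and P below are arbitrary, with P chosen invertible:

```python
import numpy as np

A = np.array([[1., 2.], [3., 4.]])
P = np.array([[2., 1.], [1., 1.]])          # det P = 1, so P is invertible
B = P @ A @ np.linalg.inv(P)                # B is similar to A

print(np.isclose(np.linalg.det(B), np.linalg.det(A)))   # True: both equal -2
```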

Definition. Let R be a commutative ring with identity, and let $A
   \in M(n, R)$ . The adjugate $\adj A$ is the matrix whose i-j-th entry is

$$(\adj A)_{i j} = (-1)^{i + j} |A(j \mid i)|.$$

In other words, $\adj
   A$ is the transpose of the matrix of cofactors.

Remark. In the past, $\adj A$ was referred to as the adjoint, or the classical adjoint. But the term "adjoint" is now used to refer to something else: The conjugate transpose, which we'll see when we discuss the spectral theorem. So the term "adjugate" has come to replace it for the matrix defined above. One advantage of the word "adjugate" is that you can use the same abbreviation "adj" as was used for "adjoint"!

Example. Compute the adjugate of

$$A = \left[\matrix{ 1 & 0 & 3 \cr 0 & 1 & 1 \cr 1 & -1 & 2 \cr}\right].$$

First, I'll compute the cofactors. The first line shows the cofactors of the first row, the second line the cofactors of the second row, and the third line the cofactors of the third row.

$$+\left|\matrix{1 & 1 \cr -1 & 2 \cr}\right| = 3, \quad -\left|\matrix{0 & 1 \cr 1 & 2 \cr}\right| = 1, \quad +\left|\matrix{0 & 1 \cr 1 & -1 \cr}\right| = -1,$$

$$-\left|\matrix{0 & 3 \cr -1 & 2 \cr}\right| = -3, \quad +\left|\matrix{1 & 3 \cr 1 & 2 \cr}\right| = -1, \quad -\left|\matrix{1 & 0 \cr 1 & -1 \cr}\right| = 1,$$

$$+\left|\matrix{0 & 3 \cr 1 & 1 \cr}\right| = -3, \quad -\left|\matrix{1 & 3 \cr 0 & 1 \cr}\right| = -1, \quad +\left|\matrix{1 & 0 \cr 0 & 1 \cr}\right| = 1.$$

The adjugate is the transpose of the matrix of cofactors:

$$\adj A = \left[\matrix{ 3 & -3 & -3 \cr 1 & -1 & -1 \cr -1 & 1 & 1 \cr}\right].\quad\halmos$$
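Here's a short Python sketch --- my own addition, not part of the notes; the helpers minor, det, and adjugate are names I made up --- that computes the adjugate straight from the definition and reproduces the matrix above.

```python
def minor(A, i, j):
    """The matrix A(i|j): delete row i and column j of A."""
    return [row[:j] + row[j + 1:] for k, row in enumerate(A) if k != i]

def det(A):
    """Determinant by cofactor expansion along the first row."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j)) for j in range(len(A)))

def adjugate(A):
    """(adj A)[i][j] = (-1)^(i+j) * det A(j|i): transpose of the cofactor matrix."""
    n = len(A)
    return [[(-1) ** (i + j) * det(minor(A, j, i)) for j in range(n)]
            for i in range(n)]

A = [[1, 0, 3],
     [0, 1, 1],
     [1, -1, 2]]
print(adjugate(A))   # [[3, -3, -3], [1, -1, -1], [-1, 1, 1]], matching the example
```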

The next result shows that adjugates and transposes can be interchanged: The adjugate of the transpose equals the transpose of the adjugate.

Proposition. Let R be a commutative ring with identity, and let $A
   \in M(n, R)$ . Then

$$(\adj A)^T = \adj A^T.$$

Proof. Consider the $(i, j)^{\rm th}$ elements of the matrices on the two sides of the equation.

$$[(\adj A)^T]_{i j} = (\adj A)_{j i} = (-1)^{j + i} |A(i \mid j)|,$$

$$[\adj A^T]_{i j} = (-1)^{i + j} |A^T(j \mid i)|.$$

The signs $(-1)^{j +
   i}$ and $(-1)^{i + j}$ are the same; what about the other terms? $|A(i \mid j)|$ is the determinant of the matrix formed by deleting the $i^{\rm th}$ row and the $j^{\rm th}$ column from A. And $|A^T(j \mid i)|$ is the determinant of the matrix formed by deleting the $j^{\rm
   th}$ row and $i^{\rm th}$ column from $A^T$ . But the $i^{\rm th}$ row of A is the $i^{\rm th}$ column of $A^T$ , and the $j^{\rm th}$ column of A is the $j^{\rm th}$ row of $A^T$ . So the two matrices that remain after these deletions are transposes of one another, and hence they have the same determinant. Thus, $|A(i \mid j)| = |A^T(j \mid i)|$ . Hence, $[(\adj A)^T]_{i j} = [\adj A^T]_{i j}$ .

The next theorem is very important, but the proof is a little tricky. So I'll discuss the main point in the proof first by giving an example.

Suppose we compute the following determinant over $\real$ using expansion by cofactors on the $3^{\rm rd}$ row:

$$\left|\matrix{ 1 & 2 & 4 \cr 1 & -1 & 0 \cr 2 & 3 & -2 \cr}\right| = (2) \left|\matrix{2 & 4 \cr -1 & 0 \cr}\right| - (3) \left|\matrix{1 & 4 \cr 1 & 0 \cr}\right| + (-2) \left|\matrix{1 & 2 \cr 1 & -1 \cr}\right| =$$

$$(2)(4) - (3)(-4) + (-2)(-3) = 8 + 12 + 6 = 26.$$

As usual, I multiplied the cofactors of the $3^{\rm rd}$ row by the elements of the $3^{\rm rd}$ row.

Now suppose I make a mistake: I multiply the cofactors of the $3^{\rm rd}$ row by elements of the $1^{\rm st}$ row (which are 1, 2, 4). Here's what I get:

$$(1) \left|\matrix{2 & 4 \cr -1 & 0 \cr}\right| - (2) \left|\matrix{1 & 4 \cr 1 & 0 \cr}\right| + (4) \left|\matrix{1 & 2 \cr 1 & -1 \cr}\right| =$$

$$(1)(4) - (2)(-4) + (4)(-3) = 4 + 8 - 12 = 0.$$

Or suppose I multiply the cofactors of the $3^{\rm rd}$ row by elements of the $2^{\rm nd}$ row (which are 1, -1, 0). Here's what I get:

$$(1) \left|\matrix{2 & 4 \cr -1 & 0 \cr}\right| - (-1) \left|\matrix{1 & 4 \cr 1 & 0 \cr}\right| + (0) \left|\matrix{1 & 2 \cr 1 & -1 \cr}\right| =$$

$$(1)(4) - (-1)(-4) + (0)(-3) = 4 - 4 + 0 = 0.$$

These examples suggest that if I try to do a cofactor expansion by using the cofactors of one row multiplied by the elements from another row, I get 0. It turns out that this is true in general, and is the key step in the next proof.
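Here's the same experiment carried out in Python --- my own addition; the helper names are made up --- for the matrix above: dotting the cofactors of row 3 with each of the three rows gives 0, 0, and 26.

```python
def minor(A, i, j):
    """Delete row i and column j of A."""
    return [row[:j] + row[j + 1:] for k, row in enumerate(A) if k != i]

def det2(M):
    """Determinant of a 2 x 2 matrix."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

A = [[1, 2, 4],
     [1, -1, 0],
     [2, 3, -2]]

# Cofactors of row 3 (index 2): (-1)^(2+j) * |A(2|j)|
cof_row3 = [(-1) ** (2 + j) * det2(minor(A, 2, j)) for j in range(3)]

for k in range(3):
    # Dot the entries of row k with the cofactors of row 3.
    print(sum(A[k][j] * cof_row3[j] for j in range(3)))
# Prints 0, 0, 26: only the "matching" row reproduces |A|.
```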

Theorem. Let R be a commutative ring with identity, and let $A
   \in M(n, R)$ . Then

$$|A| \cdot I = A \cdot \adj A.$$

Proof. This proof is a little tricky, so you may want to skip it for now.

We expand $|A|$ by cofactors of row i:

$$|A| = \sum_j (-1)^{i + j} A_{i j} |A(i \mid j)|.$$

First, suppose $k \ne
   i$ . Construct a new matrix B by replacing row k of A with row i of A. Thus, the elements of B are the same as those of A, except that B's row k duplicates A's row i.

[Figure: the matrix B, which agrees with A except that its row k has been replaced by a copy of A's row i.]

In symbols,

$$B_{l j} = \cases{ A_{l j} & if $l \ne k$ \cr A_{i j} & if $l = k$ \cr}$$

Suppose we compute $\det B$ by expanding by cofactors of row k. We get

$$\sum_{j = 1}^n (-1)^{k + j} B_{k j} |B(k \mid j)| = \sum_{j = 1}^n (-1)^{k + j} A_{i j} |A(k \mid j)|.$$

Why is $|B(k \mid j)|
   = |A(k \mid j)|$ ? To compute $|B(k \mid j)|$ , you delete row k and column j from B. To compute $|A(k \mid j)|$ , you delete row k and column j from A. But A and B only differ in row k, which is being deleted in both cases. Hence, $|B(k \mid j)| = |A(k \mid
   j)|$ .

[Figure: deleting row k and column j from A and from B leaves the same matrix, since A and B differ only in row k.]

On the other hand, B has two equal rows --- its row i and row k are both equal to row i of A --- so the determinant of B is 0. Hence,

$$\sum_{j = 1}^n (-1)^{k + j} A_{i j} |A(k \mid j)| = 0.$$

This is the point we illustrated prior to stating the theorem: if you do a cofactor expansion by using the cofactors of one row multiplied by the elements from another row, you get 0. The last equation is what we get for $k \ne
   i$ . In case $k = i$ , we just get the cofactor expansion for $|A|$ :

$$\sum_{j = 1}^n (-1)^{i + j} A_{i j} |A(i \mid j)| = |A|.$$

We can combine the two equations into one using the Kronecker delta function:

$$\sum_j (-1)^{k + j} A_{i j} |A(k \mid j)| = \delta_{i k} |A| \quad\hbox{for all}\quad i, k.$$

Remember that $\delta_{i k} = 1$ if $i = k$ , and $\delta_{i k} = 0$ if $i \ne k$ . These are the two cases above.

Interpret this equation as a matrix equation, where the two sides represent the $(i,
   k)$ -th entries of their respective matrices. What are the respective matrices? Since $\delta_{i k}$ is the $(i, k)$ -th entry of the identity matrix, the right side is the $(i, k)$ -th entry of $|A| \cdot I$ .

The left side is the $(i, k)$ -th entry of $A \cdot \adj A$ , because

$$(A \cdot \adj A)_{i k} = \sum_j A_{i j} (\adj A)_{j k} = \sum_j A_{i j} (-1)^{j + k} |A(k \mid j)|.$$

Therefore,

$$|A| \cdot I = A \cdot \adj A.\quad\halmos$$

I can use the theorem to obtain an important corollary. I already know that a matrix over a field is invertible if and only if its determinant is nonzero. The next result explains what happens over a commutative ring with identity, and also provides a formula for the inverse of a matrix.

Corollary. Let R be a commutative ring with identity. A matrix $A
   \in M(n, R)$ is invertible if and only if $|A|$ is invertible in R, in which case

$$A^{-1} = |A|^{-1} \adj A.$$

Proof. First, suppose A is invertible. Then $A A^{-1} = I$ , so

$$|A||A^{-1}| = |A A^{-1}| = |I| = 1.$$

Therefore, $|A|$ is invertible in R.

Since $|A|$ is invertible, I can take the equation $|A| \cdot I = A \cdot \adj
   A$ and multiply by $|A|^{-1}$ to get

$$I = A \cdot |A|^{-1} \adj A.$$

This implies that $A^{-1} = |A|^{-1} \adj A$ .

Conversely, suppose $|A|$ is invertible in R. As before, I get

$$I = A \cdot |A|^{-1} \adj A.$$

Again, this shows that $|A|^{-1} \adj A$ is a right inverse for A. It is also a left inverse: apply the theorem to $A^T$ and transpose both sides, using $\adj A^T = (\adj A)^T$; this gives $(\adj A) \cdot A = |A| \cdot I$, and multiplying by $|A|^{-1}$ gives $|A|^{-1} (\adj A) \cdot A = I$. Hence, $A^{-1} = |A|^{-1} \adj A$, and A is invertible.

As a special case, we get the formula for the inverse of a $2 \times 2$ matrix.

Corollary. Let R be a commutative ring with identity. Suppose $a,
   b, c, d \in R$ , and $a d - b c$ is invertible in R. Then

$$\left[\matrix{a & b \cr c & d \cr}\right]^{-1} = (a d - b c)^{-1} \left[\matrix{d & -b \cr -c & a \cr}\right].$$

Proof.

$$\det \left[\matrix{a & b \cr c & d \cr}\right] = a d - b c \quad\hbox{and}\quad \adj \left[\matrix{a & b \cr c & d \cr}\right] = \left[\matrix{d & -b \cr -c & a \cr}\right].$$

Hence, the result follows from the adjugate formula.
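You can also confirm the $2 \times 2$ formula symbolically. Here's a SymPy sketch --- my own addition, not part of the notes --- comparing SymPy's inverse of a general $2 \times 2$ matrix with the adjugate formula:

```python
import sympy as sp

a, b, c, d = sp.symbols('a b c d')
M = sp.Matrix([[a, b], [c, d]])

formula = sp.Matrix([[d, -b], [-c, a]]) / (a*d - b*c)

# The difference simplifies to the zero matrix, confirming the formula.
print((M.inv() - formula).applyfunc(sp.simplify))   # Matrix([[0, 0], [0, 0]])
```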

To see the difference between the general case of a commutative ring with identity and a field, consider the following matrices over $\integer_6$ :

$$\left[\matrix{5 & 3 \cr 1 & 1 \cr}\right] \quad\hbox{and}\quad \left[\matrix{2 & 1 \cr 1 & 3 \cr}\right]$$

In the first case,

$$\det \left[\matrix{5 & 3 \cr 1 & 1 \cr}\right] = 2.$$

2 is not invertible in $\integer_6$ --- do you know how to prove it? Hence, even though the determinant is nonzero, the matrix is not invertible.

$$\det \left[\matrix{2 & 1 \cr 1 & 3 \cr}\right] = 5.$$

5 is invertible in $\integer_6$ --- in fact, $5 \cdot 5 = 1$ . Hence, the second matrix is invertible. You can find the inverse using the formula in the last corollary.
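Here's the computation in $\integer_6$ carried out in Python --- my own addition, not from the notes; inv_mod is a brute-force helper I wrote for this sketch:

```python
def inv_mod(a, n):
    """Return the inverse of a modulo n, if it exists."""
    for x in range(1, n):
        if (a * x) % n == 1:
            return x
    raise ValueError(f"{a} is not invertible mod {n}")

n = 6
A = [[2, 1],
     [1, 3]]

det_A = (A[0][0] * A[1][1] - A[0][1] * A[1][0]) % n      # 5
adj_A = [[A[1][1], -A[0][1]],
         [-A[1][0], A[0][0]]]                            # [[3, -1], [-1, 2]]

d_inv = inv_mod(det_A, n)                                # 5, since 5*5 = 25 = 1 mod 6
A_inv = [[(d_inv * entry) % n for entry in row] for row in adj_A]
print(A_inv)   # [[3, 1], [1, 4]]
```

You can check by hand that multiplying the original matrix by the computed inverse gives the identity mod 6.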

The adjugate formula can be used to find the inverse of a matrix. It's not very good for big matrices from a computational point of view: The usual row reduction algorithm uses fewer steps. However, it's not too bad for small matrices --- say $3 \times 3$ or smaller.

Example. Compute the inverse of the following real matrix using the adjugate formula.

$$A = \left[\matrix{ 1 & -2 & -2 \cr 3 & -2 & 0 \cr 1 & 1 & 1 \cr}\right].$$

First, I'll compute the cofactors. The first line shows the cofactors of the first row, the second line the cofactors of the second row, and the third line the cofactors of the third row. I'm showing the "checkerboard" pattern of pluses and minuses as well.

$$+\left|\matrix{-2 & 0 \cr 1 & 1 \cr}\right| = -2, \quad -\left|\matrix{3 & 0 \cr 1 & 1 \cr}\right| = -3, \quad +\left|\matrix{3 & -2 \cr 1 & 1 \cr}\right| = 5,$$

$$-\left|\matrix{-2 & -2 \cr 1 & 1 \cr}\right| = 0, \quad +\left|\matrix{1 & -2 \cr 1 & 1 \cr}\right| = 3, \quad -\left|\matrix{1 & -2 \cr 1 & 1 \cr}\right| = -3,$$

$$+\left|\matrix{-2 & -2 \cr -2 & 0 \cr}\right| = -4, \quad -\left|\matrix{1 & -2 \cr 3 & 0 \cr}\right| = -6, \quad +\left|\matrix{1 & -2 \cr 3 & -2 \cr}\right| = 4.$$

The adjugate is the transpose of the matrix of cofactors:

$$\adj A = \left[\matrix{ -2 & 0 & -4 \cr -3 & 3 & -6 \cr 5 & -3 & 4 \cr}\right].$$

I'll let you show that $\det A = -6$ . So I have

$$A^{-1} = -\dfrac{1}{6} \left[\matrix{ -2 & 0 & -4 \cr -3 & 3 & -6 \cr 5 & -3 & 4 \cr}\right].\quad\halmos$$
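As a quick numerical check --- my own addition, not part of the notes --- NumPy confirms that this matrix really is $A^{-1}$:

```python
import numpy as np

A = np.array([[1., -2., -2.],
              [3., -2., 0.],
              [1., 1., 1.]])
adj_A = np.array([[-2., 0., -4.],
                  [-3., 3., -6.],
                  [5., -3., 4.]])

A_inv = adj_A / np.linalg.det(A)             # det A = -6, so this is -(1/6) * adj A
print(np.allclose(A @ A_inv, np.eye(3)))     # True
print(np.allclose(A_inv, np.linalg.inv(A)))  # True
```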

Another consequence of the formula $|A| \cdot I = A \cdot \adj
   A$ is Cramer's rule, which gives a formula for the solution of a system of linear equations.

Corollary. ( Cramer's rule) If A is an invertible $n \times n$ matrix, the unique solution to $A x = y$ is given by

$$x_i = \dfrac{|B_i|}{|A|},$$

where $B_i$ is the matrix obtained from A by replacing its i-th column with y.

Proof.

$$\eqalign{ A x & = y \cr (\adj A) A x & = (\adj A) y \cr}$$

Now $(\adj A) A = |A| \cdot I$: since A is invertible, $\adj A = |A| \cdot A^{-1}$, so $(\adj A) A = |A| \cdot A^{-1} A = |A| \cdot I$. Hence, comparing the i-th entries of the two sides,

$$|A| x_i = \sum_j (\adj A)_{i j} y_j = \sum_j (-1)^{i + j} y_j |A(j \mid i)|.$$

But the last sum is a cofactor expansion of A along column i, where instead of the elements of A's column i I'm using the components of y. This is exactly $|B_i|$. Thus, $|A| x_i = |B_i|$, and dividing both sides by $|A|$ gives the formula.

Example. Use Cramer's Rule to solve the following system over $\real$ :

$$\matrix{ 2 x & + & y & + & z & = & 1 \cr x & + & y & - & z & = & 5 \cr 3 x & - & y & + & 2 z & = & -2 \cr}$$

In matrix form, this is

$$\left[\matrix{ 2 & 1 & 1 \cr 1 & 1 & -1 \cr 3 & -1 & 2 \cr}\right] \left[\matrix{x \cr y \cr z \cr}\right] = \left[\matrix{1 \cr 5 \cr -2 \cr}\right].$$

I replace the successive columns of the coefficient matrix with $(1, 5, -2)$ , in each case computing the determinant of the resulting matrix and dividing by the determinant of the coefficient matrix:

$$x = \dfrac{\left|\matrix{ {\bf 1} & 1 & 1 \cr {\bf 5} & 1 & -1 \cr {\bf -2} & -1 & 2 \cr}\right|} {\left|\matrix{ 2 & 1 & 1 \cr 1 & 1 & -1 \cr 3 & -1 & 2 \cr}\right|} = \dfrac{-10}{-7} = \dfrac{10}{7}, \quad y = \dfrac{\left|\matrix{ 2 & {\bf 1} & 1 \cr 1 & {\bf 5} & -1 \cr 3 & {\bf -2} & 2 \cr}\right|} {\left|\matrix{ 2 & 1 & 1 \cr 1 & 1 & -1 \cr 3 & -1 & 2 \cr}\right|} = \dfrac{-6}{-7} = \dfrac{6}{7}, \quad z = \dfrac{\left|\matrix{ 2 & 1 & {\bf 1} \cr 1 & 1 & {\bf 5} \cr 3 & -1 & {\bf -2} \cr}\right|} {\left|\matrix{ 2 & 1 & 1 \cr 1 & 1 & -1 \cr 3 & -1 & 2 \cr}\right|} = \dfrac{19}{-7} = -\dfrac{19}{7}.$$

This looks pretty simple, doesn't it? But notice that you need to compute four $3 \times 3$ determinants to do this (and I didn't write out the work for those computations!). It becomes more expensive to solve systems this way as the matrices get larger.

As with the adjugate formula for the inverse of a matrix, Cramer's rule is not computationally efficient: It's better to use row reduction to solve large systems. Cramer's rule is not too bad for solving systems of two linear equations in two variables; for anything larger, you're probably better off using row reduction.
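For completeness, here's a short NumPy sketch --- my own addition, not from the notes --- that applies Cramer's rule to the system above and compares the answer with a library solver.

```python
import numpy as np

A = np.array([[2., 1., 1.],
              [1., 1., -1.],
              [3., -1., 2.]])
y = np.array([1., 5., -2.])

det_A = np.linalg.det(A)                       # -7
x = np.empty(3)
for i in range(3):
    B_i = A.copy()
    B_i[:, i] = y                              # replace column i with y
    x[i] = np.linalg.det(B_i) / det_A

print(x)                                       # approximately [ 1.4286  0.8571 -2.7143], i.e. 10/7, 6/7, -19/7
print(np.allclose(x, np.linalg.solve(A, y)))   # True
```

Note that np.linalg.solve works by factoring the coefficient matrix (essentially row reduction), which is why it scales better than computing $n + 1$ determinants.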

