Most of linear algebra involves mathematical objects called matrices. A matrix is a finite rectangular array of numbers, such as
$$\begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix}.$$
In this case, the numbers are elements of $\mathbb{R}$ (or $\mathbb{C}$). In general, the entries will be elements of some commutative ring or field.
I'll explain operations with matrices in the following examples. I'll discuss and prove some of the properties of these operations later on.
Example. (Dimensions of matrices) An $m \times n$ matrix is a matrix with m rows and n columns. Sometimes this is expressed by saying that the dimensions of the matrix are $m \times n$.
A $1 \times n$ matrix is called an n-dimensional row vector. For example, here's a 3-dimensional row vector:
$$\begin{bmatrix} 2 & -1 & 5 \end{bmatrix}.$$
Likewise, an $n \times 1$ matrix is called an n-dimensional column vector. Here's a 3-dimensional column vector:
$$\begin{bmatrix} 2 \\ -1 \\ 5 \end{bmatrix}.$$
Two matrices are equal if they have the same dimensions and the corresponding entries are equal. For example, if
$$\begin{bmatrix} x & y & z \end{bmatrix} = \begin{bmatrix} 1 & 2 & 3 \end{bmatrix},$$
then $x = 1$, $y = 2$, and $z = 3$.
Definition. If R is a commutative ring, then $M(n, R)$ is the set of $n \times n$ matrices with entries in R.
For example, $M(2, \mathbb{R})$ is the set of $2 \times 2$ matrices with real entries. $M(3, \mathbb{Z}_5)$ is the set of $3 \times 3$ matrices with entries in $\mathbb{Z}_5$.
Example. (Addition and subtraction) In this example, I'll assume that the matrices have entries in $\mathbb{R}$.
You can add (or subtract) matrices by adding (or subtracting) corresponding entries. For example,
$$\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} + \begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix} = \begin{bmatrix} 6 & 8 \\ 10 & 12 \end{bmatrix}.$$
If you are adding several matrices, you can group them any way you wish:
$$(A + B) + C = A + (B + C).$$
Note that matrix addition is commutative. Symbolically, if A and B are matrices with the same dimensions, then
$$A + B = B + A.$$
Of course, matrix subtraction is not commutative: in general, $A - B \ne B - A$.
You can only add or subtract matrices with the same dimensions.
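The entrywise rule above is easy to express in code. Here is a short Python sketch (the helper name `mat_add` is my own, for illustration); it also checks dimensions, since only matrices of the same dimensions can be added:

```python
def mat_add(A, B):
    """Entrywise sum of two matrices stored as lists of rows."""
    if len(A) != len(B) or len(A[0]) != len(B[0]):
        raise ValueError("matrices must have the same dimensions")
    return [[A[i][j] + B[i][j] for j in range(len(A[0]))]
            for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(mat_add(A, B))                    # [[6, 8], [10, 12]]
print(mat_add(A, B) == mat_add(B, A))   # True: addition is commutative
```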
Example. (Adding or subtracting matrices over $\mathbb{Z}_n$) You add or subtract matrices over $\mathbb{Z}_n$ by adding or subtracting corresponding entries, but all the arithmetic is done in $\mathbb{Z}_n$.
For example, here are some matrix computations over $\mathbb{Z}_3$:
$$\begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix} + \begin{bmatrix} 2 & 2 \\ 1 & 2 \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix},$$
$$\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} - \begin{bmatrix} 2 & 2 \\ 2 & 2 \end{bmatrix} = \begin{bmatrix} -1 & -2 \\ -2 & -1 \end{bmatrix} = \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}.$$
Note that in the second example, there were some negative numbers in the middle of the computation, but the final answer was expressed entirely in terms of elements of $\mathbb{Z}_3$.
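In code, "doing all the arithmetic in $\mathbb{Z}_n$" just means reducing each entry mod n. Here is a Python sketch (the helper names are my own); note that Python's `%` always returns a representative in the range $0, \ldots, n-1$, even for negative numbers:

```python
def mat_add_mod(A, B, n):
    """Entrywise sum with all arithmetic done in Z_n."""
    return [[(a + b) % n for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_sub_mod(A, B, n):
    """Entrywise difference in Z_n; % turns negatives into 0..n-1."""
    return [[(a - b) % n for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

print(mat_add_mod([[1, 2], [2, 1]], [[2, 2], [1, 2]], 3))  # [[0, 1], [0, 0]]
print(mat_sub_mod([[1, 0], [0, 1]], [[2, 2], [2, 2]], 3))  # [[2, 1], [1, 2]]
```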
Example. A zero matrix is a matrix all of whose entries are 0. If you add the zero matrix to another matrix A, you get A:
$$\begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix} + \begin{bmatrix} 1 & -2 \\ 3 & 5 \end{bmatrix} = \begin{bmatrix} 1 & -2 \\ 3 & 5 \end{bmatrix}.$$
In symbols, if $0$ is a zero matrix and A is a matrix of the same size, then
$$0 + A = A = A + 0.$$
A zero matrix is said to be an identity element for matrix addition.
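The identity-element property is easy to verify in code. A quick Python sketch (helper names are my own):

```python
def zero_matrix(m, n):
    """m x n matrix of all zeros."""
    return [[0] * n for _ in range(m)]

def mat_add(A, B):
    """Entrywise sum of two same-size matrices."""
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

A = [[1, -2], [3, 5]]
Z = zero_matrix(2, 2)
print(mat_add(Z, A) == A and mat_add(A, Z) == A)  # True
```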
Example. You can multiply a matrix by a number by multiplying each entry by the number. Here is an example with real numbers:
$$2 \begin{bmatrix} 1 & -3 \\ 4 & 0 \end{bmatrix} = \begin{bmatrix} 2 & -6 \\ 8 & 0 \end{bmatrix}.$$
Things work in the same way over $\mathbb{Z}_n$, but all the arithmetic is done in $\mathbb{Z}_n$. Here is an example over $\mathbb{Z}_5$:
$$3 \begin{bmatrix} 2 & 4 \\ 1 & 3 \end{bmatrix} = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}.$$
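Both cases fit in one Python sketch (the helper name and the optional modulus parameter are my own conventions): multiply every entry by the scalar, and reduce mod n when a modulus is supplied.

```python
def scalar_mult(k, A, n=None):
    """Multiply every entry of A by k; reduce mod n if n is given."""
    if n is None:
        return [[k * a for a in row] for row in A]
    return [[(k * a) % n for a in row] for row in A]

print(scalar_mult(2, [[1, -3], [4, 0]]))      # [[2, -6], [8, 0]]
print(scalar_mult(3, [[2, 4], [1, 3]], n=5))  # [[1, 2], [3, 4]]
```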
Example. To compute the product $AB$ of two matrices, take the dot products of the rows of A with the columns of B: the $(i, j)$ entry of $AB$ is the dot product of row i of A with column j of B. In this example, assume all the matrices have real entries. For instance,
$$\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix} = \begin{bmatrix} 1 \cdot 5 + 2 \cdot 7 & 1 \cdot 6 + 2 \cdot 8 \\ 3 \cdot 5 + 4 \cdot 7 & 3 \cdot 6 + 4 \cdot 8 \end{bmatrix} = \begin{bmatrix} 19 & 22 \\ 43 & 50 \end{bmatrix}.$$
In order for the multiplication to work, the matrices must have compatible dimensions: The number of columns in A should equal the number of rows of B. Thus, if A is an $m \times n$ matrix and B is an $n \times p$ matrix, $AB$ will be an $m \times p$ matrix.
Here are two more examples, again using matrices with real entries:
$$\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} = \begin{bmatrix} 2 & 1 \\ 4 & 3 \end{bmatrix}, \qquad \begin{bmatrix} 1 & 2 & 3 \end{bmatrix} \begin{bmatrix} 4 \\ 5 \\ 6 \end{bmatrix} = \begin{bmatrix} 32 \end{bmatrix}.$$
Here is an example with matrices in $M(2, \mathbb{Z}_3)$. Remember that all the arithmetic is done in $\mathbb{Z}_3$:
$$\begin{bmatrix} 1 & 2 \\ 2 & 0 \end{bmatrix} \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix} = \begin{bmatrix} 1 & 2 \\ 1 & 2 \end{bmatrix}.$$
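The row-times-column rule translates directly into Python (the helper name and optional modulus parameter are my own): each entry of the product is a dot product, reduced mod n when a modulus is given.

```python
def mat_mult(A, B, n=None):
    """Product AB via dot products of rows of A with columns of B."""
    if len(A[0]) != len(B):
        raise ValueError("columns of A must equal rows of B")
    C = [[sum(A[i][k] * B[k][j] for k in range(len(B)))
          for j in range(len(B[0]))] for i in range(len(A))]
    if n is not None:
        C = [[c % n for c in row] for row in C]
    return C

print(mat_mult([[1, 2], [3, 4]], [[5, 6], [7, 8]]))       # [[19, 22], [43, 50]]
print(mat_mult([[1, 2], [2, 0]], [[2, 1], [1, 2]], n=3))  # [[1, 2], [1, 2]]
```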
Example. If you multiply a matrix by a zero matrix, you get a zero matrix:
$$\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}.$$
In symbols, if $0$ is a zero matrix and A is a matrix compatible with it for multiplication, then
$$A \cdot 0 = 0 \quad\text{and}\quad 0 \cdot A = 0.$$
(The zero matrices here may have different sizes.)
Example. Write the system of linear equations
$$x + 2y = 5, \quad 3x + 4y = 6$$
as a matrix multiplication equation. The coefficients form a square matrix, the variables form a column vector, and the constants form the right side:
$$\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 5 \\ 6 \end{bmatrix}.$$
Example. Write the system of equations which corresponds to the matrix equation
$$\begin{bmatrix} 2 & -1 \\ 1 & 3 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 4 \\ 7 \end{bmatrix}.$$
Multiply out the left side:
$$\begin{bmatrix} 2x - y \\ x + 3y \end{bmatrix} = \begin{bmatrix} 4 \\ 7 \end{bmatrix}.$$
Equate corresponding entries:
$$2x - y = 4, \quad x + 3y = 7.$$
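The correspondence between a system and a matrix equation can be checked numerically. In this Python sketch the system $x + 2y = 5$, $3x + 4y = 6$ is a hypothetical example of my own; evaluating the matrix-vector product at a sample point gives the same values as the left-hand sides of the equations:

```python
# Coefficient matrix and right side of the hypothetical system
# x + 2y = 5, 3x + 4y = 6.
A = [[1, 2], [3, 4]]
b = [5, 6]

def mat_vec(A, v):
    """Matrix-vector product: entry i is the dot product of row i with v."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

# Multiplying out the left side at a sample point (x, y) reproduces
# the left-hand sides of the two equations.
x, y = 7.0, -1.0
print(mat_vec(A, [x, y]))        # [5.0, 17.0]
print([x + 2 * y, 3 * x + 4 * y])  # [5.0, 17.0] -- the same values
```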
Example. (Identity matrices) There are special matrices which serve as identities for multiplication: The $n \times n$ identity matrix is the square matrix with 1's down the main diagonal --- the diagonal running from northwest to southeast --- and 0's everywhere else. For example, the $3 \times 3$ identity matrix is
$$I = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}.$$
If I is an identity matrix and A is a matrix which is compatible with I for multiplication, then
$$IA = A \quad\text{and}\quad AI = A.$$
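Here is a quick Python sketch of the identity property (the helper names are my own):

```python
def identity(n):
    """n x n identity matrix: 1's on the main diagonal, 0's elsewhere."""
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

def mat_mult(A, B):
    """Product AB via dot products of rows of A with columns of B."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2], [3, 4]]
I = identity(2)
print(mat_mult(I, A) == A and mat_mult(A, I) == A)  # True
```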
Example. Matrix multiplication obeys some of the algebraic laws you're familiar with. For example, matrix multiplication is associative: If A, B, and C are matrices and their dimensions are compatible for multiplication, then
$$(AB)C = A(BC).$$
However, matrix multiplication is not commutative in general. That is, it is not true that $AB = BA$ for all matrices A and B.
One trivial way to get a counterexample is to let A be $2 \times 3$ and let B be $3 \times 2$. Then $AB$ is $2 \times 2$ while $BA$ is $3 \times 3$.
However, it's easy to come up with counterexamples even when $AB$ and $BA$ have the same dimensions. For example, consider the following matrices in $M(2, \mathbb{R})$:
$$A = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}, \quad B = \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix}, \quad AB = \begin{bmatrix} 2 & 1 \\ 1 & 1 \end{bmatrix}, \quad BA = \begin{bmatrix} 1 & 1 \\ 1 & 2 \end{bmatrix}.$$
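A computation confirms non-commutativity for square matrices. This Python sketch uses a pair of $2 \times 2$ matrices of my own choosing:

```python
def mat_mult(A, B):
    """Product AB via dot products of rows of A with columns of B."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 1], [0, 1]]
B = [[1, 0], [1, 1]]
print(mat_mult(A, B))   # [[2, 1], [1, 1]]
print(mat_mult(B, A))   # [[1, 1], [1, 2]]
print(mat_mult(A, B) == mat_mult(B, A))  # False: AB != BA
```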
Example. If A is a matrix, the transpose $A^T$ of A is obtained by swapping the rows and columns of A. For example,
$$\begin{bmatrix} 1 & 2 \\ 3 & 4 \\ 5 & 6 \end{bmatrix}^T = \begin{bmatrix} 1 & 3 & 5 \\ 2 & 4 & 6 \end{bmatrix}.$$
Notice that the transpose of an $m \times n$ matrix is an $n \times m$ matrix.
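In Python, transposing is one comprehension (the helper name is my own): the $(i, j)$ entry becomes the $(j, i)$ entry, so a $3 \times 2$ matrix becomes $2 \times 3$.

```python
def transpose(A):
    """Swap rows and columns: the (i, j) entry becomes the (j, i) entry."""
    return [[A[i][j] for i in range(len(A))] for j in range(len(A[0]))]

A = [[1, 2], [3, 4], [5, 6]]   # a 3 x 2 matrix
print(transpose(A))            # [[1, 3, 5], [2, 4, 6]] -- a 2 x 3 matrix
```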
Example. Consider the following matrices with real entries:
(a) Compute .
(b) Compute .
Example. The inverse of an $n \times n$ matrix A is an $n \times n$ matrix $A^{-1}$ which satisfies
$$A A^{-1} = I = A^{-1} A,$$
where I is the $n \times n$ identity matrix.
There is no such thing as matrix division in general, because some matrices do not have inverses. But if A has an inverse, you can simulate division by multiplying by $A^{-1}$. This is often useful in solving matrix equations.
For example, if
$$A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$$
is a $2 \times 2$ matrix, the inverse of A --- if there is one --- turns out to be
$$A^{-1} = \frac{1}{ad - bc} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix}.$$
To show that this is the inverse of A, I have to check that $A A^{-1} = I$ and $A^{-1} A = I$:
$$A A^{-1} = \frac{1}{ad - bc} \begin{bmatrix} a & b \\ c & d \end{bmatrix} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix} = \frac{1}{ad - bc} \begin{bmatrix} ad - bc & 0 \\ 0 & ad - bc \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}.$$
The check that $A^{-1} A = I$ is similar. This proves that the formula gives the inverse of a $2 \times 2$ matrix.
Here's an example for a $2 \times 2$ matrix with real entries:
$$A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}, \quad ad - bc = 1 \cdot 4 - 2 \cdot 3 = -2, \quad A^{-1} = -\frac{1}{2} \begin{bmatrix} 4 & -2 \\ -3 & 1 \end{bmatrix} = \begin{bmatrix} -2 & 1 \\ \frac{3}{2} & -\frac{1}{2} \end{bmatrix}.$$
If $ad - bc = 0$, you can't use this formula; in fact, a $2 \times 2$ matrix which satisfies this condition does not have an inverse. A matrix which does not have an inverse is called singular.
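The $2 \times 2$ inverse formula, including the singular case, fits in a few lines of Python (the helper name is my own):

```python
def inverse_2x2(A):
    """Inverse of a 2x2 matrix [[a, b], [c, d]] via the ad - bc formula."""
    (a, b), (c, d) = A
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is singular: ad - bc = 0")
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[1, 2], [3, 4]]           # ad - bc = -2
print(inverse_2x2(A))          # [[-2.0, 1.0], [1.5, -0.5]]
```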
Example. For what values of x is the following real matrix singular?
$$\begin{bmatrix} x & 2 \\ 8 & x \end{bmatrix}$$
The matrix is singular --- not invertible --- if
$$x \cdot x - 2 \cdot 8 = 0, \quad\text{that is, if}\quad x^2 - 16 = 0.$$
Solve for x:
$$x^2 = 16, \quad x = \pm 4.$$
The matrix is singular for $x = 4$ and for $x = -4$.
Example. (Solving a system of equations) Here's how to use inverses and matrix multiplication to solve a system of equations. Suppose I want to solve the following system over $\mathbb{R}$ for x and y:
$$2x + 3y = 7, \quad x + 2y = 4.$$
Write the equation in matrix form:
$$\begin{bmatrix} 2 & 3 \\ 1 & 2 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 7 \\ 4 \end{bmatrix}.$$
Multiply both sides by the inverse of the square matrix:
$$\begin{bmatrix} 2 & 3 \\ 1 & 2 \end{bmatrix}^{-1} = \frac{1}{2 \cdot 2 - 3 \cdot 1} \begin{bmatrix} 2 & -3 \\ -1 & 2 \end{bmatrix} = \begin{bmatrix} 2 & -3 \\ -1 & 2 \end{bmatrix}.$$
On the left, the square matrix and its inverse cancel, since they multiply to I. On the right,
$$\begin{bmatrix} 2 & -3 \\ -1 & 2 \end{bmatrix} \begin{bmatrix} 7 \\ 4 \end{bmatrix} = \begin{bmatrix} 2 \\ 1 \end{bmatrix}.$$
The solution is $x = 2$, $y = 1$.
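The whole procedure --- invert the coefficient matrix, then multiply it against the right side --- can be sketched in Python. The hypothetical system $2x + 3y = 7$, $x + 2y = 4$ and the helper name below are my own choices for illustration:

```python
def solve_2x2(A, b):
    """Solve A v = b for an invertible 2x2 A by multiplying by A^{-1}."""
    (p, q), (r, s) = A
    det = p * s - q * r
    if det == 0:
        raise ValueError("coefficient matrix is singular")
    inv = [[s / det, -q / det], [-r / det, p / det]]
    # Apply the inverse to the right-hand side.
    return [inv[0][0] * b[0] + inv[0][1] * b[1],
            inv[1][0] * b[0] + inv[1][1] * b[1]]

# Hypothetical system: 2x + 3y = 7, x + 2y = 4.
print(solve_2x2([[2, 3], [1, 2]], [7, 4]))  # [2.0, 1.0]
```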
Copyright 2016 by Bruce Ikenaga