Let $f: V \to W$ be a linear transformation of finite dimensional vector spaces. Choose ordered bases $\mathcal{B} = \{v_1, v_2, \ldots, v_n\}$ for $V$ and $\mathcal{C} = \{w_1, w_2, \ldots, w_m\}$ for $W$.

For each $j$, $f(v_j) \in W$. Therefore, $f(v_j)$ may be written uniquely as a linear combination of elements of $\mathcal{C}$:

$$f(v_j) = a_{1j} w_1 + a_{2j} w_2 + \cdots + a_{mj} w_m.$$

The numbers $a_{ij}$ are uniquely determined by $f$. The $m \times n$ matrix $A = (a_{ij})$ is the matrix of $f$ relative to the ordered bases $\mathcal{B}$ and $\mathcal{C}$. I'll use $[f]_{\mathcal{C},\mathcal{B}}$ to denote this matrix, where the first subscript is the basis used for outputs and the second is the basis used for inputs. Here's how to find it: apply $f$ to each element of $\mathcal{B}$, write the result in terms of $\mathcal{C}$, and use the coefficients in these linear combinations as the columns of the matrix.
I'll use std to denote the standard basis for $\mathbf{R}^n$.
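Since the recipe above is completely mechanical, here is a minimal numpy sketch of it. The function name `matrix_of`, the sample map `f`, and the sample bases `B` and `C` below are my own hypothetical illustrations, not data from these notes; the only assumption is that the codomain basis really is a basis, so the coordinate equations can be solved.

```python
import numpy as np

def matrix_of(f, domain_basis, codomain_basis):
    """Matrix of the linear map f relative to the given ordered bases.

    Column j holds the coordinates of f(b_j) relative to the codomain
    basis, exactly as in the definition above.
    """
    Q = np.column_stack(codomain_basis)                 # codomain basis vectors as columns
    cols = [np.linalg.solve(Q, f(b)) for b in domain_basis]
    return np.column_stack(cols)

# Hypothetical example: f(x, y) = (x + y, 2x - y), with made-up bases of R^2.
f = lambda v: np.array([v[0] + v[1], 2 * v[0] - v[1]])
B = [np.array([1.0, 1.0]), np.array([1.0, -1.0])]      # domain basis
C = [np.array([2.0, 0.0]), np.array([1.0, 1.0])]       # codomain basis
print(matrix_of(f, B, C))
```

The call to `np.linalg.solve` is doing the "write the result in terms of $\mathcal{C}$" step: it finds the unique coefficients expressing $f(b_j)$ as a combination of the codomain basis vectors.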
Example. Here are two bases for :
Suppose $f$ is a linear transformation such that
Then
Read the description of $[f]_{\mathcal{C},\mathcal{B}}$ preceding this example and verify that the matrix above was constructed by following the steps in the description.
Example. (a) Define $f$ by

Find $[f]_{\text{std},\text{std}}$.

Apply $f$ to the elements of the standard basis for the domain, and write the results in terms of the standard basis for the codomain:
Take the coefficients in the linear combinations and use them to make the columns of the matrix:
Note that in matrix form,
In other words, $[f]_{\text{std},\text{std}}$ is the same matrix as the one you'd usually use to represent $f$ by matrix multiplication.
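To see that claim numerically, here is a small check with a made-up map (the formula for `f` below is my own placeholder, not the map in this problem): stacking $f(e_1)$ and $f(e_2)$ as columns gives exactly the matrix that computes $f$ by multiplication.

```python
import numpy as np

# Hypothetical map f(x, y) = (x + 2y, 3x - y, y), written out as a formula.
f = lambda v: np.array([v[0] + 2 * v[1], 3 * v[0] - v[1], v[1]])

e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
A = np.column_stack([f(e1), f(e2)])   # columns are f applied to the standard basis
print(A)
# [[ 1.  2.]
#  [ 3. -1.]
#  [ 0.  1.]]

x = np.array([5.0, -2.0])
print(np.allclose(A @ x, f(x)))       # True: A is the usual multiplication matrix for f
```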
(b) Let $\mathcal{B}$ be the following basis. Define $f$ by

Find $[f]_{\mathcal{B},\text{std}}$.

Apply $f$ to the elements of the standard basis for the domain, and write the results in terms of $\mathcal{B}$:
Take the coefficients in the linear combinations and use them to make the columns of the matrix:
Here is a description of $[f]_{\mathcal{B},\text{std}}$ in words:

$[f]_{\mathcal{B},\text{std}}$ takes a vector written in terms of the standard basis, applies $f$, and writes the output in terms of $\mathcal{B}$.
If you keep this in mind, change of coordinates will make much more sense.
I'll verify the claim above for one of the basis elements . In terms of ,
Then
This is correct, since , and the representation of in terms of the basis is
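The same kind of verification can be done numerically. In this sketch the map `f` and the codomain basis `B` are hypothetical stand-ins for the ones above: column $j$ of the matrix holds the $\mathcal{B}$ coordinates of $f(e_j)$, so recombining the basis vectors with those coefficients must give back $f(e_j)$.

```python
import numpy as np

# Hypothetical data: a map f and a basis B for the codomain.
f = lambda v: np.array([3 * v[0] + v[1], v[0] - v[1]])
B = [np.array([1.0, 1.0]), np.array([1.0, 2.0])]
P = np.column_stack(B)                      # B-vectors as columns

e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
M = np.column_stack([np.linalg.solve(P, f(e)) for e in (e1, e2)])

# Check the claim for e1: the first column gives the B-coordinates of f(e1),
# so recombining the basis vectors with those coefficients reproduces f(e1).
print(np.allclose(P @ M[:, 0], f(e1)))     # True
```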
The matrix of a linear transformation is like a snapshot of a person --- there are many pictures of a person, but only one person. Likewise, a given linear transformation can be represented by matrices with respect to many choices of bases for the domain and range.
In the last example, finding the matrix turned out to be easy, whereas finding the matrix of $f$ relative to other bases is more difficult. Here's how to use change of basis matrices to make things simpler.

Suppose you have bases $\mathcal{B}$ for the domain and $\mathcal{C}$ for the codomain, and you want $[f]_{\mathcal{C},\mathcal{B}}$.

1. Find $[f]_{\text{std},\text{std}}$. Usually, you can find this from the definition.

2. Find the change of basis matrices $P$ (for $\mathcal{B}$) and $Q$ (for $\mathcal{C}$). (Take the basis elements written in terms of the standard bases and use them as the columns of the matrices.)

3. Find $Q^{-1}$.

4. Then $[f]_{\mathcal{C},\mathcal{B}} = Q^{-1}\,[f]_{\text{std},\text{std}}\,P$.

Do you see why this works? Reading from right to left, an input vector written in terms of $\mathcal{B}$ is translated to the standard basis by $P$. Next, $[f]_{\text{std},\text{std}}$ takes the standard vector, applies $f$, and writes the output as a standard vector. Finally, $Q^{-1}$ takes the standard vector output and translates it to a $\mathcal{C}$ vector.
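Here is a minimal numpy sketch of steps 1 through 4. Everything concrete in it (the map behind `A_std` and the bases behind `P` and `Q`) is a hypothetical example of mine; the point is only the formula $[f]_{\mathcal{C},\mathcal{B}} = Q^{-1}\,[f]_{\text{std},\text{std}}\,P$, which the last line checks by pushing a $\mathcal{B}$ coordinate vector through both routes.

```python
import numpy as np

# Step 1: the standard-basis matrix of a hypothetical map f(x, y) = (x + y, x - y).
A_std = np.array([[1.0, 1.0],
                  [1.0, -1.0]])

# Step 2: change of basis matrices (hypothetical bases B and C, written as columns).
P = np.column_stack([[1.0, 2.0], [0.0, 1.0]])   # translates B -> std
Q = np.column_stack([[1.0, 1.0], [1.0, -1.0]])  # translates C -> std

# Steps 3 and 4: invert Q and conjugate.
M = np.linalg.inv(Q) @ A_std @ P                # matrix of f relative to B and C

# Check: push a B-coordinate vector through both routes.
c = np.array([2.0, -3.0])                       # coordinates relative to B
direct = np.linalg.solve(Q, A_std @ (P @ c))    # translate, apply f, re-translate
print(np.allclose(M @ c, direct))               # True
```

Using `np.linalg.solve(Q, ...)` instead of forming `np.linalg.inv(Q)` would be the more numerically careful choice; the explicit inverse is kept here only to mirror step 3.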
I'll illustrate this in the next example.
Example. Define $f$ by

The matrix above is the matrix of $f$ relative to the standard bases of the domain and the codomain.

Next, consider the following bases for the domain and the codomain, respectively:

I'll find the matrix of $f$ relative to these two bases. Here's how:

This matrix translates vectors in the domain from the first basis to the standard basis:

This matrix translates vectors in the codomain from the second basis to the standard basis:

Hence, the inverse matrix translates vectors from the standard basis to the second basis:
Therefore,
Example. (a) Suppose $T$ satisfies
What is ?
Write the vector in question as a linear combination of the two vectors whose images under $T$ are given:
The numbers are simple enough that I could figure out the linear combination by inspection.
Apply T:
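Here is the same idea in numpy with made-up data (the two input vectors, their images, and the target vector are my own placeholders): solve for the coefficients of the linear combination, then combine the known outputs using linearity.

```python
import numpy as np

# Hypothetical data: T(v1) = w1 and T(v2) = w2 are known, and we want T(target).
v1, v2 = np.array([1.0, 1.0]), np.array([1.0, -1.0])
w1, w2 = np.array([2.0, 0.0, 1.0]), np.array([0.0, 3.0, 1.0])
target = np.array([3.0, 1.0])

# Write target = c1*v1 + c2*v2 (np.linalg.solve replaces doing it by inspection).
c = np.linalg.solve(np.column_stack([v1, v2]), target)

# Linearity: T(target) = c1*T(v1) + c2*T(v2).
print(c[0] * w1 + c[1] * w2)
```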
(b) $\mathcal{B}$ is a basis. Suppose $T$ satisfies
What is ?
Consider the equations for $T$ above. $T$ is applied to the elements of $\mathcal{B}$, and the results are written in terms of the standard basis. Thus,

Since I'm applying $T$ to a vector written in standard coordinates, I have to translate it to a $\mathcal{B}$ coordinate vector to use the $T$-matrix I found.
Therefore,
Hence,
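Here is that bookkeeping in numpy, with a hypothetical basis, matrix, and input of my own choosing. The matrix `M` below plays the role of the $T$-matrix found above: its columns are $T$ of the basis vectors written in standard coordinates, so it expects a $\mathcal{B}$ coordinate vector as input, and the standard-coordinate input has to be translated first.

```python
import numpy as np

# Hypothetical basis B (columns of P) and matrix M whose column j holds
# T(b_j) written in standard coordinates, i.e. M takes B-coordinates to std.
P = np.column_stack([[1.0, 1.0], [1.0, 2.0]])
M = np.array([[2.0, 0.0],
              [1.0, 3.0]])

x = np.array([4.0, 1.0])            # the input, written in standard coordinates
x_B = np.linalg.solve(P, x)         # translate the input to B-coordinates first
print(M @ x_B)                      # T(x), in standard coordinates
```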
Example. Suppose $T$ is given by
Let
(a) Find .
(b) Find .
First,
Hence,
Then
(c) Compute .
This means: apply $T$ to the vector and write the result in terms of the basis defined above.
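A quick numpy sanity check of this recipe, using a hypothetical map, basis, and vector of my own: writing $T(v)$ in terms of the basis directly agrees with multiplying the basis-coordinate vector of $v$ by the matrix of $T$ relative to that basis in both slots.

```python
import numpy as np

# Hypothetical map T(x, y) = (2x + y, x) and a hypothetical basis B.
A_std = np.array([[2.0, 1.0],
                  [1.0, 0.0]])
P = np.column_stack([[1.0, 1.0], [2.0, 3.0]])   # translates B -> std
M = np.linalg.inv(P) @ A_std @ P                # matrix of T relative to B and B

v = np.array([1.0, 4.0])                        # v in standard coordinates
route_1 = np.linalg.solve(P, A_std @ v)         # apply T, then write T(v) in terms of B
route_2 = M @ np.linalg.solve(P, v)             # the B-matrix times the B-coordinates of v
print(np.allclose(route_1, route_2))            # True
```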
Example. Here are two bases for :
Suppose a linear transformation is defined by
(a) Find .
Make a matrix using these coordinate vectors as the columns:
(b) Find .
Find the translation matrices:
Therefore,
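As a final sketch (the map `T` and both bases below are made-up stand-ins, not the ones in this example), here is a numpy comparison of the two natural routes: building the matrix directly from the definition, column by column, and building it from the translation matrices. They agree.

```python
import numpy as np

# Hypothetical map T(x, y) = (y, x + y) and two hypothetical bases of R^2.
T = lambda v: np.array([v[1], v[0] + v[1]])
B = [np.array([1.0, 0.0]), np.array([1.0, 1.0])]   # domain basis
C = [np.array([2.0, 1.0]), np.array([1.0, 1.0])]   # codomain basis

P = np.column_stack(B)                              # translates B -> std
Q = np.column_stack(C)                              # translates C -> std

# Route 1: straight from the definition -- column j is T(b_j) in C-coordinates.
direct = np.column_stack([np.linalg.solve(Q, T(b)) for b in B])

# Route 2: translation matrices and the standard-basis matrix of T.
A_std = np.column_stack([T(np.array([1.0, 0.0])), T(np.array([0.0, 1.0]))])
via_translation = np.linalg.inv(Q) @ A_std @ P

print(np.allclose(direct, via_translation))        # True
```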
Send comments about this page to: Bruce.Ikenaga@millersville.edu.
Copyright 2012 by Bruce Ikenaga