Math 322

These problems are intended to help you study. The presence of a problem on this sheet does not mean that a similar problem will occur on the test, and the absence of a problem from this sheet does not mean that a similar problem will not occur on the test.

1. Determine whether the set is a subspace. Check each axiom for a subspace. If the axiom holds, prove it. If the axiom doesn't hold, give a specific counterexample.

(a) The subset of the real vector space given by

(b) The subset of the real vector space given by

(c) The subset of the real vector space given by

(d) For a fixed real matrix A, the subset of the real vector space given by

2. Let denote the vector space of polynomials with real coefficients, regarded as a vector space over .

(a) Prove or disprove:

(b) Prove or disprove:

3. Let U and V be subspaces of a vector space W. Show that the intersection is a subspace of W.

4. (a) Complete the definition of *independent set*: A set S of vectors in a vector space V over a field F is *independent* if ....

(b) Complete the definition of *spanning set*: A set S of vectors in a vector space V over a field F *spans* V if ....

5. Bonzo McTavish says: "A set of vectors in is independent if when you make a matrix with the vectors, the determinant of the matrix is nonzero." What is wrong with this?

6. Silas Hogwinder says: "A set of vectors in is independent if when you make a matrix with the vectors, it row reduces to the identity." What is wrong with this?

7. (a) Explain why the following set of vectors in is not independent.

(b) Explain why the following set of vectors in is not independent.

(c) Explain why the following set of vectors in does not span .

8. Determine whether the following vectors are independent in . If they are dependent, find a nontrivial linear combination of the vectors which equals .

9. Let V be a vector space, and suppose is a dependent set of vectors in V. Let

(The a's, b's, and c's are scalars.) Prove that is dependent.

10. Write the vector in as a linear combination of the vectors

11. Determine whether the following vectors span . If they don't, find the dimension of the subspace they span.

12. Consider the following subspace of the real vector space :

Find a basis for W. Prove that your set is a basis by showing that it spans W and is independent.

13. Let

(a) Show that V is a subspace of .

(b) If the following set of matrices in V spans V, prove it. If it doesn't span V, find a specific element of V which is not in the span of the set

14. Find bases for the row space, the column space, and the null space for the following real matrix:

15. Find bases for the row space, the column space, and the null space for the following matrix over :

16. Find bases for the row space, the column space, and the null space for the following real matrix:

17. Find bases for the row space, the column space, and the null space for the following real matrix:

18. Show that the following set of matrices is an independent subset of .

19. Show that the set is an independent set in the real vector space of differentiable functions on .

20. Find a standard basis vector such that the following set forms a basis for :

21. Find standard basis vectors such that the following set forms a basis for .

22. (a) Find a basis for the subspace of spanned by the set

(b) Find a subset of the following set S which forms a basis for the subspace of spanned by S.

23. Consider the following real matrices:

Determine whether the row space of A is contained in the row space of B. If it is, prove it. If it isn't, find a vector in the row space of A which is not in the row space of B.

24. Consider the following real matrix, where :

(a) Find a value of x for which .

(b) Find a value of x for which .

25. In a real matrix A, the 5 rows are independent. What is ?

26. (a) Let , where F is a field. Prove that

(b) Let , where F is a field. Suppose that A is invertible. Prove that

(In (a) and (b), begin by writing down what it means for a vector x to be in the null space of B or the null space of .)

(c) Find a matrix such that

(In words, you want to find a matrix A so that A² multiplies more vectors to the zero vector than A does.)

(d) Find a matrix such that

(How can you choose A so that you can "build less stuff" with the rows of A² than with the rows of A?)

27. For the given function, check each axiom for a linear transformation. If the axiom holds, prove it. If the axiom does not hold, give a specific counterexample.

(a) The function given by

(b) The function given by

(c) The function given by

(d) The function given by

28. Let A be a *fixed* matrix in . Define by

Prove that T is a linear transformation.

29. Define by

is called the *trace function*.

(a) Show that is a linear transformation.

(b) Find a basis for the null space of .

30. Define by

(a) Compute the derivative .

(b) Show that there are no values for which is the zero matrix.

31. Find a linear transformation that takes the parallelogram determined by the vectors and to the parallelogram determined by the vectors and .

32. Construct a linear transformation which reflects points in across the line .

33. Find an affine transformation which takes the unit square to the parallelogram with vertices , , , in such a way that the origin goes to A, the vector goes to , and the vector goes to .

34. Consider the following bases for the real vector space :

(a) Construct the translation matrices

(b) Write the vector in terms of .

(c) Write the vector in terms of the standard basis.

(d) Write the vector in terms of .

(e) Define by

Find .

35. Let be given by

Consider the following bases for and .

(a) Find , , and .

(b) Express relative to the basis .

(c) Express relative to the standard basis.

(d) Find .

36. What is ?

37. Simplify .

38. Compute .

- Hint: Express as for a certain value of , then apply DeMoivre's formula.

39. Find the inverse of the following matrix. (Multiply any constants outside the matrix into the matrix and simplify.)

1. Determine whether the set is a subspace. Check each axiom for a subspace. If the axiom holds, prove it. If the axiom doesn't hold, give a specific counterexample.

(a) The subset of the real vector space given by

(b) The subset of the real vector space given by

(c) The subset of the real vector space given by

(d) For a fixed real matrix A, the subset of the real vector space given by

(a) The set is not a subspace, since the zero vector is not contained in the set. I'll check the axioms anyway.

since . since . But

since . But

(b) is contained in B, since , and is contained in B, since . However,

is contained in B, since . But

(c) Let . By definition, this means that

Consider the sum of the two vectors:

Then

The sum vector satisfies the defining condition, so the sum is contained in C.

Let and let k be a number. By definition, .

Consider the product

Then

The product vector satisfies the defining condition, so the product is contained in C.

Therefore, C is a subspace.

(d) Let . Then

Therefore, .

(Warning: Don't write things like " ". Do you understand why this is wrong?)

Let . Then

Therefore, .

Hence, D is a subspace.

2. Let denote the vector space of polynomials with real coefficients, regarded as a vector space over .

(a) Prove or disprove:

(b) Prove or disprove:

(a) By the Root Theorem (from precalculus), to say that is the same as saying that is divisible by ; that is, , for some .

Let . Then and for some . Hence,

Since the last expression is divisible by , it follows that .

Let , so for some . Let . Then

The last expression is divisible by , so .

Since V is closed under addition and scalar multiplication, V is a subspace.

(b) The zero polynomial 0 is not in W, since the zero polynomial does not map 2 to 3. But every subspace contains the zero vector, so W is not a subspace.

3. Let U and V be subspaces of a vector space W. Show that the intersection is a subspace of W.

Let x, y ∈ U ∩ V. I must show that x + y ∈ U ∩ V.

Since x, y ∈ U ∩ V, I have x, y ∈ U. Since U is a subspace, x + y ∈ U.

Since x, y ∈ U ∩ V, I have x, y ∈ V. Since V is a subspace, x + y ∈ V.

Since x + y ∈ U and x + y ∈ V, it follows that x + y ∈ U ∩ V.

Next, let x ∈ U ∩ V and let k be a scalar. I must show that kx ∈ U ∩ V.

Since x ∈ U ∩ V, I have x ∈ U. Since U is a subspace, kx ∈ U.

Since x ∈ U ∩ V, I have x ∈ V. Since V is a subspace, kx ∈ V.

Since kx ∈ U and kx ∈ V, it follows that kx ∈ U ∩ V.

Since U ∩ V is closed under addition and scalar multiplication, U ∩ V is a subspace.

4. (a) Complete the definition of *independent set*: A set S of vectors in a vector space V over a field F is *independent* if ....

(b) Complete the definition of *spanning set*: A set S of vectors in a vector space V over a field F *spans* V if ....

(a) A set S of vectors in a vector space V over a field F is *independent* if whenever a₁, ..., aₙ ∈ F and v₁, ..., vₙ ∈ S,

a₁v₁ + a₂v₂ + ··· + aₙvₙ = 0 implies a₁ = a₂ = ··· = aₙ = 0.

If you said something about making a matrix with the vectors, you
didn't get the definition right. How you *check* whether
*certain kinds of vectors* are independent is *not* the
*definition* of independence.

(b) A set S of vectors in a vector space V over a field F *spans* V if for every v ∈ V, there are numbers a₁, ..., aₙ ∈ F and vectors v₁, ..., vₙ ∈ S such that

v = a₁v₁ + a₂v₂ + ··· + aₙvₙ.

If you said something about making a matrix with the vectors, you
didn't get the definition right. How you *check* whether
*certain kinds of vectors* form a spanning set is *not*
the *definition* of spanning.

5. Bonzo McTavish says: "A set of vectors in is independent if when you make a matrix with the vectors, the determinant of the matrix is nonzero." What is wrong with this?

Consider the set

The set is independent, but the matrix formed by the vectors (either as the rows or the columns) isn't square, so you can't take the determinant.

Don't confuse the *procedure* you use for checking
independence in a special case (e.g. when you have n vectors in )
with what the concept of *independence* means.

6. Silas Hogwinder says: "A set of vectors in is independent if when you make a matrix with the vectors, it row reduces to the identity." What is wrong with this?

Consider the set

The set is independent, but the matrix formed by the vectors (either as the rows or the columns) isn't square, so it doesn't row reduce to the identity.

Don't confuse the *procedure* you use for checking
independence in a special case (e.g. when you have n vectors in )
with what the concept of *independence* means.
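As a sketch of the test that does work in general, here is a check in Python with two hypothetical vectors (the sets in Problems 5 and 6 were lost from this copy). The matrix formed from the vectors is not square, so neither the determinant test nor "row reduces to the identity" even makes sense, but the rank test still applies:

```python
import numpy as np

# Two hypothetical independent vectors in R^3. A matrix with them as
# columns is 3x2 -- not square.
v1 = np.array([1.0, 0.0, 2.0])
v2 = np.array([0.0, 1.0, 1.0])
M = np.column_stack([v1, v2])

# The general test: the set is independent iff the rank of M equals
# the number of vectors.
print(M.shape)                    # (3, 2) -- not square
print(np.linalg.matrix_rank(M))   # 2 -> independent
```

This is a check you can *compute*, but it is still not the *definition* of independence.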

7. (a) Explain why the following set of vectors in is not independent.

(b) Explain why the following set of vectors in is not independent.

(c) Explain why the following set of vectors in does not span .

(a) A set containing the zero vector is dependent. (Can you prove it?)

(b) has dimension 3 (since, for instance, the standard basis for has 3 elements). Therefore, any set of vectors in with more than 3 vectors must be dependent.

(c) has dimension 3 (since, for instance, the standard basis for has 3 elements). Therefore, any set of vectors in with fewer than 3 vectors cannot span .

8. Determine whether the following vectors are independent in . If they are dependent, find a nontrivial linear combination of the vectors which equals .

Let

The set is independent if this implies that .

The equation above is equivalent to the matrix equation

Row reduce the augmented matrix:

The corresponding equations are and . Setting gives and . Therefore,

Hence, the vectors are dependent.
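Since the specific vectors were lost from this copy, here is a sketch of the same computation in sympy with hypothetical vectors: the nullspace of the matrix whose columns are the vectors gives the coefficients of a nontrivial combination equal to the zero vector.

```python
from sympy import Matrix

# Hypothetical dependent vectors (the originals were lost from this
# copy): v3 = v1 + 2*v2.
v1 = Matrix([1, 0, 1])
v2 = Matrix([0, 1, 1])
v3 = Matrix([1, 2, 3])

A = Matrix.hstack(v1, v2, v3)    # vectors as columns
c = A.nullspace()[0]             # a nonzero solution of A*c = 0

# c gives a nontrivial combination equal to the zero vector.
combo = c[0]*v1 + c[1]*v2 + c[2]*v3
print(c.T)       # Matrix([[-1, -2, 1]])
print(combo.T)   # Matrix([[0, 0, 0]])
```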

9. Let V be a vector space, and suppose is a dependent set of vectors in V. Let

(The a's, b's, and c's are scalars.) Prove that is dependent.

Let be the span of in V. Since is dependent, I can express one of the vectors in terms of the others. Without loss of generality, suppose w can be expressed as a linear combination of u and v. Then

If is independent, then it's a basis for W. If it's dependent, then I can express one of u, v in terms of the other. Again without loss of generality, suppose v is a multiple of u. Then

If u is nonzero, then is independent, and it's a basis for W. The only other possibility is that . In that case, , so .

Considering all of these cases, I've shown that the dimension of W is 2, 1, or 0. Since is a set of 3 vectors in W, it follows that must be dependent.

10. Write the vector in as a linear combination of the vectors

I want to find a, b, c such that

This gives the matrix equation

Form the augmented matrix and row reduce:

The last matrix says , , and . Thus,

11. Determine whether the following vectors span . If they don't, find the dimension of the subspace they span.

Construct a matrix with the vectors as rows and row reduce:

The row reduced echelon matrix has the same row space as the original matrix, and its nonzero rows form a basis for the row space. Since there are two nonzero rows, the given vectors do not span ; they span a subspace of dimension 2.
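A sketch of this computation in sympy, with hypothetical vectors since the originals were lost from this copy; the number of pivot (nonzero) rows in the rref is the dimension of the span:

```python
from sympy import Matrix

# Hypothetical vectors in R^3 as rows; the second row is twice the
# first, so the span has dimension 2.
A = Matrix([
    [1, 2, 3],
    [2, 4, 6],
    [0, 1, 1],
])
R, pivots = A.rref()
print(len(pivots))   # 2 < 3, so these vectors don't span R^3
```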

12. Consider the following subspace of the real vector space :

Find a basis for W. Prove that your set is a basis by showing that it spans W and is independent.

Let

spans W:

To show is independent, let

Then

Equating entries gives . Equating entries gives . Equating entries gives ; since , I get . This proves that the set is independent.

Hence, is a basis for W.

13. Let

(a) Show that V is a subspace of .

(b) If the following set of matrices in V spans V, prove it. If it doesn't span V, find a specific element of V which is not in the span of the set

(a) Let

Then

Also, if , then

Therefore, V is a subspace of .

(b) Take an arbitrary matrix in V and try to write it as a linear combination of the elements of S:

Equating corresponding entries, I obtain four equations:

The last two equations give

If I plug these into the first two equations, they are satisfied. This means that I can find a linear combination of the elements of S which is equal to any element of V. Specifically,

Therefore, S spans V.

14. Find bases for the row space, the column space, and the null space for the following real matrix:

Row reduce:

A basis for the row space is given by the nonzero rows of the row reduced echelon matrix:

The leading coefficients occur in the first, third, and fifth
columns. Therefore, the first, third, and fifth columns *of the
original matrix* form a basis for the column space of the
original matrix:

To find a basis for the null space, use a, b, c, d, and e as variables, and regard the row-reduced echelon matrix as the coefficient matrix for a homogeneous system. The corresponding equations are

Solve for the leading coefficient variables:

The parametric solution is

Hence,

Therefore, a basis for the null space is
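Problems 14-17 all follow this pattern, and you can cross-check your hand computation in sympy. A sketch with a hypothetical matrix (the matrix in this problem was lost from this copy):

```python
from sympy import Matrix

# Hypothetical matrix; row 3 = row 1 + row 2.
A = Matrix([
    [1, 2, 0, 3],
    [0, 0, 1, 4],
    [1, 2, 1, 7],
])
R, pivots = A.rref()

row_basis = [R.row(i) for i in range(len(pivots))]   # nonzero rows of the rref
col_basis = [A.col(j) for j in pivots]               # pivot columns of A ITSELF
null_basis = A.nullspace()

# Sanity check: rank + nullity = number of columns.
print(len(pivots), len(null_basis), A.cols)   # 2 2 4
```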

15. Find bases for the row space, the column space, and the null space for the following matrix over :

Row reduce the matrix:

A basis for the row space is given by the nonzero rows of the row reduced echelon matrix:

The leading coefficients occur in the first and third columns.
Therefore, the first and third columns *of the original
matrix* form a basis for the column space of the original matrix:

To find a basis for the null space, use a, b, c, and d as variables, and regard the row-reduced echelon matrix as the coefficient matrix for a homogeneous system. The corresponding equations are

Solve for the leading coefficient variables:

The parametric solution is

Therefore,

Hence, a basis for the null space is given by

16. Find bases for the row space, the column space, and the null space for the following real matrix:

Row reduce the matrix:

A basis for the row space is given by the nonzero rows of the row reduced echelon matrix:

The leading coefficients occur in the first, third, and fourth
columns. Hence, the first, third, and fourth columns *of the
original matrix* form a basis for the column space:

To find a basis for the null space, use a, b, c, d, and e as variables, and regard the row-reduced echelon matrix as the coefficient matrix for a homogeneous system. The corresponding equations are

Solve for the leading coefficient variables:

The parametric solution is

Hence,

Therefore, a basis for the null space is

17. Find bases for the row space, the column space, and the null space for the following real matrix:

The matrix is in row reduced echelon form.

A basis for the row space is given by the nonzero rows of the matrix:

The leading coefficients occur in the second and third columns.
*Since the given matrix is in row reduced echelon form*, its
second and third columns form a basis for the column space:

To find a basis for the null space, use a, b, c, and d as variables, and regard the matrix as the coefficient matrix for a homogeneous system. The corresponding equations are

Solve for the leading coefficient variables:

The parametric solution is

Hence,

Therefore, a basis for the null space is

18. Show that the following set of matrices is an independent subset of .

Suppose that

I must show that . I have

Equating entries gives

Plugging into gives . Plugging into gives . Hence, the set is independent.

19. Show that the set is an independent set in the real vector space of differentiable functions on .

I'll use the Wronskian. I have

When , I have . Therefore, the set is independent.
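sympy can check a Wronskian computation like this one. Here is a sketch for the hypothetical set {e^x, e^(2x)} (the set in this problem was lost from this copy):

```python
from sympy import symbols, exp, simplify, wronskian

x = symbols('x')
# Wronskian of the hypothetical set {e^x, e^(2x)}.
W = simplify(wronskian([exp(x), exp(2*x)], x))
print(W)   # exp(3*x), nonzero for every x, so the set is independent
```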

20. Find a standard basis vector such that the following set forms a basis for :

If I make a matrix with these vectors as rows, the row reduced echelon form will have the same row space.

The nonzero rows of a row reduced echelon matrix are independent. There are four rows here, so I need a fifth vector which is independent of these four.

The row reduced echelon matrix has leading coefficients in columns 1
through 4. So I can get a vector which is independent of these rows
by using the standard basis vector which has a 1 in the
*fifth* position.

Therefore, the following set is a basis for :

21. Find standard basis vectors such that the following set forms a basis for .

Construct a matrix with the given vectors as rows and row reduce:

The row reduced echelon matrix has leading coefficients in the first and fourth columns. Therefore, I add standard basis vectors having 1's in the second and third positions. The basis is

22. (a) Find a basis for the subspace of spanned by the set

(b) Find a subset of the following set S which forms a basis for the subspace of spanned by S.

(a) Note that I'm not required to use any of the original vectors. Construct a matrix with the vectors as rows and row reduce:

is a basis for the subspace spanned by the original set of vectors.

(b) In this case, I'm required to use (some of) the original vectors.
Hence, I construct a matrix with the vectors as *columns* and
row reduce:

The row reduced echelon matrix has leading coefficients in the first and second columns. Therefore, the first and second columns of the original matrix --- i.e. the first and second vectors in the original set of vectors --- form a basis for the subspace spanned by S. The basis is

23. Consider the following real matrices:

Determine whether the row space of A is contained in the row space of B. If it is, prove it. If it isn't, find a vector in the row space of A which is not in the row space of B.

I will do this in two ways: a short way and a long way.

First, I'll do the short way. Row reducing A and B gives

Since A and B have the same row-reduced echelon form, and since a matrix and its row-reduced echelon form have the same row space, the row spaces of A and B are in fact the same. (So trivially, the row space of A is contained in the row space of B.)

Next, here's a longer way which uses the definition of row space.

The row space of a matrix M is the subspace spanned by the rows of the matrix: that is, the set of all linear combinations of the rows of the matrix.

Hence, to show that the row space of A is contained in the row space of B I have to show that every linear combination of the rows of A is a linear combination of the rows of B.

*It's enough to show that every row of* A *is a linear
combination of rows of* B. To see this, suppose that ,
, and are the rows of A and , ,
and are the rows of B. And suppose that the 's
are linear combinations of the 's:

Then if is a linear combination of the 's, I have

That is, every linear combination of the 's is a linear combination of the 's.

So I've reduced the problem to showing that each row of A is a linear combination of rows of B. This is an "Is the vector in the span?" problem. For the first row of A, I want

Rewriting the vectors as column vectors and using matrix form, I have

Row reduce to solve:

The first row of A is a linear combination of the rows of B, with coefficients 1, 1, and -1.

For the second row, the system is

Row reduce to solve:

The second row of A is a linear combination of the rows of B, with coefficients 0, 1, and 1.

For the third row, the system is

Row reduce to solve:

The third row of A is a linear combination of the rows of B, with coefficients 3, 1, and 0.

Since all the rows of A are linear combinations of the rows of B, the row space of A is contained in the row space of B.

If in any of the three cases the system had no solution, then the
corresponding row of A would be a vector in the row space of A which
was *not* in the row space of B.
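The rank argument gives a quick mechanical version of this check. A sketch with hypothetical matrices (the originals were lost from this copy): a row r lies in the row space of B exactly when appending r to B does not increase the rank.

```python
from sympy import Matrix

# Hypothetical matrices: each row of A is built from the rows of B.
B = Matrix([[1, 0, 1],
            [0, 1, 1]])
A = Matrix([[1, 1, 2],    # row1(B) + row2(B)
            [2, 1, 3]])   # 2*row1(B) + row2(B)

# A row r is in rowspace(B) iff rank of B with r appended equals rank(B).
contained = all(
    Matrix.vstack(B, A.row(i)).rank() == B.rank()
    for i in range(A.rows)
)
print(contained)   # True: rowspace(A) is contained in rowspace(B)
```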

24. Consider the following real matrix, where :

(a) Find a value of x for which .

(b) Find a value of x for which .

(a) If , the second row is twice the first row.

The row-reduced echelon form has one nonzero row, so its rank is 1. Hence, the rank of A is 1.

(b) Let :

The row-reduced echelon form has two nonzero rows, so its rank is 2. Hence, the rank of A is 2.

25. In a real matrix A, the 5 rows are independent. What is ?

Since and , it follows that .

26. (a) Let , where F is a field. Prove that

(b) Let , where F is a field. Suppose that A is invertible. Prove that

(c) Find a matrix such that

(d) Find a matrix such that

(a) Remember the definition: To say that x is in the null space of a matrix M means that .

Let . Then

Therefore, . Hence, .

(b) In part (a), I showed that . So assuming that A is invertible, I only have to show the opposite inclusion: .

Let . Then

This means that . Hence, . Combined with the other inclusion, I've proved that .

(c) Here's the idea. I want the null space --- the set of vectors
that are multiplied to the zero vector --- to get *bigger* when I
square A. One way to multiply more vectors to the zero vector is to have more
all-zero rows in A² than in A. Experiment a bit with matrix
multiplication. You might come up with something like this.

Let

Then

Now the null space of A is contained in the null space of A² --- just take B
to equal A in part (a) above. To show that this is a *proper*
containment, I have to find a vector which is in the null space of A² but is not in the null space of A.

Now the null space of A consists of vectors such that

Multiplying out the left side, I get , or . So vectors in the null space of A must have second component 0.

On the other hand, A² is the zero matrix, so it multiplies
*everything* to the zero vector --- the null space of A² is the whole
space.

So to get something in the null space of A² which is not in the null space of A, I just take any vector which does *not*
have second component 0 --- such as . You can check that
A² multiplies this vector to the zero vector, but A does not.

(d) The row space is the set spanned by the rows. So you'd expect the
row space to get *smaller* if there are more all-zero rows in
than in A. But I just came up with an example that does this in part
(c): Use

The row space is all multiples of : vectors of the form , where .

Now

Obviously, the row space of is just --- it's the only vector you can build using the rows of . And since has the form (with ), the row space of is contained in the row space of A.

On the other hand, the vector is in the row space of A, but it's not in the row space of . Therefore, in this case, .
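The 2×2 example above is easy to check numerically. A sketch, using the matrix A with a single 1 in the upper-right corner, as in the solution:

```python
import numpy as np

# A is nonzero, but A^2 = 0.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
A2 = A @ A
assert np.all(A2 == 0)   # A^2 is the zero matrix

e2 = np.array([0.0, 1.0])
print(A @ e2)    # [1. 0.] -- e2 is NOT in the null space of A
print(A2 @ e2)   # [0. 0.] -- but it IS in the null space of A^2
# So the null space of A is properly contained in that of A^2, and the
# row space of A^2 (just the zero vector) is properly contained in
# the row space of A.
```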

27. For the given function, check each axiom for a linear transformation. If the axiom holds, prove it. If the axiom does not hold, give a specific counterexample.

(a) The function given by

(b) The function given by

(c) The function given by

(d) The function given by

(a) The sum axiom does not hold:

The scalar multiplication axiom does not hold:

(b) g may be written as

Since g is defined by multiplication by a matrix (of numbers), g must be a linear transformation.

Here are direct checks of the axioms:

(c) The sum axiom doesn't hold:

However,

The scalar multiplication axiom doesn't hold:

(d) The sum axiom doesn't hold:

The scalar multiplication axiom doesn't hold:

28. Let A be a *fixed* matrix in . Define by

Prove that T is a linear transformation.

Let . Then

Next, let and let . Then

Therefore, T is a linear transformation.

29. Define by

is called the *trace function*.

(a) Show that is a linear transformation.

(b) Find a basis for the null space of .

(a)

Let . Then

Hence, is a linear transformation.

(b) The kernel of consists of the elements of which maps to 0.

Therefore, , so

Now

Consider the set of matrices

All the matrices in S are in the kernel of . Moreover, the equation above shows that they span .

To show that they're independent, suppose that

Then

Therefore, the matrices are independent.

Therefore, the set S is a basis for .
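As a sanity check on dimensions: the trace is a nonzero linear map onto the scalars, so its kernel in the n × n matrices has dimension n² - 1. A sketch for 2 × 2 matrices (assuming that is the size in this problem, which was lost from this copy):

```python
from sympy import Matrix

# Three traceless 2x2 matrices; the kernel of the trace on 2x2
# matrices has dimension 2^2 - 1 = 3.
E1 = Matrix([[1, 0], [0, -1]])
E2 = Matrix([[0, 1], [0, 0]])
E3 = Matrix([[0, 0], [1, 0]])
assert all(E.trace() == 0 for E in (E1, E2, E3))

# Independence: flatten each matrix to a column vector and check the rank.
V = Matrix.hstack(*(Matrix(4, 1, list(E)) for E in (E1, E2, E3)))
print(V.rank())   # 3, so {E1, E2, E3} is a basis for the kernel
```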

30. Define by

(a) Compute the derivative .

(b) Show that there are no values for which is the zero matrix.

(a)

(b) Set :

Consider the second and third entries in the second column. They give the equations

Since , I may cancel it to get

There is no v for which these equations are both true. You can think about the graphs of sine and cosine to see this; alternatively, if both equations were true, you could square both and add:

This contradiction shows that the equations can't both hold simultaneously.

31. Find a linear transformation that takes the parallelogram determined by the vectors and to the parallelogram determined by the vectors and .

The following transformation takes the square determined by the vectors and to the parallelogram determined by the vectors and :

Therefore, the inverse takes the parallelogram determined by the vectors and to the square determined by the vectors and :

The following transformation takes the square determined by the vectors and to the parallelogram determined by the vectors and :

Therefore, the composite takes the parallelogram determined by the vectors and to the parallelogram determined by the vectors and :

32. Construct a linear transformation which reflects points in across the line .

Let . The following transformation rotates counterclockwise by :

Thus, rotates clockwise by :

The following transformation reflects across the x-axis:

The following composite first uses to rotate clockwise by , moving the line onto the x-axis. Next, h reflects points across the x-axis. Finally, g rotates counterclockwise by , which moves the x-axis back up to .

33. Find an affine transformation which takes the unit square to the parallelogram with vertices , , , in such a way that the origin goes to A, the vector goes to , and the vector goes to .

First, find vectors for the sides of the parallelogram: and .

Define

The matrix carries the unit square to the parallelogram determined by the vectors, but with mapped to . In order to map the square to the parallelogram , I added a vector to translate to . (The rest of the parallelogram is translated as well.)

34. Consider the following bases for the real vector space :

(a) Construct the translation matrices

(b) Write the vector in terms of .

(c) Write the vector in terms of the standard basis.

(d) Write the vector in terms of .

(e) Define by

Find .

(a)

Note that .

(b)

(c)

(d)

(e) First,

That is,

Then

35. Let be given by

Consider the following bases for and .

(a) Find , , and .

(b) Express relative to the basis .

(c) Express relative to the standard basis.

(d) Find .

(a)

(b)

(c)

(d) First,

Thus,

Therefore,

36. What is ?

37. Simplify .

38. Compute .

Use DeMoivre's Formula:
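A sketch in Python's `cmath`, using the hypothetical value 1 + i since the number in this problem was lost from this copy:

```python
import cmath

# 1 + i = sqrt(2) * (cos(pi/4) + i sin(pi/4)), so by DeMoivre's formula
# (1 + i)^8 = (sqrt(2))^8 * (cos(2 pi) + i sin(2 pi)) = 16.
z = 1 + 1j
r, t = cmath.polar(z)                  # r = sqrt(2), t = pi/4
n = 8
w = (r ** n) * cmath.rect(1.0, n * t)  # DeMoivre: r^n * (cos nt + i sin nt)
print(abs(w - 16))                     # essentially 0
```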

39. Find the inverse of the following matrix. (Multiply any constants outside the matrix into the matrix and simplify.)

*To do two things at once is to do neither.* --- *Publilius Syrus*

Copyright 2020 by Bruce Ikenaga