Solutions To Mathematics Textbooks/Algebra (9780132413770)/Chapter 3

Exercise 1.2

Using an "educated guess" one observes that  . With this, it is easy to see that  ,  ,   and  .

Exercise 1.3

 , as all the coefficients divisible by 7 reduce to 0.
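
Since the exercise's own expression did not survive transcription, the following sympy sketch illustrates the phenomenon being used on a hypothetical example, the expansion of (x + y)^7 over F_7: every binomial coefficient apart from the outermost ones is divisible by 7, so only x^7 and y^7 survive the reduction.

```python
# A sketch of the "coefficients divisible by 7 reduce to 0" step, illustrated
# on the hypothetical example (x + y)**7 over F_7 (the exercise's own
# expression did not survive transcription). All the middle binomial
# coefficients are divisible by 7, so only x**7 and y**7 remain.
from sympy import symbols, expand, Poly

x, y = symbols('x y')
expanded = Poly(expand((x + y)**7), x, y)

surviving = {monom: coeff % 7 for monom, coeff in expanded.terms() if coeff % 7 != 0}
print(surviving)   # {(7, 0): 1, (0, 7): 1}, i.e. x**7 + y**7
```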

Exercise 1.10

Let us denote the matrices (appearing in the same order as in the book) by  . We need to check the following:

  •   is a group with matrix addition and   as the identity. We get the following addition table
+   0   1   A   B
0   0   1   A   B
1   1   0   B   A
A   A   B   0   1
B   B   A   1   0

So we see that the elements with addition form an abelian group with   as the identity.

  •   is a group with   as the identity. Again we have the multiplication table
·   1   A   B
1   1   A   B
A   A   B   1
B   B   1   A

Again, we see that the elements with matrix multiplication form an abelian group with   as the identity.

  • The distributive law follows from the distributive law for matrices in general. (A quick computational check of the two tables is given below.)
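
The check can also be done mechanically from the two tables. The sketch below works purely with the symbols 0, 1, A and B as they appear in the tables (the concrete matrices are not needed) and verifies the identity, inverse, commutativity and associativity properties used above.

```python
# Sanity check of the two tables above: verify that the addition table and the
# multiplication table each define an abelian group.
add = {  # row entry + column entry
    ('0', '0'): '0', ('0', '1'): '1', ('0', 'A'): 'A', ('0', 'B'): 'B',
    ('1', '0'): '1', ('1', '1'): '0', ('1', 'A'): 'B', ('1', 'B'): 'A',
    ('A', '0'): 'A', ('A', '1'): 'B', ('A', 'A'): '0', ('A', 'B'): '1',
    ('B', '0'): 'B', ('B', '1'): 'A', ('B', 'A'): '1', ('B', 'B'): '0',
}
mul = {  # row entry * column entry
    ('1', '1'): '1', ('1', 'A'): 'A', ('1', 'B'): 'B',
    ('A', '1'): 'A', ('A', 'A'): 'B', ('A', 'B'): '1',
    ('B', '1'): 'B', ('B', 'A'): '1', ('B', 'B'): 'A',
}

def is_abelian_group(table, elems, identity):
    has_identity = all(table[(identity, x)] == x for x in elems)
    has_inverses = all(any(table[(x, y)] == identity for y in elems) for x in elems)
    commutative = all(table[(x, y)] == table[(y, x)] for x in elems for y in elems)
    associative = all(table[(table[(x, y)], z)] == table[(x, table[(y, z)])]
                      for x in elems for y in elems for z in elems)
    return has_identity and has_inverses and commutative and associative

print(is_abelian_group(add, ['0', '1', 'A', 'B'], '0'))   # True
print(is_abelian_group(mul, ['1', 'A', 'B'], '1'))        # True
```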

Exercise 1.11

Writing out the product and the sum of two elements of the given set, and noticing that the coefficients of both elements, and hence of their sums and products, are in  , shows that the sum and the product are again in the set. Seeing that each non-zero element has an inverse under the  -operation is trivial. To see the same for the product operation, write the equations coming from the condition  , where   is a known element of the set and   is a candidate for its inverse with unknown coefficients, as a linear system; by Corollary 3.2.8 this system has a solution. The distributive law is immediate.
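
Because the actual set did not survive transcription, the following sympy sketch carries out the linear-system step on a stand-in set: matrices of the form aI + bJ with rational a and b, where J is the matrix [[0, 2], [1, 0]] (so that J² = 2I). The recipe is the same for the set in the book: the condition "element times candidate inverse equals the identity" is linear in the unknown coefficients of the candidate.

```python
# A minimal sketch of the "inverse as a linear system" step on a stand-in set
# (the actual set from the exercise did not survive transcription).
import sympy as sp

I2 = sp.eye(2)
J = sp.Matrix([[0, 2], [1, 0]])            # stand-in generator, J*J = 2*I

a, b = sp.Rational(3), sp.Rational(5)      # a known, non-zero element a*I + b*J
c, d = sp.symbols('c d')                   # unknown coefficients of the candidate inverse

X = a * I2 + b * J
Y = c * I2 + d * J

equations = [sp.Eq(lhs, rhs) for lhs, rhs in zip(X * Y, I2)]   # entrywise X*Y = I
print(sp.solve(equations, (c, d), dict=True))                  # [{c: -3/41, d: 5/41}]
```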

Exercise 2.2

a) The space of symmetric matrices is a vector space, since the sum of two symmetric matrices is a symmetric matrix, and a scaling of a symmetric matrix by a scalar is symmetric.

b) The set of invertible matrices is not a vector space, since it does not contain the zero matrix.

c) The set of upper triangular matrices is also a vector space, by reasoning similar to that used in part a).
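
As a quick numerical spot-check of parts a) and b) (not a proof), the snippet below verifies that sums and scalar multiples of symmetric matrices stay symmetric, and exhibits two invertible matrices whose sum is the zero matrix.

```python
# Spot-check of parts a) and b): closure of symmetric matrices under addition
# and scaling, and failure of the subspace test for invertible matrices.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)); A = A + A.T   # a random symmetric matrix
B = rng.standard_normal((3, 3)); B = B + B.T   # another one

print(np.allclose(A + B, (A + B).T))           # True: the sum is symmetric
print(np.allclose(2.5 * A, (2.5 * A).T))       # True: a scalar multiple is symmetric

I = np.eye(3)
print(np.linalg.det(I), np.linalg.det(-I))     # both non-zero: I and -I are invertible
print(np.linalg.det(I + (-I)))                 # 0.0: their sum (the zero matrix) is not
```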

Exercise 3.1

One possible basis for the space of symmetric matrices is given, for example, by the matrices   for   that have zeros everywhere except in the   and   entries. There are   such matrices, and they are linearly independent, since no two of them have ones in the same entry. Furthermore, the matrices   are symmetric, and clearly any symmetric matrix can be written as a linear combination of  .
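
The following sketch builds the basis described above for a concrete size (n = 4, chosen arbitrarily) and confirms that it has n(n + 1)/2 elements and that they are linearly independent.

```python
# Build the basis described above: matrices with a 1 in entry (i, i), or 1s in
# entries (i, j) and (j, i) for i < j, and zeros elsewhere.
import numpy as np

def symmetric_basis(n):
    basis = []
    for i in range(n):
        for j in range(i, n):
            E = np.zeros((n, n))
            E[i, j] = E[j, i] = 1.0
            basis.append(E)
    return basis

n = 4
basis = symmetric_basis(n)
print(len(basis) == n * (n + 1) // 2)             # True: 10 matrices for n = 4

# Flatten each matrix to a vector; full column rank means linear independence.
M = np.column_stack([E.reshape(-1) for E in basis])
print(np.linalg.matrix_rank(M) == len(basis))     # True
```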

Exercise 3.7

Let   be coefficients such that

  . (1)

The matrix   has as its  th column the vector  , where   is the  th element of the vector  . Denote  , so that (1), together with the condition that the vectors   form a basis, implies that   for all  . Then we must have   for all  . This implies that   for all  , but since the vectors   form a basis, we must have   for all  .

Exercise 3.8

Let   be the matrix with the vectors   as column vectors. Let  . Then   is equivalent to saying that   is a linear combination of the vectors  . By Theorem 1.2.21,   has a unique solution   if and only if   is invertible.

In particular, this means that   also has a unique solution  . This shows that 1) the vectors   span the space  , and 2) the vectors   are linearly independent.
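
A small numerical illustration of the equivalence: when the matrix of column vectors is invertible, every vector has a unique coordinate vector with respect to the columns. The matrix below is an arbitrary invertible example.

```python
# Columns of an invertible matrix form a basis: every b has unique coordinates x.
import numpy as np

A = np.array([[2., 1., 0.],
              [0., 1., 3.],
              [1., 0., 1.]])
b = np.array([1., 2., 3.])

print(np.linalg.det(A))          # non-zero, so the columns of A form a basis
x = np.linalg.solve(A, b)        # the unique coordinates of b in that basis
print(x, np.allclose(A @ x, b))  # True
```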

Exercise 4.2

a)  .

b)  .

c)   or  .

Exercise 4.3

The given operations correspond to row operations on matrices. By Theorem 1.2.16, any matrix that is invertible can be reduced to the identity using such operations. In Exercise 3.8 we proved that the columns of a matrix form a basis if and only if the matrix is invertible.

Exercise 4.4

a) Any basis of   corresponds to an invertible matrix, i.e., an element of  . On the other hand, the column vectors of any element of   form a basis of  .

b) For   we have that there are in total   matrices in   of which we have to count the ones that are not invertible. Considering the columns of a matrix in  , we have

  • There are   choices of a first column that is not the   column vector, and for each of them   choices of a second column that is a scaling of the first column by a value other than zero.
  • If the first column is  , the second column can be chosen in   ways such that it is not also the   vector.
  • If the second column is  , the first column can also be chosen in   ways such that it is not the   vector.
  • There is exactly one matrix with both columns  .

Combining these facts, we get that there are  invertible matrices in  .

For   we want to compute the number of matrices in   with determinant equal to 1. In   there are equally many elements with determinant 1, 2, 3, etc., since multiplication by a fixed matrix with a given non-zero determinant is a bijection between the elements of determinant 1 and the elements with that determinant. Therefore, the number of elements in   is the number of elements with determinant 1 times  . From the previous calculation we thus get that the number of elements in   is  .
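
Both counts can be confirmed by brute force for small primes. The sketch below enumerates all 2 × 2 matrices over F_p and compares the number of invertible matrices with (p² − 1)(p² − p), and the number of determinant-1 matrices with (p² − 1)p, which is what the argument above yields.

```python
# Brute-force count of invertible and determinant-1 matrices over F_p.
from itertools import product

def count_gl2_sl2(p):
    gl = sl = 0
    for a, b, c, d in product(range(p), repeat=4):
        det = (a * d - b * c) % p
        if det != 0:
            gl += 1
            if det == 1:
                sl += 1
    return gl, sl

for p in (2, 3, 5):
    gl, sl = count_gl2_sl2(p)
    print(p, gl == (p**2 - 1) * (p**2 - p), sl == (p**2 - 1) * p)   # True, True
```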

Exercise 4.5

a) The key to finding the number of subspaces is to find the number of linearly independent vectors in  .

  • Subspaces of dimension 0: 1.
  • Subspaces of dimension 1: Each subspace is spanned by a nonzero vector of the form   with  . There are   such vectors. For any such given vector, there are   nonzero scalings with a scalar in  . Hence, the number of linearly independent vectors is  . Each such vector spans a subspace that is different from the subspaces spanned by the others.
  • Subspaces of dimension 2: Let   be some maximal collection of linearly independent vectors of  . We know that  , and any two vectors from   span a two-dimensional subspace of  . We can choose two vectors from   in   ways, but this is not the number of two-dimensional subspaces of  . Indeed, say we choose   such that  . Then   is a subspace of   containing   points and   linearly independent vectors. As  , this means   contains some vector  . The number of pairs of linearly independent vectors in   is  , and hence the number of two-dimensional subspaces of   is  . Another way of arriving at the same conclusion is as follows: Let   be a subspace of   of dimension 2. Then   is spanned by two linearly independent vectors, and there is a vector   such that  . In other words, the vectors in   are linearly independent of  . We know that there are   linearly independent vectors in  , so whenever we choose one such vector, we are left with a subspace of dimension 2 that does not contain the chosen vector (but contains all the others). Hence, there are also   subspaces of dimension 2.
  • Subspaces of dimension 3: 1.

b) The case of   can be generalised from the previous case (a brute-force check of both counts is sketched after the list below):

  • Subspaces of dimension 0: 1.
  • Subspaces of dimension 1: The number of linearly independent vectors can be calculated similarly to a), and we get  .
  • Subspaces of dimension 2: As in the case of  , the number of two-dimensional subspaces is  .
  • Subspaces of dimension 3: As in the case of  , for each three-dimensional subspace there is a one-dimensional subspace "left over". Therefore the number of three-dimensional subspaces is  .
  • Subspaces of dimension 4: 1.
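
Assuming the field in the exercise is F_p, the counts in part a) can be verified by brute force for small p; the sketch below enumerates the spans of all triples of vectors in the three-dimensional space and tallies the distinct subspaces by dimension. The same function with n = 4 checks part b), only more slowly.

```python
# Brute-force subspace count for F_p^n, assuming the field is F_p.
from itertools import product

def subspace_counts(p, n):
    vectors = list(product(range(p), repeat=n))
    seen, counts = set(), {}
    for gens in product(vectors, repeat=n):          # every subspace has <= n generators
        span = {(0,) * n}
        grew = True
        while grew:                                  # close under "add a multiple of a generator"
            grew = False
            for v in list(span):
                for g in gens:
                    for c in range(1, p):
                        w = tuple((vi + c * gi) % p for vi, gi in zip(v, g))
                        if w not in span:
                            span.add(w)
                            grew = True
        if frozenset(span) not in seen:
            seen.add(frozenset(span))
            dim = 0
            while p ** dim < len(span):              # |span| = p**dim
                dim += 1
            counts[dim] = counts.get(dim, 0) + 1
    return counts

print(dict(sorted(subspace_counts(2, 3).items())))   # {0: 1, 1: 7, 2: 7, 3: 1}
print(dict(sorted(subspace_counts(3, 3).items())))   # {0: 1, 1: 13, 2: 13, 3: 1}
```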

Exercise 5.1

Let   be the space of symmetric and   the space of skew-symmetric matrices. It is clear that   and   and that   contains only the zero matrix, so the spaces are independent. By Proposition 3.6.4 b), we have   and so by Proposition 3.4.23,  .
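
A numerical illustration of the direct sum (assuming real matrices): every square matrix is the sum of its symmetric part and its skew-symmetric part, and the two parts recover the matrix exactly.

```python
# Decompose an arbitrary real matrix into symmetric and skew-symmetric parts.
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))

S = (M + M.T) / 2          # symmetric part
K = (M - M.T) / 2          # skew-symmetric part

print(np.allclose(S, S.T))       # True
print(np.allclose(K, -K.T))      # True
print(np.allclose(S + K, M))     # True: the two parts recover M
```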

Exercise 5.2

The condition   introduces a linear dependency among the entries of the matrix. Therefore, we have  , and thus any one-dimensional subspace of   that is independent of   suffices. For example, we can take as   the span of the matrix whose top-left entry is 1 and whose other entries are 0. Then  .

Exercise 6.1

The given vectors span the set of sequences that are constant apart from a finite set of indices.

Exercise M.3

a) Let   and  , and  . Then we have also  . The coefficients   are linear in the coefficients  , so we can solve as follows:

 

Setting each   to zero yields a system of equations

 .

By Corollary 1.2.14 this system has a solution where at least one of the coefficients   is non-zero, so there is a polynomial   that is not identically zero, but   for every  .

b) We can solve, for example,   using a similar approach to part a).

c) Let   be a polynomial of degree   and   a polynomial of degree  , so that   and  . Let   be a polynomial with unknown coefficients  . In order to have this polynomial vanish at  , we have to solve the equations that set the coefficient of   to 0 for each   in the polynomial  . These equations are linear in  , and there are   of them. On the other hand, there are   variables  , so by Corollary 1.2.14 the linear system has a non-zero solution. Note that in part a) we restricted the degree of the polynomial   to 2, and thus did not end up with as many equations as in this proof.
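
The dimension-count argument can be carried out concretely. The sketch below uses a hypothetical pair f(t) = t² + 1 and g(t) = t³ + t (the data of the exercise did not survive transcription) and looks for a non-zero polynomial p(x, y), supported on 20 fixed monomials, such that p(f(t), g(t)) vanishes identically; the composition has degree at most 17 in t, so there are only 18 linear conditions on the 20 unknown coefficients, and the homogeneous system has a non-zero solution.

```python
# Find a non-zero p(x, y) with p(f(t), g(t)) = 0 by solving the homogeneous
# linear system in the unknown coefficients of p (hypothetical f, g below).
import sympy as sp

t, x, y = sp.symbols('t x y')
f = t**2 + 1
g = t**3 + t

monomials = [(i, j) for i in range(5) for j in range(4)]          # 20 monomials x**i * y**j
coeffs = sp.symbols(f'c0:{len(monomials)}')
p = sum(c * x**i * y**j for c, (i, j) in zip(coeffs, monomials))

composed = sp.Poly(sp.expand(p.subs({x: f, y: g})), t)
system, _ = sp.linear_eq_to_matrix(composed.all_coeffs(), coeffs)  # one row per power of t

kernel = system.nullspace()                       # non-trivial: 18 conditions, 20 unknowns
p_found = sp.expand(p.subs(dict(zip(coeffs, kernel[0]))))
print(p_found)                                    # a non-zero polynomial
print(sp.expand(p_found.subs({x: f, y: g})))      # 0: it vanishes identically on (f, g)
```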