Linear Algebra/Matrices

Matrices and Linear Transformations

It turns out that linear transformations can be represented in a one-to-one fashion by matrices. This chapter will most likely be a review, as the topic has probably already been covered in high school. The establishment of a one-to-one correspondence between linear transformations and matrices is very important in the study of linear transformations.

Suppose you have a set of basis vectors x1, x2, x3, ..., xm of a vector space X and basis vectors y1, y2, y3, ..., yn of a vector space Y.

Consider a linear transformation T from X to Y, and the vectors

T(x_1) = a_{11}y_1 + a_{21}y_2 + a_{31}y_3 + \cdots + a_{n1}y_n,
T(x_2) = a_{12}y_1 + a_{22}y_2 + a_{32}y_3 + \cdots + a_{n2}y_n,
T(x_3) = a_{13}y_1 + a_{23}y_2 + a_{33}y_3 + \cdots + a_{n3}y_n,
\vdots
T(x_m) = a_{1m}y_1 + a_{2m}y_2 + a_{3m}y_3 + \cdots + a_{nm}y_n.

You can arrange these coefficients in a matrix

A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1m} \\ a_{21} & a_{22} & \cdots & a_{2m} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nm} \end{pmatrix}.

Thus, if you have any vector

x = \xi_1 x_1 + \xi_2 x_2 + \cdots + \xi_m x_m,

then

T(x) = \sum_{j=1}^{m} \xi_j T(x_j) = \sum_{i=1}^{n} \left( \sum_{j=1}^{m} a_{ij} \xi_j \right) y_i.

Thus, T(x) is a linear combination of basis vectors

T(x) = \eta_1 y_1 + \eta_2 y_2 + \cdots + \eta_n y_n, where

\eta_i = \sum_{j=1}^{m} a_{ij} \xi_j.

Thus, knowledge of the matrix of T with respect to the bases determines the result of the linear transformation on any vector.

Conversely, given any matrix |a_{ij}|, there is a corresponding function whose value at x = \xi_1 x_1 + \cdots + \xi_m x_m is

T(x) = \eta_1 y_1 + \eta_2 y_2 + \cdots + \eta_n y_n, where

\eta_i = \sum_{j=1}^{m} a_{ij} \xi_j.

This is clearly a linear transformation, whose matrix coincides with the given matrix. This establishes the fact that every n-by-m matrix determines a linear transformation mapping an m-dimensional vector space into an n-dimensional vector space.
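
To make the correspondence concrete, here is a minimal NumPy sketch; the particular transformation T and the use of the standard bases are illustrative assumptions, not anything fixed by the text. It builds the matrix of a transformation column by column from the images of the basis vectors, then checks that applying the matrix to coordinates agrees with applying the transformation.

    import numpy as np

    # A hypothetical linear transformation T: R^3 -> R^2, chosen only for illustration.
    def T(x):
        x1, x2, x3 = x
        return np.array([x1 + 2*x2, 3*x3 - x1])

    # The images of the standard basis vectors of R^3 form the columns of the matrix.
    basis = np.eye(3)
    A = np.column_stack([T(e) for e in basis])   # the 2-by-3 matrix |a_ij|

    # Applying the matrix to the coordinates xi gives the coordinates eta of T(x).
    xi = np.array([1.0, -1.0, 2.0])
    eta = A @ xi
    assert np.allclose(eta, T(xi))               # matrix and transformation agree
    print(A)
    print(eta)

Column j of the matrix is exactly the coordinate vector of T(x_j), matching the coefficients a_{ij} above.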

Algebra of Transformations

Addition

Define the sum C = A + B of two linear transformations A and B to be the function C(x) = A(x) + B(x). One can easily verify that this is also a linear transformation. You can verify that, given linear transformations A, B, and C:

  1. A+B=B+A
  2. (A+B)+C=A+(B+C)
  3. A+0=A
  4. A+(-A)=0

where 0 is the zero operator and -A is the function (-A)(x) = -(A(x)), which one can easily verify to be a linear transformation.

Scalar multiplication

Given a linear transformation A, define the function αA, where α is an element of the field, to be the function (αA)(x) = α(A(x)).

You can easily verify that, given linear transformations A and B and elements α and β of the field:

  1. α(A+B) = αA + αB
  2. (α+β)A = αA + βA
  3. (αβ)A = α(βA)
  4. 1A = A

This implies that linear transformations form a vector space.

Multiplication

Given a linear transformation A from X to Y and a linear transformation B from Y to Z, define the function BA from X to Z to be the composition of the two functions, (BA)(x) = B(A(x)). One can easily verify that this is also a linear transformation.

Here are some useful relations that can easily be verified:

  1. A(BC) = (AB)C
  2. A(B+C) = AB + AC
  3. (A+B)C = AC + BC
  4. α(AB) = (αA)B = A(αB).
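
As a quick sanity check, the sum and the product (composition) of transformations can be written directly as functions; the maps A and B below are arbitrary illustrative choices, and linearity is spot-checked on sample inputs.

    import numpy as np

    # Two hypothetical linear maps, chosen only for illustration.
    def A(x):                      # A: R^2 -> R^2
        return np.array([x[0] + x[1], 2*x[0]])

    def B(y):                      # B: R^2 -> R^2
        return np.array([3*y[1], y[0] - y[1]])

    def add(F, G):                 # (F+G)(x) = F(x) + G(x)
        return lambda x: F(x) + G(x)

    def compose(F, G):             # (FG)(x) = F(G(x))
        return lambda x: F(G(x))

    x, y = np.array([1.0, 2.0]), np.array([-3.0, 0.5])
    S, C = add(A, B), compose(B, A)    # the sum A+B and the product BA
    # Spot-check additivity: L(x + y) == L(x) + L(y) for both new maps.
    assert np.allclose(S(x + y), S(x) + S(y))
    assert np.allclose(C(x + y), C(x) + C(y))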

Corresponding algebra of matrices

Since there is a one-to-one correspondence between linear transformations from m-dimensional spaces to n-dimensional spaces and n-by-m matrices, the addition, scalar multiplication, and multiplication operations on matrices are defined through this correspondence, and all the properties stated above carry over to matrices. For instance, the sum of matrices M and N is defined as the matrix that corresponds to the sum of the two linear transformations that correspond to M and N respectively. The other operations are defined similarly.

Addition

Let A = |aij| and B = |bij| be two matrices of dimension n by m. Consider A and B which are the corresponding linear transformations from an m-dimensional vector space M to an n-dimensional vector space N. Let m1, m2, m3, ..., mm, be basis vectors of M and n1, n2, n3, ..., nn be basis vectors of N. Then

A(m_j) = \sum_{i=1}^{n} a_{ij} n_i \quad \text{and} \quad B(m_j) = \sum_{i=1}^{n} b_{ij} n_i.

Thus

(A+B)(m_j) = A(m_j) + B(m_j) = \sum_{i=1}^{n} (a_{ij} + b_{ij}) n_i,

so the matrix of this operator has entries |a_{ij} + b_{ij}|. In other words, the sum of two matrices has entries that are the sums of the corresponding entries of the two matrices.

For example:

\begin{pmatrix} 1 & 3 \\ 1 & 0 \\ 1 & 2 \end{pmatrix} + \begin{pmatrix} 0 & 0 \\ 7 & 5 \\ 2 & 1 \end{pmatrix} = \begin{pmatrix} 1+0 & 3+0 \\ 1+7 & 0+5 \\ 1+2 & 2+1 \end{pmatrix} = \begin{pmatrix} 1 & 3 \\ 8 & 5 \\ 3 & 3 \end{pmatrix}

Once addition is defined, subtraction follows naturally: A - B is computed by subtracting corresponding entries of A and B, and has the same dimensions as A and B. For example:

\begin{pmatrix} 1 & 3 \\ 8 & 5 \end{pmatrix} - \begin{pmatrix} 0 & 1 \\ 7 & 5 \end{pmatrix} = \begin{pmatrix} 1 & 2 \\ 1 & 0 \end{pmatrix}
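
The same computations in NumPy, using the matrices from the addition example above; entrywise addition and subtraction are the built-in + and - operators on arrays.

    import numpy as np

    A = np.array([[1, 3], [1, 0], [1, 2]])
    B = np.array([[0, 0], [7, 5], [2, 1]])

    print(A + B)   # entrywise sum: [[1 3] [8 5] [3 3]]
    print(A - B)   # entrywise difference: [[ 1  3] [-6 -5] [-1  1]]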

Scalar multiplication

Scalar multiplication of matrices is defined as the matrix corresponding to the scalar multiple of the corresponding linear transformation.

Consider a matrix A with entries |a_{ij}| and its corresponding linear transformation A from M to N, and an element α of the field, and let m_1, m_2, m_3, ..., m_m be basis vectors of M and n_1, n_2, n_3, ..., n_n be basis vectors of N. Since

(αA)(m_j) = α A(m_j) = \sum_{i=1}^{n} (α a_{ij}) n_i,

the corresponding matrix has entries |α a_{ij}|.

For example, multiplication of a matrix by 2:

2 \begin{pmatrix} 1 & 8 & -3 \\ 4 & -2 & 5 \end{pmatrix} = \begin{pmatrix} 2 & 16 & -6 \\ 8 & -4 & 10 \end{pmatrix}
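
The same computation in NumPy, where scalar multiplication is the built-in * operator on an array:

    import numpy as np

    A = np.array([[1, 8, -3], [4, -2, 5]])
    print(2 * A)   # each entry doubled: [[ 2 16 -6] [ 8 -4 10]]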

Scalar multiplication has the following properties, which follow from the one-to-one correspondence with linear transformations:

  1. Left distributivity: (α+β)A = αA+βA.
  2. Right distributivity: α(A+B) = αA+αB.
  3. Associativity: (αβ)A = α(βA).
  4. 1A = A.
  5. 0A= 0.
  6. (-1)A = -A.

Matrix multiplication

As above, matrix multiplication is also defined through the correspondence with linear transformations: the product of two matrices is the matrix corresponding to the product of the corresponding linear transformations.

Consider an o-by-n matrix A with entries |a_{ij}| and an n-by-m matrix B with entries |b_{ij}|. Let A be the linear transformation from the n-dimensional space N to the o-dimensional space O that corresponds to A, and let B be the linear transformation from the m-dimensional space M to the n-dimensional space N that corresponds to B. Let m_1, m_2, m_3, ..., m_m be basis vectors of M, n_1, n_2, n_3, ..., n_n be basis vectors of N, and o_1, o_2, o_3, ..., o_o be basis vectors of O. Then

(AB)(m_j) = A(B(m_j)) = A\left(\sum_{k=1}^{n} b_{kj} n_k\right) = \sum_{k=1}^{n} b_{kj} A(n_k) = \sum_{k=1}^{n} b_{kj} \sum_{i=1}^{o} a_{ik} o_i = \sum_{i=1}^{o} \left(\sum_{k=1}^{n} a_{ik} b_{kj}\right) o_i.

Thus the corresponding matrix AB has entries |p_{ij}| given by:

p_{ij} = \sum_{k=1}^{n} a_{ik} b_{kj}

For example:

\begin{pmatrix} 1 & 0 & 2 \\ -1 & 3 & 1 \end{pmatrix} \begin{pmatrix} 3 & 1 \\ 2 & 1 \\ 1 & 0 \end{pmatrix} = \begin{pmatrix} 1 \cdot 3 + 0 \cdot 2 + 2 \cdot 1 & 1 \cdot 1 + 0 \cdot 1 + 2 \cdot 0 \\ -1 \cdot 3 + 3 \cdot 2 + 1 \cdot 1 & -1 \cdot 1 + 3 \cdot 1 + 1 \cdot 0 \end{pmatrix} = \begin{pmatrix} 5 & 1 \\ 4 & 2 \end{pmatrix}
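
Here is a short NumPy sketch of the entry formula, computing each p_ij with explicit loops and checking the result against the built-in matrix product; the matrices are those of the example above.

    import numpy as np

    A = np.array([[1, 0, 2], [-1, 3, 1]])     # o-by-n (2-by-3)
    B = np.array([[3, 1], [2, 1], [1, 0]])    # n-by-m (3-by-2)

    o, n = A.shape
    _, m = B.shape
    P = np.zeros((o, m), dtype=int)
    for i in range(o):
        for j in range(m):
            # p_ij is the sum over k of a_ik * b_kj
            P[i, j] = sum(A[i, k] * B[k, j] for k in range(n))

    assert np.array_equal(P, A @ B)           # matches NumPy's matrix product
    print(P)                                  # [[5 1] [4 2]]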

Matrix multiplication has the following properties, which follow from the corresponding facts about linear transformations.

  1. Associativity: A(BC) = (AB)C.
  2. Left distributivity: A(B+C) = AB+AC.
  3. Right distributivity: (A+B)C = AC+BC.
  4. IA = A = AI.
  5. α(BC) = (αB)C = B(αC).

Matrix multiplication is in general not commutative, i.e. there exist matrices for which AB \neq BA. An example is given by

A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \quad \text{and} \quad B = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix},

for which

AB = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \quad \text{but} \quad BA = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}.
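
A quick NumPy check of this pair:

    import numpy as np

    A = np.array([[0, 1], [0, 0]])
    B = np.array([[0, 0], [1, 0]])
    print(A @ B)                         # [[1 0] [0 0]]
    print(B @ A)                         # [[0 0] [0 1]]
    print(np.array_equal(A @ B, B @ A))  # False: AB != BA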

The way matrix multiplication is defined may seem strange at first; why can matrix multiplication not be defined as just multiplying corresponding entries, as in the case of addition and scalar multiplication? The full answer will only become available later, in Chapter 3. In the meantime we note the advantage that matrix multiplication gives us in representing a linear system in matrix form. This will be clear in the following section.

At this point we see fit to make another definition. An n by n matrix A is invertible if and only if there exists a matrix B such that

AB = I_n = BA.

In this case, B is the inverse matrix of A, denoted by A^{-1}. Clearly the inverse of the identity matrix is itself. We will study invertible matrices in detail later.
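
A brief NumPy illustration, using an arbitrary invertible 2-by-2 matrix as the example:

    import numpy as np

    A = np.array([[2.0, 1.0], [1.0, 1.0]])
    B = np.linalg.inv(A)                      # the inverse of A
    print(np.allclose(A @ B, np.eye(2)))      # True: AB = I
    print(np.allclose(B @ A, np.eye(2)))      # True: BA = I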

One more point is to be noted here. The type of matrix multiplication in which the product matrix is simply obtained by multiplying the corresponding entries of two matrices of equal dimensions also has a name: it is called the Hadamard product. We shall not use this kind of multiplication; throughout the book, matrix multiplication will always refer to the matrix product defined above.
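
In NumPy the two kinds of multiplication are distinct operators, which makes the distinction easy to see on an arbitrary pair of matrices:

    import numpy as np

    A = np.array([[1, 2], [3, 4]])
    B = np.array([[5, 6], [7, 8]])
    print(A * B)   # Hadamard (entrywise) product: [[ 5 12] [21 32]]
    print(A @ B)   # matrix product: [[19 22] [43 50]]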

Determinants of Products of Matrices (Binet's Theorem)

In addition, the determinant is a multiplicative map, in the sense that

\det(AB) = \det(A)\det(B)

for all n-by-n matrices A and B.

This is generalized by the Cauchy-Binet formula to products of non-square matrices.
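
A numerical spot-check of the multiplicative property on two arbitrary 2-by-2 matrices:

    import numpy as np

    A = np.array([[1.0, 2.0], [3.0, 4.0]])
    B = np.array([[0.0, 1.0], [5.0, 2.0]])
    # det(AB) should equal det(A) * det(B)
    print(np.isclose(np.linalg.det(A @ B),
                     np.linalg.det(A) * np.linalg.det(B)))  # True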

Matrices and systems of linear equations

The concept of a matrix was historically introduced to simplify the solution of linear systems, although matrices today have much broader applications. Let's see how a linear system can be represented using a matrix.

Consider a general system of m linear equations with n unknowns:

\begin{align} a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n &= b_1 \\ a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n &= b_2 \\ &\ \ \vdots \\ a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n &= b_m \end{align}

The system is equivalent to a matrix equation of the form

Ax = b

where A is an m×n matrix, x is a column matrix with n entries, and b is a column matrix with m entries:

A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix}, \quad x = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}, \quad b = \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{pmatrix}.

Clearly our manner of defining matrix multiplication is used in representing the linear system in this fashion because now the product of the matrix A and the matrix x gives us precisely the matrix b.
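
Here is a minimal NumPy sketch of solving such a system; the coefficients are an arbitrary illustrative example.

    import numpy as np

    # The system 2x + y = 5, x + 3y = 10 in matrix form Ax = b.
    A = np.array([[2.0, 1.0], [1.0, 3.0]])
    b = np.array([5.0, 10.0])
    x = np.linalg.solve(A, b)
    print(x)                          # [1. 3.]
    print(np.allclose(A @ x, b))      # True: Ax reproduces b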

Representing linear systems in this fashion also enables us to easily prove the following theorem:

Theorem 1: Any system of linear equations has either no solution, exactly one solution or infinitely many solutions.

Proof: Suppose a linear system Ax = b has two different solutions X and Y. Let Z = X - Y. Clearly Z is nonzero, and A(X + kZ) = AX + kAZ = b + k(AX - AY) = b + k(b - b) = b, so that X + kZ is a solution to the system for every value of the scalar k. Since k can assume infinitely many values, we have infinitely many solutions.
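
The proof can be illustrated numerically; the singular system below is an arbitrary example with two distinct solutions X and Y, and every X + kZ is verified to solve it.

    import numpy as np

    # A singular system: the second equation is twice the first.
    A = np.array([[1.0, 2.0], [2.0, 4.0]])
    b = np.array([3.0, 6.0])

    X = np.array([1.0, 1.0])          # one solution: 1 + 2 = 3
    Y = np.array([3.0, 0.0])          # another solution: 3 + 0 = 3
    Z = X - Y
    for k in (0.0, 1.0, -2.5, 10.0):  # every X + kZ also solves the system
        assert np.allclose(A @ (X + k * Z), b)
    print("X + kZ solves Ax = b for every k")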

Exercises

Hints to many of the exercises can be found at Famous Theorems of Mathematics/Algebra/Matrix Theory.

1. Let A and B be m × n matrices and α a scalar. Show that:

(i) (A^T)^T = A
(ii) (A + B)^T = A^T + B^T
(iii) (αA)^T = αA^T

2. Let a triangular matrix be a square matrix with all (i,j) entries zero either for i < j (in which case it is called a lower triangular matrix) or for j < i (in which case it is called an upper triangular matrix). Show that any triangular matrix satisfying AA^T = A^TA is a diagonal matrix.

3. For a square matrix A, show that:

(i) A + A^T and AA^T are symmetric
(ii) A - A^T is skew symmetric
(iii) A can be expressed as the sum of a symmetric matrix, \frac{1}{2}(A + A^T), and a skew symmetric matrix, \frac{1}{2}(A - A^T)

4. Suppose A is an m×n matrix with columns c_1, c_2, ..., c_n, and x is an n×1 column vector with entries x_1, x_2, ..., x_n. Show that Ax = x_1 c_1 + x_2 c_2 + \cdots + x_n c_n. This is also expressed by saying that Ax is a linear combination of the columns of A.

See also

Linear Algebra
Linear Transformations
Matrices
Elementary row transformations