Introductory Linear Algebra/System of linear equations
Systems of linear equations in matrix form
We should define what a system of linear equations is before expressing it in matrix form.
Definition. (System of linear equations) A system of linear equations (SLE) in $n$ unknowns $x_1, x_2, \ldots, x_n$ is a family of $m$ equations of the form $$\begin{cases} a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n = b_1 \\ a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n = b_2 \\ \quad\vdots \\ a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n = b_m \end{cases}$$ in which the $a_{ij}$'s and $b_i$'s are some constants.
Remark.
- in some other definitions, a single linear equation can be regarded as a system of linear equations, but since we can already easily solve a single linear equation, we are not interested in such a case, and thus we do not include this possibility
We often use the terms 'consistent' and 'inconsistent' to describe the number of solutions
of a system of linear equations.
Definition.
(Consistency of a system of linear equations) A system of linear equations is consistent if it has at least one solution. Otherwise, it is inconsistent (i.e., it is inconsistent if it has no solutions).
Remark.
- as we will see, an SLE can have no solutions, one unique solution, or infinitely many solutions. Thus, an SLE is consistent if and only if it has either a unique solution or infinitely many solutions
Example. (Consistent SLE) Consider an SLE in which the equations all describe the same constraint on the unknowns (for instance, each equation is a scalar multiple of one of the others). Since every tuple of values satisfying that single constraint is a solution, this SLE has infinitely many solutions, and the SLE is consistent; a concrete instance is given below.
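One such system (an illustrative choice, kept deliberately simple) is $$\begin{cases} x + y = 1 \\ 2x + 2y = 2, \end{cases}$$ whose solutions are exactly the pairs $(x, y)$ with $y = 1 - x$, of which there are infinitely many.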
Example. (Inconsistent SLE) Consider an SLE containing two equations that contradict each other (for instance, equations with proportional left-hand sides but non-proportional right-hand sides). Substituting one equation into the other yields a false statement, and thus there do not exist values of the unknowns satisfying both equations. That is, the SLE has no solutions, and thus it is inconsistent; a concrete instance is given below.
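One such system (an illustrative choice, kept deliberately simple) is $$\begin{cases} x + y = 1 \\ x + y = 2: \end{cases}$$ putting $x + y = 1$ into the second equation gives $1 = 2$, which is false, so no pair $(x, y)$ satisfies both equations.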
Example. (Application of SLE) Suppose ten bottles of orange juice have been allocated to a chicken, a duck and a goose. Given that the chicken and the duck have the same number of bottles of orange juice, and the goose has one more bottle of orange juice than the chicken, how many bottles of orange juice have been allocated to each animal?
Solution: Denote the number of orange juice bottles allocated to the chicken, duck and goose by $x$, $y$ and $z$ respectively. Then, from the situation and the given conditions, we have the following SLE: $$\begin{cases} x + y + z = 10 \\ y = x \\ z = x + 1 \end{cases}$$ in which $x$, $y$ and $z$ are all nonnegative integers.
Putting $y = x$ and $z = x + 1$ into $x + y + z = 10$, we have $x + x + (x + 1) = 10$, i.e. $3x + 1 = 10$. It follows that $x = 3$, $y = 3$ and $z = 4$.
Thus, three bottles of orange juice have been allocated to each of the chicken and duck, and four bottles of orange juice have been allocated to the goose.
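As a quick check, this small system can also be solved computationally; the following is a minimal sketch, assuming the sympy library is available:

```python
# Checking the bottle-allocation SLE with sympy (optional verification).
from sympy import symbols, Eq, solve

x, y, z = symbols('x y z')          # bottles for the chicken, duck and goose
equations = [
    Eq(x + y + z, 10),              # ten bottles in total
    Eq(y, x),                       # chicken and duck get the same number
    Eq(z, x + 1),                   # goose gets one more than the chicken
]
print(solve(equations, [x, y, z]))  # {x: 3, y: 3, z: 4}
```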
Exercise.
After defining a system of linear equations, we can express it in matrix form in multiple ways, as in the following definition.
Definition. (Coefficient and augmented matrix) Let $$\begin{cases} a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n = b_1 \\ a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n = b_2 \\ \quad\vdots \\ a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n = b_m \end{cases}$$ be a system of linear equations in the unknowns $x_1, \ldots, x_n$, in which the $a_{ij}$'s and $b_i$'s are some constants.
The $m \times n$ matrix $$A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix}$$ is the coefficient matrix of the system, and the $m \times (n+1)$ matrix $$\left(\begin{array}{cccc|c} a_{11} & a_{12} & \cdots & a_{1n} & b_1 \\ a_{21} & a_{22} & \cdots & a_{2n} & b_2 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} & b_m \end{array}\right)$$ is the augmented matrix of the system.
Remark.
- the vertical bar in the augmented matrix is optional; it is put there to separate the constants on the left of the '$=$' sign from the constants on the right of the '$=$' sign in the system of linear equations.
- this system is equivalent to $$\begin{pmatrix} a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n \\ \vdots \\ a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n \end{pmatrix} = \begin{pmatrix} b_1 \\ \vdots \\ b_m \end{pmatrix},$$
which can be rewritten as $$A\mathbf{x} = \mathbf{b}, \qquad \text{where } A \text{ is the coefficient matrix}, \ \mathbf{x} = \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix} \text{ and } \mathbf{b} = \begin{pmatrix} b_1 \\ \vdots \\ b_m \end{pmatrix}$$
- the augmented matrix contains everything necessary to solve a system, since the notation for the unknowns is not needed when solving it.
Example. (Coefficient matrix and augmented matrix) Consider, for instance, the SLE from the orange juice example above, $$\begin{cases} x + y + z = 10 \\ y = x \\ z = x + 1, \end{cases}$$ which can alternatively be expressed as $$\begin{cases} x + y + z = 10 \\ x - y = 0 \\ x - z = -1. \end{cases}$$
The coefficient matrix of this SLE is $$\begin{pmatrix} 1 & 1 & 1 \\ 1 & -1 & 0 \\ 1 & 0 & -1 \end{pmatrix},$$ and the augmented matrix of this SLE is $$\left(\begin{array}{ccc|c} 1 & 1 & 1 & 10 \\ 1 & -1 & 0 & 0 \\ 1 & 0 & -1 & -1 \end{array}\right).$$ We may also express this SLE as $$\begin{pmatrix} 1 & 1 & 1 \\ 1 & -1 & 0 \\ 1 & 0 & -1 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 10 \\ 0 \\ -1 \end{pmatrix},$$ which is in the form $A\mathbf{x} = \mathbf{b}$.
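A sketch of how these matrices can be built and the system solved computationally, assuming the sympy library is available:

```python
# Coefficient matrix A, right-hand side b, and augmented matrix (A | b)
# for the system x + y + z = 10, x - y = 0, x - z = -1.
from sympy import Matrix, linsolve, symbols

A = Matrix([[1, 1, 1],
            [1, -1, 0],
            [1, 0, -1]])            # coefficient matrix
b = Matrix([10, 0, -1])             # right-hand side constants
augmented = A.row_join(b)           # augmented matrix (A | b)

x, y, z = symbols('x y z')
print(linsolve(augmented, x, y, z)) # {(3, 3, 4)}
```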
Exercise.
Gauss-Jordan algorithm
Definition. (Elementary row operations) There are three types of elementary row operations (EROs) that we may perform on a matrix, as follows:
- (Type I) interchanging two different rows
- (Type II) multiplying a row by a nonzero scalar
- (Type III) adding a scalar multiple of a row to another row
Remark. We use the following notations for EROs:
- $\mathbf{r}_1, \mathbf{r}_2, \ldots, \mathbf{r}_m$: the rows of an $m \times n$ matrix (boldface is used since the rows are essentially row vectors)
- $\mathbf{r}_i \leftrightarrow \mathbf{r}_j$: interchanging the $i$th row and the $j$th row
- $k\mathbf{r}_i$: multiplying the $i$th row by a nonzero scalar $k$
- $\mathbf{r}_i + k\mathbf{r}_j$: adding $k$ times the $j$th row to the $i$th row (a small computational sketch of the three types follows this list)
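A minimal sketch of the three types of EROs acting on a matrix, assuming the numpy library; the function names are chosen here for illustration:

```python
import numpy as np

def swap_rows(M, i, j):          # Type I:  r_i <-> r_j
    M[[i, j]] = M[[j, i]]

def scale_row(M, i, k):          # Type II: k * r_i  (k must be nonzero)
    assert k != 0, "Type II EROs require a nonzero scalar"
    M[i] = k * M[i]

def add_multiple(M, i, j, k):    # Type III: r_i + k * r_j
    M[i] = M[i] + k * M[j]

M = np.array([[1., 2.], [3., 4.]])
swap_rows(M, 0, 1)               # rows become (3 4), (1 2)
scale_row(M, 0, 2)               # first row becomes (6 8)
add_multiple(M, 1, 0, -1)        # second row becomes (-5 -6)
print(M)
```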
Definition. (Row equivalence) Two matrices of the same size are row equivalent to each other if one matrix can be obtained from the other matrix by performing some EROs.
Remark.
- since EROs are reversible (by the following proposition), if matrix $B$ can be obtained from matrix $A$ by performing some EROs, then $A$ can also be obtained from $B$ by performing some EROs (each of which reverses one of the EROs performed to obtain $B$ from $A$, in a suitable order)
- thus, to prove that $A$ and $B$ (of the same size) are row equivalent, it suffices to prove that $B$ can be obtained from $A$ by some EROs, or vice versa (either direction suffices)
Example. (Demonstration of three types of EROs) Take any matrix $A$ and perform some EROs on it, one of each type (the computational sketch above performs one ERO of each type on a small matrix). Each matrix obtained along the way is row equivalent to $A$, since each of them can be obtained from $A$ by performing some EROs and has the same size as $A$; performing the reversing EROs in a suitable order also shows how to recover $A$ from each of them.
Exercise.
Proposition. (EROs are reversible) If matrix $B$ can be obtained from matrix $A$ by performing some EROs, then $A$ can also be obtained from $B$ by performing some EROs (which may be different from the EROs used to obtain $B$ from $A$).
Proof. Outline: Each type of ERO has a reverse process (i.e. performing the ERO and its reverse process together, in either order, has no effect on the matrix), and this reverse process is itself an ERO, as illustrated below:
- the reverse process of the type I ERO $\mathbf{r}_i \leftrightarrow \mathbf{r}_j$ is also $\mathbf{r}_i \leftrightarrow \mathbf{r}_j$
- the reverse process of the type II ERO $k\mathbf{r}_i$ is the type II ERO $\frac{1}{k}\mathbf{r}_i$ (if $k = 0$, the ERO $\frac{1}{k}\mathbf{r}_i$ is undefined; this is why $k$ must be nonzero for a type II ERO, so that it is reversible)
- the reverse process of the type III ERO $\mathbf{r}_i + k\mathbf{r}_j$ is the type III ERO $\mathbf{r}_i - k\mathbf{r}_j$
Example. (Illustration of reverse process of each type of ERO)
- Type I: $\mathbf{r}_i \leftrightarrow \mathbf{r}_j$ followed by $\mathbf{r}_i \leftrightarrow \mathbf{r}_j$ restores the original matrix
- Type II: $k\mathbf{r}_i$ followed by $\frac{1}{k}\mathbf{r}_i$ restores the original matrix
- Type III: $\mathbf{r}_i + k\mathbf{r}_j$ followed by $\mathbf{r}_i - k\mathbf{r}_j$ restores the original matrix (a concrete matrix illustration follows this list)
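For instance (an illustrative matrix, chosen for simplicity), starting from $$A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix},$$ performing $2\mathbf{r}_1$ gives $\begin{pmatrix} 2 & 4 \\ 3 & 4 \end{pmatrix}$, and then performing $\frac{1}{2}\mathbf{r}_1$ restores $A$. Similarly, $\mathbf{r}_1 \leftrightarrow \mathbf{r}_2$ followed by $\mathbf{r}_1 \leftrightarrow \mathbf{r}_2$ restores $A$, and $\mathbf{r}_1 + 3\mathbf{r}_2$ (giving $\begin{pmatrix} 10 & 14 \\ 3 & 4 \end{pmatrix}$) followed by $\mathbf{r}_1 - 3\mathbf{r}_2$ restores $A$.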
Exercise.
Proposition. (Same solution set by row equivalence) Let $A\mathbf{x} = \mathbf{b}$ and $C\mathbf{x} = \mathbf{d}$ be two systems of linear equations with the same number of equations and the same number of unknowns. If the augmented matrices $(A \mid \mathbf{b})$ and $(C \mid \mathbf{d})$ are row equivalent, then the two systems have the same solution set.
Proof. Outline: It suffices to show that the solution set is unchanged if we perform a single ERO on the augmented matrix. Performing an ERO on the augmented matrix corresponds to performing the analogous operation on the equations of the system:
- Type I ERO: interchanging two equations
- Type II ERO: multiplying both sides of an equation by a nonzero scalar
- Type III ERO: adding a scalar multiple of one equation to another equation
Each of these operations is reversible, and every solution of the original system still satisfies the modified system (and vice versa), so the solution set is unchanged; a worked illustration of the type III case is given after this list.
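For the type III case (using the notation of the general system above): if $(x_1, \ldots, x_n)$ satisfies the $i$th and $j$th equations $$a_{i1}x_1 + \cdots + a_{in}x_n = b_i \quad \text{and} \quad a_{j1}x_1 + \cdots + a_{jn}x_n = b_j,$$ then it also satisfies $$(a_{i1} + ka_{j1})x_1 + \cdots + (a_{in} + ka_{jn})x_n = b_i + kb_j,$$ and conversely, subtracting $k$ times the $j$th equation from this new equation recovers the $i$th equation. Hence, performing a type III ERO does not change the solution set.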
Exercise.
Definition. (Leading entry) The leading entry of a nonzero row of a matrix is its leftmost nonzero entry (a zero row has no leading entry).
Example. To find the leading entry of a row, read the row from the left: the first nonzero entry encountered is the leading entry. For instance, the leading entry of the row $\begin{pmatrix} 0 & 0 & 5 & 2 \end{pmatrix}$ is $5$, while a zero row has no leading entry.
Exercise.
Definition. (Row echelon form) A matrix is in row echelon form (REF) if
- All the zero rows (if any) lie at the bottom of the matrix, and
- the leading entry of each nonzero row is strictly to the right of the leading entries of the rows above it.
Definition. (Reduced row echelon form) A matrix is in reduced row echelon form (RREF) if
- It is in row echelon form (REF),
- the leading entry of each nonzero row equals $1$ (which is called a leading one), and
- for each leading one, all other entries in the same column are zero.
Example. (Examples of REF and RREF) Some matrices are in REF but not in RREF (e.g. when a leading entry is not equal to $1$, or when a column containing a leading one has other nonzero entries); some matrices are in RREF (and thus are also in REF); and some matrices are not in REF (and thus are also not in RREF), e.g. when a zero row does not lie at the bottom. Concrete illustrations are given below.
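For instance (illustrative matrices, chosen for simplicity): $$\begin{pmatrix} 1 & 2 & 3 \\ 0 & 4 & 5 \\ 0 & 0 & 0 \end{pmatrix} \text{ is in REF but not in RREF}, \qquad \begin{pmatrix} 1 & 0 & 2 \\ 0 & 1 & 3 \\ 0 & 0 & 0 \end{pmatrix} \text{ is in RREF}, \qquad \begin{pmatrix} 0 & 0 & 0 \\ 0 & 1 & 2 \\ 1 & 0 & 0 \end{pmatrix} \text{ is not in REF}.$$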
Exercise.
Definition. (Gauss-Jordan Algorithm) The Gauss-Jordan algorithm is a process of performing some EROs on a matrix to convert it to a matrix in RREF. Its steps are as follows:
- Consider the leftmost nonzero column, say column $j$. Swap rows (if necessary) so that the 1st entry of column $j$ is nonzero.
- Multiply the 1st row by $\frac{1}{a}$, where $a$ is the 1st entry of column $j$, so that the 1st entry of column $j$ becomes $1$.
- For each other row with a nonzero entry $c$ in column $j$, add $-c$ times the 1st row to this row to make that entry $0$.
- If all rows except the 1st are zero rows, we are done. Otherwise, consider the leftmost column which contains a nonzero entry outside the 1st row, say column $j'$. Swap the rows below the 1st row (if necessary) so that the 2nd entry of column $j'$ is nonzero.
- Multiply the 2nd row by $\frac{1}{a'}$, where $a'$ is the 2nd entry of column $j'$, to make the 2nd entry of column $j'$ equal to $1$.
- For each other row with a nonzero entry $c$ in column $j'$, add $-c$ times the 2nd row to this row to make that entry $0$.
- Repeat the process by considering each of the rows below, until all rows or columns are used, or the remaining rows are all zero rows. Then, the resultant matrix is in RREF (a code sketch of these steps is given after this list).
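The steps above can be turned into code; the following is a minimal sketch (the function name rref and the use of exact fractions are choices made here for illustration):

```python
from fractions import Fraction

def rref(matrix):
    M = [[Fraction(entry) for entry in row] for row in matrix]
    rows, cols = len(M), len(M[0])
    pivot_row = 0
    for col in range(cols):
        # find a row at or below pivot_row with a nonzero entry in this column
        pivot = next((r for r in range(pivot_row, rows) if M[r][col] != 0), None)
        if pivot is None:
            continue                                          # no pivot in this column
        M[pivot_row], M[pivot] = M[pivot], M[pivot_row]       # Type I ERO
        k = M[pivot_row][col]
        M[pivot_row] = [entry / k for entry in M[pivot_row]]  # Type II ERO: make a leading one
        for r in range(rows):
            if r != pivot_row and M[r][col] != 0:
                factor = M[r][col]                            # Type III ERO: clear the column
                M[r] = [a - factor * b for a, b in zip(M[r], M[pivot_row])]
        pivot_row += 1
        if pivot_row == rows:
            break
    return M

# Example: the augmented matrix of the bottle-allocation system.
# The resulting rows correspond to x = 3, y = 3, z = 4.
print(rref([[1, 1, 1, 10], [1, -1, 0, 0], [1, 0, -1, -1]]))
```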
Remark.
- the RREF of a matrix $A$ is a matrix in RREF obtained from $A$ by performing some EROs
- it follows that every matrix has an RREF, since we can apply the Gauss-Jordan algorithm to every matrix
- the RREF of a matrix is unique (the proof is complicated, and so is skipped)
- in some other definitions of the Gauss-Jordan algorithm, some steps may differ, but those versions can also convert a matrix to its RREF
- you may find this website useful for performing computations related to EROs, Gauss-Jordan algorithm, etc.
Example. (Illustration of Gauss-Jordan Algorithm) Applying the steps of the algorithm one by one to an augmented matrix, and recording the ERO used at each step, converts the matrix into its RREF; a worked run is shown below.
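For instance, applying these steps to a small augmented matrix (an illustrative choice): $$\left(\begin{array}{cc|c} 2 & 4 & 6 \\ 1 & 1 & 1 \end{array}\right) \xrightarrow{\frac{1}{2}\mathbf{r}_1} \left(\begin{array}{cc|c} 1 & 2 & 3 \\ 1 & 1 & 1 \end{array}\right) \xrightarrow{\mathbf{r}_2 - \mathbf{r}_1} \left(\begin{array}{cc|c} 1 & 2 & 3 \\ 0 & -1 & -2 \end{array}\right) \xrightarrow{-\mathbf{r}_2} \left(\begin{array}{cc|c} 1 & 2 & 3 \\ 0 & 1 & 2 \end{array}\right) \xrightarrow{\mathbf{r}_1 - 2\mathbf{r}_2} \left(\begin{array}{cc|c} 1 & 0 & -1 \\ 0 & 1 & 2 \end{array}\right),$$ which is in RREF and corresponds to the unique solution $x_1 = -1$, $x_2 = 2$.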
Exercise.
Proposition. (Determining number of solutions of a system of linear equations) Let $A\mathbf{x} = \mathbf{b}$ be a system of linear equations in $n$ unknowns, and let $R$ be the RREF of the augmented matrix $(A \mid \mathbf{b})$, which is of size $m \times (n+1)$. Then, the following hold.
- if $R$ has a leading one in the $(n+1)$st column, then the system is inconsistent
- if $R$ has a leading one in each of the first $n$ columns, but not in the $(n+1)$st column, then the system has a unique solution
- if $R$ has no leading one in the $(n+1)$st column, and not all of the first $n$ columns contain leading ones, then the system has infinitely many solutions
Remark. Since $R$ satisfies exactly one of these three conditions, it follows that the number of solutions of a system of linear equations can only be zero (no solutions), one (a unique solution), or infinitely many, and exactly one of these cases occurs.
Example. An SLE with 3 unknowns whose augmented matrix, in RREF, has a leading one in the 4th column is inconsistent.
An SLE with 3 unknowns whose augmented matrix, in RREF, has a leading one in each of the first 3 columns, but not in the 4th column, has a unique solution.
An SLE with 3 unknowns whose augmented matrix, in RREF, has no leading one in the 4th column, while not all of the first 3 columns contain leading ones (say the 2nd column contains no leading one), has infinitely many solutions: letting the unknown corresponding to the 2nd column be the independent unknown, the other unknowns can be expressed in terms of it. Concrete matrices illustrating the three cases are given below.
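For instance, the three cases can be illustrated by the following augmented matrices in RREF (illustrative choices): $$\left(\begin{array}{ccc|c} 1 & 0 & 2 & 0 \\ 0 & 1 & 3 & 0 \\ 0 & 0 & 0 & 1 \end{array}\right) \text{ (inconsistent)}, \quad \left(\begin{array}{ccc|c} 1 & 0 & 0 & 2 \\ 0 & 1 & 0 & 3 \\ 0 & 0 & 1 & 4 \end{array}\right) \text{ (unique solution)}, \quad \left(\begin{array}{ccc|c} 1 & 2 & 0 & 5 \\ 0 & 0 & 1 & 6 \\ 0 & 0 & 0 & 0 \end{array}\right) \text{ (infinitely many solutions)}.$$ The last matrix represents the SLE $x_1 + 2x_2 = 5$, $x_3 = 6$; letting $x_2$ be the independent unknown, we have $x_1 = 5 - 2x_2$ and $x_3 = 6$.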
Remark.
- independent unknowns (or free variables) are unknowns corresponding to columns without leading ones
- dependent unknowns (or basic variables) are unknowns corresponding to columns with leading ones
Exercise.
Definition. (Homogeneous system of linear equations) A system of linear equations is homogeneous if it is of the form $A\mathbf{x} = \mathbf{0}$, i.e. if every constant on the right-hand side of the equations is zero. A homogeneous system must be consistent, since it has the solution in which each unknown equals zero ($x_1 = x_2 = \cdots = x_n = 0$), which is defined to be the trivial solution. Other solutions, if they exist, are called nontrivial solutions.
Remark. By the proposition about determining the number of solutions of a system of linear equations, it follows that a homogeneous system of linear equations can either have a unique solution, or infinitely many solutions, since the possibility of having no solutions is excluded, by the fact that a homogeneous system of linear equations must be consistent.
Example. (Examples of homogeneous systems of linear equations) A homogeneous SLE may have the trivial solution as its only solution, and it is consistent in any case.
Indeed, this happens exactly when the RREF of the augmented matrix representing the SLE has a leading one in each of the columns corresponding to the unknowns, in which case the unique solution is the trivial solution.
A homogeneous SLE may instead have infinitely many solutions (and hence nontrivial solutions), and it is again consistent.
Indeed, this happens exactly when not all of the columns corresponding to the unknowns contain leading ones in the RREF of the augmented matrix; note that the last column of the augmented matrix of a homogeneous SLE is entirely zero, and so can never contain a leading one. Concrete instances of both cases are given below.
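For instance (two illustrative homogeneous systems, chosen for simplicity): the homogeneous SLE $$\begin{cases} x + y + z = 0 \\ y + z = 0 \\ z = 0 \end{cases}$$ has augmented matrix with RREF $$\left(\begin{array}{ccc|c} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{array}\right),$$ so its unique solution is the trivial one $x = y = z = 0$; the homogeneous SLE $$\begin{cases} x + y + z = 0 \\ 2x + 2y + 2z = 0 \\ 3x + 3y + 3z = 0 \end{cases}$$ has augmented matrix with RREF $$\left(\begin{array}{ccc|c} 1 & 1 & 1 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right),$$ so it has infinitely many solutions, e.g. the nontrivial solution $(x, y, z) = (1, -1, 0)$.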
Proposition. (Sufficient condition for homogeneous system of linear equations to have nontrivial solutions) A homogeneous system of $m$ linear equations in $n$ unknowns must have a nontrivial solution if $m < n$.
Proof. The augmented matrix of the homogeneous system of $m$ linear equations in $n$ unknowns is $(A \mid \mathbf{0})$, and thus its RREF has the form $(R \mid \mathbf{0})$, in which $R$ is of size $m \times n$, since there are $m$ linear equations and $n$ unknowns.
If $(R \mid \mathbf{0})$ had a leading one in each of the first $n$ columns, then it would have at least $n$ rows, since each leading one lies in a different row. However, it has only $m < n$ rows, a contradiction. Thus, the homogeneous SLE does not have a unique solution.
Since an SLE can have no solutions (which is impossible for a homogeneous SLE), a unique solution (which is impossible in this case), or infinitely many solutions, it follows that the homogeneous SLE must have infinitely many solutions, and thus has a nontrivial solution.
Remark.
- this proposition does not claim that if a homogeneous SLE has a nontrivial solution, then the number of linear equations is strictly lower than the number of unknowns (converse of the proposition)
- e.g. in the above example, even if the number of linear equations equals that of unknowns, the SLE can still have a nontrivial solution
Example. A homogeneous SLE with fewer equations than unknowns, say two linear equations in three unknowns, must have a nontrivial solution.
Indeed, the RREF of the augmented matrix representing such an SLE has at most two leading ones, so at least one of the three columns corresponding to the unknowns contains no leading one. Letting the corresponding unknown be the independent unknown, the other unknowns can be expressed in terms of it, and any nonzero value of the independent unknown yields a nontrivial solution. A concrete instance is given below.
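One such system (an illustrative choice) is the homogeneous SLE $$\begin{cases} x + y + z = 0 \\ x + 2y + 3z = 0, \end{cases}$$ whose augmented matrix has RREF $$\left(\begin{array}{ccc|c} 1 & 0 & -1 & 0 \\ 0 & 1 & 2 & 0 \end{array}\right).$$ Letting $z$ be the independent unknown, we have $x = z$ and $y = -2z$, so, for example, $(x, y, z) = (1, -2, 1)$ is a nontrivial solution.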