Linear Algebra/Print version/Part 1





Linear Algebra
An Introduction to Mathematical Discourse

This book discusses proof-based linear algebra. It was designed specifically for students who have not previously been exposed to mathematics as mathematicians view it: as a subject whose goal is to rigorously prove theorems starting from clear, consistent definitions. This book attempts to build students up from a background where mathematics is simply a tool that provides useful calculations to the point where they have a grasp of the clear and precise nature of mathematics. A more detailed discussion of the prerequisites and goals of this book is given in the introduction.

Because of the proof-based nature of this book, readers will find it helpful to be familiar with mathematical proof before starting (although, strictly speaking, this is not a prerequisite), so that the reading goes more smoothly. To gain familiarity with mathematical proof and with some basic mathematical concepts, readers may consult the wikibook Mathematical Proof. For a milder introduction to linear algebra that is less proof-based, see the wikibook Introductory Linear Algebra.

Table of Contents

Linear Systems

  1. Solving Linear Systems
    1. Gauss' Method
    2. Describing the Solution Set
    3. General = Particular + Homogeneous
    4. Comparing Set Descriptions
    5. Automation
  2. Linear Geometry of n-Space
    1. Vectors in Space
    2. Length and Angle Measures
  3. Reduced Echelon Form
    1. Gauss-Jordan Reduction
    2. Row Equivalence
  4. Topic: Computer Algebra Systems
  5. Topic: Input-Output Analysis
  6. Input-Output Analysis M File
  7. Topic: Accuracy of Computations
  8. Topic: Analyzing Networks
  9. Topic: Speed of Gauss' Method

Vector Spaces

  1. Definition of Vector Space
    1. Definition and Examples
    2. Subspaces and Spanning Sets
  2. Linear Independence
    1. Definition and Examples
  3. Basis and Dimension
    1. Basis
    2. Dimension
    3. Vector Spaces and Linear Systems
    4. Combining Subspaces
  4. Topic: Fields
  5. Topic: Crystals
  6. Topic: Voting Paradoxes
  7. Topic: Dimensional Analysis

Maps Between Spaces

  1. Isomorphisms
    1. Definition and Examples
    2. Dimension Characterizes Isomorphism
  2. Homomorphisms
    1. Definition of Homomorphism
    2. Rangespace and Nullspace
  3. Computing Linear Maps
    1. Representing Linear Maps with Matrices
    2. Any Matrix Represents a Linear Map
  4. Matrix Operations
    1. Sums and Scalar Products
    2. Matrix Multiplication
    3. Mechanics of Matrix Multiplication
    4. Inverses
  5. Change of Basis
    1. Changing Representations of Vectors
    2. Changing Map Representations
  6. Projection
    1. Orthogonal Projection Onto a Line
    2. Gram-Schmidt Orthogonalization
    3. Projection Onto a Subspace
  7. Topic: Line of Best Fit
  8. Topic: Geometry of Linear Maps
  9. Topic: Markov Chains
  10. Topic: Orthonormal Matrices

Determinants

  1. Definition
    1. Exploration
    2. Properties of Determinants
    3. The Permutation Expansion
    4. Determinants Exist
  2. Geometry of Determinants
    1. Determinants as Size Functions
  3. Other Formulas for Determinants
    1. Laplace's Expansion
  4. Topic: Cramer's Rule
  5. Topic: Speed of Calculating Determinants
  6. Topic: Projective Geometry

Similarity

  1. Complex Vector Spaces
    1. Factoring and Complex Numbers: A Review
    2. Complex Representations
  2. Similarity
    1. Definition and Examples
    2. Diagonalizability
    3. Eigenvalues and Eigenvectors
  3. Nilpotence
    1. Self-Composition
    2. Strings
  4. Jordan Form
    1. Polynomials of Maps and Matrices
    2. Jordan Canonical Form
  5. Topic: Geometry of Eigenvalues
  6. Topic: The Method of Powers
  7. Topic: Stable Populations
  8. Topic: Linear Recurrences

Unitary Transformations

  1. Inner product spaces
  2. Unitary and Hermitian matrices
  3. Singular Value Decomposition
  4. Spectral Theorem

Appendix

The following is a brief overview of some basic concepts in mathematics. For more details, readers can consult the wikibook Mathematical Proof.

Resources and Licensing



Notation

$\mathbb{R}$, $\mathbb{R}^+$, $\mathbb{R}^n$: real numbers, reals greater than $0$, ordered $n$-tuples of reals

$\mathbb{N}$: natural numbers $\{0, 1, 2, \ldots\}$

$\mathbb{C}$: complex numbers

$\{\ldots \mid \ldots\}$: set of ... such that ...

$(a..b)$, $[a..b]$: interval (open or closed) of reals between $a$ and $b$

$\langle \ldots \rangle$: sequence; like a set but order matters

$V, W, U$: vector spaces

$\vec{v}, \vec{w}$: vectors

$\vec{0}$, $\vec{0}_V$: zero vector, zero vector of $V$

$B, D$: bases

$\mathcal{E}_n = \langle \vec{e}_1, \ldots, \vec{e}_n \rangle$: standard basis for $\mathbb{R}^n$

$\vec{\beta}, \vec{\delta}$: basis vectors

$\mathrm{Rep}_B(\vec{v})$: matrix representing the vector

$\mathcal{P}_n$: set of $n$-th degree polynomials

$\mathcal{M}_{n \times m}$: set of $n \times m$ matrices

$[S]$: span of the set $S$

$M \oplus N$: direct sum of subspaces

$V \cong W$: isomorphic spaces

$h, g$: homomorphisms, linear maps

$H, G$: matrices

$t, s$: transformations; maps from a space to itself

$T, S$: square matrices

$\mathrm{Rep}_{B,D}(h)$: matrix representing the map $h$

$h_{i,j}$: matrix entry from row $i$, column $j$

$|T|$: determinant of the matrix $T$

$\mathscr{R}(h), \mathscr{N}(h)$: rangespace and nullspace of the map $h$

$\mathscr{R}_\infty(h), \mathscr{N}_\infty(h)$: generalized rangespace and nullspace

Lower case Greek alphabet: $\alpha$ alpha, $\beta$ beta, $\gamma$ gamma, $\delta$ delta, $\epsilon$ epsilon, $\zeta$ zeta, $\eta$ eta, $\theta$ theta, $\iota$ iota, $\kappa$ kappa, $\lambda$ lambda, $\mu$ mu, $\nu$ nu, $\xi$ xi, $o$ omicron, $\pi$ pi, $\rho$ rho, $\sigma$ sigma, $\tau$ tau, $\upsilon$ upsilon, $\phi$ phi, $\chi$ chi, $\psi$ psi, $\omega$ omega

About the Cover. This is Cramer's Rule for the system $x_1 + 2x_2 = 6$, $3x_1 + x_2 = 8$. The size of the first box is the determinant shown (the absolute value of the size is the area). The size of the second box is $x_1$ times that, and equals the size of the final box. Hence, $x_1$ is the final determinant divided by the first determinant.


Introduction

This book helps students to master the material of a standard undergraduate linear algebra course.

The material is standard in that the topics covered are Gaussian reduction, vector spaces, linear maps, determinants, and eigenvalues and eigenvectors. The audience is also standard: sophomores or juniors, usually with a background of at least one semester of calculus and perhaps with as many as three semesters.

The help that it gives to students comes from taking a developmental approach; this book's presentation emphasizes motivation and naturalness, driven home by a wide variety of examples and extensive, careful exercises. The developmental approach is what sets this book apart, so some expansion of the term is appropriate here.

Courses in the beginning of most mathematics programs reward students less for understanding the theory and more for correctly applying formulas and algorithms. Later courses ask for mathematical maturity: the ability to follow different types of arguments, a familiarity with the themes that underlie many mathematical investigations like elementary set and function facts, and a capacity for some independent reading and thinking. Linear algebra is an ideal spot to work on the transition between the two kinds of courses. It comes early in a program so that progress made here pays off later, but also comes late enough that students are often majors and minors. The material is coherent, accessible, and elegant. There are a variety of argument styles—proofs by contradiction, if and only if statements, and proofs by induction, for instance—and examples are plentiful.

So, the aim of this book's exposition is to help students develop from being successful at their present level, in classes where a majority of the members are interested mainly in applications in science or engineering, to being successful at the next level, that of serious students of the subject of mathematics itself.

Helping students make this transition means taking the mathematics seriously, so all of the results in this book are proved. On the other hand, we cannot assume that students have already arrived, and so in contrast with more abstract texts, we give many examples and they are often quite detailed.

In the past, linear algebra texts commonly made this transition abruptly. They began with extensive computations of linear systems, matrix multiplications, and determinants. When the concepts—vector spaces and linear maps—finally appeared, and definitions and proofs started, often the change brought students to a stop. In this book, while we start with a computational topic, linear reduction, from the first we do more than compute. We do linear systems quickly but completely, including the proofs needed to justify what we are computing. Then, with the linear systems work as motivation and at a point where the study of linear combinations seems natural, the second chapter starts with the definition of a real vector space. This occurs by the end of the third week.

Another example of our emphasis on motivation and naturalness is that the third chapter on linear maps does not begin with the definition of homomorphism, but with that of isomorphism. That's because this definition is easily motivated by the observation that some spaces are "just like" others. After that, the next section takes the reasonable step of defining homomorphism by isolating the operation-preservation idea. This approach loses mathematical slickness, but it is a good trade because in return students gain a large measure of sensibility.

One aim of a developmental approach is that students should feel throughout the presentation that they can see how the ideas arise, and perhaps picture themselves doing the same type of work.

The clearest example of the developmental approach taken here—and the feature that most recommends this book—is the exercises. A student progresses most while doing the exercises, so they have been selected with great care. Each problem set ranges from simple checks to reasonably involved proofs. Since an instructor usually assigns about a dozen exercises after each lecture, each section ends with about twice that many, thereby providing a selection. There are even a few problems that are challenging puzzles taken from various journals, competitions, or problems collections. (These are marked with a "?" and as part of the fun, the original wording has been retained as much as possible.) In total, the exercises are aimed to both build an ability at, and help students experience the pleasure of, doing mathematics.

Applications and Computers.

The point of view taken here, that linear algebra is about vector spaces and linear maps, is not taken to the complete exclusion of others. Applications and the role of the computer are important and vital aspects of the subject. Consequently, each of this book's chapters closes with a few application or computer-related topics. Some are: network flows, the speed and accuracy of computer linear reductions, Leontief Input/Output analysis, dimensional analysis, Markov chains, voting paradoxes, analytic projective geometry, and difference equations.

These topics are brief enough to be done in a day's class or to be given as independent projects for individuals or small groups. Most simply give the reader a taste of the subject, discuss how linear algebra comes in, point to some further reading, and give a few exercises. In short, these topics invite readers to see for themselves that linear algebra is a tool that a professional must have.

For people reading this book on their own.

This book's emphasis on motivation and development makes it a good choice for self-study. But, while a professional instructor can judge what pace and topics suit a class, if you are an independent student then perhaps you would find some advice helpful.

Here are two timetables for a semester. The first focuses on core material.

week Monday Wednesday Friday
1 One.I.1 One.I.1, 2 One.I.2, 3
2 One.I.3 One.II.1 One.II.2
3 One.III.1, 2 One.III.2 Two.I.1
4 Two.I.2 Two.II Two.III.1
5 Two.III.1, 2 Two.III.2 Exam
6 Two.III.2, 3 Two.III.3 Three.I.1
7 Three.I.2 Three.II.1 Three.II.2
8 Three.II.2 Three.II.2 Three.III.1
9 Three.III.1 Three.III.2 Three.IV.1, 2
10 Three.IV.2, 3, 4 Three.IV.4 Exam
11 Three.IV.4, Three.V.1 Three.V.1, 2 Four.I.1, 2
12 Four.I.3 Four.II Four.II
13 Four.III.1 Five.I Five.II.1
14 Five.II.2 Five.II.3 Review

The second timetable is more ambitious (it supposes that you know One.II, the elements of vectors, usually covered in third semester calculus).

week Monday Wednesday Friday
1 One.I.1 One.I.2 One.I.3
2 One.I.3 One.III.1, 2 One.III.2
3 Two.I.1 Two.I.2 Two.II
4 Two.III.1 Two.III.2 Two.III.3
5 Two.III.4 Three.I.1 Exam
6 Three.I.2 Three.II.1 Three.II.2
7 Three.III.1 Three.III.2 Three.IV.1, 2
8 Three.IV.2 Three.IV.3 Three.IV.4
9 Three.V.1 Three.V.2 Three.VI.1
10 Three.VI.2 Four.I.1 Exam
11 Four.I.2 Four.I.3 Four.I.4
12 Four.II Four.II, Four.III.1 Four.III.2, 3
13 Five.II.1, 2 Five.II.3 Five.III.1
14 Five.III.2 Five.IV.1, 2 Five.IV.2

See the table of contents for the titles of these subsections.

To help you make time trade-offs, in the table of contents I have marked subsections as optional if some instructors will pass over them in favor of spending more time elsewhere. You might also try picking one or two topics that appeal to you from the end of each chapter. You'll get more from these if you have access to computer software that can do the big calculations.

The most important advice is: do many exercises. The recommended exercises are labeled throughout. (The answers are available.) You should be aware, however, that few inexperienced people can write correct proofs. Try to find a knowledgeable person to work with you on this.

Finally, if I may, a caution for all students, independent or not: I cannot overemphasize how much the statement that I sometimes hear, "I understand the material, but it's only that I have trouble with the problems" reveals a lack of understanding of what we are up to. Being able to do things with the ideas is their point. The quotes below express this sentiment admirably. They state what I believe is the key to both the beauty and the power of mathematics and the sciences in general, and of linear algebra in particular (I took the liberty of formatting them as poems).

I know of no better tactic
 than the illustration of exciting principles
by well-chosen particulars.
        --Stephen Jay Gould

If you really wish to learn
 then you must mount the machine
 and become acquainted with its tricks
by actual trial.
        --Wilbur Wright

Jim Hefferon
Mathematics, Saint Michael's College
Colchester, Vermont USA 05439
http://joshua.smcvt.edu
2006-May-20

Author's Note. Inventing a good exercise, one that enlightens as well as tests, is a creative act, and hard work.

The inventor deserves recognition. But for some reason texts have traditionally not given attributions for questions. I have changed that here where I was sure of the source. I would greatly appreciate hearing from anyone who can help me to correctly attribute others of the questions.



Chapter I - Linear Systems

Section I - Solving Linear Systems

Systems of linear equations are common in science and mathematics. These two examples from high school science (O'Nan 1990) give a sense of how they arise.

The first example is from Physics. Suppose that we are given three objects, one with a mass known to be 2 kg, and are asked to find the unknown masses. Suppose further that experimentation with a meter stick produces two balances (one of which is depicted below).

Since the sum of the magnitudes of the torques of the clockwise forces must equal the sum for the counterclockwise forces (the torque of an object rotating about a fixed origin is the cross product of the force on it and its position vector relative to the origin; because gravitational acceleration is uniform, we can divide both sides by it), the two balances give this system of two equations.

$$\begin{array}{rl} 40h + 15c &= 100 \\ 25c &= 50 + 50h \end{array}$$

Can you finish the solution?

c = ____ kg, h = ____ kg

(The system is solved in Example 1.7.)
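A minimal check of the answer by machine: this sketch uses NumPy, an outside tool rather than anything from this book, and it assumes the balance system as reconstructed above, with the second equation rewritten as $-50h + 25c = 50$.

```python
# Hedged sketch: solve the two-balance system numerically with NumPy.
# Assumes the system printed above: 40h + 15c = 100 and 25c = 50 + 50h.
import numpy as np

A = np.array([[40.0, 15.0],    # 40h + 15c = 100
              [-50.0, 25.0]])  # -50h + 25c = 50
b = np.array([100.0, 50.0])

h, c = np.linalg.solve(A, b)
print(h, c)  # 1.0 4.0, that is, h = 1 kg and c = 4 kg
```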

The second example of a linear system is from Chemistry. We can mix, under controlled conditions, toluene $\mathrm{C_7H_8}$ and nitric acid $\mathrm{HNO_3}$ to produce trinitrotoluene $\mathrm{C_7H_5O_6N_3}$ along with the byproduct water (conditions have to be controlled very well, indeed: trinitrotoluene is better known as TNT). In what proportion should those components be mixed? The number of atoms of each element present before the reaction

$$x\,\mathrm{C_7H_8} + y\,\mathrm{HNO_3} \;\longrightarrow\; z\,\mathrm{C_7H_5O_6N_3} + w\,\mathrm{H_2O}$$

must equal the number present afterward. Applying that principle to the elements C, H, N, and O in turn gives this system.

$$\begin{array}{rl} 7x &= 7z \\ 8x + y &= 5z + 2w \\ y &= 3z \\ 3y &= 6z + w \end{array}$$

Can you balance the equation?

To finish each of these examples requires solving a system of equations. In each, the equations involve only the first power of the variables. This chapter shows how to solve any such system.


1 - Gauss' Method

Definition 1.1

A linear equation in variables $x_1, x_2, \ldots, x_n$ has the form

$$a_1 x_1 + a_2 x_2 + a_3 x_3 + \cdots + a_n x_n = d$$

where the numbers $a_1, \ldots, a_n \in \mathbb{R}$ are the equation's coefficients and $d \in \mathbb{R}$ is the constant. An $n$-tuple $(s_1, s_2, \ldots, s_n) \in \mathbb{R}^n$ is a solution of, or satisfies, that equation if substituting the numbers $s_1, \ldots, s_n$ for the variables gives a true statement: $a_1 s_1 + a_2 s_2 + \cdots + a_n s_n = d$.

A system of linear equations

$$\begin{array}{rl} a_{1,1} x_1 + a_{1,2} x_2 + \cdots + a_{1,n} x_n &= d_1 \\ a_{2,1} x_1 + a_{2,2} x_2 + \cdots + a_{2,n} x_n &= d_2 \\ \vdots \qquad & \\ a_{m,1} x_1 + a_{m,2} x_2 + \cdots + a_{m,n} x_n &= d_m \end{array}$$

has the solution $(s_1, s_2, \ldots, s_n)$ if that $n$-tuple is a solution of all of the equations in the system.

Example 1.2

The ordered pair $(-1, 5)$ is a solution of this system.

$$\begin{array}{rl} 3x_1 + 2x_2 &= 7 \\ -x_1 + x_2 &= 6 \end{array}$$

In contrast, $(5, -1)$ is not a solution.

Finding the set of all solutions is solving the system. No guesswork or good fortune is needed to solve a linear system. There is an algorithm that always works. The next example introduces that algorithm, called Gauss' method. It transforms the system, step by step, into one with a form that is easily solved.

Example 1.3

To solve this system

$$\begin{array}{rl} 3x_3 &= 9 \\ x_1 + 5x_2 - 2x_3 &= 2 \\ \tfrac{1}{3}x_1 + 2x_2 &= 3 \end{array}$$

we repeatedly transform it until it is in a form that is easy to solve.

$$\xrightarrow{\text{swap row 1 with row 3}} \begin{array}{rl} \tfrac{1}{3}x_1 + 2x_2 &= 3 \\ x_1 + 5x_2 - 2x_3 &= 2 \\ 3x_3 &= 9 \end{array} \quad \xrightarrow{\text{multiply row 1 by } 3} \begin{array}{rl} x_1 + 6x_2 &= 9 \\ x_1 + 5x_2 - 2x_3 &= 2 \\ 3x_3 &= 9 \end{array} \quad \xrightarrow{\text{add } -1 \text{ times row 1 to row 2}} \begin{array}{rl} x_1 + 6x_2 &= 9 \\ -x_2 - 2x_3 &= -7 \\ 3x_3 &= 9 \end{array}$$

The third step is the only nontrivial one. We've mentally multiplied both sides of the first row by $-1$, mentally added that to the old second row, and written the result in as the new second row.

Now we can find the value of each variable. The bottom equation shows that $x_3 = 3$. Substituting $3$ for $x_3$ in the middle equation shows that $x_2 = 1$. Substituting those two into the top equation gives that $x_1 = 3$ and so the system has a unique solution: the solution set is $\{(3, 1, 3)\}$.

Most of this subsection and the next one consists of examples of solving linear systems by Gauss' method. We will use it throughout this book. It is fast and easy. But, before we get to those examples, we will first show that this method is also safe in that it never loses solutions or picks up extraneous solutions.

Theorem 1.4 (Gauss' method)

If a linear system is changed to another by one of these operations

  1. an equation is swapped with another
  2. an equation has both sides multiplied by a nonzero constant
  3. an equation is replaced by the sum of itself and a multiple of another

then the two systems have the same set of solutions.

Each of those three operations has a restriction. Multiplying a row by $0$ is not allowed because obviously that can change the solution set of the system. Similarly, adding a multiple of a row to itself is not allowed because adding $-1$ times the row to itself has the effect of multiplying the row by $0$. Finally, swapping a row with itself is disallowed to make some results in the fourth chapter easier to state and remember (and besides, self-swapping doesn't accomplish anything).

Proof

We will cover the equation swap operation here and save the other two cases for Problem 14.

Consider this swap of row $i$ with row $j$.

$$\begin{array}{rl} a_{1,1}x_1 + \cdots + a_{1,n}x_n &= d_1 \\ \vdots \qquad & \\ a_{i,1}x_1 + \cdots + a_{i,n}x_n &= d_i \\ \vdots \qquad & \\ a_{j,1}x_1 + \cdots + a_{j,n}x_n &= d_j \\ \vdots \qquad & \\ a_{m,1}x_1 + \cdots + a_{m,n}x_n &= d_m \end{array} \quad \xrightarrow{\rho_i \leftrightarrow \rho_j} \quad \begin{array}{rl} a_{1,1}x_1 + \cdots + a_{1,n}x_n &= d_1 \\ \vdots \qquad & \\ a_{j,1}x_1 + \cdots + a_{j,n}x_n &= d_j \\ \vdots \qquad & \\ a_{i,1}x_1 + \cdots + a_{i,n}x_n &= d_i \\ \vdots \qquad & \\ a_{m,1}x_1 + \cdots + a_{m,n}x_n &= d_m \end{array}$$

The $n$-tuple $(s_1, \ldots, s_n)$ satisfies the system before the swap if and only if substituting the values, the $s$'s, for the variables, the $x$'s, gives true statements: $a_{1,1}s_1 + \cdots + a_{1,n}s_n = d_1$ and ... $a_{i,1}s_1 + \cdots + a_{i,n}s_n = d_i$ and ... $a_{j,1}s_1 + \cdots + a_{j,n}s_n = d_j$ and ... $a_{m,1}s_1 + \cdots + a_{m,n}s_n = d_m$.

In a requirement consisting of statements and-ed together we can rearrange the order of the statements, so that this requirement is met if and only if $a_{1,1}s_1 + \cdots + a_{1,n}s_n = d_1$ and ... $a_{j,1}s_1 + \cdots + a_{j,n}s_n = d_j$ and ... $a_{i,1}s_1 + \cdots + a_{i,n}s_n = d_i$ and ... $a_{m,1}s_1 + \cdots + a_{m,n}s_n = d_m$. This is exactly the requirement that $(s_1, \ldots, s_n)$ solves the system after the row swap.

Definition 1.5

The three operations from Theorem 1.4 are the elementary reduction operations, or row operations, or Gaussian operations. They are swapping, multiplying by a scalar or rescaling, and pivoting.

When writing out the calculations, we will abbreviate "row $i$" by "$\rho_i$". For instance, we will denote a pivot operation by $k\rho_i + \rho_j$, with the row that is changed written second. We will also, to save writing, often list pivot steps together when they use the same $\rho_i$.

Example 1.6

A typical use of Gauss' method is to solve this system.

$$\begin{array}{rl} x + y &= 0 \\ 2x - y + 3z &= 3 \\ x - 2y - z &= 3 \end{array}$$

The first transformation of the system involves using the first row to eliminate the $x$ in the second row and the $x$ in the third. To get rid of the second row's $2x$, we multiply the entire first row by $-2$, add that to the second row, and write the result in as the new second row. To get rid of the third row's $x$, we multiply the first row by $-1$, add that to the third row, and write the result in as the new third row.

$$\xrightarrow[-\rho_1 + \rho_3]{-2\rho_1 + \rho_2} \begin{array}{rl} x + y &= 0 \\ -3y + 3z &= 3 \\ -3y - z &= 3 \end{array}$$

(Note that the two steps $-2\rho_1 + \rho_2$ and $-\rho_1 + \rho_3$ are written as one operation.) In this second system, the last two equations involve only two unknowns. To finish we transform the second system into a third system, where the last equation involves only one unknown. This transformation uses the second row to eliminate $y$ from the third row.

$$\xrightarrow{-\rho_2 + \rho_3} \begin{array}{rl} x + y &= 0 \\ -3y + 3z &= 3 \\ -4z &= 0 \end{array}$$

Now we are set up for the solution. The third row shows that $z = 0$. Substitute that back into the second row to get $y = -1$, and then substitute back into the first row to get $x = 1$.

Example 1.7

For the Physics problem from the start of this chapter, Gauss' method gives this.

$$\begin{array}{rl} 40h + 15c &= 100 \\ -50h + 25c &= 50 \end{array} \quad \xrightarrow{(5/4)\rho_1 + \rho_2} \quad \begin{array}{rl} 40h + 15c &= 100 \\ (175/4)c &= 175 \end{array}$$

So $c = 4$, and back-substitution gives that $h = 1$. (The Chemistry problem is solved later.)

Example 1.8

The reduction

$$\begin{array}{rl} x + y + 2z &= 9 \\ 2x + 4y - 3z &= 1 \\ 3x + 6y - 5z &= 0 \end{array} \quad \xrightarrow[-3\rho_1 + \rho_3]{-2\rho_1 + \rho_2} \quad \begin{array}{rl} x + y + 2z &= 9 \\ 2y - 7z &= -17 \\ 3y - 11z &= -27 \end{array} \quad \xrightarrow{-(3/2)\rho_2 + \rho_3} \quad \begin{array}{rl} x + y + 2z &= 9 \\ 2y - 7z &= -17 \\ -(1/2)z &= -(3/2) \end{array}$$

shows that $z = 3$, $y = 2$, and $x = 1$.

As these examples illustrate, Gauss' method uses the elementary reduction operations to set up back-substitution.
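The procedure is short enough to sketch in code. Here is a minimal Python illustration; the function name and layout are ours rather than the book's, and it handles only systems that turn out to have a unique solution, swapping in a lower row when a pivot entry is zero and then back-substituting from the bottom up.

```python
# A minimal sketch of Gauss' method for a system with a unique solution.
# 'rows' is the augmented matrix: each row lists the coefficients and then
# the constant. The names here are our own, not the book's.
def gauss_solve(rows):
    n = len(rows)  # n equations in n unknowns
    for i in range(n):
        # Operation 1: if the pivot entry is zero, swap with a lower row.
        if rows[i][i] == 0:
            for j in range(i + 1, n):
                if rows[j][i] != 0:
                    rows[i], rows[j] = rows[j], rows[i]
                    break
        # Operation 3 (pivoting): add a multiple of row i to each lower row.
        for j in range(i + 1, n):
            factor = rows[j][i] / rows[i][i]
            rows[j] = [a - factor * b for a, b in zip(rows[j], rows[i])]
    # Back-substitution, from the bottom equation up.
    x = [0.0] * n
    for i in reversed(range(n)):
        tail = sum(rows[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (rows[i][n] - tail) / rows[i][i]
    return x

# The system of Example 1.6 (as reconstructed above):
#   x + y = 0,  2x - y + 3z = 3,  x - 2y - z = 3
print(gauss_solve([[1, 1, 0, 0], [2, -1, 3, 3], [1, -2, -1, 3]]))
# [1.0, -1.0, 0.0]
```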

Definition 1.9

In each row, the first variable with a nonzero coefficient is the row's leading variable. A system is in echelon form if each leading variable is to the right of the leading variable in the row above it (except for the leading variable in the first row).

Example 1.10

The only operation needed in the examples above is pivoting. Here is a linear system that requires the operation of swapping equations. After the first pivot

$$\begin{array}{rl} x - y &= 0 \\ 2x - 2y + z + 2w &= 4 \\ y + w &= 0 \\ 2z + w &= 5 \end{array} \quad \xrightarrow{-2\rho_1 + \rho_2} \quad \begin{array}{rl} x - y &= 0 \\ z + 2w &= 4 \\ y + w &= 0 \\ 2z + w &= 5 \end{array}$$

the second equation has no leading $y$. To get one, we look lower down in the system for a row that has a leading $y$ and swap it in.

$$\xrightarrow{\rho_2 \leftrightarrow \rho_3} \begin{array}{rl} x - y &= 0 \\ y + w &= 0 \\ z + 2w &= 4 \\ 2z + w &= 5 \end{array}$$

(Had there been more than one row below the second with a leading $y$ then we could have swapped in any one.) The rest of Gauss' method goes as before.

$$\xrightarrow{-2\rho_3 + \rho_4} \begin{array}{rl} x - y &= 0 \\ y + w &= 0 \\ z + 2w &= 4 \\ -3w &= -3 \end{array}$$

Back-substitution gives $w = 1$, $z = 2$, $y = -1$, and $x = -1$.

Strictly speaking, the operation of rescaling rows is not needed to solve linear systems. We have included it because we will use it later in this chapter as part of a variation on Gauss' method, the Gauss-Jordan method.

All of the systems seen so far have the same number of equations as unknowns. All of them have a solution, and for all of them there is only one solution. We finish this subsection by seeing for contrast some other things that can happen.

Example 1.11

Linear systems need not have the same number of equations as unknowns. This system

$$\begin{array}{rl} x + 3y &= 1 \\ 2x + y &= -3 \\ 2x + 2y &= -2 \end{array}$$

has more equations than variables. Gauss' method helps us understand this system also, since this

$$\xrightarrow[-2\rho_1 + \rho_3]{-2\rho_1 + \rho_2} \begin{array}{rl} x + 3y &= 1 \\ -5y &= -5 \\ -4y &= -4 \end{array}$$

shows that one of the equations is redundant. Echelon form

$$\xrightarrow{-(4/5)\rho_2 + \rho_3} \begin{array}{rl} x + 3y &= 1 \\ -5y &= -5 \\ 0 &= 0 \end{array}$$

gives $y = 1$ and $x = -2$. The "$0 = 0$" is derived from the redundancy.

That example's system has more equations than variables. Gauss' method is also useful on systems with more variables than equations. Many examples are in the next subsection.

Another way that linear systems can differ from the examples shown earlier is that some linear systems do not have a unique solution. This can happen in two ways.

The first is that it can fail to have any solution at all.

Example 1.12

Contrast the system in the last example with this one.

$$\begin{array}{rl} x + 3y &= 1 \\ 2x + y &= -3 \\ 2x + 2y &= 0 \end{array}$$

Here the system is inconsistent: no pair of numbers satisfies all of the equations simultaneously. Echelon form makes this inconsistency obvious.

$$\xrightarrow[-2\rho_1 + \rho_3]{-2\rho_1 + \rho_2} \begin{array}{rl} x + 3y &= 1 \\ -5y &= -5 \\ -4y &= -2 \end{array} \quad \xrightarrow{-(4/5)\rho_2 + \rho_3} \quad \begin{array}{rl} x + 3y &= 1 \\ -5y &= -5 \\ 0 &= 2 \end{array}$$

The solution set is empty.

Example 1.13

The prior system has more equations than unknowns, but that is not what causes the inconsistency: Example 1.11 has more equations than unknowns and yet is consistent. Nor is having more equations than unknowns necessary for inconsistency, as is illustrated by this inconsistent system with the same number of equations as unknowns.

$$\begin{array}{rl} x + 2y &= 8 \\ 2x + 4y &= 8 \end{array}$$

The other way that a linear system can fail to have a unique solution is to have many solutions.

Example 1.14

In this system

$$\begin{array}{rl} x + y &= 4 \\ 2x + 2y &= 8 \end{array}$$

any pair of numbers satisfying the first equation automatically satisfies the second. The solution set $\{(x, y) \mid x + y = 4\}$ is infinite; some of its members are $(0, 4)$, $(-1, 5)$, and $(2.5, 1.5)$. The result of applying Gauss' method here contrasts with the prior example because we do not get a contradictory equation.

$$\xrightarrow{-2\rho_1 + \rho_2} \begin{array}{rl} x + y &= 4 \\ 0 &= 0 \end{array}$$

Don't be fooled by the "$0 = 0$" equation in that example. It is not the signal that a system has many solutions.

Example 1.15

The absence of a "$0 = 0$" does not keep a system from having many different solutions. This system in echelon form

$$\begin{array}{rl} x + y + z &= 0 \\ y + z &= 0 \end{array}$$

has no "$0 = 0$", and yet has infinitely many solutions. (For instance, each of these is a solution: $(0, 1, -1)$, $(0, 1/2, -1/2)$, $(0, 0, 0)$, and $(0, -\pi, \pi)$. There are infinitely many solutions because any triple whose first component is $0$ and whose second component is the negative of the third is a solution.)

Nor does the presence of a "$0 = 0$" mean that the system must have many solutions. Example 1.11 shows that. So does this system, which does not have many solutions (in fact it has none) despite that, when it is brought to echelon form, it has a "$0 = 0$" row.

$$\begin{array}{rl} 2x - 2z &= 6 \\ y + z &= 1 \\ 2x + y - z &= 7 \\ 3y + 3z &= 0 \end{array} \quad \xrightarrow{-\rho_1 + \rho_3} \quad \begin{array}{rl} 2x - 2z &= 6 \\ y + z &= 1 \\ y + z &= 1 \\ 3y + 3z &= 0 \end{array} \quad \xrightarrow[-3\rho_2 + \rho_4]{-\rho_2 + \rho_3} \quad \begin{array}{rl} 2x - 2z &= 6 \\ y + z &= 1 \\ 0 &= 0 \\ 0 &= -3 \end{array}$$

We will finish this subsection with a summary of what we've seen so far about Gauss' method.

Gauss' method uses the three row operations to set a system up for back substitution. If any step shows a contradictory equation then we can stop with the conclusion that the system has no solutions. If we reach echelon form without a contradictory equation, and each variable is a leading variable in its row, then the system has a unique solution and we find it by back substitution. Finally, if we reach echelon form without a contradictory equation, and there is not a unique solution (at least one variable is not a leading variable) then the system has many solutions.
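That three-way split can also be detected mechanically. Here is a hedged sketch with NumPy (an outside tool): comparing the rank of the coefficient matrix with the rank of the augmented matrix is equivalent to checking echelon form for a contradictory row, and comparing the rank with the number of variables detects free variables. The sample systems echo the examples above, as reconstructed.

```python
# Hedged sketch: classify a linear system as having no, one, or many
# solutions by comparing matrix ranks (NumPy is an outside tool here).
import numpy as np

def classify(A, b):
    A = np.asarray(A, dtype=float)
    aug = np.column_stack([A, b])
    rank_A = np.linalg.matrix_rank(A)
    rank_aug = np.linalg.matrix_rank(aug)
    if rank_A < rank_aug:
        return "no solutions"           # some row reduces to 0 = nonzero
    if rank_A == A.shape[1]:
        return "unique solution"        # every variable leads a row
    return "infinitely many solutions"  # at least one free variable

print(classify([[1, 3], [2, 1], [2, 2]], [1, -3, -2]))  # unique solution
print(classify([[1, 3], [2, 1], [2, 2]], [1, -3, 0]))   # no solutions
print(classify([[1, 1], [2, 2]], [4, 8]))               # infinitely many solutions
```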

The next subsection deals with the third case— we will see how to describe the solution set of a system with many solutions.

Exercises

This exercise is recommended for all readers.
Problem 1

Use Gauss' method to find the unique solution for each system.



This exercise is recommended for all readers.
Problem 2

Use Gauss' method to solve each system or conclude "many solutions" or "no solutions".











This exercise is recommended for all readers.
Problem 3

There are methods for solving linear systems other than Gauss' method. One often taught in high school is to solve one of the equations for a variable, then substitute the resulting expression into other equations. That step is repeated until there is an equation with only one variable. From that, the first number in the solution is derived, and then back-substitution can be done. This method takes longer than Gauss' method, since it involves more arithmetic operations, and is also more likely to lead to errors. To illustrate how it can lead to wrong conclusions, we will use the system

$$\begin{array}{rl} x + 3y &= 1 \\ 2x + y &= -3 \\ 2x + 2y &= 0 \end{array}$$

from Example 1.12.

  1. Solve the first equation for $x$ and substitute that expression into the second equation. Find the resulting $y$.
  2. Again solve the first equation for $x$, but this time substitute that expression into the third equation. Find this $y$.

What extra step must a user of this method take to avoid erroneously concluding a system has a solution?

This exercise is recommended for all readers.
Problem 4

For which values of $k$ are there no solutions, many solutions, or a unique solution to this system?

$$\begin{array}{rl} x - y &= 1 \\ 3x - 3y &= k \end{array}$$

This exercise is recommended for all readers.
Problem 5

This system is not linear, in some sense,

$$\begin{array}{rl} 2\sin\alpha - \cos\beta + 3\tan\gamma &= 3 \\ 4\sin\alpha + 2\cos\beta - 2\tan\gamma &= 10 \\ 6\sin\alpha - 3\cos\beta + \tan\gamma &= 9 \end{array}$$

and yet we can nonetheless apply Gauss' method. Do so. Does the system have a solution?

This exercise is recommended for all readers.
Problem 6

What conditions must the constants, the $b$'s, satisfy so that each of these systems has a solution? Hint. Apply Gauss' method and see what happens to the right side (Anton 1987).



Problem 7

True or false: a system with more unknowns than equations has at least one solution. (As always, to say "true" you must prove it, while to say "false" you must produce a counterexample.)

Problem 8

Must any Chemistry problem like the one that starts this subsection— a balance the reaction problem— have infinitely many solutions?

This exercise is recommended for all readers.
Problem 9

Find the coefficients $a$, $b$, and $c$ so that the graph of $f(x) = ax^2 + bx + c$ passes through the points $(1, 2)$, $(-1, 6)$, and $(2, 3)$.

Problem 10

Gauss' method works by combining the equations in a system to make new equations.

  1. Can the equation be derived, by a sequence of Gaussian reduction steps, from the equations in this system?
  2. Can the equation be derived, by a sequence of Gaussian reduction steps, from the equations in this system?
  3. Can the equation be derived, by a sequence of Gaussian reduction steps, from the equations in the system?
Problem 11

Prove that, where $a, b, \ldots, e$ are real numbers and $a \neq 0$, if

$$ax + by = c$$

has the same solution set as

$$ax + dy = e$$

then they are the same equation. What if $a = 0$?

This exercise is recommended for all readers.
Problem 12

Show that if $ad - bc \neq 0$ then

$$\begin{array}{rl} ax + by &= j \\ cx + dy &= k \end{array}$$

has a unique solution.

This exercise is recommended for all readers.
Problem 13

In the system

$$\begin{array}{rl} ax + by &= c \\ dx + ey &= f \end{array}$$

each of the equations describes a line in the $xy$-plane. By geometrical reasoning, show that there are three possibilities: there is a unique solution, there is no solution, and there are infinitely many solutions.

Problem 14

Finish the proof of Theorem 1.4.

Problem 15

Is there a two-unknowns linear system whose solution set is all of $\mathbb{R}^2$?

This exercise is recommended for all readers.
Problem 16

Are any of the operations used in Gauss' method redundant? That is, can any of the operations be synthesized from the others?

Problem 17

Prove that each operation of Gauss' method is reversible. That is, show that if two systems are related by a row operation then there is a row operation to go back.

? Problem 18

A box holding pennies, nickels and dimes contains thirteen coins with a total value of $83$ cents. How many coins of each type are in the box? (Anton 1987)

? Problem 19

Four positive integers are given. Select any three of the integers, find their arithmetic average, and add this result to the fourth integer. Thus the numbers 29, 23, 21, and 17 are obtained. One of the original integers is:

  1. 19
  2. 21
  3. 23
  4. 29
  5. 17

(Salkind 1975, 1955 problem 38)

This exercise is recommended for all readers.
? Problem 20

Laugh at this: $\mathrm{AHAHA} + \mathrm{TEHE} = \mathrm{TEHAW}$. It resulted from substituting a code letter for each digit of a simple example in addition, and it is required to identify the letters and prove the solution unique (Ransom & Gupta 1935).

? Problem 21

The Wohascum County Board of Commissioners, which has 20 members, recently had to elect a President. There were three candidates ($A$, $B$, and $C$); on each ballot the three candidates were to be listed in order of preference, with no abstentions. It was found that 11 members, a majority, preferred $A$ over $B$ (thus the other 9 preferred $B$ over $A$). Similarly, it was found that 12 members preferred $C$ over $A$. Given these results, it was suggested that $B$ should withdraw, to enable a runoff election between $A$ and $C$. However, $B$ protested, and it was then found that 14 members preferred $B$ over $C$! The Board has not yet recovered from the resulting confusion. Given that every possible order of $A$, $B$, $C$ appeared on at least one ballot, how many members voted for $B$ as their first choice (Gilbert, Krusemeyer & Larson 1993, Problem number 2)?

? Problem 22

"This system of linear equations with unknowns," said the Great Mathematician, "has a curious property."

"Good heavens!" said the Poor Nut, "What is it?"

"Note," said the Great Mathematician, "that the constants are in arithmetic progression."

"It's all so clear when you explain it!" said the Poor Nut. "Do you mean like and ?"

"Quite so," said the Great Mathematician, pulling out his bassoon. "Indeed, even larger systems can be solved regardless of their progression. Can you find their solution?"

"Good heavens!" cried the Poor Nut, "I am baffled."

Are you? (Dudley, Lebow & Rothman 1963)


2 - Describing the Solution Set

A linear system with a unique solution has a solution set with one element. A linear system with no solution has a solution set that is empty. In these cases the solution set is easy to describe. Solution sets are a challenge to describe only when they contain many elements.

Example 2.1

This system

$$\begin{array}{rl} 2x + z &= 3 \\ x - y - z &= 1 \\ 3x - y &= 4 \end{array}$$

has many solutions because in echelon form

$$\xrightarrow[-(3/2)\rho_1 + \rho_3]{-(1/2)\rho_1 + \rho_2} \begin{array}{rl} 2x + z &= 3 \\ -y - (3/2)z &= -1/2 \\ -y - (3/2)z &= -1/2 \end{array} \quad \xrightarrow{-\rho_2 + \rho_3} \quad \begin{array}{rl} 2x + z &= 3 \\ -y - (3/2)z &= -1/2 \\ 0 &= 0 \end{array}$$

not all of the variables are leading variables. The Gauss' method theorem showed that a triple satisfies the first system if and only if it satisfies the third. Thus, the solution set $\{(x, y, z) \mid 2x + z = 3 \text{ and } x - y - z = 1 \text{ and } 3x - y = 4\}$ can also be described as $\{(x, y, z) \mid 2x + z = 3 \text{ and } -y - 3z/2 = -1/2\}$. However, this second description is not much of an improvement. It has two equations instead of three, but it still involves some hard-to-understand interaction among the variables.

To get a description that is free of any such interaction, we take the variable that does not lead any equation, $z$, and use it to describe the variables that do lead, $x$ and $y$. The second equation gives $y = (1/2) - (3/2)z$ and the first equation gives $x = (3/2) - (1/2)z$. Thus, the solution set can be described as $\{((3/2) - (1/2)z, (1/2) - (3/2)z, z) \mid z \in \mathbb{R}\}$. For instance, $(1/2, -5/2, 2)$ is a solution because taking $z = 2$ gives a first component of $1/2$ and a second component of $-5/2$.

The advantage of this description over the ones above is that the only variable appearing, $z$, is unrestricted; it can be any real number.

Definition 2.2

The non-leading variables in an echelon-form linear system are free variables.

In the echelon form system derived in the above example, $x$ and $y$ are leading variables and $z$ is free.

Example 2.3

A linear system can end with more than one variable free. This row reduction

$$\begin{array}{rl} x + y + z - w &= 1 \\ y - z + w &= -1 \\ 3x + 6z - 6w &= 6 \\ -y + z - w &= 1 \end{array} \quad \xrightarrow{-3\rho_1 + \rho_3} \quad \begin{array}{rl} x + y + z - w &= 1 \\ y - z + w &= -1 \\ -3y + 3z - 3w &= 3 \\ -y + z - w &= 1 \end{array} \quad \xrightarrow[\rho_2 + \rho_4]{3\rho_2 + \rho_3} \quad \begin{array}{rl} x + y + z - w &= 1 \\ y - z + w &= -1 \\ 0 &= 0 \\ 0 &= 0 \end{array}$$

ends with $x$ and $y$ leading, and with both $z$ and $w$ free. To get the description that we prefer we will start at the bottom. We first express $y$ in terms of the free variables $z$ and $w$ with $y = -1 + z - w$. Next, moving up to the top equation, substituting for $y$ in the first equation $x + (-1 + z - w) + z - w = 1$ and solving for $x$ yields $x = 2 - 2z + 2w$. Thus, the solution set is $\{(2 - 2z + 2w, -1 + z - w, z, w) \mid z, w \in \mathbb{R}\}$.

We prefer this description because the only variables that appear, $z$ and $w$, are unrestricted. This makes the job of deciding which four-tuples are system solutions into an easy one. For instance, taking $z = 1$ and $w = 2$ gives the solution $(4, -2, 1, 2)$. In contrast, $(3, -2, 1, 2)$ is not a solution, since the first component of any solution must be $2$ minus twice the third component plus twice the fourth.

Example 2.4

After this reduction

$$\begin{array}{rl} 2x - 2y &= 0 \\ z + 3w &= 2 \\ 3x - 3y &= 0 \\ x - y + 2z + 6w &= 4 \end{array} \quad \xrightarrow[-(1/2)\rho_1 + \rho_4]{-(3/2)\rho_1 + \rho_3} \quad \begin{array}{rl} 2x - 2y &= 0 \\ z + 3w &= 2 \\ 0 &= 0 \\ 2z + 6w &= 4 \end{array} \quad \xrightarrow{-2\rho_2 + \rho_4} \quad \begin{array}{rl} 2x - 2y &= 0 \\ z + 3w &= 2 \\ 0 &= 0 \\ 0 &= 0 \end{array}$$

$x$ and $z$ lead, $y$ and $w$ are free. The solution set is $\{(y, y, 2 - 3w, w) \mid y, w \in \mathbb{R}\}$. For instance, $(1, 1, 2, 0)$ satisfies the system; take $y = 1$ and $w = 0$. The four-tuple $(1, 0, 5, 4)$ is not a solution since its first coordinate does not equal its second.

We refer to a variable used to describe a family of solutions as a parameter and we say that the set above is parametrized with $y$ and $w$. (The terms "parameter" and "free variable" do not mean the same thing. Above, $y$ and $w$ are free because in the echelon form system they do not lead any row. They are parameters because they are used in the solution set description. We could have instead parametrized with $y$ and $z$ by rewriting the second equation as $w = 2/3 - (1/3)z$. In that case, the free variables are still $y$ and $w$, but the parameters are $y$ and $z$. Notice that we could not have parametrized with $x$ and $y$, so there is sometimes a restriction on the choice of parameters. The terms "parameter" and "free" are related because, as we shall show later in this chapter, the solution set of a system can always be parametrized with the free variables. Consequently, we shall parametrize all of our descriptions in this way. A sketch of how a computer algebra system produces such a parametrization follows.)
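A computer algebra system will produce exactly this style of parametrized description. Here is a small sketch with SymPy (an outside tool; its linsolve leaves the free variables in place as the parameters), applied to Example 2.4's system as written above.

```python
# Hedged sketch: let SymPy parametrize a solution set (outside tool).
from sympy import symbols, linsolve

x, y, z, w = symbols('x y z w')
# Example 2.4's system, written as expressions equal to zero.
system = [2*x - 2*y, z + 3*w - 2, 3*x - 3*y, x - y + 2*z + 6*w - 4]
print(linsolve(system, [x, y, z, w]))
# {(y, y, 2 - 3*w, w)}: the free variables y and w serve as the parameters
```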

Example 2.5

This is another system with infinitely many solutions.

$$\begin{array}{rl} x + 2y &= 1 \\ 2x + z &= 2 \\ 3x + 2y + z - w &= 4 \end{array} \quad \xrightarrow[-3\rho_1 + \rho_3]{-2\rho_1 + \rho_2} \quad \begin{array}{rl} x + 2y &= 1 \\ -4y + z &= 0 \\ -4y + z - w &= 1 \end{array} \quad \xrightarrow{-\rho_2 + \rho_3} \quad \begin{array}{rl} x + 2y &= 1 \\ -4y + z &= 0 \\ -w &= 1 \end{array}$$

The leading variables are $x$, $y$, and $w$. The variable $z$ is free. (Notice here that, although there are infinitely many solutions, the value of one of the variables is fixed: $w = -1$.) Write $w$ in terms of $z$ with $w = -1 + 0z$. Then $y = (1/4)z$. To express $x$ in terms of $z$, substitute for $y$ into the first equation to get $x = 1 - (1/2)z$. The solution set is $\{(1 - (1/2)z, (1/4)z, z, -1) \mid z \in \mathbb{R}\}$.

We finish this subsection by developing the notation for linear systems and their solution sets that we shall use in the rest of this book.

Definition 2.6

An $m \times n$ matrix is a rectangular array of numbers with $m$ rows and $n$ columns. Each number in the matrix is an entry.

Matrices are usually named by upper case roman letters, e.g. $A$. Each entry is denoted by the corresponding lower-case letter, e.g. $a_{i,j}$ is the number in row $i$ and column $j$ of the array. For instance,

$$A = \begin{pmatrix} 1 & 2.2 & 5 \\ 3 & 4 & -7 \end{pmatrix}$$

has two rows and three columns, and so is a $2 \times 3$ matrix. (Read that "two-by-three"; the number of rows is always stated first.) The entry in the second row and first column is $a_{2,1} = 3$. Note that the order of the subscripts matters: $a_{1,2} \neq a_{2,1}$ since $a_{1,2} = 2.2$. (The parentheses around the array are a typographic device so that when two matrices are side by side we can tell where one ends and the other starts.)

Matrices occur throughout this book. We shall use $\mathcal{M}_{n \times m}$ to denote the collection of $n \times m$ matrices.

Example 2.7

We can abbreviate this linear system

$$\begin{array}{rl} x + 2y &= 4 \\ y - z &= 0 \\ x + 2z &= 4 \end{array}$$

with this matrix.

$$\left(\begin{array}{ccc|c} 1 & 2 & 0 & 4 \\ 0 & 1 & -1 & 0 \\ 1 & 0 & 2 & 4 \end{array}\right)$$

The vertical bar just reminds a reader of the difference between the coefficients on the system's left hand side and the constants on the right. When a bar is used to divide a matrix into parts, we call it an augmented matrix. In this notation, Gauss' method goes this way.

$$\left(\begin{array}{ccc|c} 1 & 2 & 0 & 4 \\ 0 & 1 & -1 & 0 \\ 1 & 0 & 2 & 4 \end{array}\right) \quad \xrightarrow{-\rho_1 + \rho_3} \quad \left(\begin{array}{ccc|c} 1 & 2 & 0 & 4 \\ 0 & 1 & -1 & 0 \\ 0 & -2 & 2 & 0 \end{array}\right) \quad \xrightarrow{2\rho_2 + \rho_3} \quad \left(\begin{array}{ccc|c} 1 & 2 & 0 & 4 \\ 0 & 1 & -1 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right)$$

The second row stands for $y - z = 0$ and the first row stands for $x + 2y = 4$ so the solution set is $\{(4 - 2z, z, z) \mid z \in \mathbb{R}\}$. One advantage of the new notation is that the clerical load of Gauss' method, the copying of variables and the writing of $+$'s and $=$'s, is lighter.
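The lighter clerical load shows up on a computer as well: with the system stored as an array, each row operation is a one-line update. A hedged sketch with NumPy (an outside tool), redoing the reduction just shown:

```python
# Hedged sketch: the augmented-matrix bookkeeping of Example 2.7 in NumPy.
import numpy as np

M = np.array([[1.0, 2.0, 0.0, 4.0],
              [0.0, 1.0, -1.0, 0.0],
              [1.0, 0.0, 2.0, 4.0]])  # columns: x, y, z | constant

M[2] = M[2] - M[0]      # -rho_1 + rho_3
M[2] = M[2] + 2 * M[1]  # 2 rho_2 + rho_3
print(M)
# The last row is now all zeros, so z is free and the solution set is
# {(4 - 2z, z, z) | z in R}, matching the hand computation above.
```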

We will also use the array notation to clarify the descriptions of solution sets. A description like $\{(2 - 2z + 2w, -1 + z - w, z, w) \mid z, w \in \mathbb{R}\}$ from Example 2.3 is hard to read. We will rewrite it to group all the constants together, all the coefficients of $z$ together, and all the coefficients of $w$ together. We will write them vertically, in one-column wide matrices.

$$\left\{ \begin{pmatrix} 2 \\ -1 \\ 0 \\ 0 \end{pmatrix} + \begin{pmatrix} -2 \\ 1 \\ 1 \\ 0 \end{pmatrix} z + \begin{pmatrix} 2 \\ -1 \\ 0 \\ 1 \end{pmatrix} w \,\middle|\, z, w \in \mathbb{R} \right\}$$

For instance, the top line says that $x = 2 - 2z + 2w$. The next section gives a geometric interpretation that will help us picture the solution sets when they are written in this way.

Definition 2.8

A vector (or column vector) is a matrix with a single column. A matrix with a single row is a row vector. The entries of a vector are its components.

Vectors are an exception to the convention of representing matrices with capital roman letters. We use lower-case roman or greek letters overlined with an arrow: $\vec{a}$, $\vec{b}$, ... or $\vec{\alpha}$, $\vec{\beta}$, ... (boldface is also common: $\mathbf{a}$ or $\boldsymbol{\alpha}$). For instance, this is a column vector with a third component of $7$.

$$\vec{v} = \begin{pmatrix} 1 \\ 3 \\ 7 \end{pmatrix}$$

Definition 2.9

The linear equation $a_1 x_1 + a_2 x_2 + \cdots + a_n x_n = d$ with unknowns $x_1, \ldots, x_n$ is satisfied by

$$\vec{s} = \begin{pmatrix} s_1 \\ \vdots \\ s_n \end{pmatrix}$$

if $a_1 s_1 + a_2 s_2 + \cdots + a_n s_n = d$. A vector satisfies a linear system if it satisfies each equation in the system.

The style of description of solution sets that we use involves adding the vectors, and also multiplying them by real numbers, such as the $z$ and $w$ above. We need to define these operations.

Definition 2.10

The vector sum of $\vec{u}$ and $\vec{v}$ is this.

$$\vec{u} + \vec{v} = \begin{pmatrix} u_1 \\ \vdots \\ u_n \end{pmatrix} + \begin{pmatrix} v_1 \\ \vdots \\ v_n \end{pmatrix} = \begin{pmatrix} u_1 + v_1 \\ \vdots \\ u_n + v_n \end{pmatrix}$$

In general, two matrices with the same number of rows and the same number of columns add in this way, entry-by-entry.

Definition 2.11

The scalar multiplication of the real number $r$ and the vector $\vec{v}$ is this.

$$r \cdot \vec{v} = r \cdot \begin{pmatrix} v_1 \\ \vdots \\ v_n \end{pmatrix} = \begin{pmatrix} r v_1 \\ \vdots \\ r v_n \end{pmatrix}$$

In general, any matrix is multiplied by a real number in this entry-by-entry way.

Scalar multiplication can be written in either order: $r \cdot \vec{v}$ or $\vec{v} \cdot r$, or without the "$\cdot$" symbol: $r\vec{v}$. (Do not refer to scalar multiplication as "scalar product" because that name is used for a different operation.)

Example 2.12

Notice that the definitions of vector addition and scalar multiplication agree where they overlap; for instance, $\vec{v} + \vec{v} = 2\vec{v}$.
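Both operations are entry-by-entry, so any array library performs them directly. A small sketch with NumPy (an outside tool; the example vectors are ours):

```python
# Entry-by-entry vector sum and scalar multiple (Definitions 2.10 and 2.11).
import numpy as np

u = np.array([2, 3, 1])
v = np.array([3, -1, 4])

print(u + v)                         # [5 2 5], the vector sum
print(7 * v)                         # [21 -7 28], a scalar multiple
print(np.array_equal(v + v, 2 * v))  # True: the overlap noted above
```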

With the notation defined, we can now solve systems in the way that we will use throughout this book.

Example 2.13

This system

$$\begin{array}{rl} 2x + y - w &= 4 \\ y + w + u &= 4 \\ x - z + 2w &= 0 \end{array}$$

reduces in this way.

$$\xrightarrow{-(1/2)\rho_1 + \rho_3} \begin{array}{rl} 2x + y - w &= 4 \\ y + w + u &= 4 \\ -(1/2)y - z + (5/2)w &= -2 \end{array} \quad \xrightarrow{(1/2)\rho_2 + \rho_3} \quad \begin{array}{rl} 2x + y - w &= 4 \\ y + w + u &= 4 \\ -z + 3w + (1/2)u &= 0 \end{array}$$

The solution set is $\{(w + (1/2)u, 4 - w - u, 3w + (1/2)u, w, u) \mid w, u \in \mathbb{R}\}$. We write that in vector form.

$$\left\{ \begin{pmatrix} x \\ y \\ z \\ w \\ u \end{pmatrix} = \begin{pmatrix} 0 \\ 4 \\ 0 \\ 0 \\ 0 \end{pmatrix} + \begin{pmatrix} 1 \\ -1 \\ 3 \\ 1 \\ 0 \end{pmatrix} w + \begin{pmatrix} 1/2 \\ -1 \\ 1/2 \\ 0 \\ 1 \end{pmatrix} u \,\middle|\, w, u \in \mathbb{R} \right\}$$

Note again how well vector notation sets off the coefficients of each parameter. For instance, the third row of the vector form shows plainly that if $u$ is held fixed then $z$ increases three times as fast as $w$.

That format also shows plainly that there are infinitely many solutions. For example, we can fix $u$ as $0$, let $w$ range over the real numbers, and consider the first component $x$. We get infinitely many first components and hence infinitely many solutions.

Another thing shown plainly is that setting both $w$ and $u$ to $0$ gives that this

$$\begin{pmatrix} 0 \\ 4 \\ 0 \\ 0 \\ 0 \end{pmatrix}$$

is a particular solution of the linear system.

Example 2.14

In the same way, this system

$$\begin{array}{rl} x - y + z &= 1 \\ 3x + z &= 3 \\ 5x - 2y + 3z &= 5 \end{array}$$

reduces

$$\xrightarrow[-5\rho_1 + \rho_3]{-3\rho_1 + \rho_2} \begin{array}{rl} x - y + z &= 1 \\ 3y - 2z &= 0 \\ 3y - 2z &= 0 \end{array} \quad \xrightarrow{-\rho_2 + \rho_3} \quad \begin{array}{rl} x - y + z &= 1 \\ 3y - 2z &= 0 \\ 0 &= 0 \end{array}$$

to a one-parameter solution set.

$$\left\{ \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} + \begin{pmatrix} -1/3 \\ 2/3 \\ 1 \end{pmatrix} z \,\middle|\, z \in \mathbb{R} \right\}$$

Before the exercises, we pause to point out some things that we have yet to do.

The first two subsections have been on the mechanics of Gauss' method. Except for one result, Theorem 1.4— without which developing the method doesn't make sense since it says that the method gives the right answers— we have not stopped to consider any of the interesting questions that arise.

For example, can we always describe solution sets as above, with a particular solution vector added to an unrestricted linear combination of some other vectors? The solution sets we described with unrestricted parameters were easily seen to have infinitely many solutions so an answer to this question could tell us something about the size of solution sets. An answer to that question could also help us picture the solution sets, in $\mathbb{R}^2$, or in $\mathbb{R}^3$, etc.

Many questions arise from the observation that Gauss' method can be done in more than one way (for instance, when swapping rows, we may have a choice of which row to swap with). Theorem 1.4 says that we must get the same solution set no matter how we proceed, but if we do Gauss' method in two different ways must we get the same number of free variables both times, so that any two solution set descriptions have the same number of parameters? Must those be the same variables (e.g., is it impossible to solve a problem one way and get $x$ and $y$ free or solve it another way and get $x$ and $z$ free)?

In the rest of this chapter we answer these questions. The answer to each is "yes". The first question is answered in the last subsection of this section. In the second section we give a geometric description of solution sets. In the final section of this chapter we tackle the last set of questions. Consequently, by the end of the first chapter we will not only have a solid grounding in the practice of Gauss' method, we will also have a solid grounding in the theory. We will be sure of what can and cannot happen in a reduction.

Exercises

This exercise is recommended for all readers.
Problem 1

Find the indicated entry of the matrix, if it is defined.

This exercise is recommended for all readers.
Problem 2

Give the size of each matrix.

This exercise is recommended for all readers.
Problem 3

Do the indicated vector operation, if it is defined.

This exercise is recommended for all readers.
Problem 4

Solve each system using matrix notation. Express the solution using vectors.

This exercise is recommended for all readers.
Problem 5

Solve each system using matrix notation. Give each solution set in vector notation.

This exercise is recommended for all readers.
Problem 6

The vector is in the set. What value of the parameters produces that vector?

  1. ,
  2. ,
  3. ,
Problem 7

Decide if the vector is in the set.

  1. ,
  2. ,
  3. ,
  4. ,
Problem 8

Parametrize the solution set of this one-equation system.

This exercise is recommended for all readers.
Problem 9
  1. Apply Gauss' method to the left-hand side to solve
    for , , , and , in terms of the constants , , and . Note that will be a free variable.
  2. Use your answer from the prior part to solve this.
This exercise is recommended for all readers.
Problem 10

Why is the comma needed in the notation "$a_{i,j}$" for matrix entries?

This exercise is recommended for all readers.
Problem 11

Give the matrix whose $i,j$-th entry is

  1. $i + j$;
  2. $-1$ to the $i + j$ power.
Problem 12

For any matrix $A$, the transpose of $A$, written $A^{\mathrm{trans}}$, is the matrix whose columns are the rows of $A$. Find the transpose of each of these.

This exercise is recommended for all readers.
Problem 13
  1. Describe all functions $f(x) = ax^2 + bx + c$ such that $f(1) = 2$ and $f(-1) = 6$.
  2. Describe all functions $f(x) = ax^2 + bx + c$ such that $f(1) = 2$.
Problem 14

Show that any set of five points from the plane $\mathbb{R}^2$ lie on a common conic section, that is, they all satisfy some equation of the form $ax^2 + by^2 + cxy + dx + ey + f = 0$ where some of $a, \ldots, f$ are nonzero.

Problem 15

Make up a four equations/four unknowns system having

  1. a one-parameter solution set;
  2. a two-parameter solution set;
  3. a three-parameter solution set.
? Problem 16
  1. Solve the system of equations.
    For what values of does the system fail to have solutions, and for what values of are there infinitely many solutions?
  2. Answer the above question for the system.

(USSR Olympiad #174)

? Problem 17

In air a gold-surfaced sphere weighs grams. It is known that it may contain one or more of the metals aluminum, copper, silver, or lead. When weighed successively under standard conditions in water, benzene, alcohol, and glycerine its respective weights are , , , and grams. How much, if any, of the forenamed metals does it contain if the specific gravities of the designated substances are taken to be as follows?

Aluminum 2.7 Alcohol 0.81
Copper 8.9 Benzene 0.90
Gold 19.3 Glycerine 1.26
Lead 11.3 Water 1.00
Silver 10.8

(Duncan & Quelch 1952)


3 - General = Particular + Homogeneous

Description of Solution Sets

The prior subsection has many descriptions of solution sets. They all fit a pattern. They have a vector that is a particular solution of the system added to an unrestricted combination of some other vectors. The solution set from Example 2.13 illustrates.

$$\left\{ \underbrace{\begin{pmatrix} 0 \\ 4 \\ 0 \\ 0 \\ 0 \end{pmatrix}}_{\text{particular solution}} + \underbrace{w \begin{pmatrix} 1 \\ -1 \\ 3 \\ 1 \\ 0 \end{pmatrix} + u \begin{pmatrix} 1/2 \\ -1 \\ 1/2 \\ 0 \\ 1 \end{pmatrix}}_{\text{unrestricted combination}} \,\middle|\, w, u \in \mathbb{R} \right\}$$

The combination is unrestricted in that $w$ and $u$ can be any real numbers; there is no condition like "such that $2w - u = 0$" that would restrict which pairs $w, u$ can be used to form combinations.

That example shows an infinite solution set conforming to the pattern. We can think of the other two kinds of solution sets as also fitting the same pattern. A one-element solution set fits in that it has a particular solution, and the unrestricted combination part is a trivial sum (that is, instead of being a combination of two vectors, as above, or a combination of one vector, it is a combination of no vectors). A zero-element solution set fits the pattern since there is no particular solution, and so the set of sums of that form is empty.

We will show that the examples from the prior subsection are representative, in that the description pattern discussed above holds for every solution set.

Theorem 3.1
For any linear system there are vectors $\vec{\beta}_1$, ..., $\vec{\beta}_k$ such that the solution set can be described as

$$\{\, \vec{p} + c_1 \vec{\beta}_1 + \cdots + c_k \vec{\beta}_k \mid c_1, \ldots, c_k \in \mathbb{R} \,\}$$

where $\vec{p}$ is any particular solution, and where the system has $k$ free variables.

This description has two parts, the particular solution $\vec{p}$ and also the unrestricted linear combination of the $\vec{\beta}$'s. We shall prove the theorem in two corresponding parts, with two lemmas.
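Before turning to the lemmas, a numerical spot-check may help fix the pattern in mind. This hedged sketch (NumPy is an outside tool; the system and vectors are the ones from Example 2.13, as written above) verifies that the particular solution plus any combination of the two vectors satisfies the system.

```python
# Hedged spot-check of "general = particular + homogeneous" on Example 2.13.
import numpy as np

A = np.array([[2.0, 1.0, 0.0, -1.0, 0.0],   # 2x + y - w = 4
              [0.0, 1.0, 0.0, 1.0, 1.0],    # y + w + u = 4
              [1.0, 0.0, -1.0, 2.0, 0.0]])  # x - z + 2w = 0
b = np.array([4.0, 4.0, 0.0])

p = np.array([0.0, 4.0, 0.0, 0.0, 0.0])    # particular solution
b1 = np.array([1.0, -1.0, 3.0, 1.0, 0.0])  # vector of w coefficients
b2 = np.array([0.5, -1.0, 0.5, 0.0, 1.0])  # vector of u coefficients

for w, u in [(0, 0), (1, 2), (-3, 0.5)]:
    s = p + w * b1 + u * b2
    assert np.allclose(A @ s, b)  # every such combination is a solution
print("all sampled combinations solve the system")
```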

Homogeneous Systems

We will focus first on the unrestricted combination part. To do that, we consider systems that have the vector of zeroes as one of the particular solutions, so that $\{\vec{p} + c_1 \vec{\beta}_1 + \cdots + c_k \vec{\beta}_k \mid c_1, \ldots, c_k \in \mathbb{R}\}$ can be shortened to $\{c_1 \vec{\beta}_1 + \cdots + c_k \vec{\beta}_k \mid c_1, \ldots, c_k \in \mathbb{R}\}$.

Definition 3.2

A linear equation is homogeneous if it has a constant of zero, that is, if it can be put in the form $a_1 x_1 + a_2 x_2 + \cdots + a_n x_n = 0$.

(These are "homogeneous" because all of the terms involve the same power of their variable, the first power, including a "$0x_0$" that we can imagine is on the right side.)

Example 3.3

With any linear system like

$$\begin{array}{rl} 3x + 4y &= 3 \\ 2x - y &= 1 \end{array}$$

we associate a system of homogeneous equations by setting the right side to zeros.

$$\begin{array}{rl} 3x + 4y &= 0 \\ 2x - y &= 0 \end{array}$$

Our interest in the homogeneous system associated with a linear system can be understood by comparing the reduction of the system

$$\begin{array}{rl} 3x + 4y &= 3 \\ 2x - y &= 1 \end{array} \quad \xrightarrow{-(2/3)\rho_1 + \rho_2} \quad \begin{array}{rl} 3x + 4y &= 3 \\ -(11/3)y &= -1 \end{array}$$

with the reduction of the associated homogeneous system.

$$\begin{array}{rl} 3x + 4y &= 0 \\ 2x - y &= 0 \end{array} \quad \xrightarrow{-(2/3)\rho_1 + \rho_2} \quad \begin{array}{rl} 3x + 4y &= 0 \\ -(11/3)y &= 0 \end{array}$$

Obviously the two reductions go in the same way. We can study how linear systems are reduced by instead studying how the associated homogeneous systems are reduced.

Studying the associated homogeneous system has a great advantage over studying the original system. Nonhomogeneous systems can be inconsistent. But a homogeneous system must be consistent since there is always at least one solution, the vector of zeros.

Definition 3.4

A column or row vector of all zeros is a zero vector, denoted $\vec{0}$.

There are many different zero vectors, e.g., the one-tall zero vector, the two-tall zero vector, etc. Nonetheless, people often refer to "the" zero vector, expecting that the size of the one being discussed will be clear from the context.

Example 3.5

Some homogeneous systems have the zero vector as their only solution.

$$\begin{array}{rl} 3x + 2y + z &= 0 \\ 6x + 4y &= 0 \\ y + z &= 0 \end{array} \quad \xrightarrow{-2\rho_1 + \rho_2} \quad \begin{array}{rl} 3x + 2y + z &= 0 \\ -2z &= 0 \\ y + z &= 0 \end{array} \quad \xrightarrow{\rho_2 \leftrightarrow \rho_3} \quad \begin{array}{rl} 3x + 2y + z &= 0 \\ y + z &= 0 \\ -2z &= 0 \end{array}$$

Back-substitution gives $z = 0$, $y = 0$, and $x = 0$; the solution set is $\{\vec{0}\}$.

Example 3.6

Some homogeneous systems have many solutions. One example is the Chemistry problem from the first page of this book.

$$\begin{array}{rl} 7x - 7z &= 0 \\ 8x + y - 5z - 2w &= 0 \\ y - 3z &= 0 \\ 3y - 6z - w &= 0 \end{array}$$

The solution set:

$$\left\{ \begin{pmatrix} 1/3 \\ 1 \\ 1/3 \\ 1 \end{pmatrix} w \,\middle|\, w \in \mathbb{R} \right\}$$

has many vectors besides the zero vector (if we interpret $w$ as a number of molecules then solutions make sense only when $w$ is a nonnegative multiple of $3$).
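For a homogeneous system the unrestricted-combination part is a nullspace, which a computer algebra system can compute directly. A hedged sketch with SymPy (an outside tool), using the coefficient matrix written above:

```python
# Hedged sketch: the chemistry solution set as a nullspace (SymPy, outside tool).
from sympy import Matrix

A = Matrix([[7, 0, -7,  0],   # C: 7x = 7z
            [8, 1, -5, -2],   # H: 8x + y = 5z + 2w
            [0, 1, -3,  0],   # N: y = 3z
            [0, 3, -6, -1]])  # O: 3y = 6z + w

print(A.nullspace())
# [Matrix([[1/3], [1], [1/3], [1]])]: every solution is a multiple of this
# vector, matching the set above; scaling by 3 gives the integer solution
# (x, y, z, w) = (1, 3, 1, 3).
```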

We now have the terminology to prove the two parts of Theorem 3.1. The first lemma deals with unrestricted combinations.

Lemma 3.7

For any homogeneous linear system there exist vectors $\vec{\beta}_1$, ..., $\vec{\beta}_k$ such that the solution set of the system is

$$\{\, c_1 \vec{\beta}_1 + \cdots + c_k \vec{\beta}_k \mid c_1, \ldots, c_k \in \mathbb{R} \,\}$$

where $k$ is the number of free variables in an echelon form version of the system.

Before the proof, we will recall the back substitution calculations that were done in the prior subsection.

Imagine that we have brought a system to this echelon form.

$$\begin{array}{rl} x + 2y - z + 2w &= 0 \\ -3y + z &= 0 \\ -w &= 0 \end{array}$$

We next perform back-substitution to express each variable in terms of the free variable $z$. Working from the bottom up, we get first that $w$ is $0 \cdot z$, next that $y$ is $(1/3) \cdot z$, and then substituting those two into the top equation $x + 2((1/3)z) - z + 2(0) = 0$ gives $x = (1/3) \cdot z$. So, back substitution gives a parametrization of the solution set by starting at the bottom equation and using the free variables as the parameters to work row-by-row to the top. The proof below follows this pattern.

Comment: That is, this proof just does a verification of the bookkeeping in back substitution to show that we haven't overlooked any obscure cases where this procedure fails, say, by leading to a division by zero. So this argument, while quite detailed, doesn't give us any new insights. Nevertheless, we have written it out for two reasons. The first reason is that we need the result— the computational procedure that we employ must be verified to work as promised. The second reason is that the row-by-row nature of back substitution leads to a proof that uses the technique of mathematical induction.[1] This is an important, and non-obvious, proof technique that we shall use a number of times in this book. Doing an induction argument here gives us a chance to see one in a setting where the proof material is easy to follow, and so the technique can be studied. Readers who are unfamiliar with induction arguments should be sure to master this one and the ones later in this chapter before going on to the second chapter.

Proof

First use Gauss' method to reduce the homogeneous system to echelon form. We will show that each leading variable can be expressed in terms of free variables. That will finish the argument because then we can use those free variables as the parameters. That is, the