# Linear Algebra/Exploration

*This subsection is optional. It briefly describes how an investigator might come to a good general definition, which is given in the next subsection.*

The three cases above don't show an evident pattern to use for the general n×n formula. We may spot that the 1×1 term has one letter, that the 2×2 terms each have two letters, and that the 3×3 terms each have three letters. We may also observe that in each of those terms there is a letter from each row and from each column of the matrix; in a 3×3 term, for example, the three letters come one from each row and one from each column. But these observations perhaps seem more puzzling than enlightening. For instance, we might wonder why some of the terms are added while others are subtracted.

A good problem-solving strategy is to see what properties a solution must have and then search for something with those properties. So we shall start by asking what properties we require of the formulas.

At this point, our primary way to decide whether a matrix is singular is to do Gaussian reduction and then check whether the diagonal of the resulting echelon form matrix has any zeroes (that is, to check whether the product down the diagonal is zero). So we may expect that the proof that a formula determines singularity will involve applying Gauss' method to the matrix, to show that in the end the product down the diagonal is zero if and only if the determinant formula gives zero. This suggests our initial plan: we will look for a family of functions with the property of being unaffected by row operations and with the property that the determinant of an echelon form matrix is the product of its diagonal entries. Under this plan, a proof that the functions determine singularity would go, "Where $T \rightarrow \cdots \rightarrow \hat{T}$ is the Gaussian reduction, the determinant of $T$ equals the determinant of $\hat{T}$ (because the determinant is unchanged by row operations), which is the product down the diagonal, which is zero if and only if the matrix is singular". In the rest of this subsection we will test this plan on the 2×2 and 3×3 determinants that we know. We will end up modifying the "unaffected by row operations" part, but not by much.
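The plan just described can be sketched in code. This is only an illustrative sketch (the function name and the test matrices below are not from the text): reduce to echelon form with Gauss' method, then take the product down the diagonal, which is zero exactly when the matrix is singular.

```python
# Illustrative sketch of the singularity test: Gaussian reduction, then
# the product down the diagonal of the resulting echelon form matrix.

def product_down_diagonal_after_reduction(matrix):
    """Reduce `matrix` to echelon form and return the product down the diagonal."""
    m = [row[:] for row in matrix]          # work on a copy
    n = len(m)
    for col in range(n):
        # find a row at or below `col` with a nonzero entry in this column
        # (a row swap changes only the sign of the product, not its zeroness)
        pivot = next((r for r in range(col, n) if m[r][col] != 0), None)
        if pivot is None:
            return 0                         # no pivot in this column: singular
        m[col], m[pivot] = m[pivot], m[col]
        for r in range(col + 1, n):
            factor = m[r][col] / m[col][col]
            m[r] = [m[r][c] - factor * m[col][c] for c in range(n)]
    prod = 1
    for i in range(n):
        prod *= m[i][i]
    return prod

print(product_down_diagonal_after_reduction([[1, 2], [2, 4]]))  # 0: singular
print(product_down_diagonal_after_reduction([[1, 2], [3, 4]]))  # nonzero: nonsingular
```

Because a row swap can flip the sign of the product, this sketch is reliable only for the zero-versus-nonzero question, which is all the plan asks of it.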

The first step in checking the plan is to test whether the 2×2 and 3×3 formulas are unaffected by the row operation of pivoting: if

$$T \xrightarrow{k\rho_i + \rho_j} \hat{T}$$

then is $\det(\hat{T}) = \det(T)$? This check of the 2×2 determinant after the $k\rho_1 + \rho_2$ operation

$$\begin{vmatrix} a & b \\ ka+c & kb+d \end{vmatrix} = a(kb+d) - b(ka+c) = ad - bc$$

shows that it is indeed unchanged, and the other 2×2 pivot gives the same result. The 3×3 pivot $k\rho_2 + \rho_3$ leaves the determinant unchanged

$$\begin{vmatrix} a & b & c \\ d & e & f \\ kd+g & ke+h & kf+i \end{vmatrix} = \begin{vmatrix} a & b & c \\ d & e & f \\ g & h & i \end{vmatrix}$$

as do the other 3×3 pivot operations.

So there seems to be promise in the plan. Of course, perhaps the determinant formula for larger matrices is affected by pivoting. We are exploring a possibility here and we do not yet have all the facts. Nonetheless, so far, so good.
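The 2×2 pivot computation can be spot-checked numerically. This sketch uses the 2×2 formula $ad - bc$ with arbitrary illustrative entries and scalar:

```python
# Check that the pivot operation k*rho1 + rho2 leaves the 2x2 determinant
# unchanged, using the formula ad - bc.
def det2(a, b, c, d):
    return a*d - b*c

a, b, c, d, k = 3, 1, 4, 1, 5          # arbitrary entries and scalar
before = det2(a, b, c, d)
after = det2(a, b, k*a + c, k*b + d)   # second row after k*rho1 + rho2
print(before, after)                    # -1 -1: unchanged
```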

The next step is to compare $\det(\hat{T})$ with $\det(T)$ for the operation

$$T \xrightarrow{\rho_i \leftrightarrow \rho_j} \hat{T}$$

of swapping two rows. The 2×2 row swap

$$\begin{vmatrix} c & d \\ a & b \end{vmatrix} = cb - ad$$

does not yield $ad - bc$; the sign has changed. This $\rho_1 \leftrightarrow \rho_3$ swap inside of a 3×3 matrix

$$\begin{vmatrix} g & h & i \\ d & e & f \\ a & b & c \end{vmatrix} = gec + hfa + idb - aei - bfg - cdh$$

also does not give the same determinant as before the swap; again there is a sign change. Trying a different 3×3 swap, $\rho_1 \leftrightarrow \rho_2$,

$$\begin{vmatrix} d & e & f \\ a & b & c \\ g & h & i \end{vmatrix} = dbi + ecg + fah - gbf - hcd - iae$$

also gives a change of sign.

Thus, row swaps appear to change the sign of a determinant. This modifies our plan, but does not wreck it. We intend to decide nonsingularity by considering only whether the determinant is zero, not by considering its sign. Therefore, instead of expecting determinants to be entirely unaffected by row operations, we will look for them to change sign on a swap.
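The sign change on a 2×2 row swap can be checked numerically with the same formula (entries are illustrative):

```python
# A row swap negates the 2x2 determinant ad - bc.
def det2(a, b, c, d):
    return a*d - b*c

a, b, c, d = 3, 1, 4, 1
print(det2(a, b, c, d), det2(c, d, a, b))  # -1 1: the sign flips
```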

To finish, we compare $\det(\hat{T})$ to $\det(T)$ for the operation

$$T \xrightarrow{k\rho_i} \hat{T}$$

of multiplying a row by a scalar $k \neq 0$. One of the 2×2 cases is

$$\begin{vmatrix} a & b \\ kc & kd \end{vmatrix} = a(kd) - b(kc) = k \cdot (ad - bc)$$

and the other case has the same result. Here is one 3×3 case

$$\begin{vmatrix} a & b & c \\ d & e & f \\ kg & kh & ki \end{vmatrix} = ae(ki) + bf(kg) + cd(kh) - (kg)ec - (kh)fa - (ki)db = k \cdot (aei + bfg + cdh - gec - hfa - idb)$$

and the other two are similar. These lead us to suspect that multiplying a row by $k$ multiplies the determinant by $k$. This fits with our modified plan because we are asking only that the zeroness of the determinant be unchanged, and we are not focusing on the determinant's sign or magnitude.
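The rescaling behavior can likewise be spot-checked with the 2×2 formula (entries and scalar chosen arbitrarily):

```python
# Multiplying one row by k multiplies the 2x2 determinant ad - bc by k.
def det2(a, b, c, d):
    return a*d - b*c

a, b, c, d, k = 3, 1, 4, 1, 7
print(det2(a, b, k*c, k*d), k * det2(a, b, c, d))  # -7 -7: equal
```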

In summary, to develop the scheme for the formulas to compute determinants, we look for determinant functions that remain unchanged under the pivoting operation, that change sign on a row swap, and that rescale on the rescaling of a row. In the next two subsections we will find that for each matrix size $n$ such a function exists and is unique.

For the next subsection, note that, as above, scalars come out of each row without affecting other rows. For instance, in this equality

$$\begin{vmatrix} 3 & 3 & 9 \\ 2 & 1 & 1 \\ 5 & 10 & -5 \end{vmatrix} = 3 \cdot \begin{vmatrix} 1 & 1 & 3 \\ 2 & 1 & 1 \\ 5 & 10 & -5 \end{vmatrix}$$

the 3 isn't factored out of all three rows, only out of the top row. The determinant acts on each row of a matrix independently of the other rows. When we want to use this property of determinants, we shall write the determinant as a function of the rows: "$\det(\vec{\rho}_1, \vec{\rho}_2, \ldots, \vec{\rho}_n)$", instead of as "$\det(T)$" or as a function of the entries "$\det(t_{1,1}, \ldots, t_{n,n})$". The definition of the determinant that starts the next subsection is written in this way.
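This row-by-row behavior can be illustrated numerically with the 3×3 formula; in this sketch (matrix entries are illustrative) a factor of 3 comes out of the top row only:

```python
# The 3x3 determinant formula: aei + bfg + cdh - gec - hfa - idb.
def det3(m):
    (a, b, c), (d, e, f), (g, h, i) = m
    return a*e*i + b*f*g + c*d*h - g*e*c - h*f*a - i*d*b

m  = [[3, 3, 9], [2, 1, 1], [5, 10, -5]]
m2 = [[1, 1, 3], [2, 1, 1], [5, 10, -5]]   # same matrix with top row divided by 3
print(det3(m), 3 * det3(m2))  # 135 135: the 3 factors out of the top row alone
```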

## Exercises

*This exercise is recommended for all readers.*

- Problem 1

Evaluate the determinant of each.

- Problem 2

Evaluate the determinant of each.

*This exercise is recommended for all readers.*

- Problem 3

Verify that the determinant of an upper-triangular matrix is the product down the diagonal.

Do lower-triangular matrices work the same way?

*This exercise is recommended for all readers.*

- Problem 4

Use the determinant to decide if each is singular or nonsingular.

- Problem 5

Singular or nonsingular? Use the determinant to decide.

*This exercise is recommended for all readers.*

- Problem 6

Each pair of matrices differs by one row operation. Use this operation to compare the determinant of the first matrix in each pair with the determinant of the second.

- Problem 7

Show this.

*This exercise is recommended for all readers.*

- Problem 8

Which real numbers make this matrix singular?

- Problem 9

Do the Gaussian reduction to check the formula for 2×2 matrices stated in the preamble to this section:

$$\begin{pmatrix} a & b \\ c & d \end{pmatrix} \text{ is nonsingular iff } ad - bc \neq 0$$

- Problem 10

Show that the equation of a line in $\mathbb{R}^2$ through $(x_1, y_1)$ and $(x_2, y_2)$ is expressed by this determinant.

$$\begin{vmatrix} x & y & 1 \\ x_1 & y_1 & 1 \\ x_2 & y_2 & 1 \end{vmatrix} = 0 \qquad x_1 \neq x_2$$

*This exercise is recommended for all readers.*

- Problem 11

Many people know this mnemonic for the determinant of a 3×3 matrix: first repeat the first two columns and then sum the products on the forward diagonals and subtract the products on the backward diagonals. That is, first write

and then calculate this.

- Check that this agrees with the formula given in the preamble to this section.
- Does it extend to other-sized determinants?
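The mnemonic can be sketched in code (this scheme is often called the rule of Sarrus; the function names and test matrix are illustrative) and compared against a cofactor expansion of the same 3×3 determinant:

```python
# The mnemonic: append copies of the first two columns, sum the three
# forward-diagonal products, subtract the three backward-diagonal products.
def sarrus(m):
    (a, b, c), (d, e, f), (g, h, i) = m
    forward  = a*e*i + b*f*g + c*d*h
    backward = g*e*c + h*f*a + i*d*b
    return forward - backward

# An independent computation: cofactor expansion along the first row.
def det3_cofactor(m):
    (a, b, c), (d, e, f), (g, h, i) = m
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

m = [[2, 1, 3], [4, 0, 1], [5, 2, 6]]
print(sarrus(m), det3_cofactor(m))  # 1 1: the two computations agree
```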

- Problem 12

The **cross product** of the vectors

$$\vec{x} = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} \qquad \vec{y} = \begin{pmatrix} y_1 \\ y_2 \\ y_3 \end{pmatrix}$$

is the vector computed as this determinant.

$$\vec{x} \times \vec{y} = \det\begin{pmatrix} \vec{e}_1 & \vec{e}_2 & \vec{e}_3 \\ x_1 & x_2 & x_3 \\ y_1 & y_2 & y_3 \end{pmatrix}$$

Note that the first row is composed of vectors, the vectors from the standard basis for $\mathbb{R}^3$. Show that the cross product of two vectors is perpendicular to each vector.
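A numeric spot-check of the perpendicularity claim (illustrative vectors; this is not a substitute for the requested proof):

```python
# Expanding the determinant along its first row gives the usual
# component formula for the cross product.
def cross(u, v):
    return [u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0]]

def dot(u, v):
    return sum(a*b for a, b in zip(u, v))

u, v = [1, 2, 3], [4, 5, 6]
w = cross(u, v)
print(dot(u, w), dot(v, w))  # 0 0: perpendicular to both vectors
```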

- Problem 13

Prove that each statement holds for 2×2 matrices.

- The determinant of a product is the product of the determinants: $\det(ST) = \det(S) \cdot \det(T)$.
- If $T$ is invertible then the determinant of the inverse is the inverse of the determinant: $\det(T^{-1}) = \det(T)^{-1}$.

Matrices $T$ and $T'$ are **similar** if there is a nonsingular matrix $P$ such that $T' = PTP^{-1}$. (This definition is in Chapter Five.) Show that similar matrices have the same determinant.

*This exercise is recommended for all readers.*

- Problem 14

Prove that the area of this region in the plane

is equal to the value of this determinant.

Compare with this.

- Problem 15

Prove that for 2×2 matrices, the determinant of a matrix equals the determinant of its transpose. Does that also hold for 3×3 matrices?

*This exercise is recommended for all readers.*

- Problem 16

Is the determinant function linear? That is, is $\det(x \cdot T + y \cdot S) = x \cdot \det(T) + y \cdot \det(S)$?

- Problem 17

Show that if $T$ is 3×3 then $\det(c \cdot T) = c^3 \cdot \det(T)$ for any scalar $c$.

- Problem 18

Which real numbers $\theta$ make

$$\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$$

singular? Explain geometrically.

- ? Problem 19

If a third order determinant has elements $1$, $2$, ..., $9$, what is the maximum value it may have? (Haggett & Saunders 1955)

## References

- Haggett, Vern (proposer); Saunders, F. W. (solver) (1955), "Elementary problem 1135", *American Mathematical Monthly*, American Mathematical Society, **62** (5): 257.