Linear Algebra/Determinants as Size Functions/Solutions

Solutions

Problem 1

Find the volume of the region formed.

1. ${\displaystyle \langle {\begin{pmatrix}1\\3\end{pmatrix}},{\begin{pmatrix}-1\\4\end{pmatrix}}\rangle }$
2. ${\displaystyle \langle {\begin{pmatrix}2\\1\\0\end{pmatrix}},{\begin{pmatrix}3\\-2\\4\end{pmatrix}},{\begin{pmatrix}8\\-3\\8\end{pmatrix}}\rangle }$
3. ${\displaystyle \langle {\begin{pmatrix}1\\2\\0\\1\end{pmatrix}},{\begin{pmatrix}2\\2\\2\\2\end{pmatrix}},{\begin{pmatrix}-1\\3\\0\\5\end{pmatrix}},{\begin{pmatrix}0\\1\\0\\7\end{pmatrix}}\rangle }$

For each, find the determinant and take the absolute value.

1. ${\displaystyle 7}$
2. ${\displaystyle 0}$
3. ${\displaystyle 58}$
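These answers can be cross-checked numerically; here is a quick sketch with NumPy (the library use is an editorial addition, since the text computes by hand):

```python
import numpy as np

# Columns are the vectors forming each box; the volume is the absolute
# value of the determinant.
box1 = np.array([[1, -1],
                 [3,  4]])            # the 2x2 case
box2 = np.array([[2,  3,  8],
                 [1, -2, -3],
                 [0,  4,  8]])        # the 3x3 case
box3 = np.array([[1, 2, -1, 0],
                 [2, 2,  3, 1],
                 [0, 2,  0, 0],
                 [1, 2,  5, 7]])      # the 4x4 case

volumes = [abs(round(np.linalg.det(b))) for b in (box1, box2, box3)]
print(volumes)    # [7, 0, 58]
```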
This exercise is recommended for all readers.
Problem 2

Is

${\displaystyle {\begin{pmatrix}4\\1\\2\end{pmatrix}}}$

inside of the box formed by these three?

${\displaystyle {\begin{pmatrix}3\\3\\1\end{pmatrix}}\quad {\begin{pmatrix}2\\6\\1\end{pmatrix}}\quad {\begin{pmatrix}1\\0\\5\end{pmatrix}}}$

Solving

${\displaystyle c_{1}{\begin{pmatrix}3\\3\\1\end{pmatrix}}+c_{2}{\begin{pmatrix}2\\6\\1\end{pmatrix}}+c_{3}{\begin{pmatrix}1\\0\\5\end{pmatrix}}={\begin{pmatrix}4\\1\\2\end{pmatrix}}}$

gives the unique solution ${\displaystyle c_{3}=11/57}$ , ${\displaystyle c_{2}=-40/57}$  and ${\displaystyle c_{1}=99/57}$ . Because ${\displaystyle c_{1}>1}$ , the vector is not in the box.
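As a numerical cross-check, the system can be solved with NumPy (again an editorial addition; the text solves it by hand):

```python
import numpy as np

# Columns of E are the vectors forming the box; solve E c = v for the coefficients.
E = np.array([[3.0, 2.0, 1.0],
              [3.0, 6.0, 0.0],
              [1.0, 1.0, 5.0]])
v = np.array([4.0, 1.0, 2.0])
c = np.linalg.solve(E, v)
print(c)                                   # [99/57, -40/57, 11/57]

# The vector lies in the box only if every coefficient is in [0, 1].
inside = bool(np.all((0 <= c) & (c <= 1)))
print(inside)                              # False, since c1 = 99/57 > 1
```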

This exercise is recommended for all readers.
Problem 3

Find the volume of this region.

Move the parallelogram to start at the origin, so that it becomes the box formed by


${\displaystyle \langle {\begin{pmatrix}3\\0\end{pmatrix}},{\begin{pmatrix}2\\1\end{pmatrix}}\rangle }$

and now the absolute value of this determinant is easily computed as ${\displaystyle 3}$ .

${\displaystyle {\begin{vmatrix}3&2\\0&1\end{vmatrix}}=3}$
This exercise is recommended for all readers.
Problem 4

Suppose that ${\displaystyle \left|A\right|=3}$ . By what factor do these change volumes?

1. ${\displaystyle A}$
2. ${\displaystyle A^{2}}$
3. ${\displaystyle A^{-2}}$
1. ${\displaystyle 3}$
2. ${\displaystyle 9}$
3. ${\displaystyle 1/9}$
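A numerical sketch of these three factors, using an assumed concrete matrix of determinant ${\displaystyle 3}$ (the choice of matrix is the editor's, not the text's):

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [0.0, 1.0]])            # an assumed example with det A = 3
factors = [abs(np.linalg.det(M))
           for M in (A, A @ A, np.linalg.inv(A @ A))]
print(factors)                        # approximately [3, 9, 1/9]
```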
This exercise is recommended for all readers.
Problem 5

By what factor does each transformation change the size of boxes?

1. ${\displaystyle {\begin{pmatrix}x\\y\end{pmatrix}}\mapsto {\begin{pmatrix}2x\\3y\end{pmatrix}}}$
2. ${\displaystyle {\begin{pmatrix}x\\y\end{pmatrix}}\mapsto {\begin{pmatrix}3x-y\\-2x+y\end{pmatrix}}}$
3. ${\displaystyle {\begin{pmatrix}x\\y\\z\end{pmatrix}}\mapsto {\begin{pmatrix}x-y\\x+y+z\\y-2z\end{pmatrix}}}$

Express each transformation with respect to the standard bases and find the determinant.

1. ${\displaystyle 6}$
2. ${\displaystyle 1}$
3. ${\displaystyle -5}$
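The three standard matrices and their determinants can be checked numerically (NumPy is an editorial addition):

```python
import numpy as np

# Standard matrix of each map: the columns are the images of the
# standard basis vectors.
T1 = np.array([[2, 0],
               [0, 3]])
T2 = np.array([[ 3, -1],
               [-2,  1]])
T3 = np.array([[1, -1,  0],
               [1,  1,  1],
               [0,  1, -2]])
dets = [round(np.linalg.det(T)) for T in (T1, T2, T3)]
print(dets)    # [6, 1, -5]
```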
Problem 6

What is the area of the image of the rectangle ${\displaystyle [2..4]\times [2..5]}$  under the action of this matrix?

${\displaystyle {\begin{pmatrix}2&3\\4&-1\end{pmatrix}}}$

The starting area is ${\displaystyle 6}$  and the determinant of the matrix is ${\displaystyle -14}$ , so the map changes sizes by a factor of ${\displaystyle 14}$ . Thus the area of the image is ${\displaystyle 6\cdot 14=84}$ .
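The same area can be recovered by mapping the rectangle's corners and applying the shoelace formula (an alternative route, sketched here with NumPy as an editorial addition):

```python
import numpy as np

M = np.array([[2, 3],
              [4, -1]])
# Corners of the rectangle [2..4]x[2..5], listed in order around the boundary.
corners = np.array([[2, 2], [4, 2], [4, 5], [2, 5]], dtype=float).T
image = M @ corners            # corners of the image parallelogram

# Shoelace formula for the area of a polygon given its vertices in order.
x, y = image
area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
print(area)                    # 84.0, which is 6 times |det M| = 14
```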

Problem 7

If ${\displaystyle t:\mathbb {R} ^{3}\to \mathbb {R} ^{3}}$  changes volumes by a factor of ${\displaystyle 7}$  and ${\displaystyle s:\mathbb {R} ^{3}\to \mathbb {R} ^{3}}$  changes volumes by a factor of ${\displaystyle 3/2}$  then by what factor will their composition change volumes?

By a factor of ${\displaystyle 21/2}$ .

Problem 8

In what way does the definition of a box differ from the definition of a span?

For a box we take a sequence of vectors (as described in the remark, the order in which the vectors are taken matters), while for a span we take a set of vectors. Also, for a box subset of ${\displaystyle \mathbb {R} ^{n}}$  there must be ${\displaystyle n}$  vectors; of course for a span there can be any number of vectors. Finally, for a box the coefficients ${\displaystyle t_{1}}$ , ..., ${\displaystyle t_{n}}$  are restricted to the interval ${\displaystyle [0..1]}$ , while for a span the coefficients are free to range over all of ${\displaystyle \mathbb {R} }$ .

This exercise is recommended for all readers.
Problem 9

Why doesn't this picture contradict Theorem 1.5?

(Picture: a region of area ${\displaystyle 2}$  is carried by the map represented by ${\displaystyle {\begin{pmatrix}2&1\\0&1\end{pmatrix}}}$ , whose determinant is ${\displaystyle 2}$ , to a region of area ${\displaystyle 5}$ .)

That picture is drawn to mislead. The picture on the left is not the box formed by two vectors. If we slide it to the origin then it becomes the box formed by this sequence.

${\displaystyle \langle {\begin{pmatrix}0\\1\end{pmatrix}},{\begin{pmatrix}2\\0\end{pmatrix}}\rangle }$

Then the image under the action of the matrix is the box formed by this sequence.

${\displaystyle \langle {\begin{pmatrix}1\\1\end{pmatrix}},{\begin{pmatrix}4\\0\end{pmatrix}}\rangle }$

which has an area of ${\displaystyle 4}$ .
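A numerical sketch of this resolution (NumPy is an editorial addition):

```python
import numpy as np

M = np.array([[2, 1],
              [0, 1]])
# Columns are the box <(0,1), (2,0)> obtained by sliding the region to the origin.
box = np.array([[0.0, 2.0],
                [1.0, 0.0]])
image = M @ box       # columns become (1,1) and (4,0)
print(abs(np.linalg.det(box)), abs(np.linalg.det(image)))   # 2.0 and 4.0
```

The image area is ${\displaystyle 4=2\cdot 2}$, in accord with the theorem.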

This exercise is recommended for all readers.
Problem 10

Does ${\displaystyle \left|TS\right|=\left|ST\right|}$ ? ${\displaystyle \left|T(SP)\right|=\left|(TS)P\right|}$ ?

Yes to both. For instance, the first is ${\displaystyle \left|TS\right|=\left|T\right|\cdot \left|S\right|=\left|S\right|\cdot \left|T\right|=\left|ST\right|}$ .

Problem 11
1. Suppose that ${\displaystyle \left|A\right|=3}$  and that ${\displaystyle \left|B\right|=2}$ . Find ${\displaystyle \left|A^{2}\cdot {{B}^{\rm {trans}}}\cdot B^{-2}\cdot {{A}^{\rm {trans}}}\right|}$ .
2. Assume that ${\displaystyle \left|A\right|=0}$ . Prove that ${\displaystyle \left|6A^{3}+5A^{2}+2A\right|=0}$ .
1. If it is defined then it is ${\displaystyle (3^{2})\cdot (2)\cdot (2^{-2})\cdot (3)=27/2}$ .
2. ${\displaystyle \left|6A^{3}+5A^{2}+2A\right|=\left|A\right|\cdot \left|6A^{2}+5A+2I\right|}$ .
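Both parts can be illustrated numerically with assumed concrete matrices (the particular matrices are the editor's choices, not the text's):

```python
import numpy as np

# Part 1: assumed matrices with det A = 3 and det B = 2.
A = np.diag([3.0, 1.0])
B = np.diag([2.0, 1.0])
val = np.linalg.det(A @ A @ B.T @ np.linalg.inv(B @ B) @ A.T)
print(val)                               # (3^2)(2)(2^-2)(3) = 27/2

# Part 2: an assumed singular matrix.  Since 6A^3 + 5A^2 + 2A factors
# as A(6A^2 + 5A + 2I), it is singular whenever A is.
C = np.array([[1.0, 2.0],
              [2.0, 4.0]])               # det C = 0
P = 6 * np.linalg.matrix_power(C, 3) + 5 * (C @ C) + 2 * C
print(np.linalg.det(P))                  # 0, up to rounding
```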
This exercise is recommended for all readers.
Problem 12

Let ${\displaystyle T}$  be the matrix representing (with respect to the standard bases) the map that rotates plane vectors counterclockwise thru ${\displaystyle \theta }$  radians. By what factor does ${\displaystyle T}$  change sizes?

${\displaystyle {\begin{vmatrix}\cos \theta &-\sin \theta \\\sin \theta &\cos \theta \end{vmatrix}}=1}$

This exercise is recommended for all readers.
Problem 13

Must a transformation ${\displaystyle t:\mathbb {R} ^{2}\to \mathbb {R} ^{2}}$  that preserves areas also preserve lengths?

No, for instance the determinant of

${\displaystyle T={\begin{pmatrix}2&0\\0&1/2\end{pmatrix}}}$

is ${\displaystyle 1}$  so it preserves areas, but the vector ${\displaystyle T{\vec {e}}_{1}}$  has length ${\displaystyle 2}$ .
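A sketch of this counterexample in NumPy (an editorial addition), using the same matrix:

```python
import numpy as np

T = np.array([[2.0, 0.0],
              [0.0, 0.5]])
print(np.linalg.det(T))              # 1.0: T preserves areas
e1 = np.array([1.0, 0.0])
print(np.linalg.norm(T @ e1))        # 2.0: but T stretches e1 to length 2
```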

This exercise is recommended for all readers.
Problem 14

What is the volume of a parallelepiped in ${\displaystyle \mathbb {R} ^{3}}$  bounded by a linearly dependent set?

It is zero.

This exercise is recommended for all readers.
Problem 15

Find the area of the triangle in ${\displaystyle \mathbb {R} ^{3}}$  with endpoints ${\displaystyle (1,2,1)}$ , ${\displaystyle (3,-1,4)}$ , and ${\displaystyle (2,2,2)}$ . (Area, not volume. The triangle defines a plane— what is the area of the triangle in that plane?)

Two of the three sides of the triangle are formed by these vectors.

${\displaystyle {\begin{pmatrix}2\\2\\2\end{pmatrix}}-{\begin{pmatrix}1\\2\\1\end{pmatrix}}={\begin{pmatrix}1\\0\\1\end{pmatrix}}\qquad {\begin{pmatrix}3\\-1\\4\end{pmatrix}}-{\begin{pmatrix}1\\2\\1\end{pmatrix}}={\begin{pmatrix}2\\-3\\3\end{pmatrix}}}$

One way to find the area of this triangle is to produce a length-one vector orthogonal to these two. From these two relations

${\displaystyle {\begin{pmatrix}1\\0\\1\end{pmatrix}}\cdot {\begin{pmatrix}x\\y\\z\end{pmatrix}}=0\qquad {\begin{pmatrix}2\\-3\\3\end{pmatrix}}\cdot {\begin{pmatrix}x\\y\\z\end{pmatrix}}=0}$

we get a system

${\displaystyle {\begin{array}{*{3}{rc}r}x&&&+&z&=&0\\2x&-&3y&+&3z&=&0\end{array}}\;{\xrightarrow[{}]{-2\rho _{1}+\rho _{2}}}\;{\begin{array}{*{3}{rc}r}x&&&+&z&=&0\\&&-3y&+&z&=&0\end{array}}}$

with this solution set.

${\displaystyle \{{\begin{pmatrix}-1\\1/3\\1\end{pmatrix}}z\,{\big |}\,z\in \mathbb {R} \}}$

A solution of length one is this.

${\displaystyle {\frac {1}{\sqrt {19/9}}}{\begin{pmatrix}-1\\1/3\\1\end{pmatrix}}}$

Thus the area of the triangle is half the absolute value of this determinant.

${\displaystyle {\begin{vmatrix}1&2&-3/{\sqrt {19}}\\0&-3&1/{\sqrt {19}}\\1&3&3/{\sqrt {19}}\end{vmatrix}}=-19/{\sqrt {19}}=-{\sqrt {19}}}$

So the area is ${\displaystyle {\sqrt {19}}/2}$ .
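The area can also be cross-checked with a cross product, since the triangle is half of the parallelogram formed by its two sides (NumPy is an editorial addition):

```python
import numpy as np

u = np.array([1.0, 0.0, 1.0])    # one side of the triangle
v = np.array([2.0, -3.0, 3.0])   # another side
area = np.linalg.norm(np.cross(u, v)) / 2
print(area)                      # sqrt(19)/2, about 2.18
```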
This exercise is recommended for all readers.
Problem 16

An alternate proof of Theorem 1.5 uses the definition of determinant functions.

1. Note that the vectors forming ${\displaystyle S}$  make a linearly dependent set if and only if ${\displaystyle \left|S\right|=0}$ , and check that the result holds in this case.
2. For the ${\displaystyle \left|S\right|\neq 0}$  case, to show that ${\displaystyle \left|TS\right|/\left|S\right|=\left|T\right|}$  for all transformations, consider the function ${\displaystyle d:{\mathcal {M}}_{n\!\times \!n}\to \mathbb {R} }$  given by ${\displaystyle T\mapsto \left|TS\right|/\left|S\right|}$ . Show that ${\displaystyle d}$  has the first property of a determinant.
3. Show that ${\displaystyle d}$  has the remaining three properties of a determinant function.
4. Conclude that ${\displaystyle \left|TS\right|=\left|T\right|\cdot \left|S\right|}$ .
1. Because the image of a linearly dependent set is linearly dependent, if the vectors forming ${\displaystyle S}$  make a linearly dependent set, so that ${\displaystyle \left|S\right|=0}$ , then the vectors forming ${\displaystyle t(S)}$  make a linearly dependent set, so that ${\displaystyle \left|TS\right|=0}$ , and in this case the equation holds.
2. We must check that if ${\displaystyle T{\xrightarrow[{}]{k\rho _{i}+\rho _{j}}}{\hat {T}}}$  then ${\displaystyle d(T)=\left|TS\right|/\left|S\right|=\left|{\hat {T}}S\right|/\left|S\right|=d({\hat {T}})}$ . We can do this by checking that pivoting first and then multiplying to get ${\displaystyle {\hat {T}}S}$  gives the same result as multiplying first to get ${\displaystyle TS}$  and then pivoting (because the determinant ${\displaystyle \left|TS\right|}$  is unaffected by the pivot so we'll then have that ${\displaystyle \left|{\hat {T}}S\right|=\left|TS\right|}$  and hence that ${\displaystyle d({\hat {T}})=d(T)}$ ). This check runs: after adding ${\displaystyle k}$  times row ${\displaystyle i}$  of ${\displaystyle TS}$  to row ${\displaystyle j}$  of ${\displaystyle TS}$ , the ${\displaystyle j,p}$  entry is ${\displaystyle (kt_{i,1}+t_{j,1})s_{1,p}+\dots +(kt_{i,n}+t_{j,n})s_{n,p}}$ , which is the ${\displaystyle j,p}$  entry of ${\displaystyle {\hat {T}}S}$ .
3. For the second property, we need only check that swapping ${\displaystyle T{\xrightarrow[{}]{\rho _{i}\leftrightarrow \rho _{j}}}{\hat {T}}}$  and then multiplying to get ${\displaystyle {\hat {T}}S}$  gives the same result as multiplying ${\displaystyle T}$  by ${\displaystyle S}$  first and then swapping (because, as the determinant ${\displaystyle \left|TS\right|}$  changes sign on the row swap, we'll then have ${\displaystyle \left|{\hat {T}}S\right|=-\left|TS\right|}$ , and so ${\displaystyle d({\hat {T}})=-d(T)}$ ). This check runs just like the one for the first property. For the third property, we need only show that performing ${\displaystyle T{\xrightarrow[{}]{k\rho _{i}}}{\hat {T}}}$  and then computing ${\displaystyle {\hat {T}}S}$  gives the same result as first computing ${\displaystyle TS}$  and then performing the scalar multiplication (as the determinant ${\displaystyle \left|TS\right|}$  is rescaled by ${\displaystyle k}$ , we'll have ${\displaystyle \left|{\hat {T}}S\right|=k\left|TS\right|}$  and so ${\displaystyle d({\hat {T}})=k\,d(T)}$ ). Here too, the argument runs just as above. The fourth property, that if ${\displaystyle T}$  is ${\displaystyle I}$  then the result is ${\displaystyle 1}$ , is obvious.
4. Determinant functions are unique, so ${\displaystyle \left|TS\right|/\left|S\right|=d(T)=\left|T\right|}$ , and so ${\displaystyle \left|TS\right|=\left|T\right|\left|S\right|}$ .
Problem 17

Give a non-identity matrix with the property that ${\displaystyle {{A}^{\rm {trans}}}=A^{-1}}$ . Show that if ${\displaystyle {{A}^{\rm {trans}}}=A^{-1}}$  then ${\displaystyle \left|A\right|=\pm 1}$ . Does the converse hold?

Any permutation matrix has the property that the transpose of the matrix is its inverse.

For the implication, we know that ${\displaystyle \left|{{A}^{\rm {trans}}}\right|=\left|A\right|}$ . Then ${\displaystyle 1=\left|A\cdot A^{-1}\right|=\left|A\cdot {{A}^{\rm {trans}}}\right|=\left|A\right|\cdot \left|{{A}^{\rm {trans}}}\right|=\left|A\right|^{2}}$ .

The converse does not hold; here is an example.

${\displaystyle {\begin{pmatrix}3&1\\2&1\end{pmatrix}}}$
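Both the example and the counterexample can be verified numerically (NumPy is an editorial addition):

```python
import numpy as np

# A permutation matrix: its transpose equals its inverse, and its
# determinant is +1 or -1.
P = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])
orthogonal = bool(np.allclose(P.T, np.linalg.inv(P)))
det_P = round(np.linalg.det(P))
print(orthogonal, det_P)        # True 1

# The converse fails: determinant 1, yet transpose != inverse.
A = np.array([[3.0, 1.0],
              [2.0, 1.0]])
det_A = round(np.linalg.det(A))
converse = bool(np.allclose(A.T, np.linalg.inv(A)))
print(det_A, converse)          # 1 False
```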
Problem 18

The algebraic property of determinants that factoring a scalar out of a single row will multiply the determinant by that scalar shows that where ${\displaystyle H}$  is ${\displaystyle 3\!\times \!3}$ , the determinant of ${\displaystyle cH}$  is ${\displaystyle c^{3}}$  times the determinant of ${\displaystyle H}$ . Explain this geometrically, that is, using Theorem 1.5.

Where the sides of the box are ${\displaystyle c}$  times longer, the box has ${\displaystyle c^{3}}$  times as many cubic units of volume.

This exercise is recommended for all readers.
Problem 19

Matrices ${\displaystyle H}$  and ${\displaystyle G}$  are said to be similar if there is a nonsingular matrix ${\displaystyle P}$  such that ${\displaystyle H=P^{-1}GP}$  (we will study this relation in Chapter Five). Show that similar matrices have the same determinant.

If ${\displaystyle H=P^{-1}GP}$  then ${\displaystyle \left|H\right|=\left|P^{-1}\right|\left|G\right|\left|P\right|=\left|P^{-1}\right|\left|P\right|\left|G\right|=\left|P^{-1}P\right|\left|G\right|=\left|G\right|}$ .
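A quick numerical instance (the particular matrices are assumptions chosen for illustration):

```python
import numpy as np

G = np.array([[1.0, 2.0],
              [3.0, 4.0]])
P = np.array([[2.0, 1.0],
              [1.0, 1.0]])          # det P = 1, so P is nonsingular
H = np.linalg.inv(P) @ G @ P        # a matrix similar to G
print(np.linalg.det(G), np.linalg.det(H))   # both equal -2, up to rounding
```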

Problem 20

We usually represent vectors in ${\displaystyle \mathbb {R} ^{2}}$  with respect to the standard basis so vectors in the first quadrant have both coordinates positive.

${\displaystyle {\rm {Rep}}_{{\mathcal {E}}_{2}}({\vec {v}})={\begin{pmatrix}+3\\+2\end{pmatrix}}}$

Moving counterclockwise around the origin, we cycle thru four regions:

${\displaystyle \cdots \;\longrightarrow {\begin{pmatrix}+\\+\end{pmatrix}}\;\longrightarrow {\begin{pmatrix}-\\+\end{pmatrix}}\;\longrightarrow {\begin{pmatrix}-\\-\end{pmatrix}}\;\longrightarrow {\begin{pmatrix}+\\-\end{pmatrix}}\;\longrightarrow \cdots \,.}$

Using this basis

${\displaystyle B=\langle {\begin{pmatrix}0\\1\end{pmatrix}},{\begin{pmatrix}-1\\0\end{pmatrix}}\rangle }$

gives the same counterclockwise cycle. We say these two bases have the same orientation.

1. Why do they give the same cycle?
2. What other configurations of unit vectors on the axes give the same cycle?
3. Find the determinants of the matrices formed from those (ordered) bases.
4. What other counterclockwise cycles are possible, and what are the associated determinants?
5. What happens in ${\displaystyle \mathbb {R} ^{1}}$ ?
6. What happens in ${\displaystyle \mathbb {R} ^{3}}$ ?

A fascinating general-audience discussion of orientations is in (Gardner 1990).

1. The new basis is the old basis rotated by ${\displaystyle \pi /2}$ .
2. ${\displaystyle \langle {\begin{pmatrix}-1\\0\end{pmatrix}},{\begin{pmatrix}0\\-1\end{pmatrix}}\rangle }$ , ${\displaystyle \langle {\begin{pmatrix}0\\-1\end{pmatrix}},{\begin{pmatrix}1\\0\end{pmatrix}}\rangle }$
3. In each case the determinant is ${\displaystyle +1}$  (these bases are said to have positive orientation).
4. Because only one sign can change at a time, the only other cycle possible is
${\displaystyle \cdots \;\longrightarrow {\begin{pmatrix}+\\+\end{pmatrix}}\;\longrightarrow {\begin{pmatrix}+\\-\end{pmatrix}}\;\longrightarrow {\begin{pmatrix}-\\-\end{pmatrix}}\;\longrightarrow {\begin{pmatrix}-\\+\end{pmatrix}}\;\longrightarrow \cdots \,.}$
Here each associated determinant is ${\displaystyle -1}$  (such bases are said to have a negative orientation).
5. There is one positively oriented basis ${\displaystyle \langle (1)\rangle }$  and one negatively oriented basis ${\displaystyle \langle (-1)\rangle }$ .
6. There are ${\displaystyle 48}$  bases (${\displaystyle 6}$  half-axis choices are possible for the first unit vector, ${\displaystyle 4}$  for the second, and ${\displaystyle 2}$  for the last). Half are positively oriented like the standard basis, and half are negatively oriented.

In ${\displaystyle \mathbb {R} ^{3}}$  positive orientation is sometimes called "right hand orientation" because if a person's right hand is placed with the fingers curling from ${\displaystyle {\vec {e}}_{1}}$  to ${\displaystyle {\vec {e}}_{2}}$  then the thumb will point with ${\displaystyle {\vec {e}}_{3}}$ .

Problem 21

This question uses material from the optional Determinant Functions Exist subsection. Prove Theorem 1.5 by using the permutation expansion formula for the determinant.

We will compare ${\displaystyle \det({\vec {s}}_{1},\dots ,{\vec {s}}_{n})}$  with ${\displaystyle \det(t({\vec {s}}_{1}),\dots ,t({\vec {s}}_{n}))}$  to show that the second differs from the first by a factor of ${\displaystyle \left|T\right|}$ . We represent the ${\displaystyle {\vec {s}}\,}$ 's with respect to the standard bases

${\displaystyle {\rm {Rep}}_{{\mathcal {E}}_{n}}({\vec {s}}_{i})={\begin{pmatrix}s_{1,i}\\s_{2,i}\\\vdots \\s_{n,i}\end{pmatrix}}}$

and then we represent the map application with matrix-vector multiplication

${\displaystyle {\begin{array}{rl}{\rm {Rep}}_{{\mathcal {E}}_{n}}(\,t({\vec {s}}_{j})\,)&=\left({\begin{array}{cccc}t_{1,1}&t_{1,2}&\ldots &t_{1,n}\\t_{2,1}&t_{2,2}&\ldots &t_{2,n}\\&\vdots \\t_{n,1}&t_{n,2}&\ldots &t_{n,n}\end{array}}\right){\begin{pmatrix}s_{1,j}\\s_{2,j}\\\vdots \\s_{n,j}\end{pmatrix}}\\&=s_{1,j}{\begin{pmatrix}t_{1,1}\\t_{2,1}\\\vdots \\t_{n,1}\end{pmatrix}}+s_{2,j}{\begin{pmatrix}t_{1,2}\\t_{2,2}\\\vdots \\t_{n,2}\end{pmatrix}}+\dots +s_{n,j}{\begin{pmatrix}t_{1,n}\\t_{2,n}\\\vdots \\t_{n,n}\end{pmatrix}}\\&=s_{1,j}{\vec {t}}_{1}+s_{2,j}{\vec {t}}_{2}+\dots +s_{n,j}{\vec {t}}_{n}\end{array}}}$

where ${\displaystyle {\vec {t}}_{i}}$  is column ${\displaystyle i}$  of ${\displaystyle T}$ . Then ${\displaystyle \det(t({\vec {s}}_{1}),\,\dots ,\,t({\vec {s}}_{n}))}$  equals ${\displaystyle \det(s_{1,1}{\vec {t}}_{1}\!+\!s_{2,1}{\vec {t}}_{2}\!+\!\dots \!+\!s_{n,1}{\vec {t}}_{n},\,\dots ,\,s_{1,n}{\vec {t}}_{1}\!+\!s_{2,n}{\vec {t}}_{2}\!+\!\dots \!+\!s_{n,n}{\vec {t}}_{n})}$ .

As in the derivation of the permutation expansion formula, we apply multilinearity, first splitting along the sum in the first argument

${\displaystyle \det(s_{1,1}{\vec {t}}_{1},\,\dots ,\,s_{1,n}{\vec {t}}_{1}+s_{2,n}{\vec {t}}_{2}+\dots +s_{n,n}{\vec {t}}_{n})+\cdots {}+\det(s_{n,1}{\vec {t}}_{n},\,\ldots ,\,s_{1,n}{\vec {t}}_{1}+s_{2,n}{\vec {t}}_{2}+\dots +s_{n,n}{\vec {t}}_{n})}$

and then splitting each of those ${\displaystyle n}$  summands along the sums in the second arguments, etc. We end with, as in the derivation of the permutation expansion, ${\displaystyle n^{n}}$  summand determinants, each of the form ${\displaystyle \det(s_{i_{1},1}{\vec {t}}_{i_{1}},s_{i_{2},2}{\vec {t}}_{i_{2}},\,\dots ,\,s_{i_{n},n}{\vec {t}}_{i_{n}})}$ . Factoring out each of the ${\displaystyle s_{i,j}}$ 's gives ${\displaystyle s_{i_{1},1}s_{i_{2},2}\dots s_{i_{n},n}\cdot \det({\vec {t}}_{i_{1}},{\vec {t}}_{i_{2}},\,\dots ,\,{\vec {t}}_{i_{n}})}$ .

As in the permutation expansion derivation, whenever two of the indices in ${\displaystyle i_{1}}$ , ..., ${\displaystyle i_{n}}$  are equal then the determinant has two equal arguments, and evaluates to ${\displaystyle 0}$ . So we need only consider the cases where ${\displaystyle i_{1}}$ , ..., ${\displaystyle i_{n}}$  form a permutation of the numbers ${\displaystyle 1}$ , ..., ${\displaystyle n}$ . We thus have

${\displaystyle \det(t({\vec {s}}_{1}),\dots ,t({\vec {s}}_{n}))=\sum _{{\text{permutations }}\phi }s_{\phi (1),1}\dots s_{\phi (n),n}\det({\vec {t}}_{\phi (1)},\dots ,{\vec {t}}_{\phi (n)}).}$

Swap the columns in ${\displaystyle \det({\vec {t}}_{\phi (1)},\ldots ,{\vec {t}}_{\phi (n)})}$  to get the matrix ${\displaystyle T}$  back, which changes the sign by a factor of ${\displaystyle \operatorname {sgn} {\phi }}$ , and then factor out the determinant of ${\displaystyle T}$ .

${\displaystyle =\sum _{\phi }s_{\phi (1),1}\dots s_{\phi (n),n}\det({\vec {t}}_{1},\dots ,{\vec {t}}_{n})\cdot \operatorname {sgn} {\phi }=\det(T)\sum _{\phi }s_{\phi (1),1}\dots s_{\phi (n),n}\cdot \operatorname {sgn} {\phi }.}$

As in the proof that the determinant of a matrix equals the determinant of its transpose, we commute the ${\displaystyle s}$ 's so they are listed by ascending row number instead of by ascending column number (and we substitute ${\displaystyle \operatorname {sgn}(\phi ^{-1})}$  for ${\displaystyle \operatorname {sgn}(\phi )}$ ).

${\displaystyle =\det(T)\sum _{\phi }s_{1,\phi ^{-1}(1)}\dots s_{n,\phi ^{-1}(n)}\cdot \operatorname {sgn} {\phi ^{-1}}=\det(T)\det({\vec {s}}_{1},{\vec {s}}_{2},\dots ,{\vec {s}}_{n})}$
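A random numerical instance of the theorem just proved (NumPy is an editorial addition):

```python
import numpy as np

# det(TS) = det(T) det(S) for randomly chosen 4x4 matrices.
rng = np.random.default_rng(7)
T = rng.standard_normal((4, 4))
S = rng.standard_normal((4, 4))
ok = bool(np.isclose(np.linalg.det(T @ S),
                     np.linalg.det(T) * np.linalg.det(S)))
print(ok)    # True
```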
This exercise is recommended for all readers.
Problem 22
1. Show that this gives the equation of a line in ${\displaystyle \mathbb {R} ^{2}}$  thru ${\displaystyle (x_{2},y_{2})}$  and ${\displaystyle (x_{3},y_{3})}$ .
${\displaystyle {\begin{vmatrix}x&x_{2}&x_{3}\\y&y_{2}&y_{3}\\1&1&1\end{vmatrix}}=0}$
2. (Peterson 1955) Prove that the area of a triangle with vertices ${\displaystyle (x_{1},y_{1})}$ , ${\displaystyle (x_{2},y_{2})}$ , and ${\displaystyle (x_{3},y_{3})}$  is
${\displaystyle {\frac {1}{2}}{\begin{vmatrix}x_{1}&x_{2}&x_{3}\\y_{1}&y_{2}&y_{3}\\1&1&1\end{vmatrix}}.}$
3. (Bittinger 1973) Prove that a triangle with vertices at ${\displaystyle (x_{1},y_{1})}$ , ${\displaystyle (x_{2},y_{2})}$ , and ${\displaystyle (x_{3},y_{3})}$  whose coordinates are integers has an area of ${\displaystyle N}$  or ${\displaystyle N/2}$  for some positive integer ${\displaystyle N}$ .
1. An algebraic check is easy.
${\displaystyle 0=xy_{2}+x_{2}y_{3}+x_{3}y-x_{3}y_{2}-xy_{3}-x_{2}y=x\cdot (y_{2}-y_{3})+y\cdot (x_{3}-x_{2})+x_{2}y_{3}-x_{3}y_{2}}$
simplifies to the familiar form
${\displaystyle y=x\cdot (y_{3}-y_{2})/(x_{3}-x_{2})+(x_{3}y_{2}-x_{2}y_{3})/(x_{3}-x_{2})}$
(the ${\displaystyle x_{3}-x_{2}=0}$  case is easily handled). For geometric insight, picture the box formed by the three column vectors. All three vectors end in the ${\displaystyle z=1}$  plane, and below the two vectors on the right lies the line through ${\displaystyle (x_{2},y_{2})}$  and ${\displaystyle (x_{3},y_{3})}$ .

The box will have a nonzero volume unless the triangle formed by the ends of the three is degenerate. That only happens (assuming that ${\displaystyle (x_{2},y_{2})\neq (x_{3},y_{3})}$ ) if ${\displaystyle (x,y)}$  lies on the line through the other two.

2. This is how the answer was given in the cited source. The altitude through ${\displaystyle (x_{1},y_{1})}$  of a triangle with vertices ${\displaystyle (x_{1},y_{1})}$ , ${\displaystyle (x_{2},y_{2})}$ , and ${\displaystyle (x_{3},y_{3})}$  is found in the usual way from the normal form of the above:
${\displaystyle {\frac {1}{\sqrt {(x_{2}-x_{3})^{2}+(y_{2}-y_{3})^{2}}}}{\begin{vmatrix}x_{1}&x_{2}&x_{3}\\y_{1}&y_{2}&y_{3}\\1&1&1\end{vmatrix}}.}$
Another step shows the area of the triangle to be
${\displaystyle {\frac {1}{2}}{\begin{vmatrix}x_{1}&x_{2}&x_{3}\\y_{1}&y_{2}&y_{3}\\1&1&1\end{vmatrix}}.}$
This exposition reveals the modus operandi more clearly than the usual proof of showing a collection of terms to be identical with the determinant.
3. This is how the answer was given in the cited source. Let
${\displaystyle D={\begin{vmatrix}x_{1}&x_{2}&x_{3}\\y_{1}&y_{2}&y_{3}\\1&1&1\end{vmatrix}}}$
then the area of the triangle is ${\displaystyle (1/2)\left|D\right|}$ . Now if the coordinates are all integers, then ${\displaystyle D}$  is an integer.
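As a sketch, the formula can be checked on an assumed right triangle whose area is known to be ${\displaystyle 6}$ (the example is the editor's, not the source's):

```python
import numpy as np

# Triangle with vertices (0,0), (4,0), (0,3): legs 4 and 3, area 6.
x1, y1, x2, y2, x3, y3 = 0, 0, 4, 0, 0, 3
D = np.linalg.det(np.array([[x1, x2, x3],
                            [y1, y2, y3],
                            [1,  1,  1]], dtype=float))
print(abs(D) / 2)    # 6.0
```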

References

• Bittinger, Marvin (proposer) (Jan. 1973), "Quickie 578", Mathematics Magazine (Mathematical Association of America) 46 (5): 286, 296.
• Gardner, Martin (1990), The New Ambidextrous Universe, W. H. Freeman and Company.
• Peterson, G. M. (Apr. 1955), "Area of a triangle", American Mathematical Monthly (Mathematical Association of America) 62 (4): 249.
• Weston, J. D. (Aug./Sept. 1959), "Volume in Vector Spaces", American Mathematical Monthly (Mathematical Association of America) 66 (7): 575–577.