# Linear Algebra/Row Equivalence/Solutions

## Solutions

This exercise is recommended for all readers.
Problem 1

Decide if the matrices are row equivalent.

1. ${\displaystyle {\begin{pmatrix}1&2\\4&8\end{pmatrix}},{\begin{pmatrix}0&1\\1&2\end{pmatrix}}}$
2. ${\displaystyle {\begin{pmatrix}1&0&2\\3&-1&1\\5&-1&5\end{pmatrix}},{\begin{pmatrix}1&0&2\\0&2&10\\2&0&4\end{pmatrix}}}$
3. ${\displaystyle {\begin{pmatrix}2&1&-1\\1&1&0\\4&3&-1\end{pmatrix}},{\begin{pmatrix}1&0&2\\0&2&10\end{pmatrix}}}$
4. ${\displaystyle {\begin{pmatrix}1&1&1\\-1&2&2\end{pmatrix}},{\begin{pmatrix}0&3&-1\\2&2&5\end{pmatrix}}}$
5. ${\displaystyle {\begin{pmatrix}1&1&1\\0&0&3\end{pmatrix}},{\begin{pmatrix}0&1&2\\1&-1&1\end{pmatrix}}}$

Bring each to reduced echelon form and compare.

1. The first gives
${\displaystyle {\xrightarrow[{}]{-4\rho _{1}+\rho _{2}}}{\begin{pmatrix}1&2\\0&0\end{pmatrix}}}$
while the second gives
${\displaystyle {\xrightarrow[{}]{\rho _{1}\leftrightarrow \rho _{2}}}{\begin{pmatrix}1&2\\0&1\end{pmatrix}}{\xrightarrow[{}]{-2\rho _{2}+\rho _{1}}}{\begin{pmatrix}1&0\\0&1\end{pmatrix}}}$
The two reduced echelon form matrices are not identical, and so the original matrices are not row equivalent.
2. The first is this.
${\displaystyle {\xrightarrow[{-5\rho _{1}+\rho _{3}}]{-3\rho _{1}+\rho _{2}}}{\begin{pmatrix}1&0&2\\0&-1&-5\\0&-1&-5\end{pmatrix}}{\xrightarrow[{}]{-\rho _{2}+\rho _{3}}}{\begin{pmatrix}1&0&2\\0&-1&-5\\0&0&0\end{pmatrix}}{\xrightarrow[{}]{-\rho _{2}}}{\begin{pmatrix}1&0&2\\0&1&5\\0&0&0\end{pmatrix}}}$
The second is this.
${\displaystyle {\xrightarrow[{}]{-2\rho _{1}+\rho _{3}}}{\begin{pmatrix}1&0&2\\0&2&10\\0&0&0\end{pmatrix}}{\xrightarrow[{}]{(1/2)\rho _{2}}}{\begin{pmatrix}1&0&2\\0&1&5\\0&0&0\end{pmatrix}}}$
These two are row equivalent.
3. These two are not row equivalent because they have different sizes.
4. The first,
${\displaystyle {\xrightarrow[{}]{\rho _{1}+\rho _{2}}}{\begin{pmatrix}1&1&1\\0&3&3\end{pmatrix}}{\xrightarrow[{}]{(1/3)\rho _{2}}}{\begin{pmatrix}1&1&1\\0&1&1\end{pmatrix}}{\xrightarrow[{}]{-\rho _{2}+\rho _{1}}}{\begin{pmatrix}1&0&0\\0&1&1\end{pmatrix}}}$
and the second.
${\displaystyle {\xrightarrow[{}]{\rho _{1}\leftrightarrow \rho _{2}}}{\begin{pmatrix}2&2&5\\0&3&-1\end{pmatrix}}{\xrightarrow[{(1/3)\rho _{2}}]{(1/2)\rho _{1}}}{\begin{pmatrix}1&1&5/2\\0&1&-1/3\end{pmatrix}}{\xrightarrow[{}]{-\rho _{2}+\rho _{1}}}{\begin{pmatrix}1&0&17/6\\0&1&-1/3\end{pmatrix}}}$
These are not row equivalent.
5. Here the first is
${\displaystyle {\xrightarrow[{}]{(1/3)\rho _{2}}}{\begin{pmatrix}1&1&1\\0&0&1\end{pmatrix}}{\xrightarrow[{}]{-\rho _{2}+\rho _{1}}}{\begin{pmatrix}1&1&0\\0&0&1\end{pmatrix}}}$
while this is the second.
${\displaystyle {\xrightarrow[{}]{\rho _{1}\leftrightarrow \rho _{2}}}{\begin{pmatrix}1&-1&1\\0&1&2\end{pmatrix}}{\xrightarrow[{}]{\rho _{2}+\rho _{1}}}{\begin{pmatrix}1&0&3\\0&1&2\end{pmatrix}}}$
These are not row equivalent.
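The comparisons above can be double-checked mechanically. Here is a minimal sketch in Python, using exact rational arithmetic so no rounding spoils the comparison; the `rref` helper below is an illustration written for this page, not part of the text.

```python
from fractions import Fraction

def rref(rows):
    # Gauss-Jordan reduction to reduced echelon form, with exact
    # rational arithmetic.
    m = [[Fraction(x) for x in row] for row in rows]
    lead = 0
    for r in range(len(m)):
        while lead < len(m[0]):
            piv = next((i for i in range(r, len(m)) if m[i][lead] != 0), None)
            if piv is None:
                lead += 1          # no pivot in this column; move right
                continue
            m[r], m[piv] = m[piv], m[r]             # row swap
            m[r] = [x / m[r][lead] for x in m[r]]   # rescale leading entry to 1
            for i in range(len(m)):                 # clear the rest of the column
                if i != r:
                    m[i] = [a - m[i][lead] * b for a, b in zip(m[i], m[r])]
            lead += 1
            break
    return m

# The pairs from parts 1, 2, 4, and 5 (part 3's matrices have different
# sizes, so they cannot be row equivalent and are skipped here).
pairs = [
    ([[1, 2], [4, 8]], [[0, 1], [1, 2]]),
    ([[1, 0, 2], [3, -1, 1], [5, -1, 5]], [[1, 0, 2], [0, 2, 10], [2, 0, 4]]),
    ([[1, 1, 1], [-1, 2, 2]], [[0, 3, -1], [2, 2, 5]]),
    ([[1, 1, 1], [0, 0, 3]], [[0, 1, 2], [1, -1, 1]]),
]
for a, b in pairs:
    print(rref(a) == rref(b))   # prints False, True, False, False
```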
Problem 2

Describe the matrices in each of the classes represented in Example 2.10.

First, the only matrix row equivalent to the matrix of all ${\displaystyle 0}$ 's is itself, since row operations have no effect on a zero matrix.

Second, the matrices that reduce to

${\displaystyle {\begin{pmatrix}1&a\\0&0\end{pmatrix}}}$

have the form

${\displaystyle {\begin{pmatrix}b&ba\\c&ca\end{pmatrix}}}$

(where ${\displaystyle a,b,c\in \mathbb {R} }$ , and ${\displaystyle b}$  and ${\displaystyle c}$  are not both zero).

Next, the matrices that reduce to

${\displaystyle {\begin{pmatrix}0&1\\0&0\end{pmatrix}}}$

have the form

${\displaystyle {\begin{pmatrix}0&a\\0&b\end{pmatrix}}}$

(where ${\displaystyle a,b\in \mathbb {R} }$ , and not both are zero).

Finally, the matrices that reduce to

${\displaystyle {\begin{pmatrix}1&0\\0&1\end{pmatrix}}}$

are the nonsingular matrices. That's because a linear system for which this is the matrix of coefficients will have a unique solution, and that is the definition of nonsingular. (Another way to say the same thing is to say that they fall into none of the above classes.)

Problem 3

Describe all matrices in the row equivalence class of these.

1. ${\displaystyle {\begin{pmatrix}1&0\\0&0\end{pmatrix}}}$
2. ${\displaystyle {\begin{pmatrix}1&2\\2&4\end{pmatrix}}}$
3. ${\displaystyle {\begin{pmatrix}1&1\\1&3\end{pmatrix}}}$
1. They have the form
${\displaystyle {\begin{pmatrix}a&0\\b&0\end{pmatrix}}}$
where ${\displaystyle a,b\in \mathbb {R} }$ .
2. They have this form (for ${\displaystyle a,b\in \mathbb {R} }$ ).
${\displaystyle {\begin{pmatrix}1a&2a\\1b&2b\end{pmatrix}}}$
3. They have the form
${\displaystyle {\begin{pmatrix}a&b\\c&d\end{pmatrix}}}$
(for ${\displaystyle a,b,c,d\in \mathbb {R} }$ ) where ${\displaystyle ad-bc\neq 0}$ . (This is the formula that determines when a ${\displaystyle 2\!\times \!2}$  matrix is nonsingular.)
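The ${\displaystyle ad-bc\neq 0}$ criterion in part 3, and the description of the class in part 2, can be spot-checked over small integer entries. A sketch (the `rref` helper is restated so the snippet is self-contained):

```python
from fractions import Fraction
from itertools import product

def rref(rows):
    # Gauss-Jordan reduction with exact rational arithmetic.
    m = [[Fraction(x) for x in row] for row in rows]
    lead = 0
    for r in range(len(m)):
        while lead < len(m[0]):
            piv = next((i for i in range(r, len(m)) if m[i][lead] != 0), None)
            if piv is None:
                lead += 1
                continue
            m[r], m[piv] = m[piv], m[r]
            m[r] = [x / m[r][lead] for x in m[r]]
            for i in range(len(m)):
                if i != r:
                    m[i] = [a - m[i][lead] * b for a, b in zip(m[i], m[r])]
            lead += 1
            break
    return m

# A 2x2 matrix reduces to the identity exactly when ad - bc is nonzero.
for a, b, c, d in product(range(-2, 3), repeat=4):
    reduces_to_identity = rref([[a, b], [c, d]]) == [[1, 0], [0, 1]]
    assert reduces_to_identity == (a * d - b * c != 0)

# And every nonzero pair of multiples of (1 2) lands in the class of part 2.
for a, b in product(range(-2, 3), repeat=2):
    if (a, b) != (0, 0):
        assert rref([[1 * a, 2 * a], [1 * b, 2 * b]]) == [[1, 2], [0, 0]]
print("checked")
```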
Problem 4

How many row equivalence classes are there?

Infinitely many. For instance, in

${\displaystyle {\begin{pmatrix}1&k\\0&0\end{pmatrix}}}$

each ${\displaystyle k\in \mathbb {R} }$  gives a different class.

Problem 5

Can row equivalence classes contain different-sized matrices?

No. Row operations do not change the size of a matrix.

Problem 6

How big are the row equivalence classes?

1. Show that the class of any zero matrix is finite.
2. Do any other classes contain only finitely many members?
1. A row operation on a zero matrix has no effect. Thus each zero matrix is alone in its row equivalence class.
2. No. Any nonzero entry can be rescaled.
This exercise is recommended for all readers.
Problem 7

Give two reduced echelon form matrices that have their leading entries in the same columns, but that are not row equivalent.

Here are two.

${\displaystyle {\begin{pmatrix}1&1&0\\0&0&1\end{pmatrix}}\quad {\text{and}}\quad {\begin{pmatrix}1&0&0\\0&0&1\end{pmatrix}}}$
This exercise is recommended for all readers.
Problem 8

Show that any two ${\displaystyle n\!\times \!n}$  nonsingular matrices are row equivalent. Are any two singular matrices row equivalent?

Any two ${\displaystyle n\!\times \!n}$  nonsingular matrices have the same reduced echelon form, namely the matrix with all ${\displaystyle 0}$ 's except for ${\displaystyle 1}$ 's down the diagonal.

${\displaystyle {\begin{pmatrix}1&0&&0\\0&1&&0\\&&\ddots &\\0&0&&1\end{pmatrix}}}$

Two same-sized singular matrices need not be row equivalent. For example, these two ${\displaystyle 2\!\times \!2}$  singular matrices are not row equivalent.

${\displaystyle {\begin{pmatrix}1&1\\0&0\end{pmatrix}}\quad {\text{and}}\quad {\begin{pmatrix}1&0\\0&0\end{pmatrix}}}$
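As a spot check, a few ${\displaystyle 3\!\times \!3}$ matrices with nonzero determinant (chosen for illustration) all reduce to the identity, while the two singular examples above are each already in reduced echelon form and are distinct. A sketch, with the `rref` helper restated so the snippet is self-contained:

```python
from fractions import Fraction

def rref(rows):
    # Gauss-Jordan reduction with exact rational arithmetic.
    m = [[Fraction(x) for x in row] for row in rows]
    lead = 0
    for r in range(len(m)):
        while lead < len(m[0]):
            piv = next((i for i in range(r, len(m)) if m[i][lead] != 0), None)
            if piv is None:
                lead += 1
                continue
            m[r], m[piv] = m[piv], m[r]
            m[r] = [x / m[r][lead] for x in m[r]]
            for i in range(len(m)):
                if i != r:
                    m[i] = [a - m[i][lead] * b for a, b in zip(m[i], m[r])]
            lead += 1
            break
    return m

identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
# Sample nonsingular 3x3 matrices (each has nonzero determinant).
samples = [
    [[2, 1, 0], [1, 1, 1], [0, 3, 1]],
    [[1, 2, 3], [0, 1, 4], [5, 6, 0]],
    [[0, 0, 1], [0, 1, 0], [1, 0, 0]],
]
for m in samples:
    assert rref(m) == identity

# The two singular 2x2 examples are fixed by reduction, yet differ.
assert rref([[1, 1], [0, 0]]) == [[1, 1], [0, 0]]
assert rref([[1, 0], [0, 0]]) == [[1, 0], [0, 0]]
print("checked")
```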
This exercise is recommended for all readers.
Problem 9

Describe all of the row equivalence classes containing these.

1. ${\displaystyle 2\!\times \!2}$  matrices
2. ${\displaystyle 2\!\times \!3}$  matrices
3. ${\displaystyle 3\!\times \!2}$  matrices
4. ${\displaystyle 3\!\times \!3}$  matrices

Since there is one and only one reduced echelon form matrix in each class, we can just list the possible reduced echelon form matrices.

For that list, see the answer for Problem 1.5.

Problem 10
1. Show that a vector ${\displaystyle {\vec {\beta }}_{0}}$  is a linear combination of members of the set ${\displaystyle \{{\vec {\beta }}_{1},\ldots ,{\vec {\beta }}_{n}\}}$  if and only if there is a linear relationship ${\displaystyle {\vec {0}}=c_{0}{\vec {\beta }}_{0}+\cdots +c_{n}{\vec {\beta }}_{n}}$  where ${\displaystyle c_{0}}$  is not zero. (Hint. Watch out for the ${\displaystyle {\vec {\beta }}_{0}={\vec {0}}}$  case.)
2. Use that to simplify the proof of Lemma 2.5.
1. If there is a linear relationship where ${\displaystyle c_{0}}$  is not zero then we can subtract ${\displaystyle c_{0}{\vec {\beta }}_{0}}$  from both sides and divide by ${\displaystyle -c_{0}}$  to get ${\displaystyle {\vec {\beta }}_{0}}$  as a linear combination of the others. (Remark: if there are no other vectors in the set— if the relationship is, say, ${\displaystyle {\vec {0}}=3\cdot {\vec {0}}}$ — then the statement is still true because the zero vector is by definition the sum of the empty set of vectors.) Conversely, if ${\displaystyle {\vec {\beta }}_{0}}$  is a combination of the others ${\displaystyle {\vec {\beta }}_{0}=c_{1}{\vec {\beta }}_{1}+\dots +c_{n}{\vec {\beta }}_{n}}$  then subtracting ${\displaystyle {\vec {\beta }}_{0}}$  from both sides gives a relationship where at least one of the coefficients is nonzero; namely, the ${\displaystyle -1}$  in front of ${\displaystyle {\vec {\beta }}_{0}}$ .
2. The first row is not a linear combination of the others for the reason given in the proof: in the equation of components from the column containing the leading entry of the first row, the only nonzero entry is the leading entry from the first row, so its coefficient must be zero. Thus, from the prior part of this exercise, the first row is in no linear relationship with the other rows. Thus, when considering whether the second row can be in a linear relationship with the other rows, we can leave the first row out. But now the argument just applied to the first row will apply to the second row. (That is, we are arguing here by induction.)
This exercise is recommended for all readers.
Problem 11

Finish the proof of Lemma 2.5.

1. First illustrate the inductive step by showing that ${\displaystyle c_{2}=0}$ .
2. Do the full inductive step: where ${\displaystyle 1\leq n<m}$ , assume that ${\displaystyle c_{k}=0}$  for ${\displaystyle 1\leq k\leq n}$  and deduce that ${\displaystyle c_{n+1}=0}$  also.
1. In the equation
${\displaystyle \rho _{i}=c_{1}\rho _{1}+c_{2}\rho _{2}+\ldots +c_{i-1}\rho _{i-1}+c_{i+1}\rho _{i+1}+\ldots +c_{m}\rho _{m}}$
we already know that ${\displaystyle c_{1}=0}$ . Let ${\displaystyle \ell _{2}}$  be the column number of the leading entry of the second row. Consider the prior equation on entries in that column.
${\displaystyle \rho _{i,\ell _{2}}=c_{2}\rho _{2,\ell _{2}}+\ldots +c_{i-1}\rho _{i-1,\ell _{2}}+c_{i+1}\rho _{i+1,\ell _{2}}+\ldots +c_{m}\rho _{m,\ell _{2}}}$
Because ${\displaystyle \ell _{2}}$  is the column of the leading entry in the second row, ${\displaystyle \rho _{i,\ell _{2}}=0}$  for ${\displaystyle i>2}$ . Thus the equation reduces to
${\displaystyle 0=c_{2}\rho _{2,\ell _{2}}+0+\ldots +0}$
and since ${\displaystyle \rho _{2,\ell _{2}}}$  is not ${\displaystyle 0}$  we have that ${\displaystyle c_{2}=0}$ .
2. In the equation
${\displaystyle \rho _{i}=c_{1}\rho _{1}+c_{2}\rho _{2}+\ldots +c_{i-1}\rho _{i-1}+c_{i+1}\rho _{i+1}+\ldots +c_{m}\rho _{m}}$
we already know that ${\displaystyle 0=c_{1}=c_{2}=\dots =c_{n}}$ . Let ${\displaystyle \ell _{n+1}}$  be the column number of the leading entry of row ${\displaystyle n+1}$ . Consider the above equation on entries in that column.
${\displaystyle \rho _{i,\ell _{n+1}}=c_{n+1}\rho _{n+1,\ell _{n+1}}+\ldots +c_{i-1}\rho _{i-1,\ell _{n+1}}+c_{i+1}\rho _{i+1,\ell _{n+1}}+\dots +c_{m}\rho _{m,\ell _{n+1}}}$
Because ${\displaystyle \ell _{n+1}}$  is the column of the leading entry in the row ${\displaystyle n+1}$ , we have that ${\displaystyle \rho _{j,\ell _{n+1}}=0}$  for ${\displaystyle j>{n+1}}$ . Thus the equation reduces to
${\displaystyle 0=c_{n+1}\rho _{n+1,\ell _{n+1}}+0+\ldots +0}$
and since ${\displaystyle \rho _{n+1,\ell _{n+1}}}$  is not ${\displaystyle 0}$  we have that ${\displaystyle c_{n+1}=0}$ .
3. From the prior item in this exercise we know that in the equation
${\displaystyle \rho _{i}=c_{1}\rho _{1}+c_{2}\rho _{2}+\ldots +c_{i-1}\rho _{i-1}+c_{i+1}\rho _{i+1}+\ldots +c_{m}\rho _{m}}$
we already know that ${\displaystyle 0=c_{1}=c_{2}=\dots =c_{i-1}}$ . Let ${\displaystyle \ell _{i}}$  be the column number of the leading entry of row ${\displaystyle i}$ . Rewrite the above equation on entries in that column.
${\displaystyle \rho _{i,\ell _{i}}=c_{i+1}\rho _{i+1,\ell _{i}}+\dots +c_{m}\rho _{m,\ell _{i}}}$
Because ${\displaystyle \ell _{i}}$  is the column of the leading entry in the row ${\displaystyle i}$ , we have that ${\displaystyle \rho _{j,\ell _{i}}=0}$  for ${\displaystyle j>i}$ . That makes the right side of the equation sum to ${\displaystyle 0}$ , but the left side is not ${\displaystyle 0}$  since it is the leading entry of the row. That's the contradiction.
Problem 12

Finish the induction argument in Lemma 2.6.

1. State the inductive hypothesis. Also state what must be shown to follow from that hypothesis.
2. Check that the inductive hypothesis implies that in the relationship ${\displaystyle \beta _{r+1}=s_{r+1,1}\delta _{1}+s_{r+1,2}\delta _{2}+\dots +s_{r+1,m}\delta _{m}}$  the coefficients ${\displaystyle s_{r+1,1},\,\ldots \,,s_{r+1,r}}$  are each zero.
3. Finish the inductive step by arguing, as in the base case, that ${\displaystyle \ell _{r+1}<k_{r+1}}$  and ${\displaystyle k_{r+1}<\ell _{r+1}}$  are impossible.
1. The inductive step is to show that if the statement holds on rows ${\displaystyle 1}$  through ${\displaystyle r}$  then it also holds on row ${\displaystyle r+1}$ . That is, we assume that ${\displaystyle \ell _{1}=k_{1}}$ , and ${\displaystyle \ell _{2}=k_{2}}$ , ..., and ${\displaystyle \ell _{r}=k_{r}}$ , and we will show that ${\displaystyle \ell _{r+1}=k_{r+1}}$  also holds (for ${\displaystyle r}$  in ${\displaystyle 1\;..\;m-1}$ ).
2. Corollary 2.3 gives the relationship ${\displaystyle \beta _{r+1}=s_{r+1,1}\delta _{1}+s_{r+1,2}\delta _{2}+\dots +s_{r+1,m}\delta _{m}}$  between rows. Inside of those row vectors, consider the relationship between the entries in the column ${\displaystyle \ell _{1}=k_{1}}$ . Because ${\displaystyle r+1>1}$ , the row ${\displaystyle \beta _{r+1}}$  has a zero in entry ${\displaystyle \ell _{1}}$  (the matrix ${\displaystyle B}$  is in echelon form). But the row ${\displaystyle \delta _{1}}$  has a nonzero entry in column ${\displaystyle k_{1}}$ ; by definition of ${\displaystyle k_{1}}$  it is the leading entry in the first row of ${\displaystyle D}$ . Thus, in that column, the above relationship among rows resolves to this equation among numbers: ${\displaystyle 0=s_{r+1,1}\cdot d_{1,k_{1}}}$ , with ${\displaystyle d_{1,k_{1}}\neq 0}$ . Therefore ${\displaystyle s_{r+1,1}=0}$ . With ${\displaystyle s_{r+1,1}=0}$ , a similar argument shows that ${\displaystyle s_{r+1,2}=0}$ . With those two, another turn gives that ${\displaystyle s_{r+1,3}=0}$ . That is, inside of the larger induction argument used to prove the entire lemma, here is a subargument by induction that shows ${\displaystyle s_{r+1,j}=0}$  for all ${\displaystyle j}$  in ${\displaystyle 1\,..\,r}$ . (We won't write out the details since it is just like the induction done in Problem 11.)
3. Note that the prior item of this exercise shows that the relationship between rows ${\displaystyle \beta _{r+1}=s_{r+1,1}\delta _{1}+s_{r+1,2}\delta _{2}+\dots +s_{r+1,m}\delta _{m}}$  reduces to ${\displaystyle \beta _{r+1}=s_{r+1,r+1}\delta _{r+1}+\dots +s_{r+1,m}\delta _{m}}$ . Consider the column ${\displaystyle \ell _{r+1}}$  entries in this equation. By definition of ${\displaystyle k_{r+1}}$  as the column number of the leading entry of ${\displaystyle \delta _{r+1}}$ , the entries in this column of the other rows ${\displaystyle \delta _{r+2},\,\ldots ,\,\delta _{m}}$  are zeros. Now if ${\displaystyle \ell _{r+1}<k_{r+1}}$  then the equation of entries from column ${\displaystyle \ell _{r+1}}$  would be ${\displaystyle b_{r+1,\ell _{r+1}}=s_{r+1,r+1}\cdot 0+\dots +s_{r+1,m}\cdot 0}$ , which is impossible since ${\displaystyle b_{r+1,\ell _{r+1}}}$  isn't zero, as it leads its row. A symmetric argument shows that ${\displaystyle k_{r+1}<\ell _{r+1}}$  also is impossible.
Problem 13

Why, in the proof of Theorem 2.7, do we bother to restrict to the nonzero rows? Why not just stick to the relationship that we began with, ${\displaystyle \beta _{i}=c_{i,1}\delta _{1}+\dots +c_{i,m}\delta _{m}}$ , with ${\displaystyle m}$  instead of ${\displaystyle r}$ , and argue using it that the only nonzero coefficient is ${\displaystyle c_{i,i}}$ , which is ${\displaystyle 1}$ ?

The zero rows could have nonzero coefficients, and so the statement would not be true.

This exercise is recommended for all readers.
Problem 14

Three truck drivers went into a roadside cafe. One truck driver purchased four sandwiches, a cup of coffee, and ten doughnuts for ${\displaystyle \$8.45}$ . Another driver purchased three sandwiches, a cup of coffee, and seven doughnuts for ${\displaystyle \$6.30}$ . What did the third truck driver pay for a sandwich, a cup of coffee, and a doughnut? (Trono 1991)

We know that ${\displaystyle 4s+c+10d=8.45}$  and that ${\displaystyle 3s+c+7d=6.30}$ , and we'd like to know what ${\displaystyle s+c+d}$  is. Fortunately, ${\displaystyle s+c+d}$  is a linear combination of ${\displaystyle 4s+c+10d}$  and ${\displaystyle 3s+c+7d}$ . Calling the unknown price ${\displaystyle p}$ , we have this reduction.

${\displaystyle \left({\begin{array}{*{3}{c}|c}4&1&10&8.45\\3&1&7&6.30\\1&1&1&p\end{array}}\right){\xrightarrow[{-(1/4)\rho _{1}+\rho _{3}}]{-(3/4)\rho _{1}+\rho _{2}}}\left({\begin{array}{*{3}{c}|c}4&1&10&8.45\\0&1/4&-1/2&-0.037\,5\\0&3/4&-3/2&p-2.112\,5\end{array}}\right){\xrightarrow[{}]{-3\rho _{2}+\rho _{3}}}\left({\begin{array}{*{3}{c}|c}4&1&10&8.45\\0&1/4&-1/2&-0.037\,5\\0&0&0&p-2.00\end{array}}\right)}$

The price paid is ${\displaystyle \$2.00}$ .
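The linear combination behind the reduction can also be found directly: we seek weights making the two known bills add up to one sandwich, one coffee, and one doughnut. A sketch in Python, where the weight names `x` and `y` are introduced here for illustration:

```python
from fractions import Fraction

# Look for x, y with x*(4s + c + 10d) + y*(3s + c + 7d) = s + c + d,
# i.e. matching coefficients: 4x + 3y = 1, x + y = 1, 10x + 7y = 1.
# The first two equations give x = -2, y = 3; the third is then a check.
x, y = -2, 3
assert 4 * x + 3 * y == 1 and x + y == 1 and 10 * x + 7 * y == 1

# The same combination of the two bills gives the unknown price exactly.
price = x * Fraction("8.45") + y * Fraction("6.30")
print(price)  # 2
```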

Problem 15

The fact that Gaussian reduction disallows multiplication of a row by zero is needed for the proof of uniqueness of reduced echelon form, or else every matrix would be row equivalent to a matrix of all zeros. Where is it used?

If multiplication of a row by zero were allowed then Lemma 2.6 would not hold. That is, where

${\displaystyle {\begin{pmatrix}1&3\\2&1\end{pmatrix}}{\xrightarrow[{}]{0\rho _{2}}}{\begin{pmatrix}1&3\\0&0\end{pmatrix}}}$

all the rows of the second matrix can be expressed as linear combinations of the rows of the first, but the converse does not hold. The second row of the first matrix is not a linear combination of the rows of the second matrix.

This exercise is recommended for all readers.
Problem 16

The Linear Combination Lemma says which equations can be gotten from Gaussian reduction of a given linear system.

1. Produce an equation not implied by this system.
${\displaystyle {\begin{array}{*{2}{rc}r}3x&+&4y&=&8\\2x&+&y&=&3\end{array}}}$
2. Can any equation be derived from an inconsistent system?
1. An easy answer is this:
${\displaystyle 0=3.}$
For a less wise-guy-ish answer, solve the system:
${\displaystyle \left({\begin{array}{*{2}{c}|c}3&4&8\\2&1&3\end{array}}\right){\xrightarrow[{}]{-(2/3)\rho _{1}+\rho _{2}}}\left({\begin{array}{*{2}{c}|c}3&4&8\\0&-5/3&-7/3\end{array}}\right)}$
gives ${\displaystyle y=7/5}$  and ${\displaystyle x=4/5}$ . Now any equation not satisfied by ${\displaystyle (4/5,7/5)}$  will do, e.g., ${\displaystyle 5x+5y=3}$ .
2. Every equation can be derived from an inconsistent system. For instance, here is how to derive "${\displaystyle 3x+2y=4}$ " from "${\displaystyle 0=5}$ ". First,
${\displaystyle 0=5{\xrightarrow[{}]{(3/5)\rho _{1}}}0=3{\xrightarrow[{}]{x\rho _{1}}}0=3x}$
(validity of the ${\displaystyle x=0}$  case is separate but clear). Similarly, ${\displaystyle 0=2y}$ . Ditto for ${\displaystyle 0=4}$ . But now, ${\displaystyle 0+0=0}$  gives ${\displaystyle 3x+2y=4}$ .
Problem 17

Extend the definition of row equivalence to linear systems. Under your definition, do equivalent systems have the same solution set? (Hoffman & Kunze 1971)

Define linear systems to be equivalent if their augmented matrices are row equivalent. The proof that equivalent systems have the same solution set is easy.

This exercise is recommended for all readers.
Problem 18

In this matrix

${\displaystyle {\begin{pmatrix}1&2&3\\3&0&3\\1&4&5\end{pmatrix}}}$

the first and second columns add to the third.

1. Show that this remains true under any row operation.
2. Make a conjecture.
3. Prove that it holds.
1. The three possible row swaps are easy, as are the three possible rescalings. One of the six possible pivots is ${\displaystyle k\rho _{1}+\rho _{2}}$ :
${\displaystyle {\begin{pmatrix}1&2&3\\k\cdot 1+3&k\cdot 2+0&k\cdot 3+3\\1&4&5\end{pmatrix}}}$