# Problems in Mathematics

The problems are listed in increasing order of difficulty. When a problem is simply a mathematical statement, the reader is expected to supply a proof. Answers are given (or will be given) to all of the problems. This is mostly for quality control; the answers allow contributors other than the original writer of a problem to check its validity. Accordingly, the reader is strongly discouraged from looking at an answer before successfully solving the problem.

## Commutative algebra

Problem: A finite integral domain is a field.

Let ${\displaystyle a}$  be a nonzero element of the finite integral domain A. Then the map

${\displaystyle x\mapsto ax:A\to A}$

is injective (since A is an integral domain and ${\displaystyle a\neq 0}$ ) and hence surjective by finiteness; in particular, ${\displaystyle ax=1}$  for some x, so a is invertible.${\displaystyle \square }$
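
The counting argument can be sanity-checked concretely (an illustration, not the proof) in the finite integral domain ${\displaystyle \mathbf {Z} /7\mathbf {Z} }$ :

```python
# In Z/7Z, multiplication by any nonzero a is injective, hence a
# permutation of the ring; in particular 1 is hit, so a is invertible.
p = 7  # a prime, so Z/pZ is an integral domain
for a in range(1, p):
    image = {(a * x) % p for x in range(p)}
    assert len(image) == p  # injective on a finite set, hence surjective
    inv = next(x for x in range(p) if (a * x) % p == 1)
    assert (a * inv) % p == 1  # a is a unit
```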

Problem: A polynomial has integer values for sufficiently large integer arguments if and only if it is a linear combination (over ${\displaystyle \mathbf {Z} }$ ) of binomial coefficients ${\displaystyle {\binom {t}{n}}}$ .

Since ${\displaystyle \deg {\binom {t}{n}}=n}$ , the polynomials ${\displaystyle {\binom {t}{0}},\dots ,{\binom {t}{d}}}$  form a basis of the space of polynomials of degree ${\displaystyle \leq d}$ , so any ${\displaystyle f\in \mathbf {R} [t]}$  of degree d can be written:

${\displaystyle f(t)=a_{0}+a_{1}{\binom {t}{1}}+...+a_{d}{\binom {t}{d}},\qquad a_{n}\in \mathbf {R} .}$

Applying the finite difference operator ${\displaystyle \Delta g(t)=g(t+1)-g(t)}$  to both sides d times (note ${\displaystyle \Delta {\binom {t}{n}}={\binom {t}{n-1}}}$ ), one finds that ${\displaystyle a_{d}=\Delta ^{d}f(t)}$  takes integer values at large integers, hence is an integer. By downward induction, the ${\displaystyle a_{n}}$  are then all integers.${\displaystyle \square }$
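
As a numerical sanity check of the expansion (using the hypothetical example ${\displaystyle f(t)=t^{2}}$ ), each coefficient ${\displaystyle a_{n}}$  can be read off as the n-th finite difference of f evaluated at 0:

```python
from math import comb

# Check: f(t) = t^2 expands as 0*C(t,0) + 1*C(t,1) + 2*C(t,2), and the
# coefficient a_n is the n-th finite difference of f evaluated at 0.
def f(t):
    return t * t

def delta(g):
    return lambda t: g(t + 1) - g(t)

coeffs = []
g = f
for n in range(3):
    coeffs.append(g(0))  # a_n = (Delta^n f)(0)
    g = delta(g)

assert coeffs == [0, 1, 2]
for t in range(10):
    assert f(t) == sum(a * comb(t, n) for n, a in enumerate(coeffs))
```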

Problem: An integral domain is a PID if its prime ideals are principal. (Hint: apply Zorn's lemma to the set S of all non-principal ideals.)

Suppose, on the contrary, that S is nonempty. Zorn's lemma then yields a maximal element ${\displaystyle {\mathfrak {i}}\in S}$ . We will reach a contradiction once we show ${\displaystyle {\mathfrak {i}}\in \operatorname {Spec} (A)}$ , since prime ideals are principal by hypothesis. To that end, let ${\displaystyle xy\in {\mathfrak {i}}}$ . If ${\displaystyle x\not \in {\mathfrak {i}}}$ , then, by maximality, ${\displaystyle ({\mathfrak {i}},x)\not \in S}$ . That is, it is principal; say,

${\displaystyle ({\mathfrak {i}},x)=(d)}$ .

Let ${\displaystyle {\mathfrak {j}}}$  be an ideal consisting of ${\displaystyle a\in A}$  such that ${\displaystyle ax\in {\mathfrak {i}}}$ . It turns out that

${\displaystyle {\mathfrak {j}}d={\mathfrak {i}}}$ .

Indeed, ${\displaystyle {\mathfrak {j}}d=({\mathfrak {ji}},{\mathfrak {j}}x)\subset {\mathfrak {i}}}$ . Conversely, if ${\displaystyle z\in {\mathfrak {i}}}$ , then ${\displaystyle z=z'd}$  and ${\displaystyle z'x\in z'(d)\subset {\mathfrak {i}}}$ , so ${\displaystyle z'\in {\mathfrak {j}}}$  and ${\displaystyle z\in {\mathfrak {j}}d}$ . We now conclude that ${\displaystyle y\in {\mathfrak {i}}}$ : otherwise ${\displaystyle {\mathfrak {j}}\supset ({\mathfrak {i}},y)\supsetneq {\mathfrak {i}}}$  would be principal by maximality, making ${\displaystyle {\mathfrak {i}}={\mathfrak {j}}d}$  principal, a contradiction. ${\displaystyle \square }$

Problem: A ring is noetherian if and only if its prime ideals are finitely generated. (Hint: Zorn's lemma.)

The direction (${\displaystyle \Rightarrow }$ ) is obvious. For the converse, let ${\displaystyle S}$  be the set of all ideals of ${\displaystyle A}$  that are not finitely generated; we claim ${\displaystyle S}$  is empty. Suppose not. The union of a chain in ${\displaystyle S}$  is again not finitely generated (a finite generating set would already lie in some member of the chain), so, by Zorn's lemma, ${\displaystyle S}$  contains a maximal element ${\displaystyle {\mathfrak {i}}}$ . It follows that ${\displaystyle {\mathfrak {i}}}$  is prime, contradicting the hypothesis. To see this, let ${\displaystyle xy\in {\mathfrak {i}}}$ . If ${\displaystyle x\not \in {\mathfrak {i}}}$ , then, by maximality, ${\displaystyle ({\mathfrak {i}},x)\not \in S}$ . That is, it is finitely generated; say,

${\displaystyle ({\mathfrak {i}},x)=(i_{1}+a_{1}x,...,i_{n}+a_{n}x)}$ .

Let ${\displaystyle {\mathfrak {j}}}$  be an ideal consisting of ${\displaystyle a\in A}$  such that ${\displaystyle ax\in {\mathfrak {i}}}$ . It turns out that

${\displaystyle {\mathfrak {i}}=(i_{1},...,i_{n})+{\mathfrak {j}}x}$ .

In fact, if ${\displaystyle z\in {\mathfrak {i}}}$ , then

${\displaystyle z=b_{1}(i_{1}+a_{1}x)+...+b_{n}(i_{n}+a_{n}x),}$

so ${\displaystyle z-(b_{1}i_{1}+...+b_{n}i_{n})=(b_{1}a_{1}+...+b_{n}a_{n})x\in {\mathfrak {i}}}$ ; that is, ${\displaystyle b_{1}a_{1}+...+b_{n}a_{n}\in {\mathfrak {j}}}$ . We conclude that ${\displaystyle y\in {\mathfrak {i}}}$ : otherwise ${\displaystyle {\mathfrak {j}}\supset ({\mathfrak {i}},y)\supsetneq {\mathfrak {i}}}$  would be finitely generated by maximality, and then ${\displaystyle {\mathfrak {i}}}$  itself would be finitely generated, a contradiction.${\displaystyle \square }$

Problem: Every nonempty set of prime ideals has a minimal element with respect to inclusion.

Problem: If an integral domain A is algebraic over a field F, then A is a field.

Let ${\displaystyle u\in A}$  be nonzero. Since u is algebraic over F, ${\displaystyle F[u]}$  is a finite-dimensional F-vector space and an integral domain; the multiplication map ${\displaystyle x\mapsto ux}$  on ${\displaystyle F[u]}$  is injective, hence surjective by dimension count. Thus u is invertible in ${\displaystyle F[u]\subset A}$ . ${\displaystyle \square }$

Problem: Every two elements in a UFD have a gcd.

Problem: If ${\displaystyle f\in A[X]}$  is a unit, then ${\displaystyle f-a_{0}}$  is nilpotent, where ${\displaystyle a_{0}=f(0)}$  is the constant term of f.

Let ${\displaystyle f=a_{0}+a_{1}x+...+a_{n}x^{n}}$  and let ${\displaystyle g=b_{0}+b_{1}x+...+b_{m}x^{m}}$  be the inverse of f. Comparing top coefficients in ${\displaystyle fg=1}$  gives ${\displaystyle a_{n}b_{m}=0}$ . Since ${\displaystyle a_{n}b_{m-1}+a_{n-1}b_{m}=0}$ , it follows that ${\displaystyle {a_{n}}^{2}b_{m-1}=0}$ . By an obvious induction, for some r, ${\displaystyle {a_{n}}^{r}}$  kills every coefficient of ${\displaystyle g}$ ; hence it kills g. Thus, ${\displaystyle {a_{n}}^{r}={a_{n}}^{r}fg=0}$ , meaning ${\displaystyle a_{n}}$  is nilpotent. Recall that the sum of a unit and a nilpotent element is a unit. Then ${\displaystyle f-a_{n}x^{n}}$  is again a unit, and the same argument shows that ${\displaystyle a_{n-1}}$  is nilpotent. Continuing, we conclude that ${\displaystyle f-a_{0}}$  is a sum of nilpotent elements; thus, nilpotent.${\displaystyle \square }$
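
A small concrete instance (illustration only) lives in ${\displaystyle (\mathbf {Z} /8\mathbf {Z} )[x]}$ : the coefficient 2 is nilpotent, ${\displaystyle f=1+2x}$  is a unit with inverse ${\displaystyle 1-2x+4x^{2}}$ , and ${\displaystyle f-f(0)=2x}$  is nilpotent, as the problem predicts:

```python
# Verify in (Z/8Z)[x] that f = 1 + 2x is a unit and 2 is nilpotent.
MOD = 8

def polymul(f, g):
    # multiply polynomials given as coefficient lists, reducing mod MOD
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] = (out[i + j] + a * b) % MOD
    return out

f = [1, 2]            # 1 + 2x
g = [1, -2 % MOD, 4]  # 1 - 2x + 4x^2
prod = polymul(f, g)
assert prod == [1, 0, 0, 0]  # f * g = 1, so f is a unit
assert pow(2, 3, MOD) == 0   # the coefficient 2 is nilpotent, so 2x is too
```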

Problem: The nilradical and the Jacobson radical of ${\displaystyle A[X]}$  coincide.

We only have to prove: if ${\displaystyle f}$  is in the Jacobson radical, then ${\displaystyle f}$  is nilpotent, since the converse holds in any ring. Recall that ${\displaystyle 1-fg}$  is a unit for every ${\displaystyle g}$ . In particular, ${\displaystyle 1-xf}$  is a unit. By the previous problem, ${\displaystyle -xf}$ , and hence ${\displaystyle f}$ , is nilpotent.${\displaystyle \square }$

Problem: Let A be a ring such that every ideal not contained in its nilradical contains an element e such that ${\displaystyle e^{2}=e\neq 0}$ . Then the nilradical and the Jacobson radical of ${\displaystyle A}$  coincide.

In general, the nilradical is contained in the Jacobson radical. Suppose the inclusion is strict. Then the Jacobson radical is an ideal not contained in the nilradical, so by hypothesis it contains a nonzero idempotent e; in particular ${\displaystyle e(1-e)=0}$ . Since e lies in the Jacobson radical, ${\displaystyle 1-e}$  is a unit; hence ${\displaystyle e=0}$ , a contradiction.${\displaystyle \square }$

Problem: ${\displaystyle f\in A[[X]]}$  is a unit if and only if the constant term of f is a unit.

## Real analysis

Problem: ${\displaystyle {\sqrt {3}}+2^{1/3}}$  is irrational.

Let ${\displaystyle x={\sqrt {3}}+2^{1/3}}$ . Then ${\displaystyle 2=(x-{\sqrt {3}})^{3}=x^{3}+9x-{\sqrt {3}}(3x^{2}+3)}$ , so ${\displaystyle {\sqrt {3}}=(x^{3}+9x-2)/(3x^{2}+3)}$ . If x were rational, the right-hand side, and hence ${\displaystyle {\sqrt {3}}}$ , would be rational, a contradiction. ${\displaystyle \square }$
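
The algebra above can be sanity-checked numerically (floating point, so only approximately):

```python
from math import sqrt, isclose

# Expanding (x - sqrt(3))^3 = 2 and solving for sqrt(3) gives
# sqrt(3) = (x^3 + 9x - 2) / (3x^2 + 3); check this at x = sqrt(3) + 2^(1/3).
x = sqrt(3) + 2 ** (1 / 3)
assert isclose((x ** 3 + 9 * x - 2) / (3 * x ** 2 + 3), sqrt(3))
```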

Problem: Is ${\displaystyle {\sqrt {2}}^{\sqrt {2}}}$  irrational?

Problem: Compute ${\displaystyle \int _{-\infty }^{\infty }{\sin x \over x}\,dx}$

Problem: If ${\displaystyle \lim _{x\to \infty }(f(x)+f'(x))}$  exists, then ${\displaystyle \lim _{x\to \infty }f(x)}$  exists and ${\displaystyle \lim _{x\to \infty }f'(x)=0}$

Apply L'Hospital's rule to ${\displaystyle {e^{x}f(x) \over e^{x}}}$ . ${\displaystyle \square }$

Problem: Let ${\displaystyle f:\mathbb {R} \to [0,+\infty )}$  be nonvanishing and such that ${\displaystyle f(x)f''(x)\geq 0}$ ; then ${\displaystyle \int _{-\infty }^{+\infty }f(x)^{2}dx=+\infty }$

Differentiate ${\displaystyle g(x)=f(x)^{2}}$  twice and look at the asymptotic behaviour of ${\displaystyle g'(x)}$ : since ${\displaystyle g''=2(f'^{2}+ff'')\geq 0}$ , ${\displaystyle g'}$  is nondecreasing.

Problem Let ${\displaystyle X}$  be a complete metric space, and ${\displaystyle f:X\to X}$  be a function such that ${\displaystyle f\circ f}$  is a contraction. Then ${\displaystyle f}$  admits a fixed point.

By Banach's fixed point theorem, ${\displaystyle f\circ f}$  has a unique fixed point ${\displaystyle x_{0}}$ ; i.e., ${\displaystyle x_{0}=(f\circ f)(x_{0})}$ . But then

${\displaystyle f(x_{0})=(f\circ f)(f(x_{0}))}$

In other words, ${\displaystyle f(x_{0})}$  is also a fixed point of ${\displaystyle f\circ f}$ . By uniqueness, ${\displaystyle x_{0}=f(x_{0})}$ . ${\displaystyle \square }$
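
A classical illustration (our own example, not part of the problem): ${\displaystyle f=\cos }$  on ${\displaystyle \mathbf {R} }$  is not a contraction (its Lipschitz constant is 1), but ${\displaystyle \cos \circ \cos }$  is, since ${\displaystyle |(\cos \circ \cos )'(x)|=|\sin(\cos x)\sin x|\leq \sin 1<1}$ . Iterating it locates the common fixed point:

```python
from math import cos

# Iterate the contraction f∘f = cos∘cos; its unique fixed point is also
# a fixed point of f = cos itself, exactly as the problem predicts.
x = 0.0
for _ in range(200):
    x = cos(cos(x))
assert abs(x - cos(x)) < 1e-9  # x is fixed by f as well
```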

Problem Let ${\displaystyle X}$  be a compact metric space, and ${\displaystyle f:X\to X}$  be such that

${\displaystyle d(f(x),f(y))<d(x,y)}$

for all ${\displaystyle x\neq y\in X}$ . Then ${\displaystyle f}$  admits a unique fixed point. (Do not use Banach's fixed point theorem.)

Let ${\displaystyle c=\inf\{d(x,f(x))|x\in X\}}$ . By compactness, there is ${\displaystyle x_{0}}$  such that ${\displaystyle c=d(x_{0},f(x_{0}))}$ . If ${\displaystyle x_{0}\neq f(x_{0})}$ , then, by hypothesis, we have:

${\displaystyle d(f(x_{0}),f\circ f(x_{0}))<d(x_{0},f(x_{0}))=c,}$

which is absurd. Thus, ${\displaystyle d(x_{0},f(x_{0}))=0}$ . For uniqueness, suppose ${\displaystyle y_{0}=f(y_{0})}$ . If ${\displaystyle x_{0}\neq y_{0}}$ , then

${\displaystyle d(x_{0},y_{0})\leq d(x_{0},f(x_{0}))+d(f(x_{0}),f(y_{0}))+d(f(y_{0}),y_{0})=d(f(x_{0}),f(y_{0}))<d(x_{0},y_{0}),}$

which is absurd. Hence, ${\displaystyle x_{0}}$  is the unique fixed point. ${\displaystyle \square }$

Problem Let ${\displaystyle f:\mathbb {R} ^{2}\to \mathbb {R} ^{2}}$  be such that

${\displaystyle d(f(x),f(y))\geq Ad(x,y),\qquad A>1}$

then ${\displaystyle f}$  admits a unique fixed point.

Problem Let ${\displaystyle X}$  be a compact metric space, and ${\displaystyle f:X\to X}$  be a contraction. Then

${\displaystyle \bigcap _{n=1}^{\infty }f^{n}(X)}$

consists of exactly one point.

Since f is a contraction, it admits a fixed point ${\displaystyle x_{0}}$ . Thus, ${\displaystyle x_{0}\in \cap f^{n}(X)}$ . Let ${\displaystyle y\in \cap f^{n}(X)}$ . Then

${\displaystyle y=f(x_{1})=f^{2}(x_{2})=f^{3}(x_{3})=...}$

for some sequence ${\displaystyle x_{1},x_{2},...}$ . Let c be the Lipschitz constant of f. Now,

${\displaystyle d(x_{0},y)=d(x_{0},f^{n}(x_{n}))\leq d(x_{0},f^{n}(x_{1}))+d(f^{n}(x_{1}),f^{n}(x_{n}))\leq d(x_{0},f^{n}(x_{1}))+c^{n}d(x_{1},x_{n})}$

which goes to 0 as ${\displaystyle n\to \infty }$ , since ${\displaystyle d(x_{1},x_{n})}$  is bounded (X is compact), ${\displaystyle c<1}$ , and ${\displaystyle f^{n}(x_{1})\to x_{0}}$  by Banach's fixed point theorem. Hence ${\displaystyle y=x_{0}}$ . ${\displaystyle \square }$
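
A concrete instance (a hypothetical example for illustration): take ${\displaystyle X=[0,1]}$  and ${\displaystyle f(x)=x/2}$ , so ${\displaystyle f^{n}(X)=[0,2^{-n}]}$  and the intersection is exactly the fixed point ${\displaystyle \{0\}}$ :

```python
# Track the nested images f^n([0,1]) for f(x) = x/2; they shrink to {0}.
lo, hi = 0.0, 1.0
intervals = []
for n in range(20):
    lo, hi = lo / 2, hi / 2  # image of [lo, hi] under f
    intervals.append((lo, hi))
assert all(a <= 0.0 <= b for a, b in intervals)  # the fixed point 0 is in every image
assert intervals[-1][1] < 1e-5  # widths tend to 0: the intersection is one point
```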

Problem: Every closed subset of ${\displaystyle \mathbf {R} ^{n}}$  is separable.

Let ${\displaystyle A}$  be a closed subset of ${\displaystyle \mathbf {R} ^{n}}$ . For each ${\displaystyle k}$ , let ${\displaystyle E_{k}}$  be a countable dense subset of the compact set ${\displaystyle A\cap {\overline {B}}(0,k)}$  (such a set exists since every compact metric space is separable), and let

${\displaystyle E=\bigcup _{k=1}^{\infty }E_{k}}$

Then ${\displaystyle A={\overline {E}}}$ . In fact, since ${\displaystyle E_{k}}$  is a subset of ${\displaystyle A}$  for any ${\displaystyle k}$ , ${\displaystyle E\subset A}$  and so ${\displaystyle {\overline {E}}\subset {\overline {A}}=A}$ . Conversely, if ${\displaystyle x\in A}$ , then for some ${\displaystyle k}$ ,

${\displaystyle x\in A\cap {\overline {B}}(0,k)={\overline {E_{k}}}\subset {\overline {E}}}$ .

${\displaystyle \square }$

Problem: Any connected nonempty subset of ${\displaystyle \mathbf {R} }$  either consists of a single point or contains an irrational number.

Let ${\displaystyle E}$  be a connected nonempty subset of ${\displaystyle \mathbf {R} }$ . Then ${\displaystyle E}$  is an interval with end-points a, b. If ${\displaystyle E}$  has more than one point, then ${\displaystyle E}$  contains a nonempty interval (a, b), which contains an irrational. ${\displaystyle \square }$

Problem: Let ${\displaystyle f:\mathbf {R} \to \mathbf {R} }$  be a bounded function. ${\displaystyle f}$  is continuous if and only if ${\displaystyle f}$  has closed graph.

Problem: Let ${\displaystyle f:\mathbf {R} \to \mathbf {R} }$  be a homeomorphism, then ${\displaystyle f}$  is monotone.

Problem Let ${\displaystyle f:[0,1]^{2}\to \mathbf {R} }$  be a continuous function. Then

${\displaystyle g(x)=\sup\{f(x,y)|y\in [0,1]\}\quad (x\in [0,1])}$

is continuous.

Let ${\displaystyle \epsilon >0}$ . Since ${\displaystyle f}$  is uniformly continuous, there is ${\displaystyle \delta >0}$  so that

${\displaystyle |f(x',y')-f(x,y)|<\epsilon }$  whenever ${\displaystyle |(x',y')-(x,y)|<\delta }$

Since [0, 1] is compact, the sup defining g is attained, say ${\displaystyle g(x)=f(x,y^{*})}$ . Then ${\displaystyle g(x)=f(x,y^{*})<\epsilon +f(x',y^{*})\leq \epsilon +g(x')}$  whenever ${\displaystyle |x'-x|<\delta }$ , and symmetrically ${\displaystyle g(x')<\epsilon +g(x)}$ . Hence,

${\displaystyle |g(x')-g(x)|<\epsilon }$ . ${\displaystyle \square }$

Problem Let ${\displaystyle f,g:\mathbf {R} \to \mathbf {R} }$  be continuous functions such that: ${\displaystyle f(g(x))=g(f(x))}$  for every ${\displaystyle x}$ . The equation ${\displaystyle f(f(x))=g(g(x))}$  has a solution if and only if ${\displaystyle f(x)=g(x)}$  has one.

(${\displaystyle \Rightarrow }$ ) is trivial. For (${\displaystyle \Leftarrow }$ ), suppose we have ${\displaystyle x}$  so that ${\displaystyle f(f(x))=g(g(x))}$ . Define ${\displaystyle h(y)=f(y)-g(y)}$  for ${\displaystyle y\in \mathbf {R} }$ . Then

${\displaystyle h(f(x))=f(f(x))-g(f(x))=g(g(x))-f(g(x))=-h(g(x))}$ .

Thus, ${\displaystyle h(f(x))+h(g(x))=0}$ . If ${\displaystyle h(f(x))=0}$ , then we are done. If ${\displaystyle h(f(x))<0}$ , then ${\displaystyle h(g(x))>0}$ , and since h is continuous, the intermediate value theorem gives ${\displaystyle h(z)=0}$  for some ${\displaystyle z}$  between ${\displaystyle f(x)}$  and ${\displaystyle g(x)}$ . The case ${\displaystyle h(f(x))>0}$  is symmetric. ${\displaystyle \square }$

Problem Suppose ${\displaystyle f:\mathbf {R} \to \mathbf {R} }$  is uniformly continuous. Then there are constants ${\displaystyle a,b}$  such that:

${\displaystyle |f(x)|\leq a|x|+b}$

for all ${\displaystyle x\in \mathbf {R} }$ .

There exists ${\displaystyle \delta >0}$  such that

${\displaystyle |f(x)-f(y)|<1}$  whenever ${\displaystyle |x-y|<\delta }$ .

Let ${\displaystyle x\geq 0}$ . Then ${\displaystyle n-1\leq {x \over \delta }<n}$  for some integer ${\displaystyle n\geq 1}$ . It follows:

${\displaystyle |f(x)|\leq |f(0)|+|f(0)-f(\delta )|+...+|f((n-1)\delta )-f(x)|\leq |f(0)|+n}$

Here, ${\displaystyle n\leq 1+{1 \over \delta }|x|}$ , so we may take ${\displaystyle a=1/\delta }$  and ${\displaystyle b=|f(0)|+1}$ . The estimate for ${\displaystyle x<0}$  is analogous. ${\displaystyle \square }$

Problem Let X be a compact metric space, and ${\displaystyle f:X\to X}$  be an isometry: i.e., ${\displaystyle d(f(x),f(y))=d(x,y)}$ . Then f is a bijection.

f is clearly injective. To show f is surjective, let ${\displaystyle x\in X}$ . Since X is compact, the sequence ${\displaystyle f^{n}(x)}$  contains a convergent, hence Cauchy, subsequence ${\displaystyle f^{n_{j}}(x)}$ . Since f is an isometry, for ${\displaystyle j<k}$ ,

${\displaystyle d(x,f^{n_{k}-n_{j}}(x))=d(f^{n_{j}}(x),f^{n_{k}}(x))\to 0\quad (j,k\to \infty )}$

In other words, ${\displaystyle x}$  is in the closure of ${\displaystyle f(X)}$ . Since ${\displaystyle f(X)}$  is the continuous image of a compact set, it is compact, hence closed; we conclude: ${\displaystyle x\in f(X)}$ . ${\displaystyle \square }$

Problem Let ${\displaystyle p_{n}}$  be a sequence of polynomials with degree ≤ some fixed D. If ${\displaystyle p_{n}}$  converges pointwise to 0 on [0, 1], then ${\displaystyle p_{n}}$  converges uniformly on [0, 1].

We first prove a weaker statement:

If ${\displaystyle p_{n}}$  converges pointwise on all but finitely many points in [0,1], then ${\displaystyle p_{n}}$  converges uniformly on all but finitely many points in [0,1].

We proceed by induction on D. If D = 0, then the claim is obvious. Suppose it is true for D - 1. We write:

${\displaystyle p_{n}(x)=p_{n}(x_{0})+(x-x_{0})q_{n}(x)}$

where ${\displaystyle x_{0}}$  is a point at which ${\displaystyle p_{n}(x_{0})}$  converges. Since the degree of ${\displaystyle q_{n}}$  is strictly less than that of ${\displaystyle p_{n}}$ , by inductive hypothesis, ${\displaystyle q_{n}}$  converges uniformly on the complement of some finite subset F of [0, 1]. Thus, ${\displaystyle p_{n}}$  converges uniformly on the complement of ${\displaystyle F\cup \{x_{0}\}}$ . This completes the proof of the claim. By the claim, ${\displaystyle p_{n}}$  converges uniformly except possibly at finitely many points. But a polynomial of degree ≤ D is determined, via Lagrange interpolation, by its values at D + 1 points where convergence does hold; since ${\displaystyle p_{n}}$  converges pointwise everywhere, the uniform convergence extends to all of [0, 1]. ${\displaystyle \square }$

Problem On a closed interval a monotone function has at most countably many discontinuous points.

Problem Prove that in ${\displaystyle \mathbf {R} ^{n}}$  the relation ${\displaystyle B_{r}(x)\supset B_{s}(y)}$  implies ${\displaystyle r>s}$ , and find a metric space in which the implication fails.

## Linear algebra

Throughout the section ${\displaystyle V}$  denotes a finite-dimensional vector space over the field of complex numbers.

Problem Given ${\displaystyle n>1}$ , find an ${\displaystyle n\times n}$  matrix ${\displaystyle A}$  with integer entries such that ${\displaystyle A\neq I}$  but ${\displaystyle A^{n}=I}$ .

The permutation matrix of a cyclic permutation of ${\displaystyle n}$  letters. ${\displaystyle \square }$
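
For instance (a sketch using NumPy), the cyclic-shift permutation matrix of size n satisfies ${\displaystyle A\neq I}$  and ${\displaystyle A^{n}=I}$ :

```python
import numpy as np

# The cyclic-shift permutation matrix has order exactly n.
n = 5
A = np.roll(np.eye(n, dtype=int), 1, axis=0)  # rows of I shifted cyclically
assert not np.array_equal(A, np.eye(n, dtype=int))                      # A != I
assert np.array_equal(np.linalg.matrix_power(A, n), np.eye(n, dtype=int))  # A^n = I
```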

Problem Let A be a real symmetric positive-definite matrix and b some fixed vector. Let ${\displaystyle \phi (x)=\langle Ax,x\rangle -2\langle x,b\rangle }$ . Then ${\displaystyle Az=b}$  if and only if ${\displaystyle \phi (z)\leq \phi (x)}$  for all ${\displaystyle x}$ .

Note that A is invertible. Fix ${\displaystyle x\neq 0}$  and let ${\displaystyle f(t)=\phi (A^{-1}b+tx)}$ . Then

${\displaystyle f'(t)=2t\langle Ax,x\rangle }$

Since ${\displaystyle f''(0)=2\langle Ax,x\rangle >0}$  by positive-definiteness, ${\displaystyle t=0}$  is the strict minimum of f. Hence ${\displaystyle z=A^{-1}b}$  is the unique minimizer of ${\displaystyle \phi }$ , which is the claim. ${\displaystyle \square }$

Problem If ${\displaystyle \operatorname {tr} (AB)=0}$  for all square matrices ${\displaystyle B}$ , then ${\displaystyle A=0}$

Take ${\displaystyle B}$  to be the matrix unit ${\displaystyle E_{ji}}$ ; then ${\displaystyle \operatorname {tr} (AE_{ji})=a_{ij}=0}$  for all i, j. (For real matrices, ${\displaystyle B=A^{T}}$  also works: ${\displaystyle \operatorname {tr} (AA^{T})=\sum _{i,j}a_{ij}^{2}}$ .) ${\displaystyle \square }$

Problem Let x be a square matrix over a field of characteristic zero. If ${\displaystyle \operatorname {tr} (x^{k})=0}$  for all ${\displaystyle k>0}$ , then ${\displaystyle x}$  is nilpotent.

We may assume the field is algebraically closed. Suppose ${\displaystyle x}$  is not nilpotent, so it has distinct nonzero eigenvalues ${\displaystyle \lambda _{1},\lambda _{2},...,\lambda _{n}}$  with multiplicities ${\displaystyle m_{1},...,m_{n}}$ . The hypothesis then gives the system of linear equations:

${\displaystyle \{m_{1}\lambda _{1}^{k}+m_{2}\lambda _{2}^{k}+...+m_{n}\lambda _{n}^{k}=0\}_{1\leq k\leq n}}$

The coefficient matrix has determinant ${\displaystyle \lambda _{1}\cdots \lambda _{n}}$  times a Vandermonde determinant, which is nonzero; so ${\displaystyle m_{1}=...=m_{n}=0}$ . But the multiplicities are positive integers, hence nonzero in characteristic zero, a contradiction. ${\displaystyle \square }$
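
A hedged numerical illustration of both ingredients (the example matrices are our own):

```python
import numpy as np

# (1) Every power of a nilpotent matrix has trace 0.
N = np.array([[0., 1., 0.], [0., 0., 1.], [0., 0., 0.]])  # nilpotent: N^3 = 0
assert all(abs(np.trace(np.linalg.matrix_power(N, k))) < 1e-12 for k in range(1, 4))

# (2) For distinct nonzero eigenvalues, the system's coefficient matrix
# (a scaled Vandermonde matrix) has nonzero determinant, so m = 0 only.
lam = np.array([1.0, 2.0, 3.0])                # hypothetical eigenvalues
M = np.array([lam ** k for k in range(1, 4)])  # row k: (lambda_i^k)_i
assert abs(np.linalg.det(M)) > 1e-9
```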

Problem Let ${\displaystyle S,T}$  be square matrices of the same size. Then ${\displaystyle ST}$  and ${\displaystyle TS}$  have the same eigenvalues.

Let ${\displaystyle \lambda }$  be an eigenvalue of ${\displaystyle ST}$ . If ${\displaystyle \lambda =0}$ , then ${\displaystyle 0=\det(ST)=\det(TS)}$ . Thus, ${\displaystyle \lambda }$  is an eigenvalue of ${\displaystyle TS}$ . If ${\displaystyle \lambda \neq 0}$ , then ${\displaystyle STx=\lambda x}$  for some nonzero ${\displaystyle x}$ . Thus, ${\displaystyle (TS)Tx=\lambda Tx}$ . Since ${\displaystyle Tx=0}$  implies ${\displaystyle \lambda x=0}$ , a contradiction, ${\displaystyle Tx}$  is an eigenvector. Hence, ${\displaystyle \lambda }$  is an eigenvalue of ${\displaystyle TS}$ . We thus proved that every eigenvalue of ${\displaystyle ST}$  is an eigenvalue of ${\displaystyle TS}$ . By the same argument, every eigenvalue of ${\displaystyle TS}$  is an eigenvalue of ${\displaystyle ST}$ . ${\displaystyle \square }$

Problem Let ${\displaystyle S,T}$  be square matrices of the same size. Then ${\displaystyle ST}$  and ${\displaystyle TS}$  have the same eigenvalues with same multiplicity.

If S is invertible, then

${\displaystyle ST=S(TS)S^{-1}}$

and thus

${\displaystyle \operatorname {det} (TS-\lambda I)=\operatorname {det} (ST-\lambda I)}$ .

If S is not invertible, then ${\displaystyle S+tI}$  is invertible for all sufficiently small ${\displaystyle t\neq 0}$ , since S has only finitely many eigenvalues. Thus,

${\displaystyle \operatorname {det} (T(S+tI)-\lambda I)=\operatorname {det} ((S+tI)T-\lambda I)}$

and letting ${\displaystyle t\to 0}$  gives the same identity, since both sides are polynomials in t. In any case, TS and ST share the same eigenvalues with the same multiplicities.${\displaystyle \square }$
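
A quick randomized check (illustrative only) that ${\displaystyle ST}$  and ${\displaystyle TS}$  share a characteristic polynomial even when S is singular:

```python
import numpy as np

# Compare the characteristic polynomials of ST and TS for a random T
# and a deliberately singular S.
rng = np.random.default_rng(0)
T = rng.standard_normal((4, 4))
S = rng.standard_normal((4, 4))
S[:, 0] = 0.0                 # make S singular on purpose
pST = np.poly(S @ T)          # coefficients of det(xI - ST)
pTS = np.poly(T @ S)          # coefficients of det(xI - TS)
assert np.allclose(pST, pTS)  # same eigenvalues with multiplicity
```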

Problem Let ${\displaystyle A}$  be a square matrix over the complex numbers. A is Hermitian (self-adjoint) if and only if

${\displaystyle \langle Ax,x\rangle }$

is real for every x.

(${\displaystyle \Rightarrow }$ ) is obvious. (${\displaystyle \Leftarrow }$ ) Since ${\displaystyle \langle Ax,x\rangle }$  is real, it equals its complex conjugate ${\displaystyle \langle x,Ax\rangle }$ ; hence

${\displaystyle \langle Ax,x\rangle =\langle A^{*}x,x\rangle }$

Now recall that the numerical radius

${\displaystyle w(T)=\sup _{\|x\|=1}|\langle Tx,x\rangle |}$

is a norm on operators on a complex Hilbert space; applying it to ${\displaystyle T=A-A^{*}}$  gives ${\displaystyle A=A^{*}}$ . ${\displaystyle \square }$

Problem Suppose the square matrix ${\displaystyle A=(a_{ij})}$  satisfies:

${\displaystyle |a_{ii}|>\sum _{j\neq i}|a_{ij}|}$

for all ${\displaystyle i}$ . Then ${\displaystyle A}$  is invertible.

Suppose ${\displaystyle Ax=0}$ . Then, in particular, each component of ${\displaystyle Ax}$  is zero; i.e.,

${\displaystyle 0=\sum _{j}a_{ij}x_{j}=a_{ii}x_{i}+\sum _{j\neq i}a_{ij}x_{j}}$

Since ${\displaystyle a_{ii}x_{i}=-\sum _{j\neq i}a_{ij}x_{j}}$ , the triangle inequality gives:

${\displaystyle |a_{ii}||x_{i}|=|\sum _{j\neq i}a_{ij}x_{j}|\leq \sum _{j\neq i}|a_{ij}||x_{j}|}$

for all ${\displaystyle i}$ . Pick ${\displaystyle k}$  such that ${\displaystyle \max\{|x_{1}|,|x_{2}|,...|x_{n}|\}=|x_{k}|}$ . Then, by hypothesis,

${\displaystyle |a_{kk}||x_{k}|\leq (\sum _{j\neq k}|a_{kj}|)|x_{k}|<|a_{kk}||x_{k}|}$ ,

which is absurd, unless ${\displaystyle |x_{k}|=0}$ . Hence, ${\displaystyle x=0}$ . ${\displaystyle \square }$
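
A numerical sketch (with a randomly generated, deliberately dominant matrix) of the statement:

```python
import numpy as np

# Build a strictly diagonally dominant matrix by inflating the diagonal,
# then verify dominance and invertibility.
rng = np.random.default_rng(1)
A = rng.uniform(-1, 1, size=(5, 5))
for i in range(5):
    # set |a_ii| = (sum of off-diagonal |a_ij|) + 1 > sum_{j != i} |a_ij|
    A[i, i] = np.sum(np.abs(A[i])) - np.abs(A[i, i]) + 1.0

offdiag = np.sum(np.abs(A), axis=1) - np.abs(np.diag(A))
assert np.all(np.abs(np.diag(A)) > offdiag)  # strict diagonal dominance
assert abs(np.linalg.det(A)) > 1e-12         # hence invertible
```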

Problem Let ${\displaystyle T,S\in \operatorname {End} (V)}$ . If ${\displaystyle V}$  is finite-dimensional, then prove ${\displaystyle TS}$  is invertible if and only if ${\displaystyle ST}$  is invertible. Is this also true when ${\displaystyle V}$  is infinite-dimensional?

For the first part, use determinants: ${\displaystyle \det(TS)=\det(T)\det(S)=\det(ST)}$ . For the second, no: take T and S to be the left and right shift operators on a sequence space, so that ${\displaystyle TS}$  is the identity while ${\displaystyle ST}$  is not injective. ${\displaystyle \square }$

Problem: Let ${\displaystyle T,S}$  be linear operators on ${\displaystyle V}$ . Then

${\displaystyle \operatorname {dim} \operatorname {ker} (TS)\leq \operatorname {dim} \operatorname {ker} (S)+\operatorname {dim} \operatorname {ker} (T)}$

The map

${\displaystyle S:\operatorname {ker} (TS)\to \operatorname {ker} T,\quad x\mapsto Sx,}$

is well-defined, since ${\displaystyle x\in \operatorname {ker} (TS)}$  means ${\displaystyle T(Sx)=0}$ . Hence, by rank-nullity applied to this restriction,

${\displaystyle \operatorname {dim} \operatorname {ker} (T)\geq \operatorname {dim} \operatorname {ker} (TS)-\operatorname {dim} \operatorname {ker} (S|_{\operatorname {ker} (TS)})\geq \operatorname {dim} \operatorname {ker} (TS)-\operatorname {dim} \operatorname {ker} (S).}$  ${\displaystyle \square }$

Problem Every matrix (over an arbitrary field) is similar to its transpose.

The shortest proof uses the Smith normal form: a matrix A is similar to a matrix B if and only if ${\displaystyle XI-A}$  and ${\displaystyle XI-B}$  have the same Smith normal form over the polynomial ring. This is the case when B is the transpose of A, since ${\displaystyle XI-A^{T}=(XI-A)^{T}}$  and a matrix and its transpose have the same Smith normal form. ${\displaystyle \square }$

Problem Every nonzero eigenvalue of a skew-symmetric matrix is pure imaginary.

Problem If the trace of a matrix ${\displaystyle A}$  is zero, then ${\displaystyle A}$  is similar to a matrix with the main diagonal consisting of only zeros.

Problem ${\displaystyle \operatorname {rank} (A^{n})-\operatorname {rank} (A^{n-1})\leq \operatorname {rank} (A^{n+1})-\operatorname {rank} (A^{n})}$  for any square matrix ${\displaystyle A}$ .

Problem: Every square matrix is similar to an upper-triangular matrix.

Jordan form or Schur form.

Problem: Let A be a normal matrix. Then ${\displaystyle A^{*}}$  is a polynomial in A.

Problem: Let A be a normal matrix. Then:

${\displaystyle \|A\|=\max _{|x|=1}|(Ax\mid x)|=\sup _{\lambda \in \operatorname {Sp} (A)}|\lambda |}$

Problem: Let A be a square matrix. Then ${\displaystyle A^{n}\to 0}$  as ${\displaystyle n\to \infty }$  (in operator norm) if and only if the spectral radius of ${\displaystyle A}$  is ${\displaystyle <1}$

Problem: Let A be a square matrix. Then ${\displaystyle \|A\|=\|A^{*}A\|^{1/2}}$

Problem: ${\displaystyle T\mapsto \sup _{\|x\|=1}|(Tx\mid x)|}$  is a norm for bounded operators T on a "complex" Hilbert space.

It is clear that the map is a seminorm. To see it is a norm, suppose ${\displaystyle (Tx\mid x)=0}$  for all x. Then, for all x and y,

${\displaystyle 0=(T(x+y)\mid x+y)=(Tx\mid y)+(Ty\mid x)}$

${\displaystyle 0=(T(x+iy)\mid x+iy)=-i(Tx\mid y)+i(Ty\mid x)}$

Combining the two gives ${\displaystyle (Tx\mid y)=0}$  for all x and y. Take ${\displaystyle y=Tx}$  to get ${\displaystyle Tx=0}$  for all x; hence ${\displaystyle T=0}$ . ${\displaystyle \square }$