# Topological Modules/Printable version

Topological Modules

The current, editable version of this book is available in Wikibooks, the open-content textbooks collection, at
https://en.wikibooks.org/wiki/Topological_Modules

Permission is granted to copy, distribute, and/or modify this document under the terms of the Creative Commons Attribution-ShareAlike 3.0 License.

# Constructions

Definition (quotient topological module):

Let ${\displaystyle M}$ be a topological module over the topological ring ${\displaystyle R}$, and let ${\displaystyle N\leq M}$ be a submodule, endowed with the subspace topology. Then the module ${\displaystyle M/N}$ together with the quotient topology, i.e. the final topology induced by the quotient map ${\displaystyle q:M\to M/N}$, is called the quotient module of ${\displaystyle M}$ by ${\displaystyle N}$.

Proposition (quotient map of topological quotient is open):

Let ${\displaystyle M}$ be a topological module and ${\displaystyle N\leq M}$ a submodule. Then the map ${\displaystyle q:M\to M/N}$ is open.

Proof: Let ${\displaystyle U\subseteq M}$ be any open set. We have

${\displaystyle q^{-1}(q(U))=\bigcup _{n\in N}U+n}$

which is open as a union of open sets, each translate ${\displaystyle U+n}$ being open because translation by ${\displaystyle n}$ is a homeomorphism of ${\displaystyle M}$. Since the quotient topology is final with respect to ${\displaystyle q}$, it follows that ${\displaystyle q(U)}$ is open. ${\displaystyle \Box }$
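The displayed identity can be made concrete (a numerical sketch of our own, not part of the text): take ${\displaystyle M=\mathbb {R} }$, ${\displaystyle N=\mathbb {Z} }$ and an open interval ${\displaystyle U}$, and check pointwise that ${\displaystyle q^{-1}(q(U))}$ is the union of the translates ${\displaystyle U+n}$.

```python
import math

# Illustration for M = R, N = Z and the open set U = (0.2, 0.5).

def in_U(x, lo=0.2, hi=0.5):
    return lo < x < hi

def in_preimage_of_image(x):
    # x lies in q^{-1}(q(U)) iff x - n lies in U for some integer n,
    # i.e. iff the fractional part of x lies in U (U fits inside [0, 1))
    return in_U(x - math.floor(x))

# check the set identity q^{-1}(q(U)) = union over n of (U + n) on sample points
for i in range(-300, 300):
    x = i / 100
    in_union = any(in_U(x - n) for n in range(-4, 5))
    assert in_union == in_preimage_of_image(x)
```

Here the union over ${\displaystyle n\in \mathbb {Z} }$ can be truncated to finitely many translates because the sample points lie in a bounded interval.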

Proposition (quotient topological module is topological module):

Let ${\displaystyle M}$ be a topological module and ${\displaystyle N\leq M}$ a submodule. Then the quotient module ${\displaystyle M/N}$ is a topological module with the quotient topology.

Proof: Addition on ${\displaystyle M/N}$ satisfies ${\displaystyle +_{M/N}\circ (q\times q)=q\circ +_{M}}$, where ${\displaystyle q:M\to M/N}$ is the quotient map. Now ${\displaystyle q\times q}$ is a continuous, open surjection (as a product of such maps) and hence a quotient map, and ${\displaystyle q\circ +_{M}}$ is continuous; therefore the induced addition on ${\displaystyle M/N}$ is continuous. The same argument, applied to ${\displaystyle \operatorname {id} _{R}\times q}$ (again a continuous, open surjection), yields the continuity of the scalar multiplication ${\displaystyle R\times M/N\to M/N}$. ${\displaystyle \Box }$

# Banach spaces

Definition (Banach space):

A Banach space is a complete normed space.

Proposition (series criterion for Banach spaces):

Let ${\displaystyle X}$ be a normed space with norm ${\displaystyle \|\cdot \|}$. Then ${\displaystyle X}$ is a Banach space if and only if

${\displaystyle \sum _{n=1}^{\infty }\|x_{n}\|<\infty }$ implies that ${\displaystyle \lim _{N\to \infty }\sum _{n=1}^{N}x_{n}}$ exists in ${\displaystyle X}$,

whenever ${\displaystyle (x_{n})_{n\in \mathbb {N} }}$ is a sequence in ${\displaystyle X}$.

Proof: Suppose first that ${\displaystyle X}$ is a Banach space, and suppose that ${\displaystyle \sum _{n=1}^{\infty }\|x_{n}\|}$ converges, where ${\displaystyle (x_{n})_{n\in \mathbb {N} }}$ is a sequence in ${\displaystyle X}$. Set ${\displaystyle S_{N}:=\sum _{n=1}^{N}x_{n}}$; we claim that ${\displaystyle (S_{N})_{N\in \mathbb {N} }}$ is a Cauchy sequence. Indeed, let ${\displaystyle \epsilon >0}$; since the tails of a convergent series tend to zero, for ${\displaystyle M}$ sufficiently large we have

${\displaystyle N\geq M\Rightarrow \|S_{M}-S_{N}\|=\left\|\sum _{n=M+1}^{N}x_{n}\right\|\leq \sum _{n=M+1}^{N}\|x_{n}\|\leq \sum _{n=M+1}^{\infty }\|x_{n}\|<\epsilon }$.

Hence, ${\displaystyle (S_{N})_{N\in \mathbb {N} }}$ converges, because ${\displaystyle X}$ is a Banach space.

Now suppose that for all sequences ${\displaystyle (x_{n})_{n\in \mathbb {N} }}$ the implication

${\displaystyle \sum _{n=1}^{\infty }\|x_{n}\|<\infty \Rightarrow \lim _{N\to \infty }\sum _{n=1}^{N}x_{n}{\text{ exists in }}X}$

holds. Now let ${\displaystyle (y_{n})_{n\in \mathbb {N} }}$ be a Cauchy sequence in ${\displaystyle X}$. By the Cauchy property, choose, for each ${\displaystyle k\in \mathbb {N} }$, a number ${\displaystyle N_{k}\in \mathbb {N} }$ such that ${\displaystyle \|y_{m}-y_{n}\|<1/2^{k}}$ whenever ${\displaystyle m,n\geq N_{k}}$. We may assume that ${\displaystyle N_{1}\leq N_{2}\leq \cdots \leq N_{k}\leq \cdots }$, i.e. that ${\displaystyle (N_{k})_{k\in \mathbb {N} }}$ is an ascending sequence of natural numbers. Then define ${\displaystyle x_{1}:=y_{N_{1}}}$ and, for ${\displaystyle k\geq 2}$, set ${\displaystyle x_{k}:=y_{N_{k}}-y_{N_{k-1}}}$; note that ${\displaystyle \|x_{k}\|<1/2^{k-1}}$ for ${\displaystyle k\geq 2}$, since ${\displaystyle N_{k},N_{k-1}\geq N_{k-1}}$. Then

${\displaystyle \sum _{j=1}^{k}x_{j}=y_{N_{k}}}$.

Moreover,

${\displaystyle \sum _{j=1}^{k}\|x_{j}\|\leq \|y_{N_{1}}\|+\sum _{j=2}^{k}2^{-(j-1)}}$,

so that

${\displaystyle \sum _{j=1}^{\infty }\|x_{j}\|}$

converges, since its partial sums form a monotonically increasing sequence that is bounded above. By the assumption, the sequence ${\displaystyle (S_{k})_{k\in \mathbb {N} }}$ converges, where

${\displaystyle S_{k}=\sum _{j=1}^{k}x_{j}=y_{N_{k}}}$.

Thus, ${\displaystyle (y_{n})_{n\in \mathbb {N} }}$ is a Cauchy sequence that has a convergent subsequence and is hence convergent. ${\displaystyle \Box }$
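The criterion can be illustrated numerically in the Banach space ${\displaystyle X=\mathbb {R} }$ (a sketch; the alternating series below is our own choice of example):

```python
import math

# The series sum (-1)^(n+1)/n^2 converges absolutely in X = R, so by the
# criterion its partial sums S_N converge (to pi^2/12, as it happens).
terms = [(-1) ** (n + 1) / n**2 for n in range(1, 10001)]

partials = []
s = 0.0
for t in terms:
    s += t
    partials.append(s)

# tail bound from the proof: |S_N - S_M| <= sum_{n=M+1}^{N} |x_n|
M, N = 100, 10000
tail = sum(abs(t) for t in terms[M:N])  # terms[i] corresponds to n = i + 1
assert abs(partials[N - 1] - partials[M - 1]) <= tail + 1e-12

# the limit exists; for this particular series it equals pi^2/12
assert abs(partials[-1] - math.pi**2 / 12) < 1e-4
```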

# Hahn–Banach theorems

Theorem (geometrical Hahn–Banach theorem):

Let ${\displaystyle V}$ be a real topological vector space, and let ${\displaystyle U\subseteq V}$ be open and convex such that ${\displaystyle 0\notin U}$. Then there exists a hyperplane ${\displaystyle W\leq V}$ not intersecting ${\displaystyle U}$.

(This theorem depends on the axiom of choice.)

Proof: The set of all vector subspaces of ${\displaystyle V}$ that do not intersect ${\displaystyle U}$ is inductively ordered by inclusion and nonempty (it contains the zero subspace, since ${\displaystyle 0\notin U}$). Hence, by Zorn's lemma, pick a maximal vector subspace ${\displaystyle W\leq V}$ that does not intersect ${\displaystyle U}$. We claim that ${\displaystyle W}$ is a hyperplane. If not, ${\displaystyle V/W}$ has dimension ${\displaystyle \geq 2}$. Now the canonical map ${\displaystyle p:V\to V/W}$ is open, so that ${\displaystyle U':=p(U)}$ is an open, convex subset of ${\displaystyle V/W}$ with ${\displaystyle 0\notin U'}$ (since ${\displaystyle W}$ does not intersect ${\displaystyle U}$). We consider the cone

${\displaystyle C:=\bigcup _{\lambda >0}\lambda U'}$

and note that it has a nonzero boundary point; for otherwise ${\displaystyle C}$ would be clopen in ${\displaystyle V/W\setminus \{0\}}$, which is path-connected (indeed, by assumption ${\displaystyle \operatorname {dim} V/W\geq 2}$, so that for any two points ${\displaystyle x,y\in V/W}$ we find a 2-dimensional plane containing both, and by using a "corner point" ${\displaystyle z}$ when ${\displaystyle x,y}$ lie on a common line through the origin, we may connect them in ${\displaystyle V/W\setminus \{0\}}$, because a segment in a TVS yields a continuous path by continuity of addition and scalar multiplication), so that ${\displaystyle C=V/W\setminus \{0\}}$. This is impossible, because for any ${\displaystyle x\neq 0}$ in ${\displaystyle V/W}$ we would then have ${\displaystyle x\in \mu U'}$ and ${\displaystyle -x\in \lambda U'}$ for some ${\displaystyle \mu ,\lambda >0}$, so that ${\displaystyle 0\in U'}$ by convexity, a contradiction. Hence, let ${\displaystyle w\in \partial C}$ be nonzero. Then the line ${\displaystyle L}$ generated by ${\displaystyle w}$ does not intersect ${\displaystyle C}$, and hence does not intersect ${\displaystyle U'}$; consequently ${\displaystyle p^{-1}(L)}$ is a subspace of ${\displaystyle V}$ that does not intersect ${\displaystyle U}$ and strictly contains ${\displaystyle W}$, in contradiction to the maximality of the latter. ${\displaystyle \Box }$
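A minimal concrete instance may clarify the statement (our own illustration; the sets ${\displaystyle U}$ and ${\displaystyle W}$ below are chosen for simplicity):

```latex
% V = R^2, U = the open unit disc centred at (2,0): open, convex, 0 not in U.
\[
  U = \{ (x,y) \in \mathbb{R}^2 : (x-2)^2 + y^2 < 1 \},
  \qquad
  W = \{ (0,t) : t \in \mathbb{R} \}.
\]
% Every (x,y) in U satisfies x > 1, so the hyperplane W (here: a line
% through the origin) does not intersect U.
```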

# Barrelled spaces

Proposition (pointwise limit of continuous linear functions from a barrelled LCTVS into a Hausdorff TVS is continuous and linear):

Let ${\displaystyle X}$ be a barrelled LCTVS over a field ${\displaystyle \mathbb {K} }$, let ${\displaystyle Y}$ be a locally closed Hausdorff TVS over the same field (i.e. one in which the closed neighbourhoods of the origin form a neighbourhood basis), and suppose that a net ${\displaystyle T_{\lambda }:X\to Y}$ (${\displaystyle \lambda \in \Lambda }$) of linear and continuous functions is given. Suppose further that ${\displaystyle T:X\to Y}$ is a function such that

${\displaystyle \lim _{\lambda \in \Lambda }T_{\lambda }(x)=T(x)}$ for all ${\displaystyle x\in X}$.

Then ${\displaystyle T}$ is itself a linear and continuous function.

Proof: First note that ${\displaystyle T}$ is linear, since whenever ${\displaystyle \alpha \in \mathbb {K} }$ and ${\displaystyle v,w\in X}$, we have

${\displaystyle T(v+\alpha w)=\lim _{\lambda \in \Lambda }(T_{\lambda }(v)+\alpha T_{\lambda }(w))=T(v)+\alpha T(w)}$

since ${\displaystyle Y}$ is a Hausdorff space, where limits are unique, and by continuity of addition and scalar multiplication. Then note that ${\displaystyle T}$ is continuous, since for all ${\displaystyle v\in X}$ the set ${\displaystyle \{T_{\lambda }(v)|\lambda \in \Lambda \}}$ is bounded, so that the Banach–Steinhaus theorem applies and the family ${\displaystyle \{T_{\lambda }|\lambda \in \Lambda \}}$ is equicontinuous. Hence, suppose that ${\displaystyle V\subseteq Y}$ is a closed neighbourhood of the origin. By equicontinuity, select an open neighbourhood ${\displaystyle U\subseteq X}$ of the origin such that

${\displaystyle \forall \lambda \in \Lambda :T_{\lambda }(U)\subseteq V}$.

We conclude that ${\displaystyle T(U)\subseteq V}$, since ${\displaystyle V}$ is closed and closed sets contain their net limits. Since ${\displaystyle Y}$ is locally closed, the closed neighbourhoods of the origin form a neighbourhood basis, so that ${\displaystyle T}$ is continuous at the origin and hence, being linear, continuous. ${\displaystyle \Box }$
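A toy finite-dimensional sketch (${\displaystyle \mathbb {R} ^{2}}$ is barrelled; the maps ${\displaystyle T_{n}}$ below are our own choice): the pointwise limit of the continuous linear maps ${\displaystyle T_{n}(v)=(1+1/n)v}$ is again linear.

```python
# T_n(v) = (1 + 1/n) * v on R^2; T_n -> T = identity pointwise as n -> infinity.

def T_n(n, v):
    return [(1 + 1 / n) * c for c in v]

def T(v):
    return list(v)

v, w, alpha = [1.0, -2.0], [0.5, 3.0], 2.0

# pointwise convergence: |T_n(v)_i - T(v)_i| = |v_i| / n -> 0
for n in (10, 100, 1000):
    assert max(abs(a - b) for a, b in zip(T_n(n, v), T(v))) <= max(map(abs, v)) / n + 1e-12

# the limit is linear: T(v + alpha * w) = T(v) + alpha * T(w)
lhs = T([a + alpha * b for a, b in zip(v, w)])
rhs = [a + alpha * b for a, b in zip(T(v), T(w))]
assert all(abs(a - b) < 1e-12 for a, b in zip(lhs, rhs))
```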

# Topological tensor products

## Tensor product of Hilbert spaces

Proposition (tensor product of orthonormal bases is orthonormal basis of tensor product):

Let ${\displaystyle H_{1},H_{2}}$ be Hilbert spaces, and suppose that ${\displaystyle (e_{\lambda })_{\lambda \in \Lambda }}$ is an orthonormal basis of ${\displaystyle H_{1}}$ and ${\displaystyle (f_{\mu })_{\mu \in \mathrm {M} }}$ is an orthonormal basis of ${\displaystyle H_{2}}$. Then ${\displaystyle (e_{\lambda }\otimes f_{\mu })_{(\lambda ,\mu )\in \Lambda \times \mathrm {M} }}$ is an orthonormal basis of ${\displaystyle H_{1}\otimes H_{2}}$.

Proof: Let any element

${\displaystyle \sum _{j=1}^{n}f_{j}\otimes g_{j}}$

of ${\displaystyle H_{1}\otimes H_{2}}$ be given; by definition of the tensor product, every element of ${\displaystyle H_{1}\otimes H_{2}}$ may be approximated by elements of this form, so that it suffices to approximate these. Let ${\displaystyle \epsilon >0}$. Then by definition of an orthonormal basis, we find ${\displaystyle m_{j},l_{j}}$ for ${\displaystyle j\in [n]}$ and ${\displaystyle \alpha _{j,k},\beta _{j,k},\lambda _{j,k},\mu _{j,k}}$ for ${\displaystyle j\in [n]}$ and then ${\displaystyle k\in [m_{j}]}$ resp. ${\displaystyle [l_{j}]}$ such that

${\displaystyle \left\|\sum _{k=1}^{m_{j}}\alpha _{j,k}e_{\lambda _{j,k}}-f_{j}\right\|<\epsilon }$ and ${\displaystyle \left\|\sum _{k=1}^{l_{j}}\beta _{j,k}f_{\mu _{j,k}}-g_{j}\right\|<\epsilon }$.

Then note that by the triangle inequality,

${\displaystyle \left\|\sum _{j=1}^{n}f_{j}\otimes g_{j}-\sum _{j=1}^{n}\left(\sum _{k=1}^{m_{j}}\alpha _{j,k}e_{\lambda _{j,k}}\right)\otimes \left(\sum _{k=1}^{l_{j}}\beta _{j,k}f_{\mu _{j,k}}\right)\right\|\leq \sum _{j=1}^{n}\left\|f_{j}\otimes g_{j}-\left(\sum _{k=1}^{m_{j}}\alpha _{j,k}e_{\lambda _{j,k}}\right)\otimes \left(\sum _{k=1}^{l_{j}}\beta _{j,k}f_{\mu _{j,k}}\right)\right\|}$.

Now fix ${\displaystyle j\in [n]}$. Then by the triangle inequality,

{\displaystyle {\begin{aligned}\left\|f_{j}\otimes g_{j}-\left(\sum _{k=1}^{m_{j}}\alpha _{j,k}e_{\lambda _{j,k}}\right)\otimes \left(\sum _{k=1}^{l_{j}}\beta _{j,k}f_{\mu _{j,k}}\right)\right\|&\leq \left\|f_{j}\otimes g_{j}-\left(\sum _{k=1}^{m_{j}}\alpha _{j,k}e_{\lambda _{j,k}}\right)\otimes g_{j}\right\|+\left\|\left(\sum _{k=1}^{m_{j}}\alpha _{j,k}e_{\lambda _{j,k}}\right)\otimes g_{j}-\left(\sum _{k=1}^{m_{j}}\alpha _{j,k}e_{\lambda _{j,k}}\right)\otimes \left(\sum _{k=1}^{l_{j}}\beta _{j,k}f_{\mu _{j,k}}\right)\right\|\\&=\left\|f_{j}-\left(\sum _{k=1}^{m_{j}}\alpha _{j,k}e_{\lambda _{j,k}}\right)\right\|\|g_{j}\|+\left\|\sum _{k=1}^{m_{j}}\alpha _{j,k}e_{\lambda _{j,k}}\right\|\left\|g_{j}-\left(\sum _{k=1}^{l_{j}}\beta _{j,k}f_{\mu _{j,k}}\right)\right\|\\&\leq \epsilon \left(\|g_{j}\|+\left\|\sum _{k=1}^{m_{j}}\alpha _{j,k}e_{\lambda _{j,k}}\right\|\right).\end{aligned}}}

In total, we obtain that

${\displaystyle \left\|\sum _{j=1}^{n}f_{j}\otimes g_{j}-\sum _{j=1}^{n}\left(\sum _{k=1}^{m_{j}}\alpha _{j,k}e_{\lambda _{j,k}}\right)\otimes \left(\sum _{k=1}^{l_{j}}\beta _{j,k}f_{\mu _{j,k}}\right)\right\|\leq \epsilon \sum _{j=1}^{n}\left(\|g_{j}\|+2\|f_{j}\|\right)}$

(assuming that the given sum approximates ${\displaystyle f_{j}}$ well enough that ${\displaystyle \left\|\sum _{k=1}^{m_{j}}\alpha _{j,k}e_{\lambda _{j,k}}\right\|\leq 2\|f_{j}\|}$), which is arbitrarily small, so that the span of tensors of the form ${\displaystyle e_{\lambda }\otimes f_{\mu }}$ is dense in ${\displaystyle H_{1}\otimes H_{2}}$. Now we claim that the family is orthonormal. Indeed, suppose that ${\displaystyle (\lambda ,\mu )\neq (\lambda ',\mu ')}$; then ${\displaystyle \lambda \neq \lambda '}$ or ${\displaystyle \mu \neq \mu '}$, so that at least one of the factors in the following product vanishes:

${\displaystyle \langle e_{\lambda }\otimes f_{\mu },e_{\lambda '}\otimes f_{\mu '}\rangle =\langle e_{\lambda },e_{\lambda '}\rangle \langle f_{\mu },f_{\mu '}\rangle =0}$.

Similarly, the above expression evaluates to ${\displaystyle 1}$ when ${\displaystyle \lambda =\lambda '}$ and ${\displaystyle \mu =\mu '}$. Hence, ${\displaystyle (e_{\lambda }\otimes f_{\mu })_{(\lambda ,\mu )\in \Lambda \times \mathrm {M} }}$ does constitute an orthonormal basis of ${\displaystyle H_{1}\otimes H_{2}}$. ${\displaystyle \Box }$
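A finite-dimensional sketch (with ${\displaystyle H_{1}=H_{2}=\mathbb {R} ^{2}}$ and bases of our own choosing): the Kronecker products of two orthonormal bases form an orthonormal basis of ${\displaystyle \mathbb {R} ^{4}\cong \mathbb {R} ^{2}\otimes \mathbb {R} ^{2}}$.

```python
# Orthonormal bases of H1 = H2 = R^2 (the first one a rotation with cos = 3/5)
e = [[0.6, 0.8], [-0.8, 0.6]]
f = [[1.0, 0.0], [0.0, 1.0]]

def kron(u, v):
    # coordinate vector of u (x) v in R^4
    return [a * b for a in u for b in v]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

basis = [kron(ei, fj) for ei in e for fj in f]

# <e_i (x) f_j, e_k (x) f_l> = <e_i, e_k> <f_j, f_l> = delta_ik * delta_jl
for i, u in enumerate(basis):
    for j, v in enumerate(basis):
        expected = 1.0 if i == j else 0.0
        assert abs(dot(u, v) - expected) < 1e-12
```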

# Orthogonal projection

Theorem (Von Neumann ergodic theorem):

Let ${\displaystyle H}$ be a Hilbert space, and let ${\displaystyle U:H\to H}$ be a unitary operator. Further, let the orthogonal projection onto the closed subspace ${\displaystyle W:=\{x\in H|Ux=x\}}$ be given by ${\displaystyle P_{W}:H\to H}$. Then for every ${\displaystyle x\in H}$,

${\displaystyle \lim _{n\to \infty }{\frac {1}{n}}\sum _{k=1}^{n}U^{k}x=P_{W}(x)}$,

that is, the averages converge to ${\displaystyle P_{W}}$ in the strong operator topology on ${\displaystyle B(H)}$, the space of bounded operators on ${\displaystyle H}$. (Convergence in the operator norm may fail: for the rotation ${\displaystyle U=e^{i\theta }\cdot \operatorname {id} }$ on ${\displaystyle H=\mathbb {C} }$ with ${\displaystyle \theta >0}$ small, ${\displaystyle W=\{0\}}$, yet the averages have norm at least ${\displaystyle \cos(1)>1/2}$ whenever ${\displaystyle n\theta \leq 1}$.) Moreover, whenever ${\displaystyle x-P_{W}(x)=Uw-w}$ for some ${\displaystyle w\in H}$, the inequality

${\displaystyle \left\|{\frac {1}{n}}\sum _{k=1}^{n}U^{k}x-P_{W}(x)\right\|\leq {\frac {2\|w\|}{n}}}$

is a valid estimate for the convergence rate.

Proof: Suppose first that ${\displaystyle x\in H}$ and ${\displaystyle z\in W}$. Then

{\displaystyle {\begin{aligned}\langle {\frac {1}{n}}\sum _{k=1}^{n}U^{k}x-P_{W}(x),z\rangle &=\langle {\frac {1}{n}}\sum _{k=1}^{n}U^{k}x-P_{W}(x),z\rangle +{\frac {n}{n}}\langle P_{W}(x)-x,z\rangle \\&=\langle {\frac {1}{n}}\sum _{k=1}^{n}U^{k}x-x,z\rangle \\&={\frac {1}{n}}\sum _{k=1}^{n}\left(\langle U^{k}x,z\rangle -\langle x,z\rangle \right)\\&{\overset {z\in W}{=}}{\frac {1}{n}}\sum _{k=1}^{n}\left(\overbrace {\langle U^{k}x,U^{k}z\rangle } ^{=\langle x,z\rangle }-\langle x,z\rangle \right)=0.\end{aligned}}}

Further, if we set

${\displaystyle y_{n}:={\frac {1}{n}}\sum _{k=1}^{n}U^{k}x}$,

we obtain

{\displaystyle {\begin{aligned}\|Uy_{n}-y_{n}\|^{2}&=\langle y_{n}-Uy_{n},y_{n}-Uy_{n}\rangle \\&={\frac {1}{n^{2}}}\langle Ux-U^{n+1}x,Ux-U^{n+1}x\rangle \\&={\frac {1}{n^{2}}}\left(\langle Ux,Ux\rangle -\langle U^{n+1}x,Ux\rangle -\langle Ux,U^{n+1}x\rangle +\langle U^{n+1}x,U^{n+1}x\rangle \right)\\&{\overset {\text{Cauchy–Schwarz}}{\leq }}{\frac {4\|x\|^{2}}{n^{2}}}.\end{aligned}}}

If the sequence ${\displaystyle (y_{n})_{n\in \mathbb {N} }}$ converges, the latter computation shows that its limit is contained within ${\displaystyle W}$ (since ${\displaystyle \|Uy_{n}-y_{n}\|\to 0}$ and ${\displaystyle U}$ is continuous), and the former computation shows that this limit equals ${\displaystyle P_{W}(x)}$: their difference lies in ${\displaystyle W}$ and is orthogonal to ${\displaystyle W}$, hence zero. We are thus reduced to proving that ${\displaystyle (y_{n})_{n\in \mathbb {N} }}$ converges for every ${\displaystyle x\in H}$. Since ${\displaystyle U}$ is unitary, ${\displaystyle Ux=x}$ holds if and only if ${\displaystyle U^{*}x=x}$, so that ${\displaystyle W=\ker(U-I)=\ker(U^{*}-I)}$ and hence

${\displaystyle W^{\perp }=\ker(U^{*}-I)^{\perp }={\overline {\operatorname {ran} (U-I)}}}$.

Decompose ${\displaystyle x=P_{W}(x)+z}$, where ${\displaystyle z\in W^{\perp }}$. The averages fix the first summand, since ${\displaystyle U^{k}P_{W}(x)=P_{W}(x)}$ for all ${\displaystyle k}$. If ${\displaystyle z=Uw-w}$ for some ${\displaystyle w\in H}$, the sum telescopes:

${\displaystyle {\frac {1}{n}}\sum _{k=1}^{n}U^{k}(Uw-w)={\frac {1}{n}}\left(U^{n+1}w-Uw\right)}$,

which has norm at most ${\displaystyle 2\|w\|/n}$; in particular, if ${\displaystyle x-P_{W}(x)=Uw-w}$, then ${\displaystyle \|y_{n}-P_{W}(x)\|\leq 2\|w\|/n}$, which is the claimed rate of convergence. For a general ${\displaystyle z\in {\overline {\operatorname {ran} (U-I)}}}$, let ${\displaystyle \epsilon >0}$ and pick ${\displaystyle w\in H}$ such that ${\displaystyle \|z-(Uw-w)\|<\epsilon }$. Since each averaging operator ${\displaystyle {\frac {1}{n}}\sum _{k=1}^{n}U^{k}}$ has operator norm at most ${\displaystyle 1}$, we obtain

${\displaystyle \left\|{\frac {1}{n}}\sum _{k=1}^{n}U^{k}z\right\|\leq \epsilon +{\frac {2\|w\|}{n}}\leq 2\epsilon }$

for ${\displaystyle n}$ sufficiently large, so that ${\displaystyle y_{n}\to P_{W}(x)}$. These computations also reveal the underlying cause of convergence: applying ${\displaystyle U}$ changes the averages only slightly, so that in the limit they become invariant under ${\displaystyle U}$. ${\displaystyle \Box }$
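A numerical sketch on ${\displaystyle H=\mathbb {C} ^{2}}$ with the diagonal unitary ${\displaystyle U=\operatorname {diag} (1,e^{i})}$ (our own example; here ${\displaystyle W=\mathbb {C} \times \{0\}}$): the averages reproduce the ${\displaystyle U}$-fixed component and damp the rotating one like ${\displaystyle 1/n}$.

```python
import cmath

mu = cmath.exp(1j)  # U = diag(1, mu) is unitary; its fixed space is C x {0}

def average(x, n):
    # (1/n) * sum_{k=1}^{n} U^k x, computed coordinatewise since U is diagonal
    s1 = sum(x[0] for _ in range(n)) / n              # 1^k * x[0]
    s2 = sum(mu ** k * x[1] for k in range(1, n + 1)) / n
    return (s1, s2)

x = (2.0 + 1.0j, -1.0 + 0.5j)
for n in (10, 100, 1000):
    a = average(x, n)
    # fixed component: P_W(x) = (x[0], 0) is reproduced (up to rounding)
    assert abs(a[0] - x[0]) < 1e-9
    # rotating component decays at rate O(1/n), by the geometric-sum bound
    assert abs(a[1]) <= 2 * abs(x[1]) / (n * abs(mu - 1)) + 1e-9
```

The second assertion uses the closed form ${\textstyle \sum _{k=1}^{n}\mu ^{k}=\mu (\mu ^{n}-1)/(\mu -1)}$, whose modulus is at most ${\displaystyle 2/|\mu -1|}$.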