Cauchy-Hadamard theorem in several complex variables
Let $\alpha$ be an $n$-dimensional vector of natural numbers, $\alpha = (\alpha_1, \ldots, \alpha_n) \in \mathbb{N}^n$, with $||\alpha|| = \alpha_1 + \cdots + \alpha_n$. Then
$f(z)$ converges with radius of convergence $\rho = (\rho_1, \ldots, \rho_n) \in \mathbb{R}^n$, where $\rho^{\alpha} = \rho_1^{\alpha_1} \cdots \rho_n^{\alpha_n}$, if and only if

$$\limsup_{||\alpha|| \to \infty} \sqrt[||\alpha||]{|c_\alpha| \rho^\alpha} = 1,$$
where

$$f(z) = \sum_{\alpha \geq 0} c_\alpha (z-a)^\alpha := \sum_{\alpha_1 \geq 0, \ldots, \alpha_n \geq 0} c_{\alpha_1, \ldots, \alpha_n} (z_1 - a_1)^{\alpha_1} \cdots (z_n - a_n)^{\alpha_n}.$$
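As a numerical illustration of the statement (a sketch of my own, not part of the source): for $f(z) = \frac{1}{1-z_1-z_2}$, whose coefficients are $c_{i,j} = \binom{i+j}{i}$, the point $\rho = (\tfrac{1}{2}, \tfrac{1}{2})$ lies on the boundary of convergence, and the $||\alpha||$-th roots indeed creep up to 1.

```python
from math import comb

def max_root(mu, rho1=0.5, rho2=0.5):
    """Largest ||alpha||-th root of |c_alpha| * rho^alpha over ||alpha|| = mu,
    for f(z) = 1/(1 - z1 - z2), whose coefficients are c_ij = C(i+j, i)."""
    best = 0.0
    for i in range(mu + 1):
        # alpha = (i, mu - i), so c_alpha = C(mu, i)
        term = comb(mu, i) * rho1**i * rho2**(mu - i)
        best = max(best, term ** (1.0 / mu))
    return best

for mu in (10, 100, 1000):
    print(mu, max_root(mu))  # approaches 1 from below
```

The roots approach 1 slowly (the maximal term decays like $1/\sqrt{\mu}$, which vanishes under the $\mu$-th root), consistent with $\rho$ being exactly on the boundary.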
Set $z = a + t\rho$ (i.e. $z_i = a_i + t\rho_i$); then[1]
$$\sum_{\alpha \geq 0} \left| c_\alpha (z-a)^\alpha \right| = \sum_{\alpha \geq 0} |c_\alpha| \rho^\alpha t^{||\alpha||} = \sum_{\mu \geq 0} \left( \sum_{||\alpha|| = \mu} |c_\alpha| \rho^\alpha \right) t^{\mu} \qquad (t \geq 0).$$
This is a power series in the single variable $t$ which converges for $|t| < 1$ and diverges for $|t| > 1$. Therefore, by the Cauchy-Hadamard theorem for one variable,
$$\limsup_{\mu \to \infty} \sqrt[\mu]{\sum_{||\alpha|| = \mu} |c_\alpha| \rho^\alpha} = 1.$$
Setting $|c_m| \rho^m = \max_{||\alpha|| = \mu} |c_\alpha| \rho^\alpha$ (where the index $m$ depends on $\mu$) gives us the estimate
$$|c_m| \rho^m \leq \sum_{||\alpha|| = \mu} |c_\alpha| \rho^\alpha \leq (\mu + 1)^n |c_m| \rho^m,$$

since there are at most $(\mu + 1)^n$ indices $\alpha$ with $||\alpha|| = \mu$.
Because $\sqrt[\mu]{(\mu+1)^n} \to 1$ as $\mu \to \infty$, taking $\mu$-th roots gives
$$\sqrt[\mu]{|c_m| \rho^m} \leq \sqrt[\mu]{\sum_{||\alpha||=\mu} |c_\alpha| \rho^\alpha} \leq \sqrt[\mu]{(\mu+1)^n} \, \sqrt[\mu]{|c_m| \rho^m},$$

so the outer terms squeeze the middle one and

$$\limsup_{\mu \to \infty} \sqrt[\mu]{\sum_{||\alpha||=\mu} |c_\alpha| \rho^\alpha} = \limsup_{\mu \to \infty} \sqrt[\mu]{|c_m| \rho^m}.$$
Therefore

$$\limsup_{||\alpha|| \to \infty} \sqrt[||\alpha||]{|c_\alpha| \rho^\alpha} = \limsup_{\mu \to \infty} \sqrt[\mu]{|c_m| \rho^m} = 1.$$
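The counting bound used in the estimate above — at most $(\mu+1)^n$ indices $\alpha \in \mathbb{N}^n$ with $||\alpha|| = \mu$, since each $\alpha_i$ lies in $\{0, \ldots, \mu\}$ — can be checked against the exact stars-and-bars count (this check is my own illustration):

```python
from math import comb

def num_indices(mu, n):
    """Exact count of alpha in N^n with ||alpha|| = mu (stars and bars)."""
    return comb(mu + n - 1, n - 1)

# The crude bound (mu + 1)^n used in the proof always dominates:
for n in range(1, 6):
    for mu in range(50):
        assert num_indices(mu, n) <= (mu + 1) ** n
print("bound verified")
```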
For the central diagonal of our example,

$$\Delta \frac{1}{1-x-y} = \sum_{n \geq 0} f_{n,n} x^n y^n:$$
$$\limsup_{n \to \infty} \sqrt[n]{|f_{n,n}| x^n y^n} = 1 \implies \limsup_{n \to \infty} \sqrt[n]{|f_{n,n}|} = \frac{1}{xy}.$$
$x^n y^n$ is at its largest, subject to $(x, y)$ lying on the boundary of convergence $x + y = 1$, when $x = y = \frac{1}{2}$, so that $\limsup_{n \to \infty} \sqrt[n]{|f_{n,n}|} = 4$; that is, $f_{n,n}$ grows like $4^n$. We know by Stirling's approximation that this is a good estimate, since $f_{n,n} = \binom{2n}{n} \sim \frac{4^n}{\sqrt{\pi n}}$.
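Since the diagonal coefficients of $\frac{1}{1-x-y}$ are the central binomial coefficients $f_{n,n} = \binom{2n}{n}$, both the growth rate $4$ and the quality of the Stirling estimate can be checked numerically (a sketch of my own):

```python
from math import comb, pi, sqrt

def central_diag(n):
    """f_{n,n} for 1/(1-x-y): the central binomial coefficient C(2n, n)."""
    return comb(2 * n, n)

for n in (10, 100, 500):
    growth = central_diag(n) ** (1.0 / n)            # tends to 4
    ratio = central_diag(n) / (4**n / sqrt(pi * n))  # Stirling ratio, tends to 1
    print(n, growth, ratio)
```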
But what about a diagonal along an arbitrary ray, like the above example $\Delta^{(2,1)} \frac{1}{1-x-y}$?
$$\limsup_{|n\mathbf{r}| \to \infty} \sqrt[|n\mathbf{r}|]{|f_{2n,n}| x^{2n} y^n} = 1 \implies \limsup_{n \to \infty} \sqrt[n]{|f_{2n,n}|} = \frac{1}{x^2 y}.$$
If we keep $x = y = \frac{1}{2}$, then

$$\limsup_{n \to \infty} \sqrt[n]{|f_{2n,n}|} = 8.$$
This isn't a good estimate. It is better to use $x = \frac{2}{3}, y = \frac{1}{3}$, which maximises $x^2 y$ on $x + y = 1$; then

$$\limsup_{n \to \infty} \sqrt[n]{|f_{2n,n}|} = \frac{27}{4} = 6.75,$$

i.e. $f_{2n,n}$ grows like $(27/4)^n$.
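Along the ray $\mathbf{r} = (2,1)$ the coefficients are $f_{2n,n} = \binom{3n}{n}$ (the coefficient of $x^{2n} y^n$ in $\sum_k (x+y)^k$), so the growth rate can again be checked numerically; assuming that closed form:

```python
from math import comb

def ray_diag(n):
    """f_{2n,n} for 1/(1-x-y): C(3n, n), the coefficient of x^{2n} y^n."""
    return comb(3 * n, n)

for n in (10, 100, 300):
    print(n, ray_diag(n) ** (1.0 / n))  # tends to 27/4 = 6.75, well below 8
```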
In what follows, the function we are interested in is $F(\mathbf{z}) = \frac{G(\mathbf{z})}{H(\mathbf{z})}$.
We therefore want to find the $\mathbf{w}$ on the domain of convergence of $F(\mathbf{z})$ that minimises $\mathbf{w}^{-\mathbf{r}}$.
The subject of convex optimisation already has the tools for this, but in order to use them we need to transform the domain of convergence into a convex set and $\mathbf{w}^{-\mathbf{r}}$ into a convex function.
Fortunately, the logarithmic image of the domain of convergence of a power series of a complex function is convex.[2]
Therefore, we define[3]
$$\operatorname{Relog}(\mathbf{z}) = (\log |z_1|, \ldots, \log |z_d|)$$
$$\operatorname{amoeba}(H) = \{\operatorname{Relog}(\mathbf{z}) : H(\mathbf{z}) = 0\}.$$
The domain of convergence of our function can now be defined as the complement of this amoeba[4]
$$\operatorname{amoeba}(H)^c = \mathbb{R}^d \setminus \operatorname{amoeba}(H).$$
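For the running example $H = 1 - x - y$, amoeba membership has a closed form: $(a, b) \in \operatorname{amoeba}(H)$ iff some $z_1, z_2$ with $|z_1| = e^a$, $|z_2| = e^b$ satisfy $z_1 + z_2 = 1$, which happens exactly when the lengths $e^a$, $e^b$ and $1$ can close a triangle. A small sketch (the helper name is my own):

```python
from math import exp, log

def in_amoeba(a, b):
    """Membership test for amoeba(H) with H(z1, z2) = 1 - z1 - z2.

    (a, b) lies in the amoeba iff the moduli e^a, e^b and 1 satisfy the
    triangle inequalities, so that phases exist making z1 + z2 = 1."""
    s, t = exp(a), exp(b)
    return abs(s - t) <= 1.0 <= s + t

# Relog(1/2, 1/2) comes from a zero of H, so it lies in the amoeba:
print(in_amoeba(log(0.5), log(0.5)))  # True
# Deep inside the component of the power-series expansion, H has no zeros:
print(in_amoeba(-3.0, -3.0))          # False
```

The component containing points like $(-3, -3)$ (far down and left) is the $B$ corresponding to the ordinary power series expansion at the origin.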
This may leave us with multiple disjoint connected components, one for each Laurent series expansion. Denote the component we are interested in as $B$, and let

$$\mathcal{D} = \operatorname{Relog}^{-1}(B).$$
The logarithmic image of $\mathbf{w}^{-\mathbf{r}}$ is $h(\mathbf{w}) = -\mathbf{r} \cdot \operatorname{Relog}(\mathbf{w})$. Because $\log$ is a concave function, $-\log$ is convex.
So we now have a problem of minimising a convex function over a convex set.
We want to find the supporting hyperplane to $\bar{B}$ with outward-facing normal $-\nabla h(\mathbf{w})$.
Critical point equations
This happens when the supporting hyperplane defined above coincides with the tangent plane to the zero set of $H$, which has normal $\nabla H(\mathbf{w})$.
This means the two normals are not linearly independent, and therefore the matrix

$$\begin{pmatrix} \frac{\partial H}{\partial z_1}(\mathbf{w}) & \cdots & \frac{\partial H}{\partial z_d}(\mathbf{w}) \\ r_1/w_1 & \cdots & r_d/w_d \end{pmatrix}$$
is rank-deficient; equivalently, its $2 \times 2$ submatrices have zero determinants. This is equivalent to a system of equations referred to as the critical point equations:[5][6]
$$H(\mathbf{w}) = 0, \qquad r_j w_1 \frac{\partial H}{\partial z_1}(\mathbf{w}) - r_1 w_j \frac{\partial H}{\partial z_j}(\mathbf{w}) = 0 \quad (2 \leq j \leq d).$$
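As a check, assuming $H = 1 - x - y$ and $d = 2$ (so $\partial H/\partial z_1 = \partial H/\partial z_2 = -1$), the critical point equations reduce to $w_1 + w_2 = 1$ and $r_2 w_1 = r_1 w_2$, recovering the points used in the diagonal examples above:

```python
def critical_point(r1, r2):
    """Solve the critical point equations for H(x, y) = 1 - x - y,
    for which dH/dx = dH/dy = -1. The equations reduce to
        w1 + w2 = 1   and   r2 * w1 = r1 * w2,
    so w is r normalised to sum to 1."""
    s = r1 + r2
    return (r1 / s, r2 / s)

print(critical_point(1, 1))  # (0.5, 0.5): the central-diagonal point x = y = 1/2
print(critical_point(2, 1))  # about (0.667, 0.333): the r = (2, 1) point
```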
↑ Shabat 1992, pp. 32–33.
↑ Shabat 1992, p. 31.
↑ Pemantle, Wilson and Melczer 2024, pp. 151, 157.
↑ Melczer 2021, p. 116.
↑ Melczer 2021, p. 203.
↑ Pemantle, Wilson and Melczer 2024, p. 200.
As of 29th June 2024, this article is derived in whole or in part from Wikipedia. The copyright holder has licensed the content in a manner that permits reuse under CC BY-SA 3.0 and GFDL. All relevant terms must be followed.