Let $f : \mathbb{R}^n \rightarrow \mathbb{R}$ be a smooth nonlinear function. To determine a (local) minimum of $f$, one can use a descent method of the form
$$x_{k+1} = x_k + \alpha_k d_k \quad (k \geq 0) \qquad (1)$$
where $0 \leq \alpha_k \leq 1$ is a suitable parameter obtained by backtracking and $d_k$ is a descent direction, i.e. it satisfies
$$f(x_{k+1}) < f(x_k) \qquad (2)$$
Write the steepest descent (or gradient) method and show that there exists $0 < \alpha_k \leq 1$ such that the resulting method satisfies (2).
Steepest Descent Method

$$x_{k+1} = x_k - \alpha_k \nabla f(x_k)$$
Choose $\alpha_k \in (0,1]$ such that $\alpha_k$ minimizes $f(x_k - \alpha_k \nabla f(x_k))$, i.e.

$$\alpha_k = \arg\min_{0 < \alpha \leq 1} f(x_k - \alpha \nabla f(x_k))$$
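In practice the exact minimizer over $\alpha$ is replaced by backtracking, as mentioned above. A minimal Python sketch of steepest descent with a backtracking (Armijo) line search; the quadratic test function, the Armijo constant `c`, and the shrink factor `rho` are illustrative choices, not from the text.

```python
import numpy as np

def steepest_descent(f, grad_f, x0, rho=0.5, c=1e-4, tol=1e-8, max_iter=500):
    """x_{k+1} = x_k - alpha_k * grad f(x_k), with alpha_k in (0, 1]
    found by backtracking from alpha = 1 until f decreases sufficiently."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad_f(x)
        if np.linalg.norm(g) < tol:
            break
        alpha = 1.0
        # backtracking: shrink alpha until the Armijo condition holds
        while f(x - alpha * g) > f(x) - c * alpha * (g @ g):
            alpha *= rho
        x = x - alpha * g
    return x

# illustrative quadratic f(x) = 1/2 x^T Q x with minimizer x = 0
Q = np.diag([1.0, 10.0])
f = lambda x: 0.5 * x @ Q @ x
grad_f = lambda x: Q @ x
x_star = steepest_descent(f, grad_f, [3.0, -2.0])
```

Each accepted step satisfies $f(x_{k+1}) < f(x_k)$, which is exactly condition (2).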
Directional Derivative Negative

To satisfy (2), the directional derivative must be negative, i.e.

$$\nabla_{d_k} f(x_k) = \lim_{\alpha_k \rightarrow 0} \frac{f(x_k + \alpha_k d_k) - f(x_k)}{\alpha_k} < 0$$

For $d_k = -\nabla f(x_k)$ this derivative equals $-\|\nabla f(x_k)\|^2 < 0$ whenever $\nabla f(x_k) \neq 0$, so sufficiently small $\alpha_k > 0$ give descent,
which implies

$$\underbrace{f(x_k + \alpha_k d_k)}_{f(x_{k+1})} < f(x_k)$$

for sufficiently small $\alpha_k \in (0,1]$.
Newton's Method

$$x_{k+1} = x_k - H^{-1}(f(x_k)) \nabla f(x_k)$$

where $H(f(x_k))$ denotes the Hessian of $f$ at $x_k$.
Directional Derivative Negative

To have descent, we want the directional derivative to be negative, i.e.

$$\nabla_{\alpha_k d_k} f(x_k) = \nabla f(x_k) \cdot \alpha_k d_k = -\alpha_k \nabla f(x_k)^T H^{-1}(f(x_k)) \nabla f(x_k) < 0$$
which, since $\alpha_k > 0$, implies

$$\nabla f(x_k)^T H^{-1}(f(x_k)) \nabla f(x_k) > 0$$
For this to hold for arbitrary gradients, $H^{-1}(f(x_k))$ must be positive definite, which is equivalent to $H(f(x_k))$ being positive definite.
We now want

$$\tilde{H}(f(x_k)) = H(f(x_k)) + \gamma_k I$$

to be positive definite.
Let $\lambda_i$, $i = 1, 2, \ldots, n$, be the eigenvalues of $H(f(x_k))$.
Then $\lambda_i + \gamma_k$, $i = 1, 2, \ldots, n$, are the eigenvalues of $\tilde{H}(f(x_k))$.
Since we want $\tilde{H}(f(x_k))$ to be positive definite, we equivalently require, for $i = 1, 2, \ldots, n$,

$$\lambda_i + \gamma_k > 0,$$
i.e.

$$\gamma_k > -\lambda_i, \quad i = 1, 2, \ldots, n.$$
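The eigenvalue-shift argument can be checked numerically. The matrix `H` below is an illustrative symmetric indefinite example, not from the text:

```python
import numpy as np

H = np.array([[1.0, 2.0],
              [2.0, -3.0]])       # illustrative symmetric indefinite "Hessian"
lam = np.linalg.eigvalsh(H)       # eigenvalues in ascending order
gamma = -lam[0] + 0.1             # any gamma > -lambda_min works
H_tilde = H + gamma * np.eye(2)
lam_tilde = np.linalg.eigvalsh(H_tilde)  # equals lam + gamma, all positive
```

The shifted matrix $\tilde{H} = H + \gamma I$ has eigenvalues $\lambda_i + \gamma$, all positive once $\gamma > -\lambda_{\min}$.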
Consider the following nonlinear autonomous initial value problem in $\mathbb{R}$ with $f \in C^2$:
$$y' = f(y), \qquad y(0) = y_0$$
Write the ODE in integral form and use the midpoint quadrature rule to derive the midpoint method with uniform time step $h = \frac{T}{N}$:
$$y_{n+1} = y_n + h f\!\left(\frac{y_{n+1} + y_n}{2}\right)$$
For $x \in [x_n, x_{n+1}]$,

$$y(x_{n+1}) - y(x_n) = \int_{x_n}^{x_{n+1}} f(y(x))\,dx \approx h \cdot f\!\left(\frac{y_{n+1} + y_n}{2}\right)$$

where the midpoint value of $y$ is approximated by the average $\frac{y_{n+1} + y_n}{2}$.
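A minimal sketch of the resulting implicit midpoint method; solving the implicit equation by fixed-point iteration, and the test problem $y' = -y$, are illustrative choices, not from the text.

```python
import math

def midpoint_step(f, y_n, h, tol=1e-12, max_iter=100):
    """One step of y_{n+1} = y_n + h f((y_n + y_{n+1}) / 2),
    solved by fixed-point iteration from an explicit Euler predictor."""
    y_next = y_n + h * f(y_n)
    for _ in range(max_iter):
        y_new = y_n + h * f(0.5 * (y_n + y_next))
        if abs(y_new - y_next) < tol:
            return y_new
        y_next = y_new
    return y_next

def midpoint_solve(f, y0, T, N):
    h = T / N
    y = y0
    for _ in range(N):
        y = midpoint_step(f, y, h)
    return y

# illustrative problem: y' = -y, y(0) = 1, exact solution e^{-t}
y_T = midpoint_solve(lambda y: -y, 1.0, 1.0, 100)
```

For small $h$ the fixed-point map is a contraction (its Lipschitz constant is about $h\gamma/2$), so the iteration converges.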
$$\begin{aligned}
|e_{n+1}| &= |y(t_{n+1}) - y_{n+1}| \\
&\leq |y(t_n) - y_n| + h \underbrace{\left| f\!\left( \frac{y(t_{n+1}) + y(t_n)}{2} \right) - f\!\left( \frac{y_{n+1} + y_n}{2} \right) \right|}_{\leq \frac{\gamma}{2} \left| y(t_{n+1}) + y(t_n) - y_{n+1} - y_n \right|} + \mathcal{O}(h^3) \\
&\leq |e_n| + \frac{h\gamma}{2}\left( |e_{n+1}| + |e_n| \right) + \mathcal{O}(h^3)
\end{aligned}$$
where $\gamma$ is the Lipschitz constant of $f$. Rearranging terms we get
$$|e_{n+1}| \leq \underbrace{\frac{1 + \frac{h\gamma}{2}}{1 - \frac{h\gamma}{2}}}_{C} |e_n| + \mathcal{O}(h^3)$$
In particular,

$$|e_1| \leq C \underbrace{|e_0|}_{0} + \mathcal{O}(h^3)$$
Iterating this bound, $e_N$ satisfies

$$\begin{aligned}
|e_N| &\leq \left( C^{N-1} + C^{N-2} + \cdots + C + 1 \right) \mathcal{O}(h^3) \\
&= \frac{C^N - 1}{C - 1}\,\mathcal{O}(h^3) \\
&\leq K\,\mathcal{O}(h^2)
\end{aligned}$$

since $\frac{C^N - 1}{C - 1} = \mathcal{O}(h^{-1})$ for $N = T/h$.
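The $\mathcal{O}(h^2)$ global error bound can be checked empirically. For the illustrative linear problem $y' = -y$ the implicit midpoint equation is linear in $y_{n+1}$ and has a closed-form solution, so no iteration is needed:

```python
import math

def midpoint_solve(y0, T, N):
    # for f(y) = -y the implicit midpoint equation is linear in y_{n+1}:
    # y_{n+1} = y_n * (1 - h/2) / (1 + h/2)
    h = T / N
    y = y0
    for _ in range(N):
        y = y * (1 - h / 2) / (1 + h / 2)
    return y

errs = [abs(midpoint_solve(1.0, 1.0, N) - math.exp(-1.0)) for N in (20, 40, 80)]
rates = [errs[i] / errs[i + 1] for i in range(2)]
# halving h should cut the error by about 2^2 = 4 (second order)
```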
The midpoint method may be rewritten as follows:

$$y(t_{n+1}) = y(t_n) + \frac{h}{2} y'(t_{n+1}) + \frac{h}{2} y'(t_n) + T_n$$
which implies

$$y(t_{n+1}) - y(t_n) - \frac{h}{2} y'(t_{n+1}) - \frac{h}{2} y'(t_n) = T_n$$
Expanding each term around $\tau_n = t_n + \frac{h}{2}$ yields $T_n$.
$$\begin{array}{|c|c|c|c|c|c|}
\hline
\text{Order } h & y(t_{n+1}) & -y(t_n) & -\frac{h}{2}y'(t_{n+1}) & -\frac{h}{2}y'(t_n) & \Sigma \\
\hline
0 & y(t_n + \tfrac{h}{2}) & -y(t_n + \tfrac{h}{2}) & --- & --- & 0 \\
1 & y'(t_n + \tfrac{h}{2})\tfrac{h}{2} & -y'(t_n + \tfrac{h}{2})\left(-\tfrac{h}{2}\right) & -\tfrac{h}{2}y'(t_n + \tfrac{h}{2}) & -\tfrac{h}{2}y'(t_n + \tfrac{h}{2}) & 0 \\
2 & \tfrac{y''(t_n + h/2)}{2}\tfrac{h^2}{4} & -\tfrac{y''(t_n + h/2)}{2}\tfrac{h^2}{4} & -\tfrac{h}{2}y''(t_n + \tfrac{h}{2})\tfrac{h}{2} & -\tfrac{h}{2}y''(t_n + \tfrac{h}{2})\left(-\tfrac{h}{2}\right) & 0 \\
3 & \mathcal{O}(h^3) & \mathcal{O}(h^3) & \mathcal{O}(h^3) & \mathcal{O}(h^3) & \mathcal{O}(h^3) \\
\hline
\end{array}$$
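The conclusion $T_n = \mathcal{O}(h^3)$ can be checked numerically with the illustrative choice $y(t) = e^t$ (so $y' = y$) at $t = 0$:

```python
import math

def T_n(h, t=0.0):
    # truncation error of the scheme applied to the exact solution y = exp
    y = math.exp
    return y(t + h) - y(t) - 0.5 * h * y(t + h) - 0.5 * h * y(t)

ratio = T_n(0.1) / T_n(0.05)
# halving h should divide T_n by roughly 2^3 = 8 (third-order local error)
```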
Consider the following two-point boundary value problem in $(0,1)$ with $\epsilon \ll 1 \leq b$:
$$-\epsilon u_{xx} + b u_x = 1, \quad u(0) = u(1) = 0$$
Write the finite element method with piecewise linear elements over a uniform partition of $(0,1)$ with mesh size $h$. If $U = \{U_i\}_{i=0}^{I+1}$ is the vector of nodal values of the finite element solution, find the (stiffness) matrix $A$ and right-hand side $F$ such that $AU = F$. Is $A$ symmetric? Is $A$ positive definite?
Multiplying the given equation by a test function $v \in H_0^1[0,1]$ and integrating from 0 to 1 gives the equivalent weak variational formulation:
Find $u \in H_0^1[0,1]$ such that for all $v \in H_0^1[0,1]$ the following holds:
$$\underbrace{\epsilon \int_0^1 u'v'\,dx + b \int_0^1 u'v\,dx}_{a(u,v)} = \underbrace{\int_0^1 v\,dx}_{f(v)}$$
Let

$$V_h = \left\{ v \in H_0^1[0,1] : \left. v \right|_{[x_{i-1},x_i]} \text{ linear for } i = 1, 2, \ldots, n \right\}$$
Then we have the discrete variational formulation, which approximates the weak variational formulation: find $u_h \in V_h$ such that for all $v_h \in V_h$

$$a(u_h, v_h) = f(v_h)$$
Define basis for V_h

Let $\{\phi_i\}_{i=1}^n$ be the linear "hat" functions, which define a basis for $V_h$:
$$\phi_i(x) = \begin{cases} \frac{x - x_{i-1}}{h} & \text{if } x \in [x_{i-1}, x_i] \\ \frac{x_{i+1} - x}{h} & \text{if } x \in [x_i, x_{i+1}] \\ 0 & \text{otherwise} \end{cases}$$
Then direct calculation yields the following (sketch the hat functions to see it):

$$\int_0^1 \phi_i(x)\,dx = h$$
$$\phi_i'(x) = \begin{cases} \frac{1}{h} & \text{if } x \in [x_{i-1}, x_i] \\ -\frac{1}{h} & \text{if } x \in [x_i, x_{i+1}] \\ 0 & \text{otherwise} \end{cases}$$
$$\int_0^1 \phi_i'(x)\phi_j'(x)\,dx = \begin{cases} \frac{2}{h} & \text{if } i = j \\ -\frac{1}{h} & \text{if } |i - j| = 1 \\ 0 & \text{otherwise} \end{cases}$$
$$\int_0^1 \phi_i'(x)\phi_j(x)\,dx = \begin{cases} 0 & \text{if } i = j \\ \frac{1}{2} & \text{if } i - j = 1 \\ -\frac{1}{2} & \text{if } i - j = -1 \\ 0 & \text{otherwise} \end{cases}$$
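The element integrals that enter the assembly below can be checked by direct quadrature of the hat functions; the mesh size and indices are illustrative choices. On a uniform subdivision whose cells align with the nodes, the midpoint rule is exact for these piecewise-polynomial integrands.

```python
import numpy as np

n, h = 9, 0.1                    # interior nodes x_i = i*h on (0, 1)
M = 20000                        # quadrature cells aligned with the mesh
xm = (np.arange(M) + 0.5) / M    # midpoint-rule sample points
dx = 1.0 / M

def phi(i):
    # hat function centered at x_i = i*h
    return np.maximum(0.0, 1.0 - np.abs(xm - i * h) / h)

def dphi(i):
    # its piecewise-constant derivative
    xi = i * h
    return np.where((xm > xi - h) & (xm < xi), 1.0 / h,
                    np.where((xm > xi) & (xm < xi + h), -1.0 / h, 0.0))

integral = lambda g: float(g.sum() * dx)

a_diag = integral(dphi(4) * dphi(4))   # expect  2/h = 20
a_off  = integral(dphi(4) * dphi(5))   # expect -1/h = -10
c_off  = integral(dphi(5) * phi(4))    # expect  1/2  (i - j = 1)
mass   = integral(phi(4))              # expect  h = 0.1
```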
Since $\{\phi_i\}_{i=1}^n$ is a basis for $V_h$,

$$u_h = \sum_{i=1}^n u_{hi} \phi_i$$
Also, the discrete variational formulation may be expressed as

$$\epsilon \sum_{i=1}^n u_{hi} \int_0^1 \phi_i'(x)\phi_j'(x)\,dx + b \sum_{i=1}^n u_{hi} \int_0^1 \phi_i'(x)\phi_j(x)\,dx = \int_0^1 \phi_j(x)\,dx \quad \text{for } j = 1, 2, \ldots, n$$
which in matrix form is

$$\underbrace{\begin{bmatrix}
2\frac{\epsilon}{h} & -\frac{\epsilon}{h} + \frac{b}{2} & 0 & \cdots & \cdots & 0 \\
-\frac{\epsilon}{h} - \frac{b}{2} & 2\frac{\epsilon}{h} & -\frac{\epsilon}{h} + \frac{b}{2} & 0 & \cdots & 0 \\
0 & \ddots & \ddots & \ddots & \cdots & 0 \\
0 & \cdots & \cdots & 0 & -\frac{\epsilon}{h} - \frac{b}{2} & 2\frac{\epsilon}{h}
\end{bmatrix}}_{A}
\begin{bmatrix} u_1 \\ u_2 \\ \vdots \\ u_n \end{bmatrix}
= \begin{bmatrix} h \\ h \\ \vdots \\ h \end{bmatrix}$$
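A sketch assembling this tridiagonal system and solving it; the values of $\epsilon$, $b$, and $n$ are illustrative choices, not from the text.

```python
import numpy as np

def assemble(eps, b, n):
    """Assemble A and F from the entries derived above, with n interior
    nodes and mesh size h = 1/(n+1)."""
    h = 1.0 / (n + 1)
    A = np.diag(np.full(n, 2 * eps / h))
    A += np.diag(np.full(n - 1, -eps / h + b / 2), k=1)   # superdiagonal
    A += np.diag(np.full(n - 1, -eps / h - b / 2), k=-1)  # subdiagonal
    F = np.full(n, h)
    return A, F

eps, b, n = 0.05, 1.0, 19        # illustrative parameter values
A, F = assemble(eps, b, n)
U = np.linalg.solve(A, F)
sym_eigs = np.linalg.eigvalsh((A + A.T) / 2)  # spectrum of symmetric part
```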
$A$ is not symmetric: the convection term contributes $+\frac{b}{2}$ above the diagonal and $-\frac{b}{2}$ below. $A$ is nonetheless positive definite in the sense $x^T A x > 0$ for $x \neq 0$: the convection part of $A$ is skew-symmetric and cancels in the quadratic form, leaving the positive definite diffusion part $\frac{\epsilon}{h}\operatorname{tridiag}(-1,2,-1)$. (A simpler sufficient condition, via strict diagonal dominance, is $\frac{\epsilon}{h} > \frac{b}{2}$.)
Find a relation between the three parameters $h, \epsilon, b$ for $A$ to be an $M$-matrix, i.e. to have $a_{ii} > 0$, $a_{ij} \leq 0$ if $i \neq j$, and

$$a_{ii} \geq -\sum_{j \neq i} a_{ij}$$
The first, interior, and last rows all yield the same inequality for $A$ to be an $M$-matrix:

$$\frac{\epsilon}{h} > \frac{b}{2}$$
Consider the upwind modification of the ODE

$$-\left(\epsilon + \frac{b}{2}h\right) u_{xx} + b u_x = 1$$
Show that the resulting matrix $A$ is an $M$-matrix without restrictions on $h$, $\epsilon$, and $b$.
Substituting $\epsilon + \frac{b}{2}h$ for $\epsilon$ yields the following matrix $A$:
$$\begin{bmatrix}
2\frac{\epsilon}{h} + b & -\frac{\epsilon}{h} & 0 & \cdots & \cdots & 0 \\
-\frac{\epsilon}{h} - b & 2\frac{\epsilon}{h} + b & -\frac{\epsilon}{h} & 0 & \cdots & 0 \\
0 & \ddots & \ddots & \ddots & \cdots & 0 \\
0 & \cdots & \cdots & 0 & -\frac{\epsilon}{h} - b & 2\frac{\epsilon}{h} + b
\end{bmatrix}$$
All off-diagonal entries are $\leq 0$, and all diagonal entries are $> 0$. The first row meets the diagonal dominance condition since

$$2\frac{\epsilon}{h} + b \geq \frac{\epsilon}{h}$$
Rows $2$ through $n-1$ meet the condition (with equality) since

$$2\frac{\epsilon}{h} + b \geq \frac{\epsilon}{h} + b + \frac{\epsilon}{h}$$
The last row meets the condition since

$$2\frac{\epsilon}{h} + b \geq \frac{\epsilon}{h} + b$$
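A sketch checking the three $M$-matrix conditions for the upwind matrix at several values of $\epsilon$ with $\epsilon/h < b/2$, where the unmodified matrix would fail the sign condition; the parameter choices are illustrative.

```python
import numpy as np

def upwind_matrix(eps, b, n):
    """Tridiagonal matrix of the upwind-modified scheme, written directly
    with the entries derived above; h = 1/(n+1)."""
    h = 1.0 / (n + 1)
    A = np.diag(np.full(n, 2 * eps / h + b))
    A += np.diag(np.full(n - 1, -eps / h), k=1)
    A += np.diag(np.full(n - 1, -eps / h - b), k=-1)
    return A

def is_M_matrix(A):
    d = np.diag(A)
    off = A - np.diag(d)
    row_sums = -off.sum(axis=1)
    # a small tolerance absorbs floating-point roundoff in the row sums,
    # since the interior rows satisfy diagonal dominance with equality
    return bool(np.all(d > 0) and np.all(off <= 0)
                and np.all(d + 1e-9 >= row_sums))

checks = [is_M_matrix(upwind_matrix(eps, 1.0, 19))
          for eps in (1e-2, 1e-3, 1e-6)]
```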