The batch settling process of flocculated suspensions is modeled as an initial value problem
$$\frac{\partial u}{\partial t}+\frac{\partial f(u)}{\partial x}=\frac{\partial^{2}A(u)}{\partial x^{2}},\quad u(0,x)=u_{0}(x),\quad x\in[0,L],$$
where $u$ denotes the volume fraction of the dispersed solids phase.
For the closure, the convective flux function is given by the Kynch batch settling function with the Richardson-Zaki hindrance function

$$f(u)=\begin{cases}u_{\infty}\,u(1-u)^{C}&\text{for }0\leq u<u_{\max},\\0&\text{for }u<0\text{ or }u\geq u_{\max},\end{cases}$$
and the diffusive flux is given by
$$A(u)=\int_{0}^{u}a(s)\,\mathrm{d}s,\quad a(u)=a_{0}(1-u)^{C}u^{k-1},$$
which results from the insertion of the power law
$$\sigma_{\mathrm{e}}(u)=\begin{cases}\sigma_{0}\left(\left(\dfrac{u}{u_{\mathrm{c}}}\right)^{k}-1\right)&\text{for }u>u_{\mathrm{c}},\\0&\text{for }u\leq u_{\mathrm{c}}\end{cases}$$
into
$$a(u)=\frac{f(u)\,\sigma_{\mathrm{e}}'(u)}{\Delta\varrho\,g\,u},\quad a_{0}=\frac{u_{\infty}\sigma_{0}k}{\Delta\varrho\,g\,u_{\mathrm{c}}^{k}}.$$
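As an illustration, the closure functions can be coded directly. The parameter values below are placeholders, not fitted values; $a(u)$ is set to zero below $u_{\mathrm{c}}$ (where $\sigma_{\mathrm{e}}'$ vanishes), and the integral $A(u)$ is approximated by a composite trapezoidal rule.

```python
# Illustrative parameter values (placeholders, not fitted).
u_inf, C_exp, u_max = 1e-4, 5.0, 0.6   # flux (hindered settling) parameters
a0, k, u_c = 1e-6, 3.0, 0.1            # diffusion (effective stress) parameters

def f(u):
    """Kynch batch flux with Richardson-Zaki hindrance."""
    return u_inf * u * (1.0 - u) ** C_exp if 0.0 <= u < u_max else 0.0

def a(u):
    """Diffusion coefficient a(u) = a0 (1-u)^C u^(k-1); zero at and below
    the critical concentration u_c, where sigma_e' vanishes."""
    return a0 * (1.0 - u) ** C_exp * u ** (k - 1.0) if u > u_c else 0.0

def A(u, n=400):
    """A(u) = integral of a(s) ds from 0 to u, composite trapezoidal rule."""
    if u <= 0.0:
        return 0.0
    h = u / n
    total = 0.5 * (a(0.0) + a(u))
    for i in range(1, n):
        total += a(i * h)
    return total * h
```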
In the closure, the constants $u_{\infty}$, $C$, $a_{0}$, $u_{\max}$ and $k$ are only partly known.
The numerical scheme for the solution of the direct problem is written in conservative form as a marching formula for the interior points ("interior scheme") as
$$u_{j}^{n+1}=u_{j}^{n}-\frac{\Delta t}{\Delta x}\left(f_{j+1/2}^{n+1}-f_{j-1/2}^{n+1}\right)+\frac{\Delta t}{\Delta x^{2}}\left(A(u_{j-1}^{n+1})-2A(u_{j}^{n+1})+A(u_{j+1}^{n+1})\right),\quad j=2,\dots,J-1$$
and at the boundaries ("boundary scheme") as
$$u_{1}^{n+1}=u_{1}^{n}-\frac{\Delta t}{\Delta x}f_{3/2}^{n+1}+\frac{\Delta t}{\Delta x^{2}}\left(A(u_{2}^{n+1})-A(u_{1}^{n+1})\right),\quad u_{J}^{n+1}=u_{J}^{n}+\frac{\Delta t}{\Delta x}f_{J-1/2}^{n+1}-\frac{\Delta t}{\Delta x^{2}}\left(A(u_{J}^{n+1})-A(u_{J-1}^{n+1})\right).$$
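For orientation, an explicit-in-time variant of the interior and boundary schemes (evaluating the fluxes at level $n$ instead of $n+1$) can be sketched as follows; `F` stands for any two-point numerical flux and `A` for the integrated diffusion function, both supplied by the caller. Because the walls carry no flux, the update conserves the total solids content exactly.

```python
def step(u, F, A, dt, dx):
    """One explicit time step of the conservative scheme.
    u: list of cell values u_1..u_J; F(ul, ur): two-point numerical flux;
    A: integrated diffusion function."""
    J = len(u)
    r, s = dt / dx, dt / dx ** 2
    new = list(u)
    # interior scheme, j = 2, ..., J-1 (1-based indexing in the article)
    for j in range(1, J - 1):
        new[j] = (u[j]
                  - r * (F(u[j], u[j + 1]) - F(u[j - 1], u[j]))
                  + s * (A(u[j - 1]) - 2.0 * A(u[j]) + A(u[j + 1])))
    # boundary scheme: one-sided fluxes, zero flux through the walls
    new[0] = u[0] - r * F(u[0], u[1]) + s * (A(u[1]) - A(u[0]))
    new[-1] = u[-1] + r * F(u[-2], u[-1]) - s * (A(u[-1]) - A(u[-2]))
    return new
```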
For a first-order scheme, the numerical flux function becomes
$$f_{j+1/2}^{n}=f(u_{j}^{n},u_{j+1}^{n}).$$
If the flux function has a single maximum, denoted by $u_{\mathrm{m}}$, the Engquist-Osher numerical flux function can be stated as
$$f^{\mathrm{EO}}(u,v)=\begin{cases}f(u)&\text{for }u\leq u_{\mathrm{m}},\ v\leq u_{\mathrm{m}},\\f(u)+f(v)-f(u_{\mathrm{m}})&\text{for }u\leq u_{\mathrm{m}},\ v>u_{\mathrm{m}},\\f(u_{\mathrm{m}})&\text{for }u>u_{\mathrm{m}},\ v\leq u_{\mathrm{m}},\\f(v)&\text{for }u>u_{\mathrm{m}},\ v>u_{\mathrm{m}}.\end{cases}$$
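The four cases can be collapsed into a small factory function; `f` is the analytical flux and `u_m` its maximizer, as above.

```python
def engquist_osher(f, u_m):
    """Return the Engquist-Osher two-point flux F(u, v) for a flux f
    with a single maximum at u_m, following the case table above."""
    def F(u, v):
        if u <= u_m and v <= u_m:
            return f(u)
        if u <= u_m and v > u_m:
            return f(u) + f(v) - f(u_m)
        if u > u_m and v <= u_m:
            return f(u_m)
        return f(v)
    return F
```

Note that $f^{\mathrm{EO}}(u,u)=f(u)$ in every case, i.e. the flux is consistent.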
For linearization, the Taylor formulae
$$f(u_{j}^{n+1},u_{j+1}^{n+1})=f(u_{j}^{n},u_{j+1}^{n})+\frac{\partial f(u_{j}^{n},u_{j+1}^{n})}{\partial u_{j}^{n}}(u_{j}^{n+1}-u_{j}^{n})+\frac{\partial f(u_{j}^{n},u_{j+1}^{n})}{\partial u_{j+1}^{n}}(u_{j+1}^{n+1}-u_{j+1}^{n}),\quad j=1,\dots,J$$
and
$$A(u_{j}^{n+1})=A(u_{j}^{n})+\frac{\partial A(u_{j}^{n})}{\partial u_{j}^{n}}(u_{j}^{n+1}-u_{j}^{n}),\quad j=1,\dots,J$$
are inserted.
Abbreviating the time evolution step as
$$\Delta u_{j}^{n}=u_{j}^{n+1}-u_{j}^{n},\quad j=1,\dots,J,$$
the linearized marching formula for the interior scheme becomes
$$\Delta u_{j}^{n}=-\frac{\Delta t}{\Delta x}\left(f_{j+1/2}^{n}+\mathcal{J}_{j+1/2}^{-}\Delta u_{j}^{n}+\mathcal{J}_{j+1/2}^{+}\Delta u_{j+1}^{n}-f_{j-1/2}^{n}-\mathcal{J}_{j-1/2}^{-}\Delta u_{j-1}^{n}-\mathcal{J}_{j-1/2}^{+}\Delta u_{j}^{n}\right)$$
$$+\frac{\Delta t}{\Delta x^{2}}\left(A(u_{j-1}^{n})-2A(u_{j}^{n})+A(u_{j+1}^{n})+a(u_{j-1}^{n})\Delta u_{j-1}^{n}-2a(u_{j}^{n})\Delta u_{j}^{n}+a(u_{j+1}^{n})\Delta u_{j+1}^{n}\right),$$
where

$$\mathcal{J}_{j+1/2}^{+}=\frac{\partial f(u_{j}^{n},u_{j+1}^{n})}{\partial u_{j+1}^{n}},\quad \mathcal{J}_{j+1/2}^{-}=\frac{\partial f(u_{j}^{n},u_{j+1}^{n})}{\partial u_{j}^{n}}.$$
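For the Engquist-Osher flux above, these partial derivatives reduce to a simple upwind-style split: the derivative with respect to the left state is $f'(u)$ only where $u\leq u_{\mathrm{m}}$, and with respect to the right state it is $f'(v)$ only where $v>u_{\mathrm{m}}$ (this follows case by case from the table above). A sketch, with `fprime` the derivative of the analytical flux:

```python
def eo_jacobians(fprime, u_m):
    """Return (J_minus, J_plus), the partial derivatives of the
    Engquist-Osher flux with respect to the left and right states,
    for a flux with a single maximum at u_m."""
    def J_minus(u, v):
        # d f^EO / du: only the increasing branch f(u) contributes
        return fprime(u) if u <= u_m else 0.0
    def J_plus(u, v):
        # d f^EO / dv: only the decreasing branch f(v) contributes
        return fprime(v) if v > u_m else 0.0
    return J_minus, J_plus
```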
Rearrangement leads to a tridiagonal linear system
$$-\left(\frac{\Delta t}{\Delta x}\mathcal{J}_{j-1/2}^{-}+\frac{\Delta t}{\Delta x^{2}}a(u_{j-1}^{n})\right)\Delta u_{j-1}^{n}+\left(I+\frac{\Delta t}{\Delta x}\left(\mathcal{J}_{j+1/2}^{-}-\mathcal{J}_{j-1/2}^{+}\right)+\frac{2\Delta t}{\Delta x^{2}}a(u_{j}^{n})\right)\Delta u_{j}^{n}$$
$$+\left(\frac{\Delta t}{\Delta x}\mathcal{J}_{j+1/2}^{+}-\frac{\Delta t}{\Delta x^{2}}a(u_{j+1}^{n})\right)\Delta u_{j+1}^{n}$$
$$=-\frac{\Delta t}{\Delta x}\left(f(u_{j}^{n},u_{j+1}^{n})-f(u_{j-1}^{n},u_{j}^{n})\right)+\frac{\Delta t}{\Delta x^{2}}\left(A(u_{j-1}^{n})-2A(u_{j}^{n})+A(u_{j+1}^{n})\right),\quad j=2,\dots,J-1,$$
which is of the form
$$m_{j,j-1}\Delta u_{j-1}^{n}+m_{j,j}\Delta u_{j}^{n}+m_{j,j+1}\Delta u_{j+1}^{n}=b_{j},$$
or, in more compact notation,
$$M\,\Delta u^{n}=b,\quad M=\begin{bmatrix}m_{11}&m_{12}&0&\cdots&0\\m_{21}&m_{22}&m_{23}&\ddots&\vdots\\0&\ddots&\ddots&\ddots&0\\\vdots&\ddots&m_{J-1,J-2}&m_{J-1,J-1}&m_{J-1,J}\\0&\cdots&0&m_{J,J-1}&m_{J,J}\end{bmatrix},\quad b=\begin{bmatrix}b_{1}\\\vdots\\b_{J}\end{bmatrix},\quad \Delta u^{n}=\begin{bmatrix}\Delta u_{1}^{n}\\\vdots\\\Delta u_{J}^{n}\end{bmatrix}.$$
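Since $M$ is tridiagonal, each time step can be solved in $O(J)$ operations, for instance by the Thomas algorithm. A minimal sketch (no pivoting, so it assumes the diagonal dominance that the scheme's matrix typically provides):

```python
def thomas(lower, diag, upper, b):
    """Solve a tridiagonal system by the Thomas algorithm.
    Row i reads: lower[i]*x[i-1] + diag[i]*x[i] + upper[i]*x[i+1] = b[i],
    with lower[0] and upper[-1] unused."""
    n = len(diag)
    c = [0.0] * n   # modified upper diagonal
    d = [0.0] * n   # modified right-hand side
    c[0] = upper[0] / diag[0]
    d[0] = b[0] / diag[0]
    # forward elimination
    for i in range(1, n):
        m = diag[i] - lower[i] * c[i - 1]
        c[i] = (upper[i] / m) if i < n - 1 else 0.0
        d[i] = (b[i] - lower[i] * d[i - 1]) / m
    # back substitution
    x = [0.0] * n
    x[-1] = d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = d[i] - c[i] * x[i + 1]
    return x
```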
The goal of the parameter identification by optimization is to minimize the cost function over the parameter space
$$\min_{e}q(e),\quad q(e)=\|h(e)-H\|^{2},$$
where $h(e)$ denotes the interface computed by simulation and $H$ the measured interface. Without loss of generality, we consider a parameter set $e=(e_{1},e_{2})$ consisting of two parameters.
The optimization can be carried out iteratively with Newton's method,

$$Q(e_{k})\,\Delta e_{k}=-\nabla_{e}q(e_{k}),\quad e\subset\{C,u_{\infty},a_{0},u_{\mathrm{c}},k\},$$
where

$$Q=\begin{bmatrix}\dfrac{\partial^{2}q}{\partial e_{1}^{2}}&\dfrac{\partial^{2}q}{\partial e_{1}\partial e_{2}}\\[1.5ex]\dfrac{\partial^{2}q}{\partial e_{2}\partial e_{1}}&\dfrac{\partial^{2}q}{\partial e_{2}^{2}}\end{bmatrix}$$
is the Hessian of $q$. The Newton method is motivated by the second-order Taylor expansion

$$q(e^{*})\approx q(e)+\nabla_{e}q(e)^{\mathrm{T}}\Delta e+\frac{1}{2}\Delta e^{\mathrm{T}}Q(e)\,\Delta e,\quad \Delta e=e^{*}-e,$$

whose minimization over $\Delta e$ yields the linear system above,
where $e^{*}$ denotes the optimal parameter choice.
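A minimal sketch of the resulting Newton iteration for a two-parameter cost: the gradient and Hessian are approximated by central differences (in practice they may also come from sensitivity equations), and the $2\times 2$ Newton system $Q\,\Delta e=-\nabla_{e}q$ is solved by Cramer's rule. Here `q` is any callable; in the identification problem it would be the simulated-versus-measured interface mismatch.

```python
def newton_identify(q, e0, h=1e-4, iters=20):
    """Newton iteration e_{k+1} = e_k + de with Q de = -grad q.
    q: callable cost taking [e1, e2]; e0: initial guess."""
    e = list(e0)
    for _ in range(iters):
        # central-difference gradient
        g = []
        for i in range(2):
            ep, em = list(e), list(e)
            ep[i] += h
            em[i] -= h
            g.append((q(ep) - q(em)) / (2.0 * h))
        # central-difference Hessian Q
        Q = [[0.0, 0.0], [0.0, 0.0]]
        for i in range(2):
            for j in range(2):
                epp, epm, emp, emm = list(e), list(e), list(e), list(e)
                epp[i] += h; epp[j] += h
                epm[i] += h; epm[j] -= h
                emp[i] -= h; emp[j] += h
                emm[i] -= h; emm[j] -= h
                Q[i][j] = (q(epp) - q(epm) - q(emp) + q(emm)) / (4.0 * h * h)
        # solve Q * de = -g by Cramer's rule (2x2)
        det = Q[0][0] * Q[1][1] - Q[0][1] * Q[1][0]
        de0 = (-g[0] * Q[1][1] + g[1] * Q[0][1]) / det
        de1 = (g[0] * Q[1][0] - g[1] * Q[0][0]) / det
        e[0] += de0
        e[1] += de1
    return e
```

For a quadratic cost the central differences are exact and the iteration converges in a single step; for the full settling problem, each evaluation of `q` requires one forward simulation.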