Numerical Methods Qualification Exam Problems and Solutions (University of Maryland)/Jan08 667

Problem 4

 Consider the system $x=1+h{\frac {e^{-x^{2}}}{1+y^{2}}}{\mbox{,}}\qquad y=.5+h\arctan(x^{2}+y^{2})\!\,$ .

Problem 4a

 Show that if the parameter $h>0\!\,$ is chosen sufficiently small, then this system has a unique solution $(x^{*},y^{*})\!\,$ within some rectangular region.

Solution 4a

The system of equations may be expressed in matrix notation.

$f(x,y)={\begin{bmatrix}1+h{\frac {e^{-x^{2}}}{1+y^{2}}}\\\\.5+h\arctan(x^{2}+y^{2})\end{bmatrix}}\!\,$

The Jacobian of $f(x,y)\!\,$ , $J(f)\!\,$ , is computed using partial derivatives:

$J(f)={\begin{bmatrix}{\frac {h}{1+y^{2}}}e^{-x^{2}}(-2x)&{\frac {-h2ye^{-x^{2}}}{(1+y^{2})^{2}}}\\h\cdot {\frac {2x}{1+(x^{2}+y^{2})^{2}}}&h\cdot {\frac {2y}{1+(x^{2}+y^{2})^{2}}}\end{bmatrix}}\!\,$

If $x,y\!\,$  are restricted to a bounded region $B\!\,$ , every entry of $J(f)\!\,$  is $h\!\,$  times a bounded function, so for $h\!\,$  sufficiently small,

$\underbrace {\|J(f)\|} _{C}<1\!\,$

From the mean value theorem, for ${\vec {x_{1}}},{\vec {x_{2}}}\in B\!\,$ , there exists ${\vec {\xi }}\!\,$  such that

$f({\vec {x_{1}}})-f({\vec {x_{2}}})=J(f({\vec {\xi }}))({\vec {x_{1}}}-{\vec {x_{2}}})\!\,$

Since $\|J(f)\|\leq C<1\!\,$  in the region $B\!\,$  for sufficiently small $h\!\,$ ,

$\|f({\vec {x_{1}}})-f({\vec {x_{2}}})\|\leq C\|{\vec {x_{1}}}-{\vec {x_{2}}}\|\!\,$

Therefore, $f\!\,$  is a contraction on $B\!\,$ . Moreover, for small $h\!\,$  the image of $f\!\,$  stays close to $(1,.5)\!\,$ , so $f\!\,$  maps a suitable rectangle around $(1,.5)\!\,$  into itself, and the contraction mapping theorem gives a unique fixed point (solution) $(x^{*},y^{*})\!\,$  in that rectangular region.
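The contraction argument suggests simply iterating the map. The sketch below is an illustrative numerical check, not part of the exam solution; the value $h=0.1\!\,$ , the starting point $(1,.5)\!\,$  (the $h=0\!\,$  solution), and the tolerance are all assumptions.

```python
import math

def fixed_point(h, x0=1.0, y0=0.5, tol=1e-12, max_iter=200):
    """Iterate the map f(x, y) = (1 + h e^{-x^2}/(1+y^2),
    0.5 + h arctan(x^2 + y^2)).  For small h the map is a contraction,
    so the iterates converge to the unique fixed point (x*, y*)."""
    x, y = x0, y0
    for _ in range(max_iter):
        xn = 1.0 + h * math.exp(-x * x) / (1.0 + y * y)
        yn = 0.5 + h * math.atan(x * x + y * y)
        if max(abs(xn - x), abs(yn - y)) < tol:
            return xn, yn
        x, y = xn, yn
    raise RuntimeError("fixed-point iteration did not converge")

xs, ys = fixed_point(h=0.1)
# residuals of the two equations at the computed point
r1 = abs(xs - 1.0 - 0.1 * math.exp(-xs ** 2) / (1.0 + ys ** 2))
r2 = abs(ys - 0.5 - 0.1 * math.atan(xs ** 2 + ys ** 2))
```

Both residuals vanish to within the iteration tolerance, confirming that the iterates settle on a solution of the system.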

Problem 4b

 Derive a fixed point iteration scheme for solving the system and show that it converges.

Solution 4b

Use Newton Method

To solve this problem, we use Newton's method, which is itself a fixed point iteration. We want to find the zeros of

$g(x,y)={\begin{bmatrix}x\\\\y\end{bmatrix}}-{\begin{bmatrix}1+h{\frac {e^{-x^{2}}}{1+y^{2}}}\\\\.5+h\arctan(x^{2}+y^{2})\end{bmatrix}}\!\,$

The Jacobian of $g(x,y)\!\,$ , $J(g)\!\,$ , is computed using partial derivatives:

$J(g)={\begin{bmatrix}1+{\frac {2hxe^{-x^{2}}}{1+y^{2}}}&{\frac {2hye^{-x^{2}}}{(1+y^{2})^{2}}}\\-{\frac {2hx}{1+(x^{2}+y^{2})^{2}}}&1-{\frac {2hy}{1+(x^{2}+y^{2})^{2}}}\end{bmatrix}}\!\,$

Then, the Newton method for solving this nonlinear system of equations is given by

${\begin{bmatrix}x_{k+1}\\\\y_{k+1}\end{bmatrix}}={\begin{bmatrix}x_{k}\\\\y_{k}\end{bmatrix}}-\underbrace {{\frac {1}{\det(J(g))}}{\begin{bmatrix}1-{\frac {2hy}{1+(x^{2}+y^{2})^{2}}}&-{\frac {2hye^{-x^{2}}}{(1+y^{2})^{2}}}\\{\frac {2hx}{1+(x^{2}+y^{2})^{2}}}&1+{\frac {2hxe^{-x^{2}}}{1+y^{2}}}\end{bmatrix}}} _{J(g)^{-1}}g(x_{k},y_{k})\!\,$

where the entries of $J(g)^{-1}\!\,$  are evaluated at $(x_{k},y_{k})\!\,$ .
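A minimal numerical sketch of this Newton iteration follows (an illustrative aid, not part of the exam solution; the value $h=0.1\!\,$ , the starting point and the tolerance are assumptions). The partial derivatives are coded directly from $g=\mathrm{id}-f\!\,$ , and the $2\times 2\!\,$  inverse is applied in closed form.

```python
import math

def newton(h, x0=1.0, y0=0.5, tol=1e-14, max_iter=50):
    """Newton's method for g(x, y) = (x, y) - f(x, y) = 0, using the
    exact Jacobian J(g) = I - J(f) and the closed-form inverse of a
    2x2 matrix."""
    x, y = x0, y0
    for _ in range(max_iter):
        e = math.exp(-x * x)
        s = x * x + y * y
        g1 = x - 1.0 - h * e / (1.0 + y * y)
        g2 = y - 0.5 - h * math.atan(s)
        # Entries of J(g) = I - J(f)
        a = 1.0 + 2.0 * h * x * e / (1.0 + y * y)   # d g1 / dx
        b = 2.0 * h * y * e / (1.0 + y * y) ** 2    # d g1 / dy
        c = -2.0 * h * x / (1.0 + s * s)            # d g2 / dx
        d = 1.0 - 2.0 * h * y / (1.0 + s * s)       # d g2 / dy
        det = a * d - b * c
        dx = (d * g1 - b * g2) / det                # J(g)^{-1} g, row 1
        dy = (-c * g1 + a * g2) / det               # J(g)^{-1} g, row 2
        x, y = x - dx, y - dy
        if max(abs(dx), abs(dy)) < tol:
            return x, y
    raise RuntimeError("Newton iteration did not converge")

xs, ys = newton(h=0.1)
# residuals of the original system at the computed point
r1 = abs(xs - 1.0 - 0.1 * math.exp(-xs ** 2) / (1.0 + ys ** 2))
r2 = abs(ys - 0.5 - 0.1 * math.atan(xs ** 2 + ys ** 2))
```

The residuals reach roughly machine precision after only a handful of iterations, reflecting the quadratic convergence established below.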

Show convergence by showing the Newton hypotheses hold

Jacobian of g is Lipschitz

We want to show that $J(g)\!\,$  is Lipschitz on $B\!\,$ . Since $J(g)=I-J(f)\!\,$ ,

$J(g)({\vec {x_{1}}})-J(g)({\vec {x_{2}}})=\left(I-J(f)({\vec {x_{1}}})\right)-\left(I-J(f)({\vec {x_{2}}})\right)=J(f)({\vec {x_{2}}})-J(f)({\vec {x_{1}}})\!\,$

Each entry of $J(f)\!\,$  is a smooth function with bounded derivatives on the bounded region $B\!\,$ , so $J(f)\!\,$  is Lipschitz there with some constant $L\!\,$ . Hence

$\|J(g)({\vec {x_{1}}})-J(g)({\vec {x_{2}}})\|\leq L\|{\vec {x_{1}}}-{\vec {x_{2}}}\|\!\,$

Jacobian of g is invertible

Since $\|J(f)\|<1\!\,$  on $B\!\,$ , the spectral radius of the Jacobian of $f\!\,$  is less than 1, i.e.

$\rho (J(f))<1\!\,$ .

On the other hand, we know that the eigenvalues of $J(g)\!\,$  are $\lambda (J(g))=1-\lambda (J(f))\!\,$ .

Since $|\lambda (J(f))|<1\!\,$ , no eigenvalue of $J(g)\!\,$  vanishes. It follows that $\det(J(g))\neq 0\!\,$ , or equivalently $J(g)\!\,$  is invertible.

Inverse of Jacobian of g is bounded

$\underbrace {{\frac {1}{\det(J(g))}}{\begin{bmatrix}1-{\frac {2hy}{1+(x^{2}+y^{2})^{2}}}&-{\frac {2hye^{-x^{2}}}{(1+y^{2})^{2}}}\\{\frac {2hx}{1+(x^{2}+y^{2})^{2}}}&1+{\frac {2hxe^{-x^{2}}}{1+y^{2}}}\end{bmatrix}}} _{J(g)^{-1}}\!\,$

Since $\|J(f)\|\leq C<1\!\,$  on $B\!\,$ , the Neumann series gives $\|J(g)^{-1}\|=\|(I-J(f))^{-1}\|\leq {\frac {1}{1-C}}\!\,$ . Equivalently, on the bounded region (bounded $x,y\!\,$ ) each entry of the above matrix is bounded and ${\mbox{det}}(J(g))\!\,$  is bounded away from zero, so the norm is bounded.

Norm of $J(g)^{-1}(x_{0})g(x_{0})\!\,$  is bounded

$\|J(g)^{-1}(x_{0})g(x_{0})\|<\infty \!\,$  since $J(g)^{-1}\!\,$  and $g\!\,$  are bounded on $B\!\,$ .

Then, using a good enough approximation $(x_{0},y_{0})\!\,$ , we have that the Newton's method is at least quadratically convergent, i.e,

$\|(x_{k+1},y_{k+1})-(x^{*},y^{*})\|\leq K\|(x_{k},y_{k})-(x^{*},y^{*})\|^{2}.\!\,$

Problem 5a

 Outline the derivation of the Adams-Bashforth methods for the numerical solution of the initial value problem $y'=f(x,y){\mbox{, }}y(x_{0})=y_{0}\!\,$ .

Solution 5a

We want to solve the following initial value problem: $y'=f(x,y){\mbox{, }}y(x_{0})=y_{0}\!\,$ .

First, we integrate this expression over $[t_{n},t_{n+1}]\!\,$ , to obtain

$y(t_{n+1})=y(t_{n})+\int _{t_{n}}^{t_{n+1}}f(t,y(t))dt\!\,$ ,

To approximate the integral on the right hand side, we approximate its integrand $f(t,y(t))\!\,$  using an appropriate interpolation polynomial of degree $p\!\,$  at $t_{n},t_{n-1},...,t_{n-p}\!\,$ .

This idea generates the Adams-Bashforth methods.

$y_{n+1}=y_{n}+\int _{t_{n}}^{t_{n+1}}\sum _{k=0}^{p}g(t_{n-k})l_{k}(t)dt\!\,$ ,

where $y_{n}\!\,$  denotes the approximate solution, $g(t_{n-k})\equiv f(t_{n-k},y_{n-k})\!\,$  and $l_{k}(t)\!\,$  denotes the associated Lagrange basis polynomial.
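The coefficients of any Adams-Bashforth method can be generated mechanically from this construction. The sketch below (an illustrative aid, not part of the exam solution) works in the scaled variable $s=(t-t_{n})/h\!\,$ , where the interpolation nodes become $0,-1,\dots ,-p\!\,$  and the integration range is $[0,1]\!\,$ :

```python
import numpy as np

def ab_coefficients(p):
    """Coefficients b_k of the Adams-Bashforth method
        y_{n+1} = y_n + h * sum_k b_k f(t_{n-k}, y_{n-k}),
    obtained by integrating the Lagrange basis polynomials l_k over one
    step.  In the scaled variable s = (t - t_n)/h the nodes are
    0, -1, ..., -p and the integration range is [0, 1]."""
    nodes = np.array([-float(k) for k in range(p + 1)])
    coeffs = []
    for k in range(p + 1):
        others = np.delete(nodes, k)
        # l_k(s) = prod_{j != k} (s - s_j) / (s_k - s_j)
        lk = np.poly1d(np.poly(others) / np.prod(nodes[k] - others))
        antiderivative = np.polyint(lk)
        coeffs.append(antiderivative(1.0) - antiderivative(0.0))
    return coeffs
```

For $p=1\!\,$  this reproduces the coefficients $3/2\!\,$  and $-1/2\!\,$  derived in part (b); for $p=2\!\,$  it gives the three-step coefficients $23/12,\,-16/12,\,5/12\!\,$ .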

Problem 5b

 Derive the Adams-Bashforth formula $y_{i+1}=y_{i}+h\left[-{\frac {1}{2}}f(x_{i-1},y_{i-1})+{\frac {3}{2}}f(x_{i},y_{i})\right]\qquad \qquad (1)\!\,$

Solution 5b

From (a) we have

$y_{i+1}=y_{i}+\int _{x_{i}}^{x_{i+1}}fdx\!\,$  where $\int _{x_{i}}^{x_{i+1}}fdx\approx \int _{x_{i}}^{x_{i+1}}\left[{\frac {x-x_{i}}{x_{i-1}-x_{i}}}f_{i-1}+{\frac {x-x_{i-1}}{x_{i}-x_{i-1}}}f_{i}\right]dx\!\,$

Then if we let $x=x_{i}+sh\!\,$ , where $h\!\,$  is the fixed step size, we get

{\begin{aligned}dx&=hds\\x-x_{i}&=sh\\x-x_{i-1}&=(1+s)h\\x_{i}-x_{i-1}&=h\end{aligned}}

$h\int _{0}^{1}\left[{\frac {sh}{-h}}f_{i-1}+{\frac {(1+s)h}{h}}f_{i}\right]ds=h\left[-{\frac {1}{2}}f_{i-1}+{\frac {3}{2}}f_{i}\right]\!\,$

So we have the Adams-Bashforth method as desired

$y_{i+1}=y_{i}+h\left[-{\frac {1}{2}}f(x_{i-1},y_{i-1})+{\frac {3}{2}}f(x_{i},y_{i})\right]\!\,$
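As a sanity check (not part of the exam solution), the sketch below implements this two-step method on the model problem $y'=-y\!\,$ , using one second-order Heun step to generate the missing starting value $y_{1}\!\,$ , and estimates the observed order of convergence; the model problem and step counts are illustrative choices.

```python
import math

def ab2_solve(f, y0, t0, t1, n):
    """Integrate y' = f(t, y) from t0 to t1 with n steps of the two-step
    Adams-Bashforth method (1).  The missing starting value y_1 comes from
    one Heun (second-order Runge-Kutta) step, which does not degrade the
    overall order."""
    h = (t1 - t0) / n
    t, y = t0, y0
    k1 = f(t, y)
    k2 = f(t + h, y + h * k1)
    f_prev = k1                               # f(t_0, y_0)
    t, y = t + h, y + 0.5 * h * (k1 + k2)     # Heun starting step
    for _ in range(n - 1):
        fi = f(t, y)
        y, f_prev = y + h * (1.5 * fi - 0.5 * f_prev), fi
        t += h
    return y

# Model problem y' = -y, y(0) = 1, exact solution y(t) = e^{-t}.
err = lambda n: abs(ab2_solve(lambda t, y: -y, 1.0, 0.0, 1.0, n)
                    - math.exp(-1.0))
rate = math.log2(err(100) / err(200))   # observed order, expect about 2
```

Halving the step size cuts the error by roughly a factor of four, consistent with the second-order convergence proved in part (c).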

Problem 5c

 Analyze the method (1). To be specific, find the local truncation error, prove convergence and find the order of convergence.

Solution 5c

Find Local Truncation Error Using Taylor Expansion

Note that $y_{i}\approx y(t_{i})\!\,$ . Also, denote the uniform step size $\Delta t\!\,$  as h. Hence,

$t_{i-1}=t_{i}-h\!\,$

$t_{i+1}=t_{i}+h\!\,$

Therefore, the given equation may be written as

$y(t_{i}+h)-y(t_{i})\approx h\left[-{\frac {1}{2}}y'(t_{i}-h)+{\frac {3}{2}}y'(t_{i})\right]\!\,$

Expand Left Hand Side

Expanding about $t_{i}\!\,$ , we get

$y(t_{i}+h)-y(t_{i})=y'(t_{i})h+{\frac {1}{2}}y''(t_{i})h^{2}+{\frac {1}{6}}y'''(t_{i})h^{3}+{\mathcal {O}}(h^{4})\!\,$

Expand Right Hand side

Also expanding about $t_{i}\!\,$ , since $y'(t_{i}-h)=y'(t_{i})-y''(t_{i})h+{\frac {1}{2}}y'''(t_{i})h^{2}+{\mathcal {O}}(h^{3})\!\,$ , the right hand side becomes

$h\left[-{\frac {1}{2}}y'(t_{i}-h)+{\frac {3}{2}}y'(t_{i})\right]=y'(t_{i})h+{\frac {1}{2}}y''(t_{i})h^{2}-{\frac {1}{4}}y'''(t_{i})h^{3}+{\mathcal {O}}(h^{4})\!\,$

Subtracting the two expansions, the $h\!\,$  and $h^{2}\!\,$  terms cancel, so the local truncation error is

$\tau _{i}=y(t_{i}+h)-y(t_{i})-h\left[-{\frac {1}{2}}y'(t_{i}-h)+{\frac {3}{2}}y'(t_{i})\right]=\left({\frac {1}{6}}+{\frac {1}{4}}\right)y'''(t_{i})h^{3}+{\mathcal {O}}(h^{4})={\frac {5}{12}}y'''(t_{i})h^{3}+{\mathcal {O}}(h^{4})\!\,$

Show Convergence by Showing Stability and Consistency

A method is convergent if and only if it is both stable and consistent.

Stability

The first characteristic polynomial of the method is $\rho (\zeta )=\zeta ^{2}-\zeta \!\,$ , with roots $\zeta =0\!\,$  and $\zeta =1\!\,$ . All roots lie in the closed unit disk and the root on the boundary is simple, so the root condition holds and the method is zero-stable.

Consistency

The local truncation error is ${\mathcal {O}}(h^{3})\!\,$  per step, i.e. the method has consistency order 2. Since the truncation error tends to zero as $h\to 0\!\,$ , the method is consistent.

Order of Convergence

By the Dahlquist equivalence theorem, consistency plus zero-stability imply convergence, and the order of convergence equals the order of consistency. Hence the method converges with order 2.

Problem 6

 Consider the problem $-u''+u=f(x),\;\;0\leq x\leq 1,\;\;u(0)=u(1)=0.\qquad \qquad (2)$

Problem 6a

 Give a variational formulation of (2), i.e. express (2) as $u\in H,\;\;B(u,v)=F(v),\;\;\forall v\in H\qquad \qquad (3)\!\,$ Define the space H, the bilinear form B and the linear functional F, and state the relation between (2) and (3).

Solution 6a

Multiplying (2) by a test function $v\!\,$  vanishing at the endpoints and using integration by parts we obtain:

$-\int _{0}^{1}u''vdx+\int _{0}^{1}uvdx=\int _{0}^{1}fvdx\!\,$

$\underbrace {\left.-u'v\right|_{0}^{1}} _{0}+\int _{0}^{1}u'v'dx+\int _{0}^{1}uvdx=\int _{0}^{1}fvdx\!\,$

Thus, the weak form or variational form associated with the problem (2) reads as follows: Find $u\in H\!\,$  such that

$B(u,v)=\int _{0}^{1}u'v'dx+\int _{0}^{1}uvdx=\int _{0}^{1}fvdx=F(v)\!\,$  for all $v\in H,\!\,$

where $H:=H_{0}^{1}(0,1)=\{v:v\in H^{1}(0,1),v(0)=v(1)=0\}\!\,$ . Any classical solution of (2) solves (3); conversely, if $u\in H\!\,$  solves (3) and is twice differentiable, then undoing the integration by parts shows that $u\!\,$  solves (2).

Problem 6b

 Let $0=x_{0}<x_{1}<\dots <x_{n}=1\!\,$  be a mesh on $[0,1]\!\,$ with $h=\max _{j=0,1,\dots ,n-1}(x_{j+1}-x_{j})\!\,$ , and let $V_{h}=\left\{u:u{\mbox{ continuous on }}[0,1],u|_{[x_{j},x_{j+1}]}{\mbox{ is linear for each }}j,u(0)=u(1)=0\right\}\!\,$ . Define the finite element approximation $u_{h}\!\,$  to $u\!\,$  based on the approximation space $V_{h}\!\,$ . What can be said about $\|u-u_{h}\|_{1}\!\,$ , the error in the Sobolev norm on $H^{1}(0,1)\!\,$ ?

Solution 6b

Define piecewise linear basis

For our basis of $V_{h}\!\,$ , we use the set of hat functions $\{\phi _{j}\}_{j=1}^{n-1}\!\,$ , i.e., for $j=1,2,\ldots ,n-1\!\,$

$\phi _{j}(x)={\begin{cases}{\frac {x-x_{j-1}}{x_{j}-x_{j-1}}}&{\mbox{for }}x\in [x_{j-1},x_{j}]\\{\frac {x_{j+1}-x}{x_{j+1}-x_{j}}}&{\mbox{for }}x\in [x_{j},x_{j+1}]\\0&{\mbox{otherwise}}\end{cases}}\!\,$

Define u_h

Since $\{\phi _{j}\}_{j=1}^{n-1}\!\,$  is a basis for $V_{h}\!\,$ , and $u_{h}\in V_{h}\!\,$  we have

$u_{h}=\sum _{j=1}^{n-1}u_{hj}\phi _{j}\!\,$ .

Discrete Problem

Now, we can write the discrete problem: Find $u_{h}\in V_{h}\!\,$  such that

$B(u_{h},v_{h})=F(v_{h})\!\,$  for all $v_{h}\in V_{h}.\!\,$

If we consider that $\{\phi _{j}\}_{j=1}^{n-1}\!\,$  is a basis of $V_{h}\!\,$  and the linearity of the bilinear form $B\!\,$  and the functional $F\!\,$ , we obtain the equivalent problem:

Find $u_{h}=(u_{h,1},\dots ,u_{h,{n-1}})\in \mathbb {R} ^{n-1}\!\,$  such that

$\sum _{j=1}^{n-1}u_{h,j}\left\{\int _{0}^{1}\phi _{j}'\phi _{i}'dx+\int _{0}^{1}\phi _{j}\phi _{i}dx\right\}=\int _{0}^{1}f\phi _{i}dx,\quad i=1,\dots ,n-1.\!\,$

This last problem can be formulated as a matrix problem as follows:

Find $u_{h}=(u_{h,1},\dots ,u_{h,{n-1}})\in \mathbb {R} ^{n-1}\!\,$  such that

$Au_{h}=F,\!\,$

where $A_{ij}=\int _{0}^{1}\phi _{j}'\phi _{i}'dx+\int _{0}^{1}\phi _{j}\phi _{i}dx\!\,$  and $F_{i}=\int _{0}^{1}f\phi _{i}dx\!\,$ .
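The assembly and solve can be sketched numerically (an illustrative check, not part of the exam solution). Assuming a uniform mesh, the standard hat-function integrals make $A\!\,$  tridiagonal; the load vector is approximated here with the trapezoid rule, $F_{i}\approx hf(x_{i})\!\,$ , which preserves second-order accuracy, and the manufactured solution $u=\sin(\pi x)\!\,$  is an assumed test case.

```python
import numpy as np

def fem_solve(f, n):
    """Linear finite elements for -u'' + u = f on (0,1), u(0) = u(1) = 0,
    on a uniform mesh with n subintervals.  The hat-function integrals are
        int phi_i'^2 = 2/h,   int phi_i' phi_{i+1}' = -1/h,
        int phi_i^2  = 2h/3,  int phi_i  phi_{i+1}  = h/6,
    so A = stiffness + mass is tridiagonal.  The load vector uses the
    trapezoid rule, F_i ~ h f(x_i)."""
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    main = (2.0 / h + 2.0 * h / 3.0) * np.ones(n - 1)
    off = (-1.0 / h + h / 6.0) * np.ones(n - 2)
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    F = h * f(x[1:-1])
    return x, np.linalg.solve(A, F)

# Manufactured solution u = sin(pi x), so f = (pi^2 + 1) sin(pi x).
u_exact = lambda x: np.sin(np.pi * x)
f = lambda x: (np.pi ** 2 + 1.0) * np.sin(np.pi * x)

errs = []
for n in (32, 64):
    x, uh = fem_solve(f, n)
    errs.append(np.max(np.abs(uh - u_exact(x[1:-1]))))
rate = np.log2(errs[0] / errs[1])   # nodal error order, expect about 2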

Bound error

In general terms, we can use Cea's Lemma to obtain

$\|u-u_{h}\|_{1}\leq C\inf _{v_{h}\in V_{h}}\|u-v_{h}\|_{1}$

In particular, we can consider $v_{h}\!\,$  as the Lagrange interpolant, which we denote by $v_{I}\!\,$ . Then,

$\|u-u_{h}\|_{1}\leq C\|u-v_{I}\|_{1}\leq Ch\|u\|_{H^{2}(0,1)}$ .

For the Lagrange interpolant we also have the elementwise pointwise estimate

$\|u-v_{I}\|_{L^{\infty }([x_{i-1},x_{i}])}\leq \max _{x\in [x_{i-1},x_{i}]}{\frac {|u''(x)|}{2!}}\left({\frac {h}{2}}\right)^{2}$

(For the pure Laplacian $-u''=f\!\,$  the finite element solution is nodally exact and coincides with the Lagrange interpolant, so the same bound holds for $u-u_{h}\!\,$ ; for the present operator $-u''+u\!\,$  nodal exactness fails, and this bound applies only to the interpolant.)

Problem 6c

 Derive the estimate for $\|u-u_{h}\|_{0}\!\,$ , the error in $L_{2}(0,1)\!\,$ . Hint: Let w solve (#) :${\begin{cases}-w''+w=u-u_{h},\\w(0)=w(1)=0.\end{cases}}$ We characterize $w\!\,$ variationally as $w\in H_{0}^{1},\;\;B(w,v)=\int (u-u_{h})vdx,\forall v\in H_{0}^{1}\!\,$ .Let $v=u-u_{h}\!\,$ to get $B(w,u-u_{h})=\int (u-u_{h})^{2}dx=\|u-u_{h}\|_{L_{2}}^{2}.\qquad \qquad (4)\!\,$ Use formula (4) to estimate $\|u-u_{h}\|_{L_{2}}\!\,$ .

Solution 6c

Continuity of Bilinear Form

{\begin{aligned}B(u,v)&=\int uv+\int u'v'\\&\leq \|u\|_{0}\|v\|_{0}+\|u'\|_{0}\|v'\|_{0}{\mbox{ (Cauchy–Schwarz in }}L^{2}{\mbox{) }}\\&\leq (\|u\|_{0}^{2}+\|u'\|_{0}^{2})^{\frac {1}{2}}(\|v\|_{0}^{2}+\|v'\|_{0}^{2})^{\frac {1}{2}}{\mbox{ (Cauchy–Schwarz in }}\mathbb {R} ^{2}{\mbox{) }}\\&=\|u\|_{1}\cdot \|v\|_{1}\end{aligned}}\!\,

Orthogonality of the Error

$B(u-u_{h},v_{h})=0,\quad \forall v_{h}\in V_{h}\!\,$ .

Bound for L2 norm of w

{\begin{aligned}\|w\|_{0}^{2}&\leq \|w\|_{1}^{2}\\&=B(w,w)\\&=\int (u-u_{h})w\\&\leq \|u-u_{h}\|_{0}\|w\|_{0}{\mbox{ (Cauchy–Schwarz in }}L^{2}{\mbox{) }}\end{aligned}}\!\,

Hence,

$(*)\qquad \|w\|_{0}\leq \|u-u_{h}\|_{0}\!\,$

Bound for L2 norm of w''

From $(\#)\!\,$ , we have

$-w''+w=u-u_{h}\!\,$

Then,

{\begin{aligned}\|w''\|_{0}&\leq \|w\|_{0}+\|u-u_{h}\|_{0}{\mbox{ (triangle inequality) }}\\&\leq \|u-u_{h}\|_{0}+\|u-u_{h}\|_{0}{\mbox{ (by (*) ) }}\\&=2\|u-u_{h}\|_{0}\end{aligned}}\!\,

Bound L2 Error

{\begin{aligned}\|u-u_{h}\|_{L^{2}}^{2}&=B(w,u-u_{h}){\mbox{ ( from equation (4) ) }}\\&=B(w,u-u_{h})-\underbrace {B(w_{h},u-u_{h})} _{0}\\&=B(w-w_{h},u-u_{h})\\&\leq \|w-w_{h}\|_{H^{1}(0,1)}\|u-u_{h}\|_{H^{1}(0,1)}{\mbox{ (Continuity of Bilinear Form) }}\\&\leq Ch\|w\|_{H^{2}(0,1)}\|u-u_{h}\|_{H^{1}(0,1)}{\mbox{ (Cea Lemma and Polynomial Interpolation Error) }}\end{aligned}}

Finally, from (#), we have that $\|w\|_{H^{2}(0,1)}\leq C\|u-u_{h}\|_{L_{2}(0,1)}\!\,$ . Then,

$\|u-u_{h}\|_{L^{2}(0,1)}^{2}\leq Ch\|u-u_{h}\|_{L_{2}(0,1)}\|u-u_{h}\|_{H^{1}(0,1)}\!\,$ ,

or equivalently,

$\|u-u_{h}\|_{L^{2}(0,1)}\leq Ch\|u-u_{h}\|_{H^{1}(0,1)}\leq Ch^{2}\|u\|_{H^{2}(0,1)}\!\,$ .

Problem 6d

 Suppose $\{\phi _{1}^{h},\dots ,\phi _{N_{h}}^{h}\}\!\,$ is a basis for $V_{h}{\mbox{ where }}N_{h}=\dim V_{h}{\mbox{, so }}u_{h}=\sum _{j=1}^{N_{h}}c_{j}^{h}\phi _{j}^{h}{\mbox{, for appropriate coefficients }}c_{j}^{h}.\!\,$ . Show that $\|u-u_{h}\|_{1}^{2}=\|u\|_{1}^{2}-C^{T}AC\!\,$ where $C=[c_{1}^{h},\dots ,c_{N_{h}}^{h}]^{T}{\mbox{ and }}A\!\,$ is the stiffness matrix.

Solution 6d

We know that

{\begin{aligned}\|u-u_{h}\|_{1}^{2}&=\langle u-u_{h},u-u_{h}\rangle _{H^{1}(0,1)}\\&=\langle u-u_{h},u\rangle _{H^{1}(0,1)}-\underbrace {\langle u-u_{h},u_{h}\rangle _{H^{1}(0,1)}} _{0}\\&=\|u\|_{H^{1}(0,1)}^{2}-\langle u_{h},u\rangle _{H^{1}(0,1)}\\&=\|u\|_{H^{1}(0,1)}^{2}-\|u_{h}\|_{H^{1}(0,1)}^{2}\end{aligned}}

where the substitutions in the last two lines follow from the Galerkin orthogonality of the error, i.e. $\langle u-u_{h},v_{h}\rangle _{H^{1}(0,1)}=B(u-u_{h},v_{h})=0\!\,$  for all $v_{h}\in V_{h}\!\,$ .

Now, $\|u_{h}\|_{H^{1}(0,1)}^{2}=\sum _{j=1}^{N_{h}}\sum _{i=1}^{N_{h}}c_{j}^{h}c_{i}^{h}\left(\int _{0}^{1}\phi _{i}'\phi _{j}'dx+\int _{0}^{1}\phi _{i}\phi _{j}dx\right)=\sum _{j=1}^{N_{h}}\sum _{i=1}^{N_{h}}c_{j}^{h}A_{ij}c_{i}^{h}=C^{T}AC.\!\,$

Then, we have obtained

$\|u-u_{h}\|_{1}^{2}=\|u\|_{H^{1}(0,1)}^{2}-C^{T}AC.\!\,$
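The identity can be verified numerically (an illustrative check, not part of the solution; the mesh size and test problem are assumptions). For $f=(\pi ^{2}+1)\sin(\pi x)\!\,$  the exact solution is $u=\sin(\pi x)\!\,$  with $\|u\|_{1}^{2}=(\pi ^{2}+1)/2\!\,$ , and the load integrals $\int _{0}^{1}f\phi _{i}dx\!\,$  have a closed form, so $u_{h}\!\,$  below is the exact Galerkin solution and both sides of the identity agree up to quadrature error.

```python
import numpy as np

n, h = 32, 1.0 / 32
x = np.linspace(0.0, 1.0, n + 1)

# Tridiagonal A = stiffness + mass for hat functions on a uniform mesh.
A = (np.diag((2.0 / h + 2.0 * h / 3.0) * np.ones(n - 1))
     + np.diag((-1.0 / h + h / 6.0) * np.ones(n - 2), 1)
     + np.diag((-1.0 / h + h / 6.0) * np.ones(n - 2), -1))

# Exact load for f = (pi^2+1) sin(pi x):
#   F_i = int f phi_i dx = (pi^2+1) * 2(1 - cos(pi h))/(pi^2 h) * sin(pi x_i)
F = ((np.pi ** 2 + 1.0) * 2.0 * (1.0 - np.cos(np.pi * h))
     / (np.pi ** 2 * h) * np.sin(np.pi * x[1:-1]))

C = np.linalg.solve(A, F)                     # coefficient vector of u_h
rhs = (np.pi ** 2 + 1.0) / 2.0 - C @ A @ C    # ||u||_1^2 - C^T A C

# ||u - u_h||_1^2 by 5-point Gauss quadrature on each element.
gp, gw = np.polynomial.legendre.leggauss(5)
c = np.concatenate(([0.0], C, [0.0]))         # include boundary values
lhs = 0.0
for j in range(n):
    a, b = x[j], x[j + 1]
    xx = 0.5 * (b - a) * gp + 0.5 * (a + b)   # quadrature points
    ww = 0.5 * (b - a) * gw                   # quadrature weights
    uh = c[j] + (c[j + 1] - c[j]) * (xx - a) / h   # u_h on the element
    duh = (c[j + 1] - c[j]) / h                     # u_h' on the element
    lhs += np.sum(ww * ((np.sin(np.pi * xx) - uh) ** 2
                        + (np.pi * np.cos(np.pi * xx) - duh) ** 2))
```

The two quantities `lhs` and `rhs` agree far beyond the size of the error itself, which is exactly what the identity predicts.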