# Numerical Methods Qualification Exam Problems and Solutions (University of Maryland)/Jan08 667

## Problem 4

 Consider the system ${\displaystyle x=1+h{\frac {e^{-x^{2}}}{1+y^{2}}}{\mbox{,}}\qquad y=.5+h\arctan(x^{2}+y^{2})\!\,}$ .

## Problem 4a

 Show that if the parameter ${\displaystyle h>0\!\,}$  is chosen sufficiently small, then this system has a unique solution ${\displaystyle (x^{*},y^{*})\!\,}$  within some rectangular region.

## Solution 4a

The system of equations may be expressed in matrix notation.

${\displaystyle f(x,y)={\begin{bmatrix}1+h{\frac {e^{-x^{2}}}{1+y^{2}}}\\\\.5+h\arctan(x^{2}+y^{2})\end{bmatrix}}\!\,}$

The Jacobian of ${\displaystyle f(x,y)\!\,}$ , ${\displaystyle J(f)\!\,}$ , is computed using partial derivatives

${\displaystyle J(f)={\begin{bmatrix}{\frac {h}{1+y^{2}}}e^{-x^{2}}(-2x)&{\frac {-h2ye^{-x^{2}}}{(1+y^{2})^{2}}}\\h\cdot {\frac {2x}{1+(x^{2}+y^{2})^{2}}}&h\cdot {\frac {2y}{1+(x^{2}+y^{2})^{2}}}\end{bmatrix}}\!\,}$

Every entry of ${\displaystyle J(f)\!\,}$  is ${\displaystyle h\!\,}$  times a function that is bounded on any bounded set. Hence, if ${\displaystyle h\!\,}$  is sufficiently small and ${\displaystyle x,y\!\,}$  are restricted to a bounded region ${\displaystyle B\!\,}$ ,

${\displaystyle \underbrace {\|J(f)\|} _{C}<1\!\,}$

From the mean value theorem, for ${\displaystyle {\vec {x_{1}}},{\vec {x_{2}}}\in B\!\,}$ , there exists ${\displaystyle {\vec {\xi }}\!\,}$  such that

${\displaystyle f({\vec {x_{1}}})-f({\vec {x_{2}}})=J(f({\vec {\xi }}))({\vec {x_{1}}}-{\vec {x_{2}}})\!\,}$

Since ${\displaystyle \|J(f)\|\leq C<1\!\,}$  in the region ${\displaystyle B\!\,}$  for sufficiently small ${\displaystyle h\!\,}$ ,

${\displaystyle \|f({\vec {x_{1}}})-f({\vec {x_{2}}})\|\leq C\|{\vec {x_{1}}}-{\vec {x_{2}}}\|\!\,}$

Moreover, for small ${\displaystyle h\!\,}$ , ${\displaystyle f\!\,}$  maps a rectangle ${\displaystyle B\!\,}$  around ${\displaystyle (1,.5)\!\,}$  into itself, since both components differ from the constants ${\displaystyle 1\!\,}$  and ${\displaystyle .5\!\,}$  by terms of size ${\displaystyle {\mathcal {O}}(h)\!\,}$ . Therefore, ${\displaystyle f\!\,}$  is a contraction on ${\displaystyle B\!\,}$ , and from the contraction mapping theorem there exists a unique fixed point (solution) ${\displaystyle (x^{*},y^{*})\!\,}$  in that rectangular region.
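A minimal numerical sketch of the fixed point iteration (the choice ${\displaystyle h=0.1\!\,}$  and the starting point ${\displaystyle (1,.5)\!\,}$  are illustrative assumptions):

```python
import math

def fixed_point(h, x0=1.0, y0=0.5, tol=1e-12, max_iter=200):
    """Iterate (x, y) <- f(x, y) for the system in Problem 4."""
    x, y = x0, y0
    for _ in range(max_iter):
        x_new = 1.0 + h * math.exp(-x * x) / (1.0 + y * y)
        y_new = 0.5 + h * math.atan(x * x + y * y)
        if max(abs(x_new - x), abs(y_new - y)) < tol:
            return x_new, y_new
        x, y = x_new, y_new
    return x, y

# for small h the iterates settle quickly to the fixed point (x*, y*)
xs, ys = fixed_point(h=0.1)
print(xs, ys)
```

Since the contraction constant is of size ${\displaystyle {\mathcal {O}}(h)\!\,}$ , only a handful of iterations are needed for small ${\displaystyle h\!\,}$ .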

## Problem 4b

 Derive a fixed point iteration scheme for solving the system and show that it converges.

## Solution 4b

### Use Newton's Method

To solve this problem, we can use Newton's method, which is itself a fixed point iteration. We want to find the zeros of

${\displaystyle g(x,y)={\begin{bmatrix}x\\\\y\end{bmatrix}}-{\begin{bmatrix}1+h{\frac {e^{-x^{2}}}{1+y^{2}}}\\\\.5+h\arctan(x^{2}+y^{2})\end{bmatrix}}\!\,}$

The Jacobian of ${\displaystyle g(x,y)\!\,}$ , ${\displaystyle J(g)\!\,}$ , is computed using partial derivatives

${\displaystyle J(g)=I-J(f)={\begin{bmatrix}1+{\frac {2hxe^{-x^{2}}}{1+y^{2}}}&{\frac {2hye^{-x^{2}}}{(1+y^{2})^{2}}}\\-h\cdot {\frac {2x}{1+(x^{2}+y^{2})^{2}}}&1-h\cdot {\frac {2y}{1+(x^{2}+y^{2})^{2}}}\end{bmatrix}}\!\,}$

Then, Newton's method for solving this nonlinear system of equations is given by

${\displaystyle {\begin{bmatrix}x_{k+1}\\\\y_{k+1}\end{bmatrix}}={\begin{bmatrix}x_{k}\\\\y_{k}\end{bmatrix}}-\underbrace {{\frac {1}{\det(J(g))}}{\begin{bmatrix}1-h\cdot {\frac {2y}{1+(x^{2}+y^{2})^{2}}}&-{\frac {2hye^{-x^{2}}}{(1+y^{2})^{2}}}\\h\cdot {\frac {2x}{1+(x^{2}+y^{2})^{2}}}&1+{\frac {2hxe^{-x^{2}}}{1+y^{2}}}\end{bmatrix}}} _{J(g)^{-1}}g(x_{k},y_{k})\!\,}$

where the entries of ${\displaystyle J(g)^{-1}\!\,}$  are evaluated at ${\displaystyle (x_{k},y_{k})\!\,}$ .
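A sketch of the Newton iteration in code; for robustness it solves the ${\displaystyle 2\times 2\!\,}$  linear system at each step instead of forming ${\displaystyle J(g)^{-1}\!\,}$  explicitly (the value ${\displaystyle h=0.1\!\,}$  and the starting guess ${\displaystyle (1,.5)\!\,}$  are illustrative assumptions):

```python
import numpy as np

H = 0.1  # parameter h, assumed small enough for the contraction argument

def g(v):
    """Residual g(x, y) whose zero is the fixed point of the system."""
    x, y = v
    return np.array([x - 1.0 - H * np.exp(-x**2) / (1.0 + y**2),
                     y - 0.5 - H * np.arctan(x**2 + y**2)])

def Jg(v):
    """Jacobian J(g) = I - J(f), entries written out by hand."""
    x, y = v
    e = np.exp(-x**2)
    q = 1.0 + (x**2 + y**2)**2
    return np.array([[1.0 + 2*H*x*e/(1.0 + y**2), 2*H*y*e/(1.0 + y**2)**2],
                     [-2*H*x/q,                   1.0 - 2*H*y/q]])

v = np.array([1.0, 0.5])            # initial guess from the constant terms
for _ in range(20):
    step = np.linalg.solve(Jg(v), g(v))   # Newton correction
    v = v - step
    if np.linalg.norm(step) < 1e-14:
        break
print(v)
```

Starting this close to the fixed point, the iteration converges in only a few steps, as expected from quadratic convergence.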

### Show convergence by verifying the Newton hypotheses

#### Jacobian of g is Lipschitz

We want to show that ${\displaystyle J(g)\!\,}$  is a Lipschitz function. In fact,

{\displaystyle {\begin{aligned}\|J(g)({\vec {x_{1}}})-J(g)({\vec {x_{2}}})\|&=\|(I-J(f)({\vec {x_{1}}}))-(I-J(f)({\vec {x_{2}}}))\|\\&=\|J(f)({\vec {x_{2}}})-J(f)({\vec {x_{1}}})\|\end{aligned}}\!\,}

and now, since the entries of ${\displaystyle J(f)\!\,}$  are smooth, ${\displaystyle J(f)\!\,}$  is Lipschitz on the bounded region ${\displaystyle B\!\,}$  with some constant ${\displaystyle L\!\,}$ , so

${\displaystyle \|J(g)({\vec {x_{1}}})-J(g)({\vec {x_{2}}})\|\leq L\|{\vec {x_{1}}}-{\vec {x_{2}}}\|\!\,}$

#### Jacobian of g is invertible

Since ${\displaystyle \|J(f)\|<1\!\,}$  on ${\displaystyle B\!\,}$  (this is what makes ${\displaystyle f\!\,}$  a contraction), the spectral radius of the Jacobian of ${\displaystyle f\!\,}$  satisfies ${\displaystyle \rho (J(f))\leq \|J(f)\|<1\!\,}$  i.e.

${\displaystyle \rho (J(f))<1\!\,}$ .

On the other hand, we know that the eigenvalues of ${\displaystyle J(g)\!\,}$  are ${\displaystyle \lambda (J(g))=1-\lambda (J(f))\!\,}$ .

Since ${\displaystyle |\lambda (J(f))|<1\!\,}$ , every eigenvalue ${\displaystyle \lambda (J(g))=1-\lambda (J(f))\!\,}$  is nonzero. Then, it follows that ${\displaystyle \det(J(g))\neq 0\!\,}$  or equivalently ${\displaystyle J(g)\!\,}$  is invertible.

#### inverse of Jacobian of g is bounded

${\displaystyle \underbrace {{\frac {1}{\det(J(g))}}{\begin{bmatrix}1-h\cdot {\frac {2y}{1+(x^{2}+y^{2})^{2}}}&-{\frac {2hye^{-x^{2}}}{(1+y^{2})^{2}}}\\h\cdot {\frac {2x}{1+(x^{2}+y^{2})^{2}}}&1+{\frac {2hxe^{-x^{2}}}{1+y^{2}}}\end{bmatrix}}} _{J(g)^{-1}}\!\,}$

Since ${\displaystyle J(g)^{-1}\!\,}$  exists, ${\displaystyle {\mbox{det}}(J(g))\neq 0\!\,}$ . Given a bounded region (bounded ${\displaystyle x,y\!\,}$ ), each entry of the above matrix is bounded. Therefore the norm is bounded.

#### norm of (Jacobian of g)^-1 (x_0) * g(x_0) bounded

${\displaystyle \|J(g)^{-1}(x_{0})\,g(x_{0})\|<\infty \!\,}$  since ${\displaystyle J(g)^{-1}\!\,}$  and ${\displaystyle g(x)\!\,}$  are bounded.

Then, given a good enough initial approximation ${\displaystyle (x_{0},y_{0})\!\,}$ , Newton's method is at least quadratically convergent, i.e.,

${\displaystyle \|(x_{k+1},y_{k+1})-(x^{*},y^{*})\|\leq K\|(x_{k},y_{k})-(x^{*},y^{*})\|^{2}.\!\,}$

## Problem 5a

 Outline the derivation of the Adams-Bashforth methods for the numerical solution of the initial value problem ${\displaystyle y'=f(x,y){\mbox{, }}y(x_{0})=y_{0}\!\,}$ .

## Solution 5a

We want to solve the following initial value problem: ${\displaystyle y'=f(x,y){\mbox{, }}y(x_{0})=y_{0}\!\,}$ .

First, we integrate this expression over ${\displaystyle [t_{n},t_{n+1}]\!\,}$ , to obtain

${\displaystyle y(t_{n+1})=y(t_{n})+\int _{t_{n}}^{t_{n+1}}f(t,y(t))dt\!\,}$ ,

To approximate the integral on the right hand side, we approximate its integrand ${\displaystyle f(t,y(t))\!\,}$  using an appropriate interpolation polynomial of degree ${\displaystyle p\!\,}$  at ${\displaystyle t_{n},t_{n-1},...,t_{n-p}\!\,}$ .

This idea generates the Adams-Bashforth methods.

${\displaystyle y_{n+1}=y_{n}+\int _{t_{n}}^{t_{n+1}}\sum _{k=0}^{p}g(t_{n-k})l_{k}(t)dt\!\,}$ ,

where ${\displaystyle y_{n}\!\,}$  denotes the approximate solution, ${\displaystyle g(t)\equiv f(t,y(t))\!\,}$  and ${\displaystyle l_{k}(t)\!\,}$  denotes the associated Lagrange basis polynomial.

## Problem 5b

 Derive the Adams-Bashforth formula ${\displaystyle y_{i+1}=y_{i}+h\left[-{\frac {1}{2}}f(x_{i-1},y_{i-1})+{\frac {3}{2}}f(x_{i},y_{i})\right]\qquad \qquad (1)\!\,}$

## Solution 5b

From (a) we have

${\displaystyle y_{i+1}=y_{i}+\int _{x_{i}}^{x_{i+1}}fdx\!\,}$  where ${\displaystyle \int _{x_{i}}^{x_{i+1}}fdx\approx \int _{x_{i}}^{x_{i+1}}\left[{\frac {x-x_{i}}{x_{i-1}-x_{i}}}f_{i-1}+{\frac {x-x_{i-1}}{x_{i}-x_{i-1}}}f_{i}\right]dx\!\,}$

Then if we let ${\displaystyle x=x_{i}+sh\!\,}$ , where ${\displaystyle h\!\,}$  is the fixed step size, we get

{\displaystyle {\begin{aligned}dx&=hds\\x-x_{i}&=sh\\x-x_{i-1}&=(1+s)h\\x_{i}-x_{i-1}&=h\end{aligned}}}

${\displaystyle h\int _{0}^{1}\left[{\frac {sh}{-h}}f_{i-1}+{\frac {(1+s)h}{h}}f_{i}\right]ds=h\left[-{\frac {1}{2}}f_{i-1}+{\frac {3}{2}}f_{i}\right]\!\,}$

So we have the Adams-Bashforth method as desired

${\displaystyle y_{i+1}=y_{i}+h\left[-{\frac {1}{2}}f(x_{i-1},y_{i-1})+{\frac {3}{2}}f(x_{i},y_{i})\right]\!\,}$
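This method is straightforward to implement. A minimal sketch (the test equation ${\displaystyle y'=-y\!\,}$  and the single Euler step used to generate the second starting value are illustrative assumptions):

```python
import numpy as np

def ab2(f, x0, y0, h, n):
    """Two-step Adams-Bashforth: y_{i+1} = y_i + h(-f_{i-1}/2 + 3 f_i/2).

    A single Euler step supplies the second starting value y_1.
    """
    xs = x0 + h * np.arange(n + 1)
    ys = np.empty(n + 1)
    ys[0] = y0
    ys[1] = y0 + h * f(xs[0], ys[0])          # starter step
    for i in range(1, n):
        ys[i + 1] = ys[i] + h * (-0.5 * f(xs[i - 1], ys[i - 1])
                                 + 1.5 * f(xs[i], ys[i]))
    return xs, ys

# demo on y' = -y, y(0) = 1, whose exact solution is e^{-x}
xs, ys = ab2(lambda x, y: -y, 0.0, 1.0, h=0.01, n=100)
print(abs(ys[-1] - np.exp(-1.0)))
```

Unlike one-step methods, the scheme needs two starting values, which is why a starter step (Euler here, or any one-step method of order at least one) is required.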

## Problem 5c

 Analyze the method (1). To be specific, find the local truncation error, prove convergence and find the order of convergence.

## Solution 5c

### Find Local Truncation Error Using Taylor Expansion

Note that ${\displaystyle y_{i}\approx y(t_{i})\!\,}$ . Also, denote the uniform step size ${\displaystyle \Delta t\!\,}$  as h. Hence,

${\displaystyle t_{i-1}=t_{i}-h\!\,}$

${\displaystyle t_{i+1}=t_{i}+h\!\,}$

Therefore, the given equation may be written as

${\displaystyle y(t_{i}+h)-y(t_{i})\approx h\left[-{\frac {1}{2}}y'(t_{i}-h)+{\frac {3}{2}}y'(t_{i})\right]\!\,}$

#### Expand Left Hand Side

Expanding about ${\displaystyle t_{i}\!\,}$ , we get

${\displaystyle {\begin{array}{|c|c|c|c|}{\mbox{Order}}&y(t_{i}+h)&-y(t_{i})&\Sigma \\\hline &&&\\0&y(t_{i})&-y(t_{i})&0\\&&&\\1&y'(t_{i})h&0&y'(t_{i})h\\&&&\\2&{\frac {1}{2!}}y''(t_{i})h^{2}&0&{\frac {1}{2}}y''(t_{i})h^{2}\\&&&\\3&{\frac {1}{3!}}y'''(t_{i})h^{3}&0&{\frac {1}{6}}y'''(t_{i})h^{3}\\&&&\\4&{\mathcal {O}}(h^{4})&0&{\mathcal {O}}(h^{4})\\&&&\\\hline \end{array}}}$

#### Expand Right Hand side

Also expanding about ${\displaystyle t_{i}\!\,}$ , the right hand side ${\displaystyle h\left[-{\frac {1}{2}}y'(t_{i}-h)+{\frac {3}{2}}y'(t_{i})\right]\!\,}$  gives

${\displaystyle {\begin{array}{|c|c|c|c|}{\mbox{Order}}&-{\frac {1}{2}}hy'(t_{i}-h)&{\frac {3}{2}}hy'(t_{i})&\Sigma \\\hline &&&\\1&-{\frac {1}{2}}hy'(t_{i})&{\frac {3}{2}}hy'(t_{i})&y'(t_{i})h\\&&&\\2&{\frac {1}{2}}h^{2}y''(t_{i})&0&{\frac {1}{2}}y''(t_{i})h^{2}\\&&&\\3&-{\frac {1}{4}}h^{3}y'''(t_{i})&0&-{\frac {1}{4}}y'''(t_{i})h^{3}\\&&&\\4&{\mathcal {O}}(h^{4})&0&{\mathcal {O}}(h^{4})\\\hline \end{array}}}$

Subtracting the two expansions, the terms of orders 1 and 2 cancel and

${\displaystyle y(t_{i}+h)-y(t_{i})-h\left[-{\frac {1}{2}}y'(t_{i}-h)+{\frac {3}{2}}y'(t_{i})\right]=\left({\frac {1}{6}}+{\frac {1}{4}}\right)y'''(t_{i})h^{3}+{\mathcal {O}}(h^{4})={\frac {5}{12}}y'''(t_{i})h^{3}+{\mathcal {O}}(h^{4})\!\,}$ ,

so the local truncation error is ${\displaystyle {\frac {5}{12}}y'''(t_{i})h^{3}+{\mathcal {O}}(h^{4})\!\,}$  per step, i.e. the method is of order 2.

### Show Convergence by Showing Stability and Consistency

A method is convergent if and only if it is both stable and consistent.

#### Stability

The first characteristic polynomial of the method is ${\displaystyle \rho (\zeta )=\zeta ^{2}-\zeta \!\,}$ , with roots ${\displaystyle \zeta =0\!\,}$  and ${\displaystyle \zeta =1\!\,}$ . Both roots lie in the closed unit disk and the root on the unit circle is simple, so the root condition holds and the method is zero stable.

#### Consistency

The truncation error is of order 2 and tends to zero as ${\displaystyle h\to 0\!\,}$ , so the method is consistent.

### Order of Convergence

By the Dahlquist equivalence theorem, consistency together with zero stability implies convergence, and the order of convergence equals the order of consistency, which is 2.
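The second-order convergence can be checked empirically by halving ${\displaystyle h\!\,}$  and observing that the error at a fixed endpoint drops by a factor of about 4. A sketch on the test problem ${\displaystyle y'=-y\!\,}$ , ${\displaystyle y(0)=1\!\,}$  (an illustrative assumption), with the exact value supplying the second starting point:

```python
import numpy as np

def ab2_final_error(h):
    """Solve y' = -y, y(0) = 1 on [0, 1] with the two-step Adams-Bashforth
    method (exact value e^{-h} as the second starting value) and return
    the error at x = 1."""
    n = int(round(1.0 / h))
    f = lambda y: -y
    y_prev, y_curr = 1.0, np.exp(-h)   # y_0 and exact y_1
    for i in range(1, n):
        y_next = y_curr + h * (-0.5 * f(y_prev) + 1.5 * f(y_curr))
        y_prev, y_curr = y_curr, y_next
    return abs(y_curr - np.exp(-1.0))

e1, e2 = ab2_final_error(0.01), ab2_final_error(0.005)
print(e1 / e2)   # a ratio near 4 indicates second-order convergence
```

The exact starter is used so that the measured rate reflects the Adams-Bashforth formula alone, not the starter's error.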

## Problem 6

 Consider the problem ${\displaystyle -u''+u=f(x),\;\;0\leq x\leq 1,\;\;u(0)=u(1)=0.\qquad \qquad (2)}$

## Problem 6a

 Give a variational formulation of (2), i.e. express (2) as ${\displaystyle u\in H,\;\;B(u,v)=F(v),\;\;\forall v\in H\qquad \qquad (3)\!\,}$ Define the space H, the bilinear form B and the linear functional F, and state the relation between (2) and (3).

## Solution 6a

Multiplying (2) by a test function and using integration by parts we obtain:

${\displaystyle -\int _{0}^{1}u''vdx+\int _{0}^{1}uvdx=\int _{0}^{1}fvdx\!\,}$

${\displaystyle \underbrace {\left.-u'v\right|_{0}^{1}} _{0}+\int _{0}^{1}u'v'dx+\int _{0}^{1}uvdx=\int _{0}^{1}fvdx\!\,}$

Thus, the weak form or variational form associated with the problem (2) reads as follows: Find ${\displaystyle u\in H\!\,}$  such that

${\displaystyle B(u,v)=\int _{0}^{1}u'v'dx+\int _{0}^{1}uvdx=\int _{0}^{1}fvdx=F(v)\!\,}$  for all ${\displaystyle v\in H,\!\,}$

where ${\displaystyle H:=H_{0}^{1}(0,1)=\{v:v\in H^{1},v(0)=v(1)=0\}\!\,}$ .

## Problem 6b

 Let ${\displaystyle 0=x_{0}<x_{1}<\cdots <x_{n}=1\!\,}$  be a mesh on ${\displaystyle [0,1]\!\,}$  with ${\displaystyle h=\max _{j=0,1,\dots ,n-1}(x_{j+1}-x_{j})\!\,}$ , and let ${\displaystyle V_{h}=\left\{u:u{\mbox{ continuous on }}[0,1],u|_{[x_{j},x_{j+1}]}{\mbox{ is linear for each }}j,u(0)=u(1)=0\right\}\!\,}$ . Define the finite element approximation, ${\displaystyle u_{h}{\mbox{ to }}u\!\,}$  based on the approximation space ${\displaystyle V_{h}\!\,}$ . What can be said about ${\displaystyle \|u-u_{h}\|_{1}\!\,}$ , the error in the Sobolev norm on ${\displaystyle H^{1}(0,1)\!\,}$ ?

## Solution 6b

### Define piecewise linear basis

For our basis of ${\displaystyle V_{h}\!\,}$ , we use the set of hat functions ${\displaystyle \{\phi _{j}\}_{j=1}^{n-1}\!\,}$ , i.e., for ${\displaystyle j=1,2,\ldots ,n-1\!\,}$

${\displaystyle \phi _{j}(x)={\begin{cases}{\frac {x-x_{j-1}}{x_{j}-x_{j-1}}}&{\mbox{for }}x\in [x_{j-1},x_{j}]\\{\frac {x_{j+1}-x}{x_{j+1}-x_{j}}}&{\mbox{for }}x\in [x_{j},x_{j+1}]\\0&{\mbox{otherwise}}\end{cases}}\!\,}$

### Define u_h

Since ${\displaystyle \{\phi _{j}\}_{j=1}^{n-1}\!\,}$  is a basis for ${\displaystyle V_{h}\!\,}$ , and ${\displaystyle u_{h}\in V_{h}\!\,}$  we have

${\displaystyle u_{h}=\sum _{j=1}^{n-1}u_{hj}\phi _{j}\!\,}$ .

### Discrete Problem

Now, we can write the discrete problem: Find ${\displaystyle u_{h}\in V_{h}\!\,}$  such that

${\displaystyle B(u_{h},v_{h})=F(v_{h})\!\,}$  for all ${\displaystyle v_{h}\in V_{h}.\!\,}$

If we consider that ${\displaystyle \{\phi _{j}\}_{j=1}^{n-1}\!\,}$  is a basis of ${\displaystyle V_{h}\!\,}$  and the linearity of the bilinear form ${\displaystyle B\!\,}$  and the functional ${\displaystyle F\!\,}$ , we obtain the equivalent problem:

Find ${\displaystyle u_{h}=(u_{h,1},\dots ,u_{h,{n-1}})\in \mathbb {R} ^{n-1}\!\,}$  such that

${\displaystyle \sum _{j=1}^{n-1}u_{h,j}\left\{\int _{0}^{1}\phi _{j}'\phi _{i}'dx+\int _{0}^{1}\phi _{j}\phi _{i}dx\right\}=\int _{0}^{1}f\phi _{i}dx,\quad i=1,\dots ,n-1.\!\,}$

This last problem can be formulated as a matrix problem as follows:

Find ${\displaystyle u_{h}=(u_{h,1},\dots ,u_{h,{n-1}})\in \mathbb {R} ^{n-1}\!\,}$  such that

${\displaystyle Au_{h}=F,\!\,}$

where ${\displaystyle A_{ij}=\int _{0}^{1}\phi _{j}'\phi _{i}'dx+\int _{0}^{1}\phi _{j}\phi _{i}dx\!\,}$  and ${\displaystyle F_{i}=\int _{0}^{1}f\phi _{i}dx\!\,}$ .
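On a uniform mesh the matrix ${\displaystyle A\!\,}$  is tridiagonal: the stiffness part contributes ${\displaystyle 2/h\!\,}$  on the diagonal and ${\displaystyle -1/h\!\,}$  off it, and the mass part contributes ${\displaystyle 2h/3\!\,}$  and ${\displaystyle h/6\!\,}$ . A sketch of the assembly and solve (the manufactured solution ${\displaystyle u=\sin(\pi x)\!\,}$  and the lumped quadrature ${\displaystyle F_{i}\approx hf(x_{i})\!\,}$  for the load are illustrative assumptions):

```python
import numpy as np

def fem_solve(n):
    """Assemble and solve A u_h = F for -u'' + u = f on (0,1), u(0)=u(1)=0,
    with piecewise-linear elements on a uniform mesh of n subintervals.
    f is chosen so that the exact solution is u(x) = sin(pi x)."""
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)[1:-1]          # interior nodes
    m = n - 1
    # A_ij = int phi_i' phi_j' dx + int phi_i phi_j dx  (tridiagonal)
    main = (2.0 / h + 2.0 * h / 3.0) * np.ones(m)
    off = (-1.0 / h + h / 6.0) * np.ones(m - 1)
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    f = (np.pi**2 + 1.0) * np.sin(np.pi * x)        # so that u = sin(pi x)
    F = h * f                                        # lumped-quadrature load
    u_h = np.linalg.solve(A, F)
    return x, u_h

x, u_h = fem_solve(64)
print(np.max(np.abs(u_h - np.sin(np.pi * x))))
```

For a production code one would use a banded solver and a consistent quadrature for the load, but the dense solve keeps the sketch short.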

### Bound error

In general terms, we can use Cea's Lemma to obtain

${\displaystyle \|u-u_{h}\|_{1}\leq C\inf _{v_{h}\in V_{h}}\|u-v_{h}\|_{1}}$

In particular, we can consider ${\displaystyle v_{h}\!\,}$  as the Lagrange interpolant, which we denote by ${\displaystyle v_{I}\!\,}$ . Then,

${\displaystyle \|u-u_{h}\|_{1}\leq C\|u-v_{I}\|_{1}\leq Ch\|u\|_{H^{2}(0,1)}}$ .

For the related problem ${\displaystyle -u''=f\!\,}$  (with no zeroth-order term) the finite element solution is nodally exact, since the Green's function is piecewise linear. In that case ${\displaystyle u_{h}\!\,}$  coincides with the Lagrange interpolant, and the standard interpolation bound gives the elementwise pointwise estimate

${\displaystyle \|u-u_{h}\|_{L^{\infty }([x_{i-1},x_{i}])}\leq \max _{x\in [x_{i-1},x_{i}]}{\frac {|u''(x)|}{2!}}\left({\frac {h}{2}}\right)^{2}}$ .
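The interpolation bound is easy to check numerically. A sketch with the illustrative choice ${\displaystyle u=\sin(\pi x)\!\,}$ , for which ${\displaystyle \max |u''|=\pi ^{2}\!\,}$ , comparing the maximum error of the piecewise-linear interpolant against the bound ${\displaystyle (h/2)^{2}\max |u''|/2!\!\,}$ :

```python
import numpy as np

n = 16
h = 1.0 / n
xs = np.linspace(0.0, 1.0, n + 1)          # interpolation nodes
fine = np.linspace(0.0, 1.0, 2001)         # fine sampling grid
# piecewise-linear (Lagrange) interpolant of u = sin(pi x) at the nodes
u_I = np.interp(fine, xs, np.sin(np.pi * xs))
err = np.max(np.abs(np.sin(np.pi * fine) - u_I))
bound = (np.pi**2 / 2.0) * (h / 2.0)**2    # max|u''|/2! * (h/2)^2
print(err, bound)
```

The measured error sits just below the bound, which is sharp for this smooth example.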

## Problem 6c

 Derive the estimate for ${\displaystyle \|u-u_{h}\|_{0}\!\,}$ , the error in ${\displaystyle L_{2}(0,1)\!\,}$ . Hint: Let w solve (#) :${\displaystyle {\begin{cases}-w''+w=u-u_{h},\\w(0)=w(1)=0.\end{cases}}}$  We characterize ${\displaystyle w\!\,}$  variationally as ${\displaystyle w\in H_{0}^{1},\;\;B(w,v)=\int (u-u_{h})vdx,\forall v\in H_{0}^{1}\!\,}$ .Let ${\displaystyle v=u-u_{h}\!\,}$  to get ${\displaystyle B(w,u-u_{h})=\int (u-u_{h})^{2}dx=\|u-u_{h}\|_{L_{2}}^{2}.\qquad \qquad (4)\!\,}$ Use formula (4) to estimate ${\displaystyle \|u-u_{h}\|_{L_{2}}\!\,}$ .

## Solution 6c

### Continuity of Bilinear Form

{\displaystyle {\begin{aligned}B(u,v)&=\int uv+\int u'v'\\&\leq \|u\|_{0}\|v\|_{0}+\|u'\|_{0}\|v'\|_{0}{\mbox{ (Cauchy-Schwarz in L2) }}\\&\leq (\|u\|_{0}^{2}+\|u'\|_{0}^{2})^{\frac {1}{2}}(\|v\|_{0}^{2}+\|v'\|_{0}^{2})^{\frac {1}{2}}{\mbox{ ( Cauchy-Schwarz in R2 ) }}\\&=\|u\|_{1}\cdot \|v\|_{1}\end{aligned}}\!\,}

### Orthogonality of the Error

${\displaystyle B(u-u_{h},v_{h})=0,\quad \forall v_{h}\in V_{h}\!\,}$ .

### Bound for L2 norm of w

{\displaystyle {\begin{aligned}\|w\|_{0}^{2}&\leq \|w\|_{1}^{2}\\&=B(w,w)\\&=\int (u-u_{h})w\\&\leq \|u-u_{h}\|_{0}\|w\|_{0}{\mbox{ (Cauchy-Schwarz in L2 ) }}\end{aligned}}\!\,}

Hence,

${\displaystyle (*)\qquad \|w\|_{0}\leq \|u-u_{h}\|_{0}\!\,}$

#### Bound for L2 norm of w''

From ${\displaystyle (\#)\!\,}$ , we have

${\displaystyle -w''+w=u-u_{h}\!\,}$

Then,

{\displaystyle {\begin{aligned}\|w''\|_{0}&\leq \|w\|_{0}+\|u-u_{h}\|_{0}{\mbox{ (triangle inequality) }}\\&\leq \|u-u_{h}\|_{0}+\|u-u_{h}\|_{0}{\mbox{ (by (*) ) }}\\&=2\|u-u_{h}\|_{0}\end{aligned}}\!\,}

### Bound L2 Error

{\displaystyle {\begin{aligned}\|u-u_{h}\|_{L^{2}}^{2}&=B(w,u-u_{h}){\mbox{ ( from equation (4) ) }}\\&=B(w,u-u_{h})-\underbrace {B(w_{h},u-u_{h})} _{0}\\&=B(w-w_{h},u-u_{h})\\&\leq \|w-w_{h}\|_{H^{1}(0,1)}\|u-u_{h}\|_{H^{1}(0,1)}{\mbox{ (Continuity of Bilinear Form) }}\\&\leq Ch\|w\|_{H^{2}(0,1)}\|u-u_{h}\|_{H^{1}(0,1)}{\mbox{ (Cea Lemma and Polynomial Interpolation Error) }}\end{aligned}}}

Finally, from (#), we have that ${\displaystyle \|w\|_{H^{2}(0,1)}\leq C\|u-u_{h}\|_{L_{2}(0,1)}\!\,}$ . Then,

${\displaystyle \|u-u_{h}\|_{L^{2}(0,1)}^{2}\leq Ch\|u-u_{h}\|_{L_{2}(0,1)}\|u-u_{h}\|_{H^{1}(0,1)}\!\,}$ ,

or equivalently,

${\displaystyle \|u-u_{h}\|_{L^{2}(0,1)}\leq Ch\|u-u_{h}\|_{H^{1}(0,1)}\leq Ch^{2}\|u\|_{H^{2}(0,1)}\!\,}$ .

## Problem 6d

 Suppose ${\displaystyle \{\phi _{1}^{h},\dots ,\phi _{N_{h}}^{h}\}\!\,}$  is a basis for ${\displaystyle V_{h}{\mbox{ where }}N_{h}=\dim V_{h}{\mbox{, so }}u_{h}=\sum _{j=1}^{N_{h}}c_{j}^{h}\phi _{j}^{h}{\mbox{, for appropriate coefficients }}c_{j}^{h}.\!\,}$ . Show that ${\displaystyle \|u-u_{h}\|_{1}^{2}=\|u\|_{1}^{2}-C^{T}AC\!\,}$ where ${\displaystyle C=[c_{1}^{h},\dots ,c_{N_{h}}^{h}]^{T}{\mbox{ and }}A\!\,}$  is the stiffness matrix.

## Solution 6d

We know that

{\displaystyle {\begin{aligned}\|u-u_{h}\|_{1}^{2}&=\langle u-u_{h},u-u_{h}\rangle _{H^{1}(0,1)}\\&=\langle u-u_{h},u\rangle _{H^{1}(0,1)}-\underbrace {\langle u-u_{h},u_{h}\rangle _{H^{1}(0,1)}} _{0}\\&=\|u\|_{H^{1}(0,1)}^{2}-\langle u_{h},u\rangle _{H^{1}(0,1)}\\&=\|u\|_{H^{1}(0,1)}^{2}-\|u_{h}\|_{H^{1}(0,1)}^{2}\end{aligned}}}

where the substitution in the last two lines comes from the orthogonality of the error; note that for this problem ${\displaystyle B(u,v)=\langle u,v\rangle _{H^{1}(0,1)}\!\,}$ , so Galerkin orthogonality is exactly ${\displaystyle H^{1}\!\,}$ -orthogonality.

Now, ${\displaystyle \|u_{h}\|_{H^{1}(0,1)}^{2}=\sum _{j=1}^{n-1}\sum _{i=1}^{n-1}u_{h,j}u_{h,i}\left(\int _{0}^{1}\phi _{i}'\phi _{j}'dx+\int _{0}^{1}\phi _{i}\phi _{j}dx\right)=\sum _{j=1}^{n-1}\sum _{i=1}^{n-1}u_{h,j}A_{ij}u_{h,i}=C^{T}AC.\!\,}$

Then, we have obtained

${\displaystyle \|u-u_{h}\|_{1}^{2}=\|u\|_{H^{1}(0,1)}^{2}-C^{T}AC.\!\,}$
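The identity can be verified numerically: with exact load integrals, the gap ${\displaystyle \|u\|_{1}^{2}-C^{T}AC\!\,}$  is positive and decays like ${\displaystyle h^{2}\!\,}$ , matching ${\displaystyle \|u-u_{h}\|_{1}^{2}={\mathcal {O}}(h^{2})\!\,}$ . A sketch (the manufactured solution ${\displaystyle u=\sin(\pi x)\!\,}$ , for which ${\displaystyle \|u\|_{1}^{2}=(\pi ^{2}+1)/2\!\,}$ , is an illustrative assumption):

```python
import numpy as np

def energy_gap(n):
    """For -u'' + u = (pi^2+1) sin(pi x), whose exact solution is
    u = sin(pi x), assemble the linear FEM system A C = F on a uniform
    mesh with exact load integrals and return ||u||_1^2 - C^T A C,
    which by the identity equals ||u - u_h||_1^2."""
    h = 1.0 / n
    xi = np.linspace(0.0, 1.0, n + 1)[1:-1]      # interior nodes
    m = n - 1
    main = (2.0 / h + 2.0 * h / 3.0) * np.ones(m)
    off = (-1.0 / h + h / 6.0) * np.ones(m - 1)
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    # exact: int sin(pi x) phi_i dx = sin(pi x_i) * 2(1 - cos(pi h))/(pi^2 h)
    F = ((np.pi**2 + 1.0) * np.sin(np.pi * xi)
         * 2.0 * (1.0 - np.cos(np.pi * h)) / (np.pi**2 * h))
    C = np.linalg.solve(A, F)
    return (np.pi**2 + 1.0) / 2.0 - C @ A @ C

g1, g2 = energy_gap(32), energy_gap(64)
print(g1, g2, g1 / g2)   # gap is positive and shrinks by about 4 per halving
```

The exact load is needed here: with an approximate load vector the computed coefficients are no longer the Galerkin solution, and the orthogonality used to derive the identity fails.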