# Vector Spaces

## Vectors and Scalars

A scalar is a single number value, such as 3, 5, or 10. A vector is an ordered set of scalars.

A vector is typically described as a matrix with a row or column size of 1. A vector with a single column (column size of 1) is a column vector, and a vector with a single row (row size of 1) is a row vector.

[Column Vector]

${\displaystyle {\begin{bmatrix}a\\b\\c\\\vdots \end{bmatrix}}}$

[Row Vector]

${\displaystyle {\begin{bmatrix}a&b&c&\cdots \end{bmatrix}}}$

A "common vector" is another name for a column vector, and this book will simply use the word "vector" to refer to a common vector.

## Vector Spaces

A vector space is a set of vectors together with two operations (typically addition and scalar multiplication) that follow a number of specific rules. We will typically denote vector spaces with a capital-italic letter: V, for instance. A space V is a vector space if all the following requirements are met, where x and y are arbitrary vectors in V, and c and d are arbitrary scalar values. There are 10 requirements in all:

Given: ${\displaystyle x,y\in V}$

1. There is an operation called "Addition" (signified with a "+" sign) between two vectors, x + y, such that if both the operands are in V, then the result is also in V.
2. The addition operation is commutative for all elements in V.
3. The addition operation is associative for all elements in V.
4. There is a unique neutral element, φ, in V, such that x + φ = x. This is also called a zero element.
5. For every x in V, there is a negative element -x in V such that -x + x = φ.
6. ${\displaystyle cx\in V}$
7. ${\displaystyle c(x+y)=cx+cy}$
8. ${\displaystyle (c+d)x=cx+dx}$
9. ${\displaystyle c(dx)=cdx}$
10. 1 × x = x

Some of these rules may seem obvious, but that is only because they are so widely accepted, and have been taught to people since childhood.

## Scalar Product

A scalar product is a special type of operation that acts on two vectors, and returns a scalar result. Scalar products are denoted as an ordered pair between angle-brackets: <x,y>. A scalar product between vectors must satisfy the following four rules:

1. ${\displaystyle \langle x,x\rangle \geq 0,\quad \forall x\in V}$
2. ${\displaystyle \langle x,x\rangle =0}$ if and only if x = 0
3. ${\displaystyle \langle x,y\rangle =\langle y,x\rangle }$
4. ${\displaystyle \langle x,cy_{1}+dy_{2}\rangle =c\langle x,y_{1}\rangle +d\langle x,y_{2}\rangle }$

If an operation satisfies all these requirements, then it is a scalar product.

### Examples

One of the most common scalar products is the dot product, which is discussed commonly in Linear Algebra.
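As a concrete sketch, the four scalar-product axioms can be spot-checked numerically for the dot product; numpy and the example vectors here are assumptions for illustration:

```python
import numpy as np

# The dot product <x, y> = sum(x_i * y_i) is a scalar product on R^n.
# Spot-check the four scalar-product axioms for hypothetical example vectors.
x = np.array([1.0, 2.0, 3.0])
y1 = np.array([4.0, -1.0, 0.5])
y2 = np.array([-2.0, 0.0, 1.0])
c, d = 2.0, -3.0

assert np.dot(x, x) >= 0                          # 1. <x,x> >= 0
assert np.dot(np.zeros(3), np.zeros(3)) == 0      # 2. <x,x> = 0 iff x = 0
assert np.isclose(np.dot(x, y1), np.dot(y1, x))   # 3. symmetry
assert np.isclose(np.dot(x, c*y1 + d*y2),         # 4. linearity
                  c*np.dot(x, y1) + d*np.dot(x, y2))
```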

## Norm

The norm is an important scalar quantity that indicates the magnitude of the vector. Norms of a vector are typically denoted as ${\displaystyle \|x\|}$ . To be a norm, an operation must satisfy the following four conditions:

1. ${\displaystyle \|x\|\geq 0}$
2. ${\displaystyle \|x\|=0}$ if and only if x = 0.
3. ${\displaystyle \|cx\|=|c|\|x\|}$
4. ${\displaystyle \|x+y\|\leq \|x\|+\|y\|}$

A vector is called normal if its norm is 1. A normal vector is sometimes also referred to as a unit vector. Both terms will be used in this book. To make a vector normal, but keep it pointing in the same direction, we can divide the vector by its norm:

${\displaystyle {\bar {x}}={\frac {x}{\|x\|}}}$

### Examples

One of the most common norms is the Cartesian (or Euclidean) norm, which is defined as the square root of the sum of the squares:

${\displaystyle \|x\|={\sqrt {x_{1}^{2}+x_{2}^{2}+\cdots +x_{n}^{2}}}}$
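A minimal numeric sketch of this norm and of normalization, assuming numpy:

```python
import numpy as np

# Euclidean (Cartesian) norm of a vector, and normalization to a unit vector.
x = np.array([3.0, 4.0])
norm = np.sqrt(np.sum(x**2))      # sqrt(x1^2 + x2^2) = 5.0 here
x_bar = x / norm                  # unit vector in the same direction

assert np.isclose(norm, 5.0)
assert np.isclose(np.linalg.norm(x_bar), 1.0)   # x_bar is normal
```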

### Unit Vector

A vector is said to be a unit vector if the norm of that vector is 1.

## Orthogonality

Two vectors x and y are said to be orthogonal if the scalar product of the two is equal to zero:

${\displaystyle \langle x,y\rangle =0}$

Two vectors are said to be orthonormal if their scalar product is zero, and both vectors are unit vectors.

## Cauchy-Schwarz Inequality

The Cauchy-Schwarz inequality is an important result that relates the norm of a vector to the scalar product:

${\displaystyle |\langle x,y\rangle |\leq \|x\|\|y\|}$
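The inequality can be spot-checked numerically; numpy and the random test vectors are assumptions for illustration:

```python
import numpy as np

# Spot-check the Cauchy-Schwarz inequality for random vectors, using the
# dot product as the scalar product and the Euclidean norm.
rng = np.random.default_rng(0)
for _ in range(100):
    x, y = rng.normal(size=5), rng.normal(size=5)
    assert abs(np.dot(x, y)) <= np.linalg.norm(x) * np.linalg.norm(y) + 1e-12
```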

## Metric (Distance)

The distance between two vectors in the vector space V, called the metric of the two vectors, is denoted by d(x, y). A metric operation must satisfy the following four conditions:

1. ${\displaystyle d(x,y)\geq 0}$
2. ${\displaystyle d(x,y)=0}$  only if x = y
3. ${\displaystyle d(x,y)=d(y,x)}$
4. ${\displaystyle d(x,y)\leq d(x,z)+d(z,y)}$

### Examples

A common form of metric is the distance between points a and b in the Cartesian plane:

${\displaystyle d(a,b)_{cartesian}={\sqrt {(x_{a}-x_{b})^{2}+(y_{a}-y_{b})^{2}}}}$

## Linear Independence

A set of vectors ${\displaystyle V=\{v_{1},v_{2},\cdots ,v_{n}\}}$ is said to be linearly dependent if any vector v from the set can be constructed from a linear combination of the other vectors in the set. Consider the following linear equation:

${\displaystyle a_{1}v_{1}+a_{2}v_{2}+\cdots +a_{n}v_{n}=0}$

The set of vectors V is linearly independent if and only if the only solution to this equation is all a coefficients equal to zero. If we combine the v vectors together into a single row vector:

${\displaystyle {\hat {V}}=[v_{1}v_{2}\cdots v_{n}]}$

And we combine all the a coefficients into a single column vector:

${\displaystyle {\hat {a}}=[a_{1}a_{2}\cdots a_{n}]^{T}}$

We have the following linear equation:

${\displaystyle {\hat {V}}{\hat {a}}=0}$

For this equation to be satisfied only by ${\displaystyle {\hat {a}}=0}$, the matrix ${\displaystyle {\hat {V}}}$ must be invertible:

${\displaystyle {\hat {V}}^{-1}{\hat {V}}{\hat {a}}={\hat {V}}^{-1}0}$
${\displaystyle {\hat {a}}=0}$

Remember that for the matrix to be invertible, the determinant must be non-zero.

### Non-Square Matrix V

If the matrix ${\displaystyle {\hat {V}}}$ is not square, then the determinant cannot be taken, and therefore the matrix is not invertible. To solve this problem, we can premultiply by the transpose matrix:

${\displaystyle {\hat {V}}^{T}{\hat {V}}{\hat {a}}=0}$

And then the square matrix ${\displaystyle {\hat {V}}^{T}{\hat {V}}}$ must be invertible:

${\displaystyle ({\hat {V}}^{T}{\hat {V}})^{-1}{\hat {V}}^{T}{\hat {V}}{\hat {a}}=0}$
${\displaystyle {\hat {a}}=0}$

### Rank

The rank of a matrix is the largest number of linearly independent rows or columns in the matrix.

To determine the rank, the matrix is typically reduced to row-echelon form. The number of non-zero rows in the reduced form is the rank of the matrix.

If we multiply two matrices A and B, and the result is C:

${\displaystyle AB=C}$

Then the rank of C is at most the minimum of the ranks of A and B:

${\displaystyle \operatorname {Rank} (C)\leq \operatorname {min} [\operatorname {Rank} (A),\operatorname {Rank} (B)]}$
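A brief numeric sketch of these rank facts, assuming numpy:

```python
import numpy as np

# Rank via numpy, and a spot-check that Rank(AB) <= min(Rank(A), Rank(B)).
A = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [0.0, 1.0]])          # rank 2
B = np.array([[1.0, 1.0],
              [0.0, 0.0]])          # rank 1
C = A @ B

assert np.linalg.matrix_rank(A) == 2
assert np.linalg.matrix_rank(B) == 1
assert np.linalg.matrix_rank(C) <= min(2, 1)
```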

## Span

The span of a set of vectors V is the set of all vectors that can be created as a linear combination of the vectors in the set.

## Basis

A basis is a set of linearly-independent vectors that span the entire vector space.

### Basis Expansion

If we have a vector ${\displaystyle y\in V}$, and V has basis vectors ${\displaystyle v_{1},v_{2},\cdots ,v_{n}}$, then by definition we can write y in terms of a linear combination of the basis vectors:

${\displaystyle a_{1}v_{1}+a_{2}v_{2}+\cdots +a_{n}v_{n}=y}$

or

${\displaystyle {\hat {V}}{\hat {a}}=y}$

If ${\displaystyle {\hat {V}}}$ is invertible, the answer is apparent, but if ${\displaystyle {\hat {V}}}$ is not invertible, then we can perform the following technique:

${\displaystyle {\hat {V}}^{T}{\hat {V}}{\hat {a}}={\hat {V}}^{T}y}$
${\displaystyle {\hat {a}}=({\hat {V}}^{T}{\hat {V}})^{-1}{\hat {V}}^{T}y}$

And we call the quantity ${\displaystyle ({\hat {V}}^{T}{\hat {V}})^{-1}{\hat {V}}^{T}}$  the left-pseudoinverse of ${\displaystyle {\hat {V}}}$ .
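A minimal sketch of basis expansion via the left pseudo-inverse, assuming numpy and a hypothetical 2-D subspace of R^3:

```python
import numpy as np

# Basis expansion via the left pseudo-inverse (V^T V)^{-1} V^T.
# V's columns are hypothetical basis vectors of a 2-D subspace of R^3.
V = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
y = V @ np.array([2.0, -1.0])          # a vector known to lie in the span

left_pinv = np.linalg.inv(V.T @ V) @ V.T
a = left_pinv @ y                      # recover the coefficients

assert np.allclose(a, [2.0, -1.0])
assert np.allclose(left_pinv @ V, np.eye(2))   # left pseudo-inverse: L V = I
```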

### Change of Basis

Frequently, it is useful to change the basis vectors to a different set of vectors that span the set, but have different properties. If we have a space V, with basis vectors ${\displaystyle {\hat {V}}}$  and a vector in V called x, we can use the new basis vectors ${\displaystyle {\hat {W}}}$  to represent x:

${\displaystyle x=\sum _{i=0}^{n}a_{i}v_{i}=\sum _{j=1}^{n}b_{j}w_{j}}$

or,

${\displaystyle x={\hat {V}}{\hat {a}}={\hat {W}}{\hat {b}}}$

If ${\displaystyle {\hat {W}}}$ is invertible, then the new coordinates are simply ${\displaystyle {\hat {b}}={\hat {W}}^{-1}{\hat {V}}{\hat {a}}}$.

## Gram-Schmidt Orthogonalization

If we have a set of basis vectors that are not orthogonal, we can use a process known as orthogonalization to produce a new set of basis vectors for the same space that are orthogonal:

Given: ${\displaystyle {\hat {V}}=\{v_{1},v_{2},\cdots ,v_{n}\}}$
Find the new basis ${\displaystyle {\hat {W}}=\{w_{1},w_{2},\cdots ,w_{n}\}}$
Such that ${\displaystyle \langle w_{i},w_{j}\rangle =0\quad \forall i\neq j}$

We can define the vectors as follows:

1. ${\displaystyle w_{1}=v_{1}}$
2. ${\displaystyle w_{m}=v_{m}-\sum _{i=1}^{m-1}{\frac {\langle v_{m},w_{i}\rangle }{\langle w_{i},w_{i}\rangle }}w_{i}}$

Notice that the vectors produced by this technique are orthogonal to each other, but they are not necessarily orthonormal. To make the w vectors orthonormal, you must divide each one by its norm:

${\displaystyle {\bar {w}}={\frac {w}{\|w\|}}}$
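The two-step procedure above can be sketched in code; numpy is assumed, and the normalization step is folded in so the result is orthonormal:

```python
import numpy as np

def gram_schmidt(V):
    """Orthonormalize the columns of V by the Gram-Schmidt process."""
    W = []
    for v in V.T:
        w = v.copy()
        for u in W:                      # subtract projections onto earlier w's
            w -= (np.dot(v, u) / np.dot(u, u)) * u
        W.append(w / np.linalg.norm(w))  # normalize to a unit vector
    return np.column_stack(W)

V = np.array([[1.0, 1.0],
              [0.0, 1.0],
              [1.0, 0.0]])
W = gram_schmidt(V)
assert np.allclose(W.T @ W, np.eye(2))   # columns are orthonormal
```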

## Reciprocal Basis

A reciprocal basis is a special type of basis that is related to the original basis: each reciprocal basis vector wi satisfies ${\displaystyle \langle w_{i},v_{j}\rangle =1}$ when i = j, and ${\displaystyle \langle w_{i},v_{j}\rangle =0}$ otherwise. The reciprocal basis ${\displaystyle {\hat {W}}}$ can be computed as:

${\displaystyle {\hat {W}}=[{\hat {V}}^{T}]^{-1}}$

# Linear Transformations

## Linear Transformations

A linear transformation is a matrix M that operates on a vector in space V, and results in a vector in a different space W. We can define a transformation as such:

${\displaystyle T:V\to W}$

In the above equation, we say that V is the domain space of the transformation, and W is the range space of the transformation. Also, we can use a "function notation" for the transformation, and write it as:

${\displaystyle M(x)=Mx=y}$

Where x is a vector in V, and y is a vector in W. To be a linear transformation, the principle of superposition must hold for the transformation:

${\displaystyle M(av_{1}+bv_{2})=aM(v_{1})+bM(v_{2})}$

Where a and b are arbitrary scalars.

## Null Space

The nullspace of a matrix M is the set of all vectors x for which the following relationship holds:

${\displaystyle Mx=0}$

Where M is a linear transformation matrix. Depending on the size and rank of M, there may be zero or more vectors in the nullspace. Here are a few rules to remember:

1. If the matrix M is invertible, then the nullspace contains only the zero vector.
2. The dimension of the nullspace (N) is the difference between the number of columns (C) of the matrix and its rank (R):
${\displaystyle N=C-R}$

If the matrix is in row-echelon form, the dimension of the nullspace is given by the number of columns without a leading 1. For every column without a leading one, a nullspace vector can be obtained by placing a negative one in the leading position for that column and solving for the remaining entries.

We denote the nullspace of a matrix A as:

${\displaystyle {\mathcal {N}}\{A\}}$
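A common way to compute a nullspace basis numerically is via the singular value decomposition; this sketch assumes numpy, and the helper name `nullspace` is hypothetical:

```python
import numpy as np

def nullspace(A, tol=1e-12):
    """Orthonormal basis for N{A}, computed from the SVD of A."""
    _, s, Vt = np.linalg.svd(A)
    rank = np.sum(s > tol)
    return Vt[rank:].T          # rows of Vt beyond the rank span N{A}

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])        # rank 1, so nullity = 3 - 1 = 2
N = nullspace(A)
assert N.shape == (3, 2)
assert np.allclose(A @ N, 0)
```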

## Linear Equations

If we have a set of linear equations in terms of variables x, scalar coefficients a, and a scalar result b, we can write the system in matrix notation as such:

${\displaystyle Ax=b}$

Where x is an m × 1 vector, b is an n × 1 vector, and A is an n × m matrix. This is therefore a system of n equations in m unknown variables. There are 3 possibilities:

1. If Rank(A) is not equal to Rank([A b]), there is no solution
2. If Rank(A) = Rank([A b]) = m, there is exactly one solution
3. If Rank(A) = Rank([A b]) < m, there are infinitely many solutions.
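The three possibilities can be tested mechanically by comparing ranks; this sketch assumes numpy, and `classify` is a hypothetical helper (note the unique-solution test compares against m, the number of unknowns):

```python
import numpy as np

# Classify a linear system Ax = b by comparing Rank(A) with Rank([A b]).
def classify(A, b):
    m = A.shape[1]                     # number of unknowns
    rA = np.linalg.matrix_rank(A)
    rAb = np.linalg.matrix_rank(np.column_stack([A, b]))
    if rA != rAb:
        return "no solution"
    return "unique solution" if rA == m else "infinitely many solutions"

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
assert classify(A, np.array([1.0, 2.0, 3.0])) == "unique solution"
assert classify(A, np.array([1.0, 2.0, 0.0])) == "no solution"
```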

### Complete Solution

The complete solution of a linear equation is given by the sum of the homogeneous solution, and the particular solution. The homogeneous solution is the nullspace of the transformation, and the particular solution is the values for x that satisfy the equation:

${\displaystyle A(x)=b}$
${\displaystyle A(x_{h}+x_{p})=b}$

Where

${\displaystyle x_{h}}$  is the homogeneous solution, and is the nullspace of A that satisfies the equation ${\displaystyle A(x_{h})=0}$
${\displaystyle x_{p}}$  is the particular solution that satisfies the equation ${\displaystyle A(x_{p})=b}$

### Minimum Norm Solution

If Rank(A) = Rank([A b]) < m, then there are infinitely many solutions to the linear equation. In this situation, we typically look for the solution called the minimum norm solution. This solution represents the "best" solution to the problem in the sense of having the smallest magnitude. To find the minimum norm solution, we must minimize the norm of x subject to the constraint of:

${\displaystyle Ax-b=0}$

There are a number of methods to minimize a value according to a given constraint, and we will talk about them later.

## Least-Squares Curve Fit

If Rank(A) does not equal Rank([A b]), then the linear equation has no exact solution. However, we can find the solution that comes closest. This "best fit" solution is known as the least-squares curve fit.

We define an error quantity E, such that:

${\displaystyle E=Ax-b\neq 0}$

Our job then is to find the minimum value for the norm of E:

${\displaystyle \|E\|^{2}=\|Ax-b\|^{2}=\langle Ax-b,Ax-b\rangle }$

We do this by differentiating with respect to x, and setting the result to zero:

${\displaystyle {\frac {\partial \|E\|^{2}}{\partial x}}=2A^{T}(Ax-b)=0}$

Solving, we get our result:

${\displaystyle x=(A^{T}A)^{-1}A^{T}b}$
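A minimal sketch of the least-squares result, fitting a line through hypothetical data points and checking against numpy's own solver:

```python
import numpy as np

# Least-squares fit of a line y = c0 + c1*t through points that are not
# exactly collinear, using x = (A^T A)^{-1} A^T b.
t = np.array([0.0, 1.0, 2.0, 3.0])
b = np.array([0.1, 0.9, 2.1, 2.9])
A = np.column_stack([np.ones_like(t), t])    # columns: [1, t]

x = np.linalg.inv(A.T @ A) @ A.T @ b
assert np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0])
```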

# Minimization

## Kuhn-Tucker Theorem

The Kuhn-Tucker theorem is a method for minimizing a function f(x) subject to a constraint g(x) = 0. We can define the theorem as follows:

${\displaystyle L(x)=f(x)+\langle \Lambda ,g(x)\rangle }$

Where Λ is the vector of Lagrange multipliers, and < , > denotes the scalar product operation defined earlier. For the particular case ${\displaystyle f(x)={\tfrac {1}{2}}\langle x,x\rangle }$ and ${\displaystyle g(x)=Ax-b}$ (the minimum norm problem above), differentiating with respect to x first, and then with respect to Λ, gives the following two equations:

${\displaystyle {\frac {\partial L(x)}{\partial x}}=x+A^{T}\Lambda }$
${\displaystyle {\frac {\partial L(x)}{\partial \Lambda }}=Ax-b}$

Setting both derivatives to zero and solving, we have the final result:

${\displaystyle x=A^{T}[AA^{T}]^{-1}b}$
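A minimal numeric check, assuming numpy, that the closed-form result above solves the system and has a smaller norm than other solutions:

```python
import numpy as np

# Minimum-norm solution of an underdetermined system via x = A^T (A A^T)^{-1} b.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
b = np.array([1.0, 2.0])

x = A.T @ np.linalg.inv(A @ A.T) @ b
assert np.allclose(A @ x, b)                 # it solves the system

# Any other solution x + z, with z in the nullspace of A, has a larger norm:
z = np.array([1.0, -1.0, 1.0])               # a nullspace vector of A
assert np.allclose(A @ z, 0)
assert np.linalg.norm(x) < np.linalg.norm(x + z)
```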

# Projections

## Projection

The projection of a vector ${\displaystyle v\in V}$ onto a subspace ${\displaystyle W\subseteq V}$ is the vector in W closest to v. In other words, we need to minimize the distance between the vector v and an arbitrary vector ${\displaystyle w\in W}$:

${\displaystyle \|w-v\|^{2}=\|{\hat {W}}{\hat {a}}-v\|^{2}}$
${\displaystyle {\frac {\partial \|{\hat {W}}{\hat {a}}-v\|^{2}}{\partial {\hat {a}}}}={\frac {\partial \langle {\hat {W}}{\hat {a}}-v,{\hat {W}}{\hat {a}}-v\rangle }{\partial {\hat {a}}}}=0}$

[Projection onto space W]

${\displaystyle {\hat {a}}=({\hat {W}}^{T}{\hat {W}})^{-1}{\hat {W}}^{T}v}$

For every vector ${\displaystyle v\in V}$ there exists a vector ${\displaystyle w\in W}$, called the projection of v onto W, such that ${\displaystyle \langle v-w,p\rangle =0}$, where p is an arbitrary element of W.

### Orthogonal Complement

${\displaystyle W^{\perp }=\{x\in V:\langle x,y\rangle =0,\forall y\in W\}}$

## Distance between v and W

The distance between ${\displaystyle v\in V}$  and the space W is given as the minimum distance between v and an arbitrary ${\displaystyle w\in W}$ :

${\displaystyle {\frac {\partial d(v,w)}{\partial {\hat {a}}}}={\frac {\partial \|v-{\hat {W}}{\hat {a}}\|}{\partial {\hat {a}}}}=0}$

## Intersections

Given two vector spaces V and W, what is their intersection? We define an arbitrary vector z that is a component of both V and W:

${\displaystyle z={\hat {V}}{\hat {a}}={\hat {W}}{\hat {b}}}$
${\displaystyle {\hat {V}}{\hat {a}}-{\hat {W}}{\hat {b}}=0}$
${\displaystyle {\begin{bmatrix}{\hat {a}}\\{\hat {b}}\end{bmatrix}}\in {\mathcal {N}}([{\hat {V}}\ \ -{\hat {W}}])}$

Where ${\displaystyle {\mathcal {N}}}$ denotes the nullspace.

# Matrices

## Derivatives

Consider the following set of linear equations:

${\displaystyle a=bx_{1}+cx_{2}}$
${\displaystyle d=ex_{1}+fx_{2}}$

We can define the matrix A to represent the coefficients, the vector B as the results, and the vector x as the variables:

${\displaystyle A={\begin{bmatrix}b&c\\e&f\end{bmatrix}}}$
${\displaystyle B={\begin{bmatrix}a\\d\end{bmatrix}}}$
${\displaystyle x={\begin{bmatrix}x_{1}\\x_{2}\end{bmatrix}}}$

And rewriting the equation in terms of the matrices, we get:

${\displaystyle B=Ax}$

Now, let's say we want the derivative of this expression with respect to the vector x. The derivative of the matrix-vector product Ax with respect to x is the coefficient matrix itself, by analogy with the scalar rule d(ax)/dx = a:

${\displaystyle {\frac {d}{dx}}Ax=A}$

## Pseudo-Inverses

There are special matrices known as pseudo-inverses, which satisfy some of the properties of an inverse, but not others. To recap, if we have two square matrices A and B, both n × n, then if the following equation is true, we say that A is the inverse of B, and B is the inverse of A:

${\displaystyle AB=BA=I}$

### Right Pseudo-Inverse

Consider the following matrix:

${\displaystyle R=A^{T}[AA^{T}]^{-1}}$

We call this matrix R the right pseudo-inverse of A, because:

${\displaystyle AR=I}$

but

${\displaystyle RA\neq I}$

We will denote the right pseudo-inverse of A as ${\displaystyle A^{\dagger }}$

### Left Pseudo-Inverse

Consider the following matrix:

${\displaystyle L=[A^{T}A]^{-1}A^{T}}$

We call L the left pseudo-inverse of A because

${\displaystyle LA=I}$

but

${\displaystyle AL\neq I}$

We will denote the left pseudo-inverse of A as ${\displaystyle A^{\ddagger }}$
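Both pseudo-inverses can be verified numerically; this sketch assumes numpy and hypothetical full-rank example matrices:

```python
import numpy as np

# Left and right pseudo-inverses of non-square matrices. A has full row
# rank, so its right pseudo-inverse exists; B = A^T has full column rank,
# so its left pseudo-inverse exists.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])

R = A.T @ np.linalg.inv(A @ A.T)          # right pseudo-inverse: A R = I
assert np.allclose(A @ R, np.eye(2))
assert not np.allclose(R @ A, np.eye(3))  # but R A != I

B = A.T
L = np.linalg.inv(B.T @ B) @ B.T          # left pseudo-inverse: L B = I
assert np.allclose(L @ B, np.eye(2))
assert not np.allclose(B @ L, np.eye(3))  # but B L != I
```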

Matrices that follow certain predefined formats are useful in a number of computations. We will discuss some of the common matrix formats here. Later chapters will show how these formats are used in calculations and analysis.

## Diagonal Matrix

A diagonal matrix is a matrix such that:

${\displaystyle a_{ij}=0,i\neq j}$

In other words, all the elements off the main diagonal are zero, and the diagonal elements may be (but do not need to be) non-zero.

## Companion Form Matrix

If we have the following characteristic polynomial for a matrix:

${\displaystyle |A-\lambda I|=\lambda ^{n}+a_{n-1}\lambda ^{n-1}+\cdots +a_{1}\lambda ^{1}+a_{0}}$

We can create a companion form matrix in one of two ways:

${\displaystyle {\begin{bmatrix}0&0&0&\cdots &0&-a_{0}\\1&0&0&\cdots &0&-a_{1}\\0&1&0&\cdots &0&-a_{2}\\0&0&1&\cdots &0&-a_{3}\\\vdots &\vdots &\vdots &\ddots &\vdots &\vdots \\0&0&0&\cdots &1&-a_{n-1}\end{bmatrix}}}$

Or, we can also write it as:

${\displaystyle {\begin{bmatrix}-a_{n-1}&-a_{n-2}&-a_{n-3}&\cdots &-a_{1}&-a_{0}\\1&0&0&\cdots &0&0\\0&1&0&\cdots &0&0\\0&0&1&\cdots &0&0\\\vdots &\vdots &\vdots &\ddots &\vdots &\vdots \\0&0&0&\cdots &1&0\end{bmatrix}}}$

## Jordan Canonical Form

To discuss the Jordan canonical form, we first need to introduce the idea of the Jordan Block:

### Jordan Blocks

A Jordan block is a square matrix such that all the diagonal elements are equal, all the super-diagonal elements (the elements directly above the diagonal elements) are 1, and all other elements are zero. To illustrate this, here is an example of an n-dimensional Jordan block:

${\displaystyle {\begin{bmatrix}a&1&0&\cdots &0\\0&a&1&\cdots &0\\0&0&a&\ddots &0\\\vdots &\vdots &\vdots &\ddots &1\\0&0&0&\cdots &a\end{bmatrix}}}$

### Canonical Form

A square matrix is in Jordan Canonical form, if it is a diagonal matrix, or if it has one of the following two block-diagonal forms:

${\displaystyle {\begin{bmatrix}D&0&\cdots &0\\0&J_{1}&\cdots &0\\\vdots &\vdots &\ddots &\vdots \\0&0&\cdots &J_{n}\end{bmatrix}}}$

Or:

${\displaystyle {\begin{bmatrix}J_{1}&0&\cdots &0\\0&J_{2}&\cdots &0\\\vdots &\vdots &\ddots &\vdots \\0&0&\cdots &J_{n}\end{bmatrix}}}$

Where D is a diagonal block, and the J blocks are in Jordan block form.

## Quadratic Forms

If we have an n × 1 vector x, and an n × n symmetric matrix M, we can write:

${\displaystyle x^{T}Mx=a}$

Where a is a scalar value. Equations of this form are called quadratic forms.

## Matrix Definiteness

Based on the quadratic forms of a matrix, we can create a certain number of categories for special types of matrices:

1. if ${\displaystyle x^{T}Mx>0}$  for all non-zero x, then the matrix is positive definite.
2. if ${\displaystyle x^{T}Mx\geq 0}$  for all x, then the matrix is positive semi-definite.
3. if ${\displaystyle x^{T}Mx<0}$  for all non-zero x, then the matrix is negative definite.
4. if ${\displaystyle x^{T}Mx\leq 0}$  for all x, then the matrix is negative semi-definite.

These classifications are used commonly in control engineering.

# Eigenvalues and Eigenvectors

## The Eigen Problem

This page is going to talk about the concept of eigenvectors and eigenvalues, which are important tools in linear algebra, and which play an important role in state-space control systems. The "Eigen Problem," stated simply, is that given a square n × n matrix A, there exists a set of n scalar values λ and n corresponding non-trivial vectors v such that:

${\displaystyle Av=\lambda v}$

We call λ the eigenvalues of A, and we call v the corresponding eigenvectors of A. We can rearrange this equation as:

${\displaystyle (A-\lambda I)v=0}$

For this equation to be satisfied so that v is non-trivial, the matrix (A - λI) must be singular. That is:

${\displaystyle |A-\lambda I|=0}$

## Characteristic Equation

The characteristic equation of a square matrix A is given by:

[Characteristic Equation]

${\displaystyle |A-\lambda I|=0}$

Where I is the identity matrix, and λ is the set of eigenvalues of matrix A. From this equation we can solve for the eigenvalues of A, and then using the equations discussed above, we can calculate the corresponding eigenvectors.

In general, we can expand the characteristic equation as:

[Characteristic Polynomial]

${\displaystyle |A-\lambda I|=(-1)^{n}(\lambda ^{n}+c_{n-1}\lambda ^{n-1}+\cdots +c_{1}\lambda ^{1}+c_{0})}$

This equation satisfies the following properties:

1. ${\displaystyle |A|=(-1)^{n}c_{0}}$
2. A is nonsingular if and only if ${\displaystyle c_{0}}$ is non-zero.

### Example: 2 × 2 Matrix

Let's say that X is a square matrix of order 2, as such:

${\displaystyle X={\begin{bmatrix}a&b\\c&d\end{bmatrix}}}$

Then we can use this value in our characteristic equation:

${\displaystyle {\begin{vmatrix}a-\lambda &b\\c&d-\lambda \end{vmatrix}}=0}$
${\displaystyle (a-\lambda )(d-\lambda )-(b)(c)=0}$

The roots of the above equation (the values for λ that satisfy the equality) are the eigenvalues of X.
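A numeric sketch of the 2 × 2 case, assuming numpy: the roots of the characteristic polynomial match the eigenvalues returned by the library.

```python
import numpy as np

# Eigenvalues of a 2x2 matrix as roots of (a - l)(d - l) - bc = 0.
X = np.array([[2.0, 1.0],
              [1.0, 2.0]])
a, b, c, d = X.ravel()
roots = np.roots([1.0, -(a + d), a*d - b*c])   # characteristic polynomial

assert np.allclose(sorted(roots), sorted(np.linalg.eigvals(X)))
assert np.allclose(sorted(roots), [1.0, 3.0])
```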

## Eigenvalues

The solutions, λ, of the characteristic equation for matrix X are known as the eigenvalues of the matrix X.

Eigenvalues satisfy the following properties:

1. If λ is an eigenvalue of A, then ${\displaystyle \lambda ^{n}}$ is an eigenvalue of ${\displaystyle A^{n}}$.
2. If λ is a complex eigenvalue of A, then λ* (the complex conjugate) is also an eigenvalue of A.
3. If any of the eigenvalues of A are zero, then A is singular. If A is non-singular, all the eigenvalues of A are nonzero.

## Eigenvectors

The eigenvalue equation can be written as such:

${\displaystyle Xv=\lambda v}$

Where X is the matrix under consideration, and λ are the eigenvalues for matrix X. For every unique eigenvalue, there is a solution vector v to the above equation, known as an eigenvector. The above equation can also be rewritten as:

${\displaystyle (X-\lambda I)v=0}$

Where the resulting values of v for each eigenvalue λ are the eigenvectors of X. Each eigenvector is unique only up to a scalar multiple. From this equation, we can see that the eigenvectors for a given λ make up the nullspace:

${\displaystyle v\in {\mathcal {N}}\{X-\lambda I\}}$

And therefore, we can find the eigenvectors through row-reduction of that matrix.

Eigenvectors satisfy the following properties:

1. If v is a complex eigenvector of A, then v* (the complex conjugate) is also an eigenvector of A.
2. Eigenvectors corresponding to distinct eigenvalues of A are linearly independent.
3. If A is n × n, and if there are n distinct eigenvectors, then the eigenvectors of A form a complete basis set for ${\displaystyle {\mathcal {R}}^{n}}$

## Generalized Eigenvectors

Let's say that matrix A has the following characteristic polynomial:

${\displaystyle |A-\lambda I|=(-1)^{n}(\lambda -\lambda _{1})^{d_{1}}(\lambda -\lambda _{2})^{d_{2}}\cdots (\lambda -\lambda _{s})^{d_{s}}}$

Where d1, d2, ... , ds are known as the algebraic multiplicities of the eigenvalues λ1, λ2, ... , λs. Also note that d1 + d2 + ... + ds = n, and s < n. In other words, the eigenvalues of A are repeated, so the matrix does not have n distinct eigenvectors. However, we can create vectors known as generalized eigenvectors to make up the missing eigenvectors by satisfying the following equations:

${\displaystyle (A-\lambda I)^{k}v_{k}=0}$
${\displaystyle (A-\lambda I)^{k-1}v_{k}\neq 0}$

## Right and Left Eigenvectors

The equation for determining eigenvectors is:

${\displaystyle (A-\lambda I)v=0}$

And because the eigenvector v is on the right, these are more appropriately called "right eigenvectors". However, if we rewrite the equation as follows:

${\displaystyle u^{T}(A-\lambda I)=0}$

The vectors u are called the "left eigenvectors" of matrix A.

## Similarity

Matrices A and B are said to be similar to one another if there exists an invertible matrix T such that:

${\displaystyle T^{-1}AT=B}$

If there exists such a matrix T, the matrices are similar. Similar matrices have the same eigenvalues. If A has eigenvectors v1, v2 ..., then B has eigenvectors u given by:

${\displaystyle u_{i}=T^{-1}v_{i}}$

## Matrix Diagonalization

Some matrices are similar to diagonal matrices via a transition matrix, T. We will say that matrix A is diagonalizable if the following equation can be satisfied:

${\displaystyle T^{-1}AT=D}$

Where D is a diagonal matrix. An n × n square matrix is diagonalizable if and only if it has n linearly independent eigenvectors.

## Transition Matrix

If an n × n square matrix has n distinct eigenvalues λ, and therefore n distinct eigenvectors v, we can create a transition matrix T as:

${\displaystyle T=[v_{1}v_{2}...v_{n}]}$

And transforming matrix A gives us:

${\displaystyle T^{-1}AT={\begin{bmatrix}\lambda _{1}&0&\cdots &0\\0&\lambda _{2}&\cdots &0\\\vdots &\vdots &\ddots &\vdots \\0&0&\cdots &\lambda _{n}\end{bmatrix}}}$

Therefore, if the matrix has n distinct eigenvalues, the matrix is diagonalizable, and the diagonal entries of the diagonal matrix are the corresponding eigenvalues of the matrix.
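A minimal diagonalization sketch, assuming numpy and a hypothetical matrix with distinct eigenvalues:

```python
import numpy as np

# Diagonalizing a matrix with distinct eigenvalues: T^{-1} A T = D,
# where the columns of T are the eigenvectors of A.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])          # eigenvalues 5 and 2 (distinct)
eigvals, T = np.linalg.eig(A)       # columns of T are eigenvectors

D = np.linalg.inv(T) @ A @ T
assert np.allclose(D, np.diag(eigvals))
```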

## Complex Eigenvalues

Consider the situation where a matrix A has 1 or more complex conjugate eigenvalue pairs. The eigenvectors of A will also be complex. The resulting diagonal matrix D will have the complex eigenvalues as the diagonal entries. In engineering situations, it is often not a good idea to deal with complex matrices, so other matrix transformations can be used to create matrices that are "nearly diagonal".

## Generalized Eigenvectors

If the matrix A does not have a complete set of eigenvectors (that is, it has d eigenvectors and n - d generalized eigenvectors), then the matrix A is not diagonalizable. However, the next best thing is achieved, and matrix A can be transformed into Jordan canonical form. Each chain of generalized eigenvectors formed from a single eigenvector will create a Jordan block. All the distinct eigenvectors that do not spawn any generalized eigenvectors will form a diagonal block in the Jordan matrix.

## Spectral Decomposition

If λi are the n distinct eigenvalues of matrix A, vi are the corresponding n distinct eigenvectors, and wi are the n distinct left eigenvectors, normalized so that ${\displaystyle w_{i}^{T}v_{i}=1}$, then the matrix A can be represented as a sum:

${\displaystyle A=\sum _{i=1}^{n}\lambda _{i}v_{i}w_{i}^{T}}$

This is known as the spectral decomposition of A.

## Eigenvalue Sensitivity

Consider a scenario where the matrix representation of a system A differs from the actual implementation of the system by a factor of ΔA. In other words, our system uses the matrix:

${\displaystyle A+\Delta A}$

From the study of Control Systems, we know that the values of the eigenvalues can affect the stability of the system. For that reason, we would like to know how a small error in A will affect the eigenvalues.

First off, we assume that ΔA is a small shift. The definition of "small" in this sense is arbitrary, and will remain open. Keep in mind that the techniques discussed here are more accurate the smaller ΔA is.

If ΔA is the error in the matrix A, then Δλ is the error in the eigenvalues and Δv is the error in the eigenvectors. The characteristic equation becomes:

${\displaystyle (A+\Delta A)(v+\Delta v)=(\lambda +\Delta \lambda )(v+\Delta v)}$

We have an equation now with two unknowns: Δλ and Δv. In other words, we don't know how a small change in A will affect the eigenvalues and eigenvectors. If we multiply out both sides, we get:

${\displaystyle Av+\Delta Av+A\Delta v+O(\Delta ^{2})=\lambda v+\lambda \Delta v+\Delta \lambda v+O(\Delta ^{2})}$

This situation seems hopeless, until we multiply both sides by the corresponding left-eigenvector w from the left:

${\displaystyle w^{T}Av+w^{T}\Delta Av+w^{T}A\Delta v=\lambda w^{T}v+\lambda w^{T}\Delta v+\Delta \lambda w^{T}v+O(\Delta ^{2})}$

Terms where two Δs (which are known to be small, by definition) are multiplied together are negligible, and can be ignored. Also, we know from the left-eigenvector equation that:

${\displaystyle w^{T}A=\lambda w^{T}}$

Another fact is that left and right eigenvectors corresponding to different eigenvalues are orthogonal to each other, but for the same eigenvalue ${\displaystyle w^{T}v\neq 0}$. Substituting ${\displaystyle w^{T}Av=\lambda w^{T}v}$ and ${\displaystyle w^{T}A\Delta v=\lambda w^{T}\Delta v}$ into our long equation above, the matching terms cancel, and we get the following simplification:

${\displaystyle w^{T}\Delta Av=\Delta \lambda w^{T}v}$

And solving for the change in the eigenvalue gives us:

${\displaystyle \Delta \lambda ={\frac {w^{T}\Delta Av}{w^{T}v}}}$

This approximate result is only good for small values of ΔA, and the result is less precise as the error increases.
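The first-order result can be checked numerically; this sketch assumes numpy, and the example matrix and its hand-computed eigenvectors are hypothetical:

```python
import numpy as np

# First-order eigenvalue sensitivity: for a perturbation dA, the shift in
# an eigenvalue is approximately w^T (dA) v / (w^T v). The vectors v and w
# below were found by hand for the eigenvalue lambda = 2.
A = np.array([[2.0, 1.0],
              [0.0, 5.0]])
v = np.array([1.0, 0.0])                 # right eigenvector: A v = 2 v
w = np.array([3.0, -1.0])                # left eigenvector: w^T A = 2 w^T
dA = 1e-5 * np.array([[1.0, 2.0],
                      [0.0, 4.0]])

d_lam = (w @ dA @ v) / (w @ v)
exact = min(np.linalg.eigvals(A + dA).real, key=lambda t: abs(t - 2.0))
assert abs((2.0 + d_lam) - exact) < 1e-10
```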

# Functions of Matrices

If we have functions, and we use a matrix as the input to those functions, the output values are not always intuitive. For instance, if we have a function f(x), and as the input argument we use matrix A, the output matrix is not necessarily the function f applied to the individual elements of A.

## Diagonal Matrix

In the special case of diagonal matrices, the result of f(A) is the function applied to each element of the diagonal matrix:

${\displaystyle A={\begin{bmatrix}a_{11}&0&\cdots &0\\0&a_{22}&\cdots &0\\\vdots &\vdots &\ddots &\vdots \\0&0&\cdots &a_{nn}\end{bmatrix}}}$

Then the function f(A) is given by:

${\displaystyle f(A)={\begin{bmatrix}f(a_{11})&0&\cdots &0\\0&f(a_{22})&\cdots &0\\\vdots &\vdots &\ddots &\vdots \\0&0&\cdots &f(a_{nn})\end{bmatrix}}}$

## Jordan Canonical Form

Matrices in Jordan Canonical form also have an easy way to compute the functions of the matrix. However, this method is not nearly as easy as the diagonal matrices described above.

If we have a matrix in Jordan Block form, A, the function f(A) is given by:

${\displaystyle f(A)={\begin{bmatrix}{\frac {f(a)}{0!}}&{\frac {f'(a)}{1!}}&\cdots &{\frac {f^{(r-1)}(a)}{(r-1)!}}\\0&{\frac {f(a)}{0!}}&\cdots &{\frac {f^{(r-2)}(a)}{(r-2)!}}\\\vdots &\vdots &\ddots &\vdots \\0&0&\cdots &{\frac {f(a)}{0!}}\end{bmatrix}}}$

The matrix indices have been removed, because in Jordan block form, all the diagonal elements must be equal.

If the matrix is in Jordan canonical form, the value of the function is given by applying the function to each individual diagonal block.

## Cayley-Hamilton Theorem

If the characteristic equation of matrix A is given by:

${\displaystyle \Delta (\lambda )=|A-\lambda I|=(-1)^{n}(\lambda ^{n}+a_{n-1}\lambda ^{n-1}+\cdots +a_{0})=0}$

Then the Cayley-Hamilton theorem states that the matrix A itself is also a valid solution to that equation:

${\displaystyle \Delta (A)=(-1)^{n}(A^{n}+a_{n-1}A^{n-1}+\cdots +a_{0}I)=0}$
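A quick numeric check of the theorem, assuming numpy (`np.poly` returns the characteristic polynomial coefficients of a matrix):

```python
import numpy as np

# Cayley-Hamilton spot-check: A satisfies its own characteristic polynomial.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
coeffs = np.poly(A)               # [1, -trace, det] for a 2x2 matrix

# Evaluate the characteristic polynomial at the matrix A itself:
result = coeffs[0] * A @ A + coeffs[1] * A + coeffs[2] * np.eye(2)
assert np.allclose(result, 0)
```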

Another theorem worth mentioning here (and by "worth mentioning", we really mean "fundamental for some later topics") is stated as:

If λ are the eigenvalues of matrix A, and if there is a function f that is defined as a linear combination of powers of λ:

${\displaystyle f(\lambda )=\sum _{i=0}^{\infty }b_{i}\lambda ^{i}}$

If this function has a radius of convergence S, and if all the eigenvalues of A have magnitudes less than S, then the matrix A itself is also a solution to that function:

${\displaystyle f(A)=\sum _{i=0}^{\infty }b_{i}A^{i}}$

## Matrix Exponentials

If we have a matrix A, we can raise e to the power of that matrix:

${\displaystyle e^{A}}$

It is important to note that this is not usually equal to e raised to the power of each individual element of A. Using the Taylor series expansion of the exponential, we can show that:

${\displaystyle e^{A}=I+A+{\frac {1}{2}}A^{2}+{\frac {1}{6}}A^{3}+...=\sum _{k=0}^{\infty }{1 \over k!}A^{k}}$ .

In other words, the matrix exponential can be reduced to a sum of powers of the matrix. This follows from both the Taylor series expansion of the exponential function, and the Cayley-Hamilton theorem discussed previously.

However, this infinite sum is expensive to compute, and it is not obvious where we can stop computing terms and call the answer a "good approximation". To alleviate this, we can turn to the Cayley-Hamilton Theorem. Solving the theorem for ${\displaystyle A^{n}}$ (writing the characteristic polynomial coefficients as ${\displaystyle c_{i}}$), we get:

${\displaystyle A^{n}=-c_{n-1}A^{n-1}-c_{n-2}A^{n-2}-\cdots -c_{1}A-c_{0}I}$

Multiplying both sides of the equation by A, we get:

${\displaystyle A^{n+1}=-c_{n-1}A^{n}-c_{n-2}A^{n-1}-\cdots -c_{1}A^{2}-c_{0}A}$

We can substitute the first equation into the second equation, and the result will be ${\displaystyle A^{n+1}}$ in terms of the powers ${\displaystyle A^{0}}$ through ${\displaystyle A^{n-1}}$. In fact, we can repeat that process so that ${\displaystyle A^{m}}$, for an arbitrarily high power m, can be expressed as a linear combination of the powers ${\displaystyle A^{0}}$ through ${\displaystyle A^{n-1}}$. Applying this result to our exponential problem:

${\displaystyle e^{A}=\alpha _{0}I+\alpha _{1}A+\cdots +\alpha _{n-1}A^{n-1}}$

Where we can solve for the α terms, and have a finite polynomial that expresses the exponential.
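One common way to solve for the α terms (a sketch assuming numpy, with a hypothetical matrix having distinct eigenvalues) is to note that each eigenvalue must satisfy the same polynomial, e^λ = α₀ + α₁λ, which gives a small Vandermonde system:

```python
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])  # hypothetical; eigenvalues -1 and -2
lam = np.linalg.eigvals(A)

# Each eigenvalue satisfies e^{lambda_i} = alpha_0 + alpha_1 * lambda_i,
# a Vandermonde system in the alpha coefficients.
V = np.vander(lam, increasing=True)  # rows [1, lambda_i]
alpha = np.linalg.solve(V, np.exp(lam))

expA = alpha[0] * np.eye(2) + alpha[1] * A  # finite polynomial for e^A

# Reference value from diagonalization.
w, P = np.linalg.eig(A)
ref = (P * np.exp(w)) @ np.linalg.inv(P)
assert np.allclose(expA, ref)
```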

## Inverse

The inverse of a matrix exponential is given by:

${\displaystyle (e^{A})^{-1}=e^{-A}}$

## Derivative

The derivative of a matrix exponential is:

${\displaystyle {\frac {d}{dx}}e^{Ax}=Ae^{Ax}=e^{Ax}A}$

Notice that the matrix exponential commutes with the matrix A. This is not necessarily the case with other functions.

## Sum of Matrices

If we have a sum of matrices in the exponent, we cannot in general separate them; equality holds only when A and B commute:

${\displaystyle e^{(A+B)x}\neq e^{Ax}e^{Bx}}$
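We can see both cases numerically (a sketch assuming numpy; the matrices are hypothetical). For non-commuting A and B the products differ, while for commuting matrices, such as B = 2A, equality does hold:

```python
import numpy as np

def expm_series(M, terms=40):
    # Matrix exponential via its Taylor series (adequate for small matrices).
    out, term = np.zeros_like(M), np.eye(M.shape[0])
    for k in range(1, terms):
        out += term
        term = term @ M / k
    return out

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0, 0.0], [1.0, 0.0]])

# A and B do not commute, and e^(A+B) differs from e^A e^B.
assert not np.allclose(expm_series(A + B), expm_series(A) @ expm_series(B))
# When B commutes with A (here B = 2A), the exponents do separate.
assert np.allclose(expm_series(3 * A), expm_series(A) @ expm_series(2 * A))
```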

## Differential Equations

If we have a first-order differential equation of the following form:

${\displaystyle x'(t)=Ax(t)+f(t)}$

With initial conditions

${\displaystyle x(t_{0})=c}$

Then the solution to that equation is given in terms of the matrix exponential:

${\displaystyle x(t)=e^{A(t-t_{0})}c+\int _{t_{0}}^{t}e^{A(t-\tau )}f(\tau )d\tau }$

This equation shows up frequently in control engineering.
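As a small check of the homogeneous case f(t) = 0 (a sketch assuming numpy; the matrix, initial condition, and evaluation time are hypothetical), we can verify that x(t) = e^{At}c satisfies x'(t) = Ax(t) by comparing a finite-difference derivative against Ax(t):

```python
import numpy as np

def expm_series(M, terms=40):
    # Matrix exponential via its Taylor series.
    out, term = np.zeros_like(M), np.eye(M.shape[0])
    for k in range(1, terms):
        out += term
        term = term @ M / k
    return out

A = np.array([[0.0, 1.0], [-2.0, -3.0]])  # hypothetical system matrix
c = np.array([1.0, 0.0])                  # initial condition at t0 = 0
x = lambda t: expm_series(A * t) @ c      # homogeneous solution

# Verify x'(t) = A x(t) with a central difference at t = 0.5.
t, h = 0.5, 1e-5
deriv = (x(t + h) - x(t - h)) / (2 * h)
assert np.allclose(deriv, A @ x(t), atol=1e-6)
```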

## Laplace Transform

As a matter of some interest, we will show the Laplace Transform of a matrix exponential function:

${\displaystyle {\mathcal {L}}[e^{At}]=(sI-A)^{-1}}$

We will not use this result any further in this book, although other books on engineering might make use of it.

# Function Spaces

## Function Space

A function space is a linear space where all the elements of the space are functions. A function space that has a norm operation is known as a normed function space. The spaces we consider will all be normed.

## Continuity

f(x) is continuous at x0 if, for every ε > 0 there exists a δ(ε) > 0 such that |f(x) - f(x0)| < ε when |x - x0| < δ(ε).

## Common Function Spaces

Here is a listing of some common function spaces. This is not an exhaustive list.

### C Space

The C function space is the set of all functions that are continuous.

The metric for C space is defined as:

${\displaystyle \rho (f,g)_{C}=\max _{x}|f(x)-g(x)|}$

Consider the metric of sin(x) and cos(x):

${\displaystyle \rho (\sin(x),\cos(x))_{C}={\sqrt {2}}}$, attained at ${\displaystyle x={\frac {3\pi }{4}}}$

### Cp Space

The Cp space is the set of all continuous functions for which the first p derivatives are also continuous. If ${\displaystyle p=\infty }$  the function is called "infinitely continuous". The set ${\displaystyle C^{\infty }}$  is the set of all such functions. Some examples of infinitely continuous functions are exponentials, sinusoids, and polynomials.

### L Space

The L space is the set of all functions that are finitely integrable over a given interval [a, b].

f(x) is in L(a, b) if:

${\displaystyle \int _{a}^{b}|f(x)|dx<\infty }$

### L p Space

The Lp space is the set of all functions that are finitely integrable over a given interval [a, b] when raised to the power p:

${\displaystyle \int _{a}^{b}|f(x)|^{p}dx<\infty }$

Most important for engineering is the L2 space, the set of functions that are "square integrable".

The L2 space is very important to engineers, because functions in this space do not need to be continuous. Many discontinuous engineering functions, such as the delta (impulse) function, the unit step function, and other discontinuous functions are part of this space.

## L2 Functions

A large number of functions qualify as L2 functions, including uncommon, discontinuous, piece-wise, and other functions. A function which, over a finite range, has a finite number of discontinuities is an L2 function. For example, a unit step and an impulse function are both L2 functions. Also, other functions useful in signal analysis, such as square waves, triangle waves, wavelets, and other functions are L2 functions.

In practice, most physical systems have a finite amount of noise associated with them. Noisy signals and random signals, if finite, are also L2 functions: this makes analysis of those functions using the techniques listed below easy.

## Null Function

The null functions of L2 are the set of all functions φ in L2 that satisfy the equation:

${\displaystyle \int _{a}^{b}|\phi (x)|^{2}dx=0}$

for all a and b.

## Norm

The L2 norm is defined as follows:

[L2 Norm]

${\displaystyle \|f(x)\|_{L_{2}}={\sqrt {\int _{a}^{b}|f(x)|^{2}dx}}}$

If the norm of the function is 1, the function is normal.

We can show that the derivative of the norm squared is:

${\displaystyle {\frac {\partial \|x\|^{2}}{\partial x}}=2x}$

## Scalar Product

The scalar product in L2 space is defined as follows:

[L2 Scalar Product]

${\displaystyle \langle f(x),g(x)\rangle _{L_{2}}=\int _{a}^{b}f(x)g(x)dx}$

If the scalar product of two functions is zero, the functions are orthogonal.
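As a numerical sketch of the L2 norm and scalar product (assuming numpy; the interval [0, 2π] is chosen for this example), we can check that sin and cos are orthogonal there, and that the norm of sin is √π:

```python
import numpy as np

def inner(f, g, a, b, n=200001):
    # <f, g> = integral of f(x) g(x) over [a, b], composite trapezoidal rule.
    x = np.linspace(a, b, n)
    y = f(x) * g(x)
    h = (b - a) / (n - 1)
    return h * (y.sum() - 0.5 * (y[0] + y[-1]))

a, b = 0.0, 2.0 * np.pi
# sin and cos are orthogonal over a full period ...
assert abs(inner(np.sin, np.cos, a, b)) < 1e-8
# ... and the L2 norm of sin over that interval is sqrt(pi).
norm_sin = np.sqrt(inner(np.sin, np.sin, a, b))
assert np.isclose(norm_sin, np.sqrt(np.pi), atol=1e-6)
```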

We can show that given coefficient matrices A and B, and variable x, the derivative of the scalar product can be given as:

${\displaystyle {\frac {\partial }{\partial x}}\langle Ax,Bx\rangle =A^{T}Bx+B^{T}Ax}$

We can recognize this as the product rule of differentiation. Generalizing, we can say that:

${\displaystyle {\frac {\partial }{\partial x}}\langle f(x),g(x)\rangle =f'(x)g(x)+f(x)g'(x)}$

We can also say that the derivative of a matrix A times a vector x is:

${\displaystyle {\frac {d}{dx}}Ax=A^{T}}$

## Metric

The metric of two functions (we will not call it the "distance" here, because that word has little meaning in a function space) will be denoted with ρ(f,g). We can define the metric of two L2 functions as follows:

[L2 Metric]

${\displaystyle \rho (f,g)_{L_{2}}={\sqrt {\int _{a}^{b}|f(x)-g(x)|^{2}dx}}}$

## Cauchy-Schwarz Inequality

The Cauchy-Schwarz Inequality still holds for L2 functions, and is restated here:

${\displaystyle |\langle f(x),g(x)\rangle |\leq \|f\|\|g\|}$

## Linear Independence

A set of functions in L2 is linearly independent if:

${\displaystyle a_{1}f_{1}(x)+a_{2}f_{2}(x)+\cdots +a_{n}f_{n}(x)=0}$

holds if and only if all the a coefficients are 0.

## Gram-Schmidt Orthogonalization

The Gram-Schmidt technique that we discussed earlier still works with functions, and we can use it to form a set of linearly independent, orthogonal functions in L2.

For a set of functions φ, we can make a set of functions ψ that span the same space but are orthogonal to one another:

[Gram-Schmidt Orthogonalization]

${\displaystyle \psi _{1}=\phi _{1}}$
${\displaystyle \psi _{i}=\phi _{i}-\sum _{n=1}^{i-1}{\frac {\langle \psi _{n},\phi _{i}\rangle }{\langle \psi _{n},\psi _{n}\rangle }}\psi _{n}}$
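As a sketch of this process (assuming numpy; the interval [-1, 1] and starting set {1, x, x²} are hypothetical choices), we can discretize the scalar product and run the recursion. The result reproduces the unnormalized Legendre polynomials:

```python
import numpy as np

# Discretize [-1, 1] and approximate <f, g> = integral of f g dx by a Riemann sum.
x = np.linspace(-1.0, 1.0, 200001)
dx = x[1] - x[0]
inner = lambda f, g: np.sum(f * g) * dx

phi = [np.ones_like(x), x.copy(), x ** 2]  # the set {1, x, x^2}
psi = []
for p in phi:
    q = p.copy()
    for b in psi:  # subtract the projection onto each earlier psi
        q = q - inner(b, p) / inner(b, b) * b
    psi.append(q)

# Gram-Schmidt on {1, x, x^2} yields 1, x, and x^2 - 1/3 (Legendre, unnormalized).
assert abs(inner(psi[0], psi[1])) < 1e-6
assert np.allclose(psi[2], x ** 2 - 1.0 / 3.0, atol=1e-4)
```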

## Basis

The L2 space is infinite-dimensional, which means that any basis for the L2 space will require an infinite number of basis functions. To prove that an infinite set of orthogonal functions is a basis for the L2 space, we need to show that the null function is the only function in L2 that is orthogonal to all the basis functions. If the null function is the only function that satisfies this relationship, then the set is a basis set for L2.

Given such a basis, we can express any function in L2 as a linear sum of the basis elements. If we have basis elements φ, we can write any other function ψ as a linear sum:

${\displaystyle \psi (x)=\sum _{n=1}^{\infty }a_{n}\phi _{n}(x)}$

We will explore this important result in the section on Fourier Series.

There are some special spaces known as Banach spaces, and Hilbert spaces.

## Convergent Functions

Let's define the piece-wise function φ(x) as:

${\displaystyle \phi _{n}(x)={\begin{cases}0&x\leq 0\\nx&0<x\leq {\frac {1}{n}}\\1&x>{\frac {1}{n}}\end{cases}}}$

We can see that as ${\displaystyle n\to \infty }$, this function converges to the unit step function. Notice that this convergence only takes place in the L2 space, because the unit step function does not exist in the C space (it is not continuous).
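We can watch this convergence numerically (a sketch assuming numpy): the squared L2 distance between φₙ and the unit step works out to 1/(3n), which shrinks to zero as n grows:

```python
import numpy as np

def phi(n, x):
    # 0 for x <= 0, rises linearly with slope n, then clamps at 1.
    return np.clip(n * x, 0.0, 1.0)

step = lambda x: (x > 0).astype(float)  # the unit step function

x = np.linspace(-1.0, 1.0, 400001)
dx = x[1] - x[0]
err = [np.sum((phi(n, x) - step(x)) ** 2) * dx for n in (1, 10, 100)]

# The squared L2 distance is 1/(3n), so it decreases toward zero.
assert err[0] > err[1] > err[2]
assert np.isclose(err[2], 1.0 / 300.0, atol=1e-4)
```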

### Convergence

We can say that a sequence of functions φn converges to a function φ* if:

${\displaystyle \lim _{n\to \infty }\|\phi _{n}-\phi ^{*}\|=0}$

Such a sequence is called convergent. A related notion is the Cauchy sequence, a sequence whose terms become arbitrarily close to one another in norm as n increases; every convergent sequence is a Cauchy sequence.

### Complete Function Spaces

A function space is called complete if every Cauchy sequence in that space converges to a function in that space.

## Banach Space

A Banach Space is a complete normed function space.

## Hilbert Space

A Hilbert Space is a Banach Space with respect to a norm induced by the scalar product. That is, if there is a scalar product in the space X, then we can say the norm is induced by the scalar product if we can write:

${\displaystyle \|f\|=g(\langle f,f\rangle )}$

That is, that the norm can be written as a function of the scalar product. In the L2 space, we can define the norm as:

${\displaystyle \|f\|={\sqrt {\langle f,f\rangle }}}$

If the space is complete with respect to this induced norm, that is, if it is a Banach space under it, then it is a Hilbert space.

In a Hilbert Space, the Parallelogram rule holds for all members f and g in the function space:

${\displaystyle \|f+g\|^{2}+\|f-g\|^{2}=2\|f\|^{2}+2\|g\|^{2}}$

The L2 space is a Hilbert Space. The C space, however, is not.

# Fourier Series

The L2 space is an infinite-dimensional function space, and a complete infinite set of orthogonal functions can serve as a basis for it: a linear combination of the basis functions can represent any single member of the L2 space. The decomposition of an L2 function in terms of an infinite basis set is a technique known as the Fourier Decomposition of the function, and produces a result called the Fourier Series.

## Fourier Basis

Let's consider a set of L2 functions, ${\displaystyle \phi }$ , as follows:

${\displaystyle \phi =\{1,\sin(\pi x),\cos(\pi x),\sin(2\pi x),\cos(2\pi x),\sin(3\pi x),\cos(3\pi x)...\}.\ }$

We can prove that over the range ${\displaystyle [0,2]}$, one full period of every basis function, all of these functions are orthogonal:

${\displaystyle \int _{0}^{2}1\cdot \cos(n\pi x)dx=0}$
${\displaystyle \int _{0}^{2}1\cdot \sin(n\pi x)dx=0}$
${\displaystyle \int _{0}^{2}\sin(n\pi x)\sin(m\pi x)dx=0,n\neq m}$
${\displaystyle \int _{0}^{2}\sin(n\pi x)\cos(m\pi x)dx=0}$
${\displaystyle \int _{0}^{2}\cos(n\pi x)\cos(m\pi x)dx=0,n\neq m}$

Because ${\displaystyle \phi }$  is an infinite orthogonal set in L2, ${\displaystyle \phi }$  is also a valid basis set in the L2 space. Therefore, we can decompose any function in L2 as the following sum:

[Classical Fourier Series]

${\displaystyle \psi (x)=a_{0}(1)+\sum _{n=1}^{\infty }a_{n}\sin(n\pi x)+\sum _{m=1}^{\infty }b_{m}\cos(m\pi x)}$

However, the difficulty occurs when we need to calculate the a and b coefficients. We will show the method to do this below:

## a0: The Constant Term

Calculation of a0 is the easiest, and therefore we will show how to calculate it first. We use the value of a0 which minimizes the error in approximating ${\displaystyle f(x)}$  by the Fourier series.

First, define an error function, E, that is equal to the squared norm of the difference between the function f(x) and the infinite sum above:

${\displaystyle E={\frac {1}{2}}\int _{0}^{2}|f(x)-a_{0}(1)-\sum _{n=1}^{\infty }a_{n}\sin(n\pi x)-\sum _{m=1}^{\infty }b_{m}\cos(m\pi x)|^{2}dx}$

For ease, we will write all the basis functions as the set φ, described above:

${\displaystyle \sum _{i=0}^{\infty }a_{i}\phi _{i}=a_{0}+\sum _{n=1}^{\infty }a_{n}\sin(n\pi x)+\sum _{m=1}^{\infty }b_{m}\cos(m\pi x)}$

Combining the last two functions together, and writing the norm as an integral, we can say:

${\displaystyle E={\frac {1}{2}}\int _{0}^{2}|f(x)-\sum _{i=0}^{\infty }a_{i}\phi _{i}|^{2}dx}$

We attempt to minimize this error function with respect to the constant term. To do this, we differentiate both sides with respect to a0, and set the result to zero:

${\displaystyle 0={\frac {\partial E}{\partial a_{0}}}=\int _{0}^{2}(f(x)-\sum _{i=0}^{\infty }a_{i}\phi _{i}(x))(-\phi _{0}(x))dx}$

The φ0 term comes out of the sum because of the chain rule: it is the only term in the entire sum dependent on a0. We can separate out the integral above as follows:

${\displaystyle \int _{0}^{2}(f(x)-\sum _{i=0}^{\infty }a_{i}\phi _{i})(-\phi _{0})dx=-\int _{0}^{2}f(x)\phi _{0}(x)dx+a_{0}\int _{0}^{2}\phi _{0}(x)\phi _{0}(x)dx}$

All the other terms drop out of the infinite sum because they are all orthogonal to φ0. Again, we can rewrite the above equation in terms of the scalar product:

${\displaystyle 0=-\langle f(x),\phi _{0}(x)\rangle +a_{0}\langle \phi _{0}(x),\phi _{0}(x)\rangle }$

And solving for a0, we get our final result:

${\displaystyle a_{0}={\frac {\langle f(x),\phi _{0}(x)\rangle }{\langle \phi _{0}(x),\phi _{0}(x)\rangle }}}$

## Sin Coefficients

Using the above method, we can solve for the an coefficients of the sin terms:

${\displaystyle a_{n}={\frac {\langle f(x),\sin(n\pi x)\rangle }{\langle \sin(n\pi x),\sin(n\pi x)\rangle }}}$

## Cos Coefficients

Also using the above method, we can solve for the bn coefficients of the cos terms:

${\displaystyle b_{n}={\frac {\langle f(x),\cos(n\pi x)\rangle }{\langle \cos(n\pi x),\cos(n\pi x)\rangle }}}$
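As a sketch of these coefficient formulas in action (assuming numpy; the target function f(x) = x and the interval [0, 2], one period of the basis sin(nπx), cos(nπx), are hypothetical choices), we can compute a truncated series and check that it tracks f away from the endpoint discontinuity:

```python
import numpy as np

x = np.linspace(0.0, 2.0, 20001)
dx = x[1] - x[0]
inner = lambda f, g: np.sum(f * g) * dx  # approximate <f, g> on [0, 2]

f = x.copy()  # expand f(x) = x in the basis {1, sin(n pi x), cos(n pi x)}
one = np.ones_like(x)
recon = inner(f, one) / inner(one, one) * one  # the a_0 term
for n in range(1, 101):
    s, c = np.sin(n * np.pi * x), np.cos(n * np.pi * x)
    recon = recon + inner(f, s) / inner(s, s) * s
    recon = recon + inner(f, c) / inner(c, c) * c

# Away from the jump at the interval ends, the partial sum tracks f closely.
interior = (x > 0.2) & (x < 1.8)
max_err = np.max(np.abs(recon - f)[interior])
assert max_err < 0.05
```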

The classical Fourier series uses the following basis:

${\displaystyle \phi =\{1,\sin(n\pi x),\cos(n\pi x)\},\ n=1,2,\ldots }$

However, we can generalize this concept to extend to any orthogonal basis set from the L2 space.

We can say that if we have our orthogonal basis set that is composed of an infinite set of arbitrary, orthogonal L2 functions:

${\displaystyle \phi =\{\phi _{1},\phi _{2},\cdots \}}$

We can define any L2 function f(x) in terms of this basis set:

[Generalized Fourier Series]

${\displaystyle f(x)=\sum _{n=1}^{\infty }a_{n}\phi _{n}(x)}$

Using the method from the previous sections, we can solve for the coefficients as follows:

[Generalized Fourier Coefficient]

${\displaystyle a_{n}={\frac {\langle f(x),\phi _{n}(x)\rangle }{\langle \phi _{n}(x),\phi _{n}(x)\rangle }}}$

Bessel's inequality relates the original function to the Fourier coefficients an (here we assume the basis functions φn are normalized, so that each has norm 1):

[Bessel's Inequality]

${\displaystyle \sum _{n=1}^{\infty }a_{n}^{2}\leq \|f(x)\|^{2}}$

If the basis set is complete, meaning that an infinite sum of the basis functions perfectly reproduces the function f(x), then the above relation becomes an equality, known as Parseval's Theorem:

[Parseval's Theorem]

${\displaystyle \sum _{n=1}^{\infty }a_{n}^{2}=\|f(x)\|^{2}}$

Engineers may recognize this as a relationship between the energy of a signal as represented in the time and frequency domains. Parseval's theorem applies not only to the classical Fourier series coefficients, but also to the generalized series coefficients.
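We can check both Bessel's inequality and Parseval's theorem numerically (a sketch assuming numpy; the example uses f(x) = x and the orthonormal basis 1/√2, sin(nπx), cos(nπx) on [0, 2], which are hypothetical choices). The coefficient energy should approach ||f||² = ∫x²dx = 8/3:

```python
import numpy as np

x = np.linspace(0.0, 2.0, 200001)
dx = x[1] - x[0]
inner = lambda f, g: np.sum(f * g) * dx

f = x.copy()
# Orthonormal basis on [0, 2]: 1/sqrt(2), sin(n pi x), cos(n pi x).
coeffs = [inner(f, np.ones_like(x) / np.sqrt(2.0))]
for n in range(1, 201):
    coeffs.append(inner(f, np.sin(n * np.pi * x)))
    coeffs.append(inner(f, np.cos(n * np.pi * x)))

energy = sum(a ** 2 for a in coeffs)
# Bessel: the coefficient energy never exceeds ||f||^2 = 8/3 ...
assert energy <= inner(f, f) + 1e-6
# ... and with enough terms it approaches equality (Parseval).
assert np.isclose(energy, 8.0 / 3.0, atol=1e-2)
```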

The concept of the Fourier series can be expanded to include 2-dimensional and n-dimensional function decomposition as well. Let's say that we have a function in terms of independent variables x and y. We can decompose that function as a double summation as follows:

${\displaystyle f(x,y)=\sum _{i=1}^{\infty }\sum _{j=1}^{\infty }a_{ij}\phi _{ij}(x,y)}$

Where φij is a 2-dimensional set of orthogonal basis functions. We can define the coefficients as:

${\displaystyle a_{ij}={\frac {\langle f(x,y),\phi _{ij}(x,y)\rangle }{\langle \phi _{ij}(x,y),\phi _{ij}(x,y)\rangle }}}$

This same concept can be expanded to include series with n-dimensions.

Further reading: The Feynman Lectures on Physics, Chapter 50, "Harmonics".

# Miscellany

Lyapunov's equation (in its general form also known as the Sylvester equation) is the matrix equation:

[Lyapunov's Equation]

${\displaystyle AM+MB=C}$

Where A, B, and C are constant square matrices, and M is the solution that we are trying to find. If A, B, and C are of the same order, and if A and -B have no eigenvalues in common, then a unique solution M exists. If, in addition, every sum of an eigenvalue of A and an eigenvalue of B has a negative real part, the solution can be written in terms of matrix exponentials:

${\displaystyle M=-\int _{0}^{\infty }e^{Ax}Ce^{Bx}dx}$
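Rather than evaluating the integral, a small numerical sketch (assuming numpy; the matrices are hypothetical) can solve AM + MB = C directly by vectorizing it into a linear system via Kronecker products, then verify the residual:

```python
import numpy as np

A = np.array([[-1.0, 1.0], [0.0, -2.0]])  # hypothetical matrices
B = np.array([[-3.0, 0.0], [1.0, -4.0]])
C = np.array([[1.0, 2.0], [3.0, 4.0]])
n = A.shape[0]
I = np.eye(n)

# With row-major vec, vec(AM) = (A kron I) vec(M) and vec(MB) = (I kron B^T) vec(M),
# so AM + MB = C becomes one linear system in the entries of M.
K = np.kron(A, I) + np.kron(I, B.T)
M = np.linalg.solve(K, C.flatten()).reshape(n, n)

assert np.allclose(A @ M + M @ B, C)
```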

Leibniz' rule allows us to take the derivative of an integral, where the derivative and the integral are performed using different variables:

${\displaystyle {\frac {d}{dx}}\int _{a(x)}^{b(x)}f(x,t)dt=f(x,b(x)){\frac {db(x)}{dx}}-f(x,a(x)){\frac {da(x)}{dx}}+\int _{a(x)}^{b(x)}{\frac {\partial f(x,t)}{\partial x}}dt}$

## Wavelets

Wavelets are orthogonal basis functions that only exist for certain windows in time. This is in contrast to sinusoidal waves, which exist for all times t. A wavelet, because it is localized in time, can be used as a basis function. A wavelet basis set gives rise to wavelet decomposition, which is a 2-variable decomposition of a 1-variable function. Wavelet analysis allows us to decompose a function in terms of time and frequency, while Fourier decomposition only allows us to decompose a function in terms of frequency.

## Mother Wavelet

If we have a basic wavelet function ψ(t), known as the mother wavelet, we can generate a two-parameter family of wavelets from it by scaling and shifting:

${\displaystyle \psi _{jk}(t)=2^{j/2}\psi (2^{j}t-k)}$

## Wavelet Series

Given this wavelet family, we can write out a Fourier-style series as a double sum of all the wavelets:

${\displaystyle f(t)=\sum _{j=0}^{\infty }\sum _{k=0}^{\infty }a_{jk}\psi _{jk}(t)}$

## Scaling Function

Sometimes, we can add in an additional function, known as a scaling function:

${\displaystyle f(t)=\sum _{i=0}^{\infty }c_{i}\phi _{i}(t)+\sum _{j=0}^{\infty }\sum _{k=0}^{\infty }a_{jk}\psi _{jk}(t)}$

The idea is that the scaling function is larger than the wavelet functions, and occupies more time. In this case, the scaling function will show long-term changes in the signal, and the wavelet functions will show short-term changes in the signal.

## Optimization

Optimization is an important concept in engineering. Finding any solution to a problem is not nearly as good as finding the one "optimal solution" to the problem. Optimization problems are typically reformatted so they become minimization problems, which are well-studied problems in the field of mathematics.

Typically, when optimizing a system, the costs and benefits of that system are arranged into a cost function. It is the engineer's job then to minimize this cost function (and thereby minimize the cost of the system). It is worth noting at this point that the word "cost" can have multiple meanings, depending on the particular problem. For instance, cost can refer to the actual monetary cost of a system (number of computer units to host a website, amount of cable needed to connect Philadelphia and New York), the delay of the system (loading time for a website, transmission delay for a communication network), the reliability of the system (number of dropped calls in a cellphone network, average lifetime of a car transmission), or any other factor that reduces the effectiveness and efficiency of the system.

Because optimization typically becomes a mathematical minimization problem, we are going to discuss minimization here.

### Minimization

Minimization is the act of finding the numerically lowest point in a given function, or in a particular range of a given function. Students of mathematics and calculus may remember using the derivative of a function to find its maxima and minima. If we have a function f(x), we can find the maxima, minima, or saddle points (points where the function has zero slope but is neither a maximum nor a minimum) by solving for x in the following equation:

${\displaystyle {\frac {df(x)}{dx}}=0}$

In other words, we are looking for the roots of the derivative of the function f, plus those points where f has a corner (where the derivative does not exist). Once we have the so-called critical points of the function (if any), we can test them to see whether they are relatively high (maxima) or relatively low (minima). Some words to remember in this context are:

Global Minima
A global minimum of a function is the lowest value of that function anywhere. If the domain of the function is restricted, say A < x < B, then the minimum can also occur at the boundary, here A or B.
Local Minima
A local minimum of a function is the lowest value of that function within a small range. A value can thus be a local minimum even though the function takes smaller values elsewhere, outside that neighborhood.

## Unconstrained Minimization

Unconstrained Minimization refers to the minimization of the given function without having to worry about any other rules or caveats. Constrained Minimization, on the other hand, refers to minimization problems where other relations called constraints must be satisfied at the same time.

Beside the method above (where we take the derivative of the function and set that equal to zero), there are several numerical methods that we can use to find the minima of a function. For these methods there are useful computational tools such as Matlab.

### Hessian Matrix

The function has a local minimum at a critical point x if the Hessian matrix H(x) is positive definite there:

${\displaystyle H(x)={\frac {\partial ^{2}f(x)}{\partial x^{2}}}}$

Where x is a vector of all the independent variables of the function. If x is a scalar variable, the Hessian matrix reduces to the second derivative of the function f.

### Newton-Raphson Method

The Newton-Raphson Method of computing the minima of a function f uses an iterative computation. We can define the sequence:

${\displaystyle x^{n+1}=x^{n}-{\frac {f'(x^{n})}{f''(x^{n})}}}$

Where

${\displaystyle f'(x)={\frac {df(x)}{dx}}}$
${\displaystyle f''(x)={\frac {d^{2}f(x)}{dx^{2}}}}$

As we repeat the above computation, plugging in consecutive values for n, our solution converges on the true solution. In principle this takes infinitely many iterations, but if an approximation of the true solution suffices, we can stop after only a few iterations, because the sequence converges rather quickly (quadratically).
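The iteration above can be sketched in a few lines (the example function f(x) = cosh(x - 1), with minimum at x = 1, is a hypothetical choice):

```python
import math

def newton_minimize(fp, fpp, x0, steps=25):
    # Iterate x_{n+1} = x_n - f'(x_n) / f''(x_n) to find a zero of f'.
    x = x0
    for _ in range(steps):
        x = x - fp(x) / fpp(x)
    return x

# Example: f(x) = cosh(x - 1) has f' = sinh(x - 1), f'' = cosh(x - 1),
# and its minimum at x = 1.
xmin = newton_minimize(lambda x: math.sinh(x - 1),
                       lambda x: math.cosh(x - 1), x0=2.0)
assert abs(xmin - 1.0) < 1e-10
```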

### Steepest Descent Method

The Newton-Raphson method can be tricky because it relies on the second derivative of the function f, and this can oftentimes be difficult (if not impossible) to accurately calculate. The Steepest Descent Method, however, does not require the second derivative, but it does require the selection of an appropriate scalar quantity ε, which cannot be chosen arbitrarily (but which can also not be calculated using a set formula). The Steepest Descent method is defined by the following iterative computation:

${\displaystyle x^{n+1}=x^{n}-\epsilon f'(x^{n})}$

Where epsilon needs to be sufficiently small. If epsilon is too large, the iteration may diverge. If this happens, a new epsilon value needs to be chosen, and the process needs to be repeated.
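A minimal sketch of the method (assuming numpy; the quadratic test function and step size ε = 0.05 are hypothetical choices) shows the iteration settling onto the minimum:

```python
import numpy as np

def steepest_descent(grad, x0, eps, steps=5000):
    # x_{n+1} = x_n - eps * grad f(x_n); too large an eps makes this diverge.
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x - eps * grad(x)
    return x

# Minimize f(x, y) = (x - 1)^2 + 10 y^2, whose minimum sits at (1, 0).
grad = lambda p: np.array([2.0 * (p[0] - 1.0), 20.0 * p[1]])
xmin = steepest_descent(grad, x0=[5.0, 3.0], eps=0.05)
assert np.allclose(xmin, [1.0, 0.0], atol=1e-6)
```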

## Constrained Minimization

Constrained Minimization is the process of finding the minimum value of a function under a certain number of additional rules called constraints. For instance, we could say "find the minimum value of f(x), but g(x) must equal 10". These kinds of problems are more difficult, but the Kuhn-Tucker theorem, and also the Karush-Kuhn-Tucker theorem, help to solve them.

There are two different types of constraints: equality constraints and inequality constraints. We will consider them individually, and then mixed constraints.

### Equality Constraints

The Kuhn-Tucker Theorem is a method for minimizing a function f(x) under the equality constraint g(x) = 0. The theorem reads as follows:

Given the cost function f, and an equality constraint g in the following form:

${\displaystyle g(x)=0}$ ,

Then we can convert this problem into an unconstrained minimization problem by constructing the Lagrangian function of f and g:

${\displaystyle L(x)=f(x)+\langle \Lambda ,g(x)\rangle }$

Where Λ is the Lagrange multiplier, and < , > denotes the scalar product of the vector space Rn (where n is the number of equality constraints), as discussed earlier. If we differentiate the Lagrangian L(x, Λ) with respect to x and set the result to zero, together with the constraint itself, we obtain the conditions for the minimum of L(x, Λ), which is also the constrained minimum of our function f:

${\displaystyle {\frac {df(x)}{dx}}+\left\langle \Lambda ,{\frac {dg(x)}{dx}}\right\rangle =0}$
${\displaystyle g(x)=0}$

This is a set of n+k equations with n+k unknown variables (n Λs and k xs).
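For a concrete sketch (assuming numpy; the problem of minimizing f(x, y) = x² + y² subject to x + y = 1 is a hypothetical example), the stationarity conditions of the Lagrangian form a small linear system we can solve directly:

```python
import numpy as np

# Minimize f(x, y) = x^2 + y^2 subject to g(x, y) = x + y - 1 = 0.
# Differentiating L = f + Lambda * g gives the linear system:
#   2x     + Lambda = 0
#   2y     + Lambda = 0
#   x + y           = 1
K = np.array([[2.0, 0.0, 1.0],
              [0.0, 2.0, 1.0],
              [1.0, 1.0, 0.0]])
rhs = np.array([0.0, 0.0, 1.0])
x, y, Lam = np.linalg.solve(K, rhs)

assert np.allclose([x, y], [0.5, 0.5])  # the constrained minimum
```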

### Inequality Constraints

Similar to the method above, let us say that we have a cost function f, and an inequality constraint in the following form:

${\displaystyle g(x)\leq 0}$

Then we can take the Lagrangian of this again:

${\displaystyle L(x)=f(x)+\langle \Lambda ,g(x)\rangle }$

But we now must use the following three equations/inequalities in determining our solution:

${\displaystyle {\frac {df}{dx}}+\left\langle \Lambda ,{\frac {dg}{dx}}\right\rangle =0}$
${\displaystyle \langle \Lambda ,g(x)\rangle =0}$
${\displaystyle \Lambda \geq 0}$

The last two conditions can be interpreted in the following way:

if ${\displaystyle g(x)<0}$ , then ${\displaystyle \Lambda =0}$
if ${\displaystyle g(x)=0}$ , then ${\displaystyle \Lambda \geq 0}$

Using these additional equations and inequalities, we can solve in a similar manner as above.

### Mixed Constraints

If we have a set of equality and inequality constraints

${\displaystyle g(x)=0}$
${\displaystyle h(x)\leq 0}$

we can combine them into a single Lagrangian with two additional conditions:

${\displaystyle L(x)=f(x)+\langle \Lambda ,g(x)\rangle +\langle \mu ,h(x)\rangle }$
${\displaystyle g(x)=0}$
${\displaystyle \langle \mu ,h(x)\rangle =0}$
${\displaystyle \mu \geq 0}$

## Infinite Dimensional Minimization

The above methods work well if the variables involved in the analysis are finite-dimensional vectors, like those in RN. However, when we are trying to minimize something that is more complex than a vector, i.e. a function, we need the following concept. We consider functions that live in a subspace of L2(RN), which is an infinite-dimensional vector space. We will define the term functional as follows:

Functional
A functional is a map that takes one or more functions as arguments, and which returns a scalar value.

Let us say that we consider functions x of time t (N=1). Suppose further we have a fixed function f in two variables. With that function, we can associate a cost functional J:

${\displaystyle J[x]=\int _{a}^{b}f(x,t)dt}$

Where we are explicitly taking account of t in the definition of f. To minimize this functional, as with all minimization problems, we need to take its derivative and set the derivative to zero. However, we need a slightly more sophisticated version of the derivative, because x is a function. This is where the Gateaux Derivative enters the field.

### Gateaux Derivative

We can define the Gateaux Derivative in terms of the following limit:

${\displaystyle \delta F(x,h)=\lim _{\epsilon \to 0}{\frac {1}{\epsilon }}[F(x+\epsilon h)-F(x)]}$

Which is similar to the classical definition of the derivative, taken in the direction h. In plain words, we take the derivative of F with respect to x in the direction of h, where h is an arbitrary function of time in the same space as x (here, the space L2). Analogous to the one-dimensional case, the functional is differentiable at x if and only if the above limit exists. We can use the Gateaux derivative to find the minimizer of our functional above.
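The limit definition can be checked numerically (a sketch assuming numpy; the functional J[x] = ∫x² dt, the point x = sin(t), and the direction h = t(1 - t) are hypothetical choices). For this J the Gateaux derivative works out analytically to 2∫xh dt, and a small ε reproduces it:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 100001)
dt = t[1] - t[0]
J = lambda x: np.sum(x ** 2) * dt  # J[x] = integral of x(t)^2 over [0, 1]

x = np.sin(t)        # point at which we differentiate
h = t * (1.0 - t)    # arbitrary direction in the function space

# Gateaux derivative via its limit definition, with a small epsilon ...
eps = 1e-6
num = (J(x + eps * h) - J(x)) / eps
# ... against the analytic value 2 * integral of x h dt for this J.
exact = 2.0 * np.sum(x * h) * dt
assert abs(num - exact) < 1e-5
```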

### Euler-Lagrange Equation

We will now use the Gateaux derivative, discussed above, to find the minimizer of the following type of functional:

${\displaystyle J(x(t))=\int _{a}^{b}f(x(t),x'(t),t)dt}$

We thus have to find the solutions to the equation:

${\displaystyle \delta J(x)=0}$

The solution is the Euler-Lagrange Equation:

${\displaystyle {\frac {\partial f}{\partial x}}-{\frac {d}{dt}}{\frac {\partial f}{\partial x'}}=0}$

The partial derivatives are done in an ordinary way ignoring the fact that x is a function of t. Solutions to this equation are either maxima, minima, or saddle points of the cost functional J.

### Example: Shortest Distance

We've heard colloquially that the shortest distance between two points is a straight line. We can use the Euler-Lagrange equation to prove this rule.

If we have two points in R2, a and b, we would like to find the shortest curve (x, y(x)) that joins these two points. The line element ds reads:

${\displaystyle ds={\sqrt {dx^{2}+dy^{2}}}}$

Our function that we are trying to minimize then is defined as:

${\displaystyle J[y]=\int _{a}^{b}ds}$

or:

${\displaystyle J[y]=\int _{a}^{b}{\sqrt {1+\left({\frac {dy}{dx}}\right)^{2}}}dx}$

We can take the Gateaux derivative of the function J and set it equal to zero to find the minimum function between these two points. Denoting the square root as f, we get

${\displaystyle 0={\frac {\partial f}{\partial y}}-{\frac {d}{dx}}\left({\frac {\partial f}{\partial y'}}\right)=y''{\frac {1}{\left(1+y^{\prime 2}\right)^{3/2}}}\;.}$

Knowing that the line element is finite, this boils down to the equation

${\displaystyle {\frac {d^{2}y}{dx^{2}}}=0}$

with the well known solution

${\displaystyle y(x)=mx+n={\frac {b_{y}-a_{y}}{b_{x}-a_{x}}}(x-a_{x})+a_{y}\;.}$
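We can sanity-check this result numerically (a sketch assuming numpy; the endpoints (0, 0) and (1, 1) and the perturbation are hypothetical): the straight line has length √2, and any bulged curve with the same endpoints is longer:

```python
import numpy as np

# Curve length J[y] = sum of line elements between (0, 0) and (1, 1).
x = np.linspace(0.0, 1.0, 100001)

def length(y):
    # Discrete arc length: sum of sqrt(dx^2 + dy^2).
    return np.sum(np.sqrt(np.diff(x) ** 2 + np.diff(y) ** 2))

straight = x                              # y = x, the Euler-Lagrange solution
perturbed = x + 0.1 * np.sin(np.pi * x)   # same endpoints, bulged curve

assert length(straight) < length(perturbed)
assert np.isclose(length(straight), np.sqrt(2.0), atol=1e-6)
```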

# GNU Free Documentation License

Version 1.3, 3 November 2008 Copyright (C) 2000, 2001, 2002, 2007, 2008 Free Software Foundation, Inc. <http://fsf.org/>

Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.

## 0. PREAMBLE

The purpose of this License is to make a manual, textbook, or other functional and useful document "free" in the sense of freedom: to assure everyone the effective freedom to copy and redistribute it, with or without modifying it, either commercially or noncommercially. Secondarily, this License preserves for the author and publisher a way to get credit for their work, while not being considered responsible for modifications made by others.

This License is a kind of "copyleft", which means that derivative works of the document must themselves be free in the same sense. It complements the GNU General Public License, which is a copyleft license designed for free software.

We have designed this License in order to use it for manuals for free software, because free software needs free documentation: a free program should come with manuals providing the same freedoms that the software does. But this License is not limited to software manuals; it can be used for any textual work, regardless of subject matter or whether it is published as a printed book. We recommend this License principally for works whose purpose is instruction or reference.

## 1. APPLICABILITY AND DEFINITIONS

This License applies to any manual or other work, in any medium, that contains a notice placed by the copyright holder saying it can be distributed under the terms of this License. Such a notice grants a world-wide, royalty-free license, unlimited in duration, to use that work under the conditions stated herein. The "Document", below, refers to any such manual or work. Any member of the public is a licensee, and is addressed as "you". You accept the license if you copy, modify or distribute the work in a way requiring permission under copyright law.

A "Modified Version" of the Document means any work containing the Document or a portion of it, either copied verbatim, or with modifications and/or translated into another language.

A "Secondary Section" is a named appendix or a front-matter section of the Document that deals exclusively with the relationship of the publishers or authors of the Document to the Document's overall subject (or to related matters) and contains nothing that could fall directly within that overall subject. (Thus, if the Document is in part a textbook of mathematics, a Secondary Section may not explain any mathematics.) The relationship could be a matter of historical connection with the subject or with related matters, or of legal, commercial, philosophical, ethical or political position regarding them.

The "Invariant Sections" are certain Secondary Sections whose titles are designated, as being those of Invariant Sections, in the notice that says that the Document is released under this License. If a section does not fit the above definition of Secondary then it is not allowed to be designated as Invariant. The Document may contain zero Invariant Sections. If the Document does not identify any Invariant Sections then there are none.

The "Cover Texts" are certain short passages of text that are listed, as Front-Cover Texts or Back-Cover Texts, in the notice that says that the Document is released under this License. A Front-Cover Text may be at most 5 words, and a Back-Cover Text may be at most 25 words.

A "Transparent" copy of the Document means a machine-readable copy, represented in a format whose specification is available to the general public, that is suitable for revising the document straightforwardly with generic text editors or (for images composed of pixels) generic paint programs or (for drawings) some widely available drawing editor, and that is suitable for input to text formatters or for automatic translation to a variety of formats suitable for input to text formatters. A copy made in an otherwise Transparent file format whose markup, or absence of markup, has been arranged to thwart or discourage subsequent modification by readers is not Transparent. An image format is not Transparent if used for any substantial amount of text. A copy that is not "Transparent" is called "Opaque".

Examples of suitable formats for Transparent copies include plain ASCII without markup, Texinfo input format, LaTeX input format, SGML or XML using a publicly available DTD, and standard-conforming simple HTML, PostScript or PDF designed for human modification. Examples of transparent image formats include PNG, XCF and JPG. Opaque formats include proprietary formats that can be read and edited only by proprietary word processors, SGML or XML for which the DTD and/or processing tools are not generally available, and the machine-generated HTML, PostScript or PDF produced by some word processors for output purposes only.

The "Title Page" means, for a printed book, the title page itself, plus such following pages as are needed to hold, legibly, the material this License requires to appear in the title page. For works in formats which do not have any title page as such, "Title Page" means the text near the most prominent appearance of the work's title, preceding the beginning of the body of the text.

The "publisher" means any person or entity that distributes copies of the Document to the public.

A section "Entitled XYZ" means a named subunit of the Document whose title either is precisely XYZ or contains XYZ in parentheses following text that translates XYZ in another language. (Here XYZ stands for a specific section name mentioned below, such as "Acknowledgements", "Dedications", "Endorsements", or "History".) To "Preserve the Title" of such a section when you modify the Document means that it remains a section "Entitled XYZ" according to this definition.

The Document may include Warranty Disclaimers next to the notice which states that this License applies to the Document. These Warranty Disclaimers are considered to be included by reference in this License, but only as regards disclaiming warranties: any other implication that these Warranty Disclaimers may have is void and has no effect on the meaning of this License.

## 2. VERBATIM COPYING

You may copy and distribute the Document in any medium, either commercially or noncommercially, provided that this License, the copyright notices, and the license notice saying this License applies to the Document are reproduced in all copies, and that you add no other conditions whatsoever to those of this License. You may not use technical measures to obstruct or control the reading or further copying of the copies you make or distribute. However, you may accept compensation in exchange for copies. If you distribute a large enough number of copies you must also follow the conditions in section 3.

You may also lend copies, under the same conditions stated above, and you may publicly display copies.

## 3. COPYING IN QUANTITY

If you publish printed copies (or copies in media that commonly have printed covers) of the Document, numbering more than 100, and the Document's license notice requires Cover Texts, you must enclose the copies in covers that carry, clearly and legibly, all these Cover Texts: Front-Cover Texts on the front cover, and Back-Cover Texts on the back cover. Both covers must also clearly and legibly identify you as the publisher of these copies. The front cover must present the full title with all words of the title equally prominent and visible. You may add other material on the covers in addition. Copying with changes limited to the covers, as long as they preserve the title of the Document and satisfy these conditions, can be treated as verbatim copying in other respects.

If the required texts for either cover are too voluminous to fit legibly, you should put the first ones listed (as many as fit reasonably) on the actual cover, and continue the rest onto adjacent pages.

If you publish or distribute Opaque copies of the Document numbering more than 100, you must either include a machine-readable Transparent copy along with each Opaque copy, or state in or with each Opaque copy a computer-network location from which the general network-using public has access to download using public-standard network protocols a complete Transparent copy of the Document, free of added material. If you use the latter option, you must take reasonably prudent steps, when you begin distribution of Opaque copies in quantity, to ensure that this Transparent copy will remain thus accessible at the stated location until at least one year after the last time you distribute an Opaque copy (directly or through your agents or retailers) of that edition to the public.

It is requested, but not required, that you contact the authors of the Document well before redistributing any large number of copies, to give them a chance to provide you with an updated version of the Document.

## 4. MODIFICATIONS

You may copy and distribute a Modified Version of the Document under the conditions of sections 2 and 3 above, provided that you release the Modified Version under precisely this License, with the Modified Version filling the role of the Document, thus licensing distribution and modification of the Modified Version to whoever possesses a copy of it. In addition, you must do these things in the Modified Version:

1. Use in the Title Page (and on the covers, if any) a title distinct from that of the Document, and from those of previous versions (which should, if there were any, be listed in the History section of the Document). You may use the same title as a previous version if the original publisher of that version gives permission.
2. List on the Title Page, as authors, one or more persons or entities responsible for authorship of the modifications in the Modified Version, together with at least five of the principal authors of the Document (all of its principal authors, if it has fewer than five), unless they release you from this requirement.
3. State on the Title page the name of the publisher of the Modified Version, as the publisher.
4. Preserve all the copyright notices of the Document.
5. Add an appropriate copyright notice for your modifications adjacent to the other copyright notices.
6. Include, immediately after the copyright notices, a license notice giving the public permission to use the Modified Version under the terms of this License, in the form shown in the Addendum below.
7. Preserve in that license notice the full lists of Invariant Sections and required Cover Texts given in the Document's license notice.
8. Include an unaltered copy of this License.
9. Preserve the section Entitled "History", Preserve its Title, and add to it an item stating at least the title, year, new authors, and publisher of the Modified Version as given on the Title Page. If there is no section Entitled "History" in the Document, create one stating the title, year, authors, and publisher of the Document as given on its Title Page, then add an item describing the Modified Version as stated in the previous sentence.
10. Preserve the network location, if any, given in the Document for public access to a Transparent copy of the Document, and likewise the network locations given in the Document for previous versions it was based on. These may be placed in the "History" section. You may omit a network location for a work that was published at least four years before the Document itself, or if the original publisher of the version it refers to gives permission.
11. For any section Entitled "Acknowledgements" or "Dedications", Preserve the Title of the section, and preserve in the section all the substance and tone of each of the contributor acknowledgements and/or dedications given therein.
12. Preserve all the Invariant Sections of the Document, unaltered in their text and in their titles. Section numbers or the equivalent are not considered part of the section titles.
13. Delete any section Entitled "Endorsements". Such a section may not be included in the Modified Version.
14. Do not retitle any existing section to be Entitled "Endorsements" or to conflict in title with any Invariant Section.
15. Preserve any Warranty Disclaimers.

If the Modified Version includes new front-matter sections or appendices that qualify as Secondary Sections and contain no material copied from the Document, you may at your option designate some or all of these sections as invariant. To do this, add their titles to the list of Invariant Sections in the Modified Version's license notice. These titles must be distinct from any other section titles.

You may add a section Entitled "Endorsements", provided it contains nothing but endorsements of your Modified Version by various parties—for example, statements of peer review or that the text has been approved by an organization as the authoritative definition of a standard.

You may add a passage of up to five words as a Front-Cover Text, and a passage of up to 25 words as a Back-Cover Text, to the end of the list of Cover Texts in the Modified Version. Only one passage of Front-Cover Text and one of Back-Cover Text may be added by (or through arrangements made by) any one entity. If the Document already includes a cover text for the same cover, previously added by you or by arrangement made by the same entity you are acting on behalf of, you may not add another; but you may replace the old one, on explicit permission from the previous publisher that added the old one.

The author(s) and publisher(s) of the Document do not by this License give permission to use their names for publicity for or to assert or imply endorsement of any Modified Version.

## 5. COMBINING DOCUMENTS

You may combine the Document with other documents released under this License, under the terms defined in section 4 above for modified versions, provided that you include in the combination all of the Invariant Sections of all of the original documents, unmodified, and list them all as Invariant Sections of your combined work in its license notice, and that you preserve all their Warranty Disclaimers.

The combined work need only contain one copy of this License, and multiple identical Invariant Sections may be replaced with a single copy. If there are multiple Invariant Sections with the same name but different contents, make the title of each such section unique by adding at the end of it, in parentheses, the name of the original author or publisher of that section if known, or else a unique number. Make the same adjustment to the section titles in the list of Invariant Sections in the license notice of the combined work.

In the combination, you must combine any sections Entitled "History" in the various original documents, forming one section Entitled "History"; likewise combine any sections Entitled "Acknowledgements", and any sections Entitled "Dedications". You must delete all sections Entitled "Endorsements".

## 6. COLLECTIONS OF DOCUMENTS

You may make a collection consisting of the Document and other documents released under this License, and replace the individual copies of this License in the various documents with a single copy that is included in the collection, provided that you follow the rules of this License for verbatim copying of each of the documents in all other respects.

You may extract a single document from such a collection, and distribute it individually under this License, provided you insert a copy of this License into the extracted document, and follow this License in all other respects regarding verbatim copying of that document.

## 7. AGGREGATION WITH INDEPENDENT WORKS

A compilation of the Document or its derivatives with other separate and independent documents or works, in or on a volume of a storage or distribution medium, is called an "aggregate" if the copyright resulting from the compilation is not used to limit the legal rights of the compilation's users beyond what the individual works permit. When the Document is included in an aggregate, this License does not apply to the other works in the aggregate which are not themselves derivative works of the Document.

If the Cover Text requirement of section 3 is applicable to these copies of the Document, then if the Document is less than one half of the entire aggregate, the Document's Cover Texts may be placed on covers that bracket the Document within the aggregate, or the electronic equivalent of covers if the Document is in electronic form. Otherwise they must appear on printed covers that bracket the whole aggregate.

## 8. TRANSLATION

Translation is considered a kind of modification, so you may distribute translations of the Document under the terms of section 4. Replacing Invariant Sections with translations requires special permission from their copyright holders, but you may include translations of some or all Invariant Sections in addition to the original versions of these Invariant Sections. You may include a translation of this License, and all the license notices in the Document, and any Warranty Disclaimers, provided that you also include the original English version of this License and the original versions of those notices and disclaimers. In case of a disagreement between the translation and the original version of this License or a notice or disclaimer, the original version will prevail.

If a section in the Document is Entitled "Acknowledgements", "Dedications", or "History", the requirement (section 4) to Preserve its Title (section 1) will typically require changing the actual title.

## 9. TERMINATION

You may not copy, modify, sublicense, or distribute the Document except as expressly provided under this License. Any attempt otherwise to copy, modify, sublicense, or distribute it is void, and will automatically terminate your rights under this License.

However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly and finally terminates your license, and (b) permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation.

Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder notifies you of the violation by some reasonable means, this is the first time you have received notice of violation of this License (for any work) from that copyright holder, and you cure the violation prior to 30 days after your receipt of the notice.

Termination of your rights under this section does not terminate the licenses of parties who have received copies or rights from you under this License. If your rights have been terminated and not permanently reinstated, receipt of a copy of some or all of the same material does not give you any rights to use it.

## 10. FUTURE REVISIONS OF THIS LICENSE

The Free Software Foundation may publish new, revised versions of the GNU Free Documentation License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. See http://www.gnu.org/copyleft/.

Each version of the License is given a distinguishing version number. If the Document specifies that a particular numbered version of this License "or any later version" applies to it, you have the option of following the terms and conditions either of that specified version or of any later version that has been published (not as a draft) by the Free Software Foundation. If the Document does not specify a version number of this License, you may choose any version ever published (not as a draft) by the Free Software Foundation. If the Document specifies that a proxy can decide which future versions of this License can be used, that proxy's public statement of acceptance of a version permanently authorizes you to choose that version for the Document.

## 11. RELICENSING

"Massive Multiauthor Collaboration Site" (or "MMC Site") means any World Wide Web server that publishes copyrightable works and also provides prominent facilities for anybody to edit those works. A public wiki that anybody can edit is an example of such a server. A "Massive Multiauthor Collaboration" (or "MMC") contained in the site means any set of copyrightable works thus published on the MMC site.

"Incorporate" means to publish or republish a Document, in whole or in part, as part of another Document.

An MMC is "eligible for relicensing" if it is licensed under this License, and if all works that were first published under this License somewhere other than this MMC, and subsequently incorporated in whole or in part into the MMC, (1) had no cover texts or invariant sections, and (2) were thus incorporated prior to November 1, 2008.

The operator of an MMC Site may republish an MMC contained in the site under CC-BY-SA on the same site at any time before August 1, 2009, provided the MMC is eligible for relicensing.

To use this License in a document you have written, include a copy of the License in the document and put the following copyright and license notices just after the title page:

Permission is granted to copy, distribute and/or modify this document
under the terms of the GNU Free Documentation License, Version 1.3