# Introduction

This book assumes you have some passing familiarity with the complex numbers. Indeed, much of the material in the book assumes you are already familiar with multi-variable calculus. If you have not encountered the complex numbers before, it would be a good idea to first read a more detailed introduction, one with many more worked examples of the arithmetic of complex numbers that this book takes for granted. Such an introduction can often be found in an Algebra (or "Algebra II") text, such as the Algebra wikibook's section on complex numbers.

Intuitively a complex number z is a number written in the form:

$z=x+iy$ ,

where x and y are real numbers and i is an imaginary number that satisfies $i^{2}=-1$ . We call x the real part and y the imaginary part of z, and denote them by ${\text{Re }}z$  and ${\text{Im }}z$ , respectively. Note that for the number $z=3-2i$ , ${\text{Im }}z=y=-2$ , not $-2i$ . Also, to distinguish between complex and purely real numbers, we will often use the letters z and w for complex numbers. It is useful to have a more formal definition of the complex numbers. One frequently encounters treatments that state that $i$  is the number such that $i={\sqrt {-1}}$ , and then operate with $i$  using many of our usual rules for arithmetic. Unfortunately, if one is not careful this leads to difficulties: not all of the usual rules of algebra carry through in the way one might expect. For example, there is a flaw in the following calculation: $i={\sqrt {-1}}={\sqrt {\frac {1}{-1}}}={\frac {\sqrt {1}}{\sqrt {-1}}}={\frac {1}{i}}=-i$ , but it is very difficult to point out the flaw without first being clear about what a complex number is, and which operations are allowed with complex numbers.

Mathematically the complex numbers are defined as an ordered pair, endowed with algebraic operations.

Definition

A complex number z is an ordered pair of real numbers. That is $z=(x,y)$  where x and y are real numbers. The collection of all complex numbers is denoted by the symbol $\mathbb {C}$ .

The most immediate consequence of this definition is that we may think of a complex number as a point lying in the plane. Comparing this definition with the intuitive definition above, it is easy to see that the imaginary number i simply acts as a place holder for denoting which number belongs in the second coordinate.

Definition

We define the following two functions on the complex plane. Let $z=(x,y)$  be a complex number. We define the real part as a function ${\text{Re}}:\mathbb {C} \to \mathbb {R}$  given by ${\textrm {Re}}(z)=x$ . Similarly we define the imaginary part as a function ${\textrm {Im}}:\mathbb {C} \to \mathbb {R}$  given by ${\textrm {Im}}(z)=y$ .

We say two complex numbers are equal if and only if they are equal as ordered pairs. That is if $z=(x,y)$  and $w=(u,v)$  then z = w if and only if x = u and y = v. Put more succinctly, two complex numbers are equal iff their real parts and imaginary parts are equal.

If complex numbers were simply ordered pairs there would not really be much to say about them. But the complex numbers are ordered pairs together with several algebraic operations, and it is these operations that make the complex numbers so interesting.

Definition

Let z = (x, y) and w = (u, v), then we define addition as:

z + w = (x + u, y + v)

and multiplication as:

z · w = (x · u − y · v, x · v + y · u)

Of course, we can view any real number r as being a complex number. Using our intuitive model for the complex numbers it is clear that the real number r should correspond to the complex number (r, 0), and with this identification the above operations correspond exactly to the usual definitions of addition and multiplication of real numbers. For the remainder of the text we will freely refer to a real number r as being a complex number, where the above identification is understood.
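The ordered-pair definitions of addition and multiplication are easy to check numerically. The following sketch (the helper names `add` and `mul` are ours, purely for illustration) implements the definitions and compares them against Python's built-in complex arithmetic:

```python
# Complex arithmetic on ordered pairs of reals, following the definitions above.
def add(z, w):
    x, y = z
    u, v = w
    return (x + u, y + v)

def mul(z, w):
    x, y = z
    u, v = w
    # (x, y) * (u, v) = (x*u - y*v, x*v + y*u)
    return (x * u - y * v, x * v + y * u)

# Compare against Python's built-in complex type.
z, w = (3.0, -2.0), (1.0, 4.0)
zc, wc = complex(*z), complex(*w)
assert complex(*add(z, w)) == zc + wc
assert complex(*mul(z, w)) == zc * wc
```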

The following facts about addition and multiplication follow easily from the corresponding operations on the real numbers. Their verification is left as an exercise to the reader. Let z, w and v be complex numbers, then:

 • z + (w + v) = (z + w) + v (Associativity of addition);
 • z · (w · v) = (z · w) · v (Associativity of multiplication);
 • z + w = w + z (Commutativity of addition);
 • z · w = w · z (Commutativity of multiplication);
 • z · (w + v) = z · w + z · v (Distributive Property).

One nice feature of complex addition and multiplication is that 0 and 1 play the same role in the real numbers as they do in the complex numbers. That is 0 is the additive identity for the complex numbers (meaning z + 0 = 0 + z = z) and 1 is the multiplicative identity (meaning z · 1 = 1 · z = z).

Of course it is natural at this point to ask about subtraction and division. But rather than stating the formulas for subtraction and division outright, we instead follow the usual course of other subjects in algebra and first discuss inverses.

Definition

Let z = (x, y) be any complex number, then we define the additive inverse −z as:

−z = (−x, −y)

It is then immediate to verify that z + (−z) = 0.

Now for any two complex numbers z and w we define z − w to be z + (−w). We now turn to doing the same for multiplication.

Definition

Let z = (x, y) be any non-zero complex number, then we define the multiplicative inverse, ${\tfrac {1}{z}}$  as:

${\frac {1}{z}}={\Big (}{\frac {x}{x^{2}+y^{2}}},-{\frac {y}{x^{2}+y^{2}}}{\Big )}$

It is left to the reader to verify that $z\cdot {\tfrac {1}{z}}=1$ .
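As a numerical sanity check (not part of the text), the formula for the multiplicative inverse can be verified with Python's built-in complex type; the helper name `inv` is ours:

```python
def inv(x, y):
    # multiplicative inverse of the complex number (x, y), per the formula above
    d = x * x + y * y
    return (x / d, -y / d)

# verify z * (1/z) = 1 using the built-in complex type
z = complex(3.0, 4.0)
assert abs(z * complex(*inv(3.0, 4.0)) - 1) < 1e-12
```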

We may now of course define division as ${\tfrac {z}{w}}=z\cdot {\tfrac {1}{w}}$ . Just as with the real numbers, division by zero remains undefined. In order for this last definition to make more sense it helps to introduce two more operations on the complex numbers. The first is the absolute value.

Definition

Let z = (x, y) be any complex number, then we define the complex absolute value, denoted |z| as:

$|z|={\sqrt {x^{2}+y^{2}}}$

Notice that |z| is always a real number and |z| ≥ 0 for any z.

Of course with this definition of the absolute value, if z = (x, y) then |z| is exactly the same as the norm of the vector (x, y).

Before introducing the second definition, notice that our intuitive definition simply required us to find a number whose square was −1. Of course $i^{2}=(-i)^{2}=-1$ , so as a starting point one could equally well have chosen −i as the most basic imaginary number. This idea motivates the following definition.

Definition

Let z = (x, y) be any complex number, then we define the conjugate of z, denoted ${\bar {z}}$  as:

${\bar {z}}=(x,-y).$

With this definition it is an easy exercise to check that $z\cdot {\bar {z}}=|z|^{2}$ , so dividing both sides by |z|2 we arrive at $z\cdot {\tfrac {\bar {z}}{|z|^{2}}}=1$ . Compare this with the definition of the multiplicative inverse above.
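The identity $z\cdot {\bar {z}}=|z|^{2}$  and the resulting formula for the inverse can be checked directly with Python's built-in conjugate and absolute value:

```python
z = complex(3.0, -2.0)

# z times its conjugate equals |z|^2 (up to floating-point rounding)
assert abs(z * z.conjugate() - abs(z) ** 2) < 1e-9

# hence conj(z) / |z|^2 is the multiplicative inverse of z
assert abs(z * (z.conjugate() / abs(z) ** 2) - 1) < 1e-12
```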

Recall that every point in the plane can be written using rectangular coordinates such as (x, y), where the numbers denote the distances from the y and x axes respectively. But the point could equally well be described using polar coordinates (r, θ), where the first number represents the distance from the origin, and the second is the angle made with the positive x axis when you connect the origin and the point with a line segment. Since complex numbers may be thought of simply as points in the plane, we can immediately derive a polar representation of a complex number. As usual we can let a point z = (x, y) = (r cos θ, r sin θ) where $\textstyle r={\sqrt {x^{2}+y^{2}}}$ . The choice of θ is not unique because sine and cosine are 2π periodic. A value θ for which z = (r cos θ, r sin θ) is called an argument of z. If we restrict our choice of θ so that 0 ≤ θ < 2π then the choice of θ is unique provided that z ≠ 0. This is often called the principal branch of the argument.

As a shorthand, we may write $\operatorname {cis} \,\theta =\cos \theta +i\sin \theta$ , so $z=r\operatorname {cis} \theta$ . This notation simplifies multiplication and taking powers, because

{\begin{aligned}z_{1}z_{2}&=(r_{1}\operatorname {cis} \theta _{1})(r_{2}\operatorname {cis} \theta _{2})\\&=r_{1}r_{2}\left[\left(\cos \theta _{1}+i\sin \theta _{1}\right)\left(\cos \theta _{2}+i\sin \theta _{2}\right)\right]\\&=r_{1}r_{2}\left[\left(\cos \theta _{1}\cos \theta _{2}-\sin \theta _{1}\sin \theta _{2}\right)+i\left(\sin \theta _{1}\cos \theta _{2}+\cos \theta _{1}\sin \theta _{2}\right)\right]\\&=r_{1}r_{2}\left(\cos(\theta _{1}+\theta _{2})+i\sin(\theta _{1}+\theta _{2})\right)\\&=r_{1}r_{2}\operatorname {cis} (\theta _{1}+\theta _{2})\end{aligned}}

by elementary trigonometric identities. Applying this formula can therefore simplify many calculations with complex numbers.
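The product formula $r_{1}\operatorname {cis} \theta _{1}\cdot r_{2}\operatorname {cis} \theta _{2}=r_{1}r_{2}\operatorname {cis} (\theta _{1}+\theta _{2})$  can be confirmed numerically; the small `cis` helper below is our own illustrative code:

```python
import math

def cis(theta):
    # cis(theta) = cos(theta) + i sin(theta)
    return complex(math.cos(theta), math.sin(theta))

r1, t1 = 2.0, 0.7
r2, t2 = 0.5, 1.9
lhs = (r1 * cis(t1)) * (r2 * cis(t2))
rhs = r1 * r2 * cis(t1 + t2)
assert abs(lhs - rhs) < 1e-12
```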

Using induction we can show that

$z^{n}=r^{n}\operatorname {cis} (n\theta )$ ,

holds for all positive integers $n$ .
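This is the classical de Moivre formula. A quick numerical check, again with an illustrative `cis` helper of our own:

```python
import math

def cis(theta):
    # cis(theta) = cos(theta) + i sin(theta)
    return complex(math.cos(theta), math.sin(theta))

r, theta, n = 1.3, 0.4, 7
z = r * cis(theta)
# z^n computed by repeated complex multiplication vs. the closed form
assert abs(z ** n - r ** n * cis(n * theta)) < 1e-9
```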

Now that we have set up the basic concept of a complex number, we continue to topological properties of the complex plane.

## Exercises

1. Determine ${\overline {(z_{1}+z_{2})}}$  in terms of ${\bar {z}}_{1}$  and ${\bar {z}}_{2}$ .
2. Determine ${\overline {z_{1}z_{2}}}$  in terms of ${\bar {z}}_{1}$  and ${\bar {z}}_{2}$ .
3. Show that the absolute value on the complex plane obeys the triangle inequality. That is show that:
$|z_{1}+z_{2}|\leq |z_{1}|+|z_{2}|.$
4. Show that the absolute value on the complex plane obeys the reverse triangle inequality. That is show that:
$|z_{1}+z_{2}|\geq {\big |}|z_{1}|-|z_{2}|{\big |}.$
5. Given a non-zero complex number $z=r\operatorname {cis} (\theta )$  determine $r'$  and $\theta '$  so that ${\frac {1}{z}}=r'\operatorname {cis} (\theta ')$ .
6. Determine formulas for ${\text{Re }}z$  and ${\text{Im }}z$  in terms of $z$  and ${\bar {z}}$ .
7. Find $n$  distinct complex numbers $z_{k}$ , $k=0,\ldots ,n-1$  so that $z_{k}^{n}=z$ . Hint: Use the formula given above for $z^{n}$  and the $2\pi$  periodicity of $\cos(\theta )$  and $\sin(\theta )$ .

# The Topology of the Complex Plane and Stereographic Projection

As we have already seen the complex numbers are identified with the Euclidean Plane. So it is not surprising that much of what we know about the plane carries over to the complex numbers. In this section we will be specifically interested in topological properties of the complex plane. What are "topological properties"? In mathematics the term topology is used to describe certain geometric properties of spaces. Here we will mostly be concerned with ideas of open, closed, and connected. The notion of limits also falls under this section, because it is really a statement about the geometry of the complex plane to say two quantities are "close" or that one quantity "approaches" another.

We begin with the notion of a limit of a sequence of complex numbers.

We say that the limit of a sequence of complex numbers $z_{n}$  is $z$  if given $\epsilon >0$  there is a natural number $N$  so that if $n>N$  then $|z-z_{n}|<\epsilon .$

One difficulty with this notion of limit is that it requires us to know the limit ahead of time before we can decide if a sequence is convergent. To handle cases when we do not know the limit, an equivalent reformulation called the Cauchy criteria for convergence is stated below.

A sequence of complex numbers $z_{n}$  converges to some limit iff given $\epsilon >0$  there is a natural number $N$  so that if $m,n>N$  then $|z_{m}-z_{n}|<\epsilon .$

One direction of this equivalence is easy, but the other direction relies on the completeness of the complex numbers, a topic we will defer until later.

This is precisely the same as the definition for a limit of a sequence of points in $\mathbb {R} ^{2}$ . A first important application of the limit of a sequence is defining convergence for a series of numbers.

Given a series $\textstyle \sum _{n=1}^{\infty }z_{n}$  we define the partial sum of order $N$  to be the sum $S_{N}=\sum _{n=1}^{N}z_{n}$ . We shall say the infinite series converges to $z$  if $z=\lim _{n\to \infty }S_{n}$ .

Notice that when the Cauchy criteria is applied to infinite sums it takes the form: given $\epsilon >0$  there is an $N$  so that, if $n,m>N$  then ${\bigg |}\sum _{k=m+1}^{n}z_{k}{\bigg |}<\epsilon$ .

To give a concrete example consider the series $\sum _{n=0}^{\infty }{\frac {z^{n}}{n!}}$ . For a fixed $z$  it can easily be seen that this series converges, and we shall denote the value it converges to by $e^{z}$ .

To show this we simply "bootstrap" from what we know about the real numbers. Recall that for the real number $|z|$  we know that the series for $e^{|z|}$  converges. Applying the Cauchy criteria to $e^{|z|}$  we know there is an $N$  so that, if $n,m>N$  then $\sum _{k=m+1}^{n}{\frac {|z|^{k}}{k!}}<\epsilon$ . Now consider the series $\textstyle \sum _{n=0}^{\infty }z^{n}/n!$ . For the $N$  determined above, let's examine ${\big |}S_{n}-S_{m}{\big |}$  for $n\geq m>N$ .

${\big |}S_{n}-S_{m}{\big |}={\bigg |}\sum _{k=m+1}^{n}{\frac {z^{k}}{k!}}{\bigg |}\leq \sum _{k=m+1}^{n}{\frac {|z|^{k}}{k!}}<\epsilon$ .

And hence by the Cauchy criteria the sum $\sum _{n=0}^{\infty }z^{n}/n!$  converges.

Even more, we showed the series converges absolutely. Recall that for an absolutely convergent series of real numbers, any reordering of the series converges to the same value. This theorem remains true for complex numbers. For us it will be very useful to examine the series for $e^{i\theta }$  with $\theta$  a real number.

In this case we have

$e^{i\theta }=1+(i\theta )+{\frac {(i\theta )^{2}}{2}}+{\frac {(i\theta )^{3}}{6}}+{\frac {(i\theta )^{4}}{4!}}+{\frac {(i\theta )^{5}}{5!}}+{\frac {(i\theta )^{6}}{6!}}+{\frac {(i\theta )^{7}}{7!}}+\cdots$

Now using that $i^{2n}=(-1)^{n}$  and $i^{2n+1}=(-1)^{n}i$  we can rewrite the series above as

$e^{i\theta }=1+i\theta -{\frac {\theta ^{2}}{2}}-i{\frac {\theta ^{3}}{6}}+{\frac {\theta ^{4}}{4!}}+i{\frac {\theta ^{5}}{5!}}-{\frac {\theta ^{6}}{6!}}-i{\frac {\theta ^{7}}{7!}}+\cdots$

Finally if we rearrange the series to determine the real and imaginary parts we have that:

$e^{i\theta }={\bigg (}1-{\frac {\theta ^{2}}{2}}+{\frac {\theta ^{4}}{4!}}-{\frac {\theta ^{6}}{6!}}+\cdots {\bigg )}+i{\bigg (}\theta -{\frac {\theta ^{3}}{6}}+{\frac {\theta ^{5}}{5!}}-{\frac {\theta ^{7}}{7!}}+\cdots {\bigg )}$

But now we notice by inspection that the series in the first set of parentheses is exactly the Taylor series for $\cos \theta$  and the series in the second set of parentheses is exactly the Taylor series for $\sin \theta$ . And so we conclude:

Euler's Formula
$e^{i\theta }=\cos \theta +i\sin \theta \!.$

Thus we no longer need the name cis θ, we will instead simply use $e^{i\theta }$ .
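Both the convergence of the exponential series and Euler's formula are easy to check numerically. The sketch below compares a partial sum of $\sum z^{n}/n!$  with Python's `cmath.exp`; the helper name `exp_series` and the cutoff 40 are our own illustrative choices:

```python
import cmath
import math
from math import factorial

def exp_series(z, N=40):
    # partial sum S_N of the series sum_{n=0}^{N} z^n / n!
    return sum(z ** n / factorial(n) for n in range(N + 1))

z = complex(1.0, 2.0)
assert abs(exp_series(z) - cmath.exp(z)) < 1e-12

# Euler's formula: e^{i theta} = cos(theta) + i sin(theta)
theta = 0.8
assert abs(cmath.exp(1j * theta) - complex(math.cos(theta), math.sin(theta))) < 1e-12
```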

This section contains some more advanced topics that should perhaps be skipped on a first reading of this text.

### Metric property

Define the metric $d:\mathbb {C} ^{2}\to \mathbb {R}$  as

$d(z_{1},z_{2})=|z_{1}-z_{2}|$

It can easily be seen that $d$  satisfies positive definiteness, symmetry and the triangle inequality, implying that $\mathbb {C}$  is a metric space.

#### Completeness

Recall that a metric space is said to be complete if every Cauchy sequence converges to a limit.

For any point $z_{0}\in \mathbb {C}$ , we call the open ball $B_{\delta }(z_{0})$ , consisting of all the points $z$  such that $|z-z_{0}|<\delta$ , a neighborhood of $z_{0}$ . Similarly, a set consisting of points z such that $|z|>\delta$  for a positive δ will be called a neighborhood of infinity. Given a set ${\mathfrak {G}}\subset \mathbb {C}$ , we call the set open if every point in ${\mathfrak {G}}$  has a neighborhood completely contained in ${\mathfrak {G}}$ . Similarly, we call a set closed if its complement is open. A point $z$  is called an accumulation point of ${\mathfrak {G}}$  if every neighborhood of z contains a point in ${\mathfrak {G}}$  other than z itself. It can be shown that a set is closed if and only if it contains all of its accumulation points: see proof.

### The Riemann Sphere

An interesting idea related to the extension of the complex numbers is the construction of the Riemann Sphere. The Riemann Sphere is essentially a stereographic projection: the complex plane is projected onto the unit sphere from the point $(0,0,1)$ .

Formally, the rectangular coordinates of the projection $(\xi ,\eta ,\zeta )$  can be given by the transformations

$\xi ={\frac {z+{\bar {z}}}{1+z{\bar {z}}}},\eta ={\frac {1}{i}}{\frac {z-{\bar {z}}}{1+z{\bar {z}}}},\zeta =-{\frac {1-z{\bar {z}}}{1+z{\bar {z}}}}$

Or equivalently, the reverse transformation,

$z={\frac {\xi +\eta i}{1-\zeta }}$

The Riemann sphere is the sphere given by this transformation, together with the point $(0,0,1)$  labeled as $\infty$ .
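The projection formulas and their inverse can be checked numerically. In this sketch (the function names are ours, purely for illustration), we verify that the image of a point lies on the unit sphere and that the reverse transformation recovers the original point:

```python
def to_sphere(z):
    # stereographic projection of z onto the unit sphere, per the formulas above
    zz = (z * z.conjugate()).real          # z * conj(z) = |z|^2
    d = 1 + zz
    xi = (z + z.conjugate()).real / d
    eta = ((z - z.conjugate()) / 1j).real / d
    zeta = -(1 - zz) / d
    return (xi, eta, zeta)

def from_sphere(p):
    # reverse transformation z = (xi + eta*i) / (1 - zeta)
    xi, eta, zeta = p
    return complex(xi, eta) / (1 - zeta)

z = complex(2.0, -1.5)
p = to_sphere(z)
assert abs(p[0] ** 2 + p[1] ** 2 + p[2] ** 2 - 1) < 1e-12  # on the unit sphere
assert abs(from_sphere(p) - z) < 1e-12                     # round trip recovers z
```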

It can also be shown that the stereographic projection preserves angles, and that circles and lines in the plane correspond to circles on the sphere: see proof.

In the metric $|a-b|$  used earlier, the point $z=\infty$  causes problems. However, using the stereographic projection, we can define another metric in which the distance between two points a and b is the chordal distance

$\chi (a,b)={\frac {2|a-b|}{{\sqrt {1+a{\bar {a}}}}{\sqrt {1+b{\bar {b}}}}}}$ ,

which has a well-defined meaning even when one of the points is ∞. We will only employ this metric when dealing with infinite values. For example, using this metric, neighborhoods of infinity do not require special treatment; we say that a neighborhood of a point $z_{0}$  is the set of all points z satisfying

$\chi (z,z_{0})<\delta$ ,

where $z_{0}$  is allowed to be infinity.
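A small sketch of the chordal metric follows (our own illustrative code, not part of the text); we represent $\infty$  by the float sentinel `INF` and use the limiting value of the formula when one argument is infinite:

```python
import math

INF = float("inf")  # illustrative sentinel for the point at infinity

def chordal(a, b):
    # chordal distance chi(a, b); a, b are complex numbers or INF
    if a == INF and b == INF:
        return 0.0
    if a == INF:
        a, b = b, a
    if b == INF:
        # limit of 2|a - b| / (sqrt(1 + |a|^2) sqrt(1 + |b|^2)) as b -> infinity
        return 2.0 / math.sqrt(1 + abs(a) ** 2)
    return 2 * abs(a - b) / (math.sqrt(1 + abs(a) ** 2) * math.sqrt(1 + abs(b) ** 2))

assert abs(chordal(0, INF) - 2.0) < 1e-12        # 0 and infinity are antipodal
assert abs(chordal(0, 1j) - math.sqrt(2)) < 1e-12
```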

# Complex Functions

A complex function is one that takes complex numbers as inputs and maps them to complex numbers, which we write as $f:\mathbb {C} \to \mathbb {C}$  . Unless explicitly stated otherwise, whenever the term function appears, we will mean a complex function. A function can also be multi-valued – for example, ${\sqrt {z}}$  takes two values at every non-zero number. This notion will be explained in more detail in later chapters.

A plot of $|z^{2}|$  as $z$  ranges over the complex plane

A complex function $f(z):\mathbb {C} \to \mathbb {C}$  will sometimes be written in the form $f(z)=f(x+yi)=u(x,y)+v(x,y)i$  , where $u,v$  are real-valued functions of two real variables. We can convert between this form and one expressed strictly in terms of $z$  through the use of the following identities:

$x={\frac {z+{\bar {z}}}{2}},y={\frac {1}{i}}{\frac {z-{\bar {z}}}{2}}$
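These identities are immediate to verify with Python's built-in complex type:

```python
z = complex(2.5, -4.0)

# recover the real and imaginary parts from z and its conjugate
x = (z + z.conjugate()) / 2
y = (z - z.conjugate()) / 2j

assert x == 2.5   # x = (z + conj(z)) / 2
assert y == -4.0  # y = (z - conj(z)) / (2i)
```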

While real functions can be graphed on the x-y plane, complex functions map a two-dimensional space to a two-dimensional space, so visualizing one would require four dimensions. Since this is impossible, we will often use three-dimensional plots of $\Re (f(z))$ , $\Im (f(z))$ , and $|f(z)|$  to gain an understanding of what the function "looks" like.

For an example of this, take the function $f(z)=z^{2}=(x^{2}-y^{2})+(2xy)i$  . The plot of the surface $|z^{2}|=x^{2}+y^{2}$  is shown to the right.

Another common way to visualize a complex function is to graph input-output regions. For instance, consider the same function $f(z)=z^{2}$  and the input region being the "quarter disc" $Q\cap \mathbb {D}$  obtained by taking the region

$Q=\{x+yi:x,y\geq 0\}$  (i.e. $Q$  is the first quadrant)

and intersecting this with the disc $\mathbb {D}$  of radius 1:

$\mathbb {D} =\{z:|z|\leq 1\}$

If we imagine inputting every point of $Q\cap \mathbb {D}$  into $f$  , marking the output point, and then graphing the set $f(Q\cap \mathbb {D} )$  of output points, the output region would be $UHP\cap \mathbb {D}$  where

$UHP=\{x+yi:y\geq 0\}$  ($UHP$  is called the upper half plane).

So, the squaring function "rotationally stretches" the input region to produce the output region. This can be seen using the polar-coordinate representation of $\mathbb {C}$  , $z=r{\text{cis}}(\theta )$  . For example, if we consider points on the unit circle $S^{1}=\{z:|z|=1\}$  (i.e. the set "$r=1$ ") with $0\leq \theta \leq {\tfrac {\pi }{2}}$  then the squaring function acts as follows:

$f(z)=1{\text{cis}}(\theta )^{2}={\text{cis}}(2\theta )$

(here we have used ${\text{cis}}(\theta ){\text{cis}}(\phi )={\text{cis}}(\theta +\phi )$ ). We see that a point having angle $\theta$  is mapped to the point having angle $2\theta$  . If $\theta$  is small, meaning that the point is close to $z=1$  , then this means the point doesn't move very far. As $\theta$  becomes larger, the difference between $\theta$  and $2\theta$  becomes larger, meaning that the squaring function moves the point further. If $\theta ={\tfrac {\pi }{2}}$  (i.e. $z=i$ ) then $2\theta =\pi$  (i.e. $z^{2}=-1$ ).

# Limits and Continuous Functions

In this section, we

• introduce a 'broader class of limits' than known from real analysis (namely limits with respect to a subset of $\mathbb {C}$ ) and
• characterise continuity of functions mapping from a subset of the complex numbers to the complex numbers using this 'class of limits'.

## Limits of complex functions with respect to subsets of the preimage

We shall now define and deal with statements of the form

$\lim _{z\to z_{0} \atop z\in B'}f(z)=w$

for $B\subseteq \mathbb {C} ,f:B\to \mathbb {C} ,B'\subseteq B,w\in \mathbb {C}$  , and prove two lemmas about these statements.

Definition 2.2.1:

Let $B\subseteq \mathbb {C}$  be a set, let $f:B\to \mathbb {C}$  be a function, let $B'\subseteq B$  , let $z_{0}\in B'$  and let $w\in \mathbb {C}$  . If

$\forall \varepsilon >0:\exists \delta >0:{\bigl (}z\in B'\cap B(z_{0},\delta )\Rightarrow |f(z)-w|<\varepsilon {\bigr )}$

we define:

$\lim _{z\to z_{0} \atop z\in B'}f(z):=w$

Lemma 2.2.2:

Let $B\subseteq \mathbb {C}$  be a set, let $f:B\to \mathbb {C}$  be a function, let $B''\subseteq B'\subseteq B$  , let $z_{0}\in B''$  and $w\in \mathbb {C}$  . If

$\lim _{z\to z_{0} \atop z\in B'}f(z)=w$

then

$\lim _{z\to z_{0} \atop z\in B''}f(z)=w$

Proof: Let $\varepsilon >0$  be arbitrary. Since

$\lim _{z\to z_{0} \atop z\in B'}f(z)=w$

there exists a $\delta >0$  such that

$z\in B'\cap B(z_{0},\delta )\Rightarrow |f(z)-w|<\varepsilon$

But since $B''\subseteq B'$  , we also have $B''\cap B(z_{0},\delta )\subseteq B'\cap B(z_{0},\delta )$  , and thus

$z\in B''\cap B(z_{0},\delta )\Rightarrow z\in B'\cap B(z_{0},\delta )\Rightarrow |f(z)-w|<\varepsilon$

and therefore

$\lim _{z\to z_{0} \atop z\in B''}f(z)=w$ $\Box$

Lemma 2.2.3:

Let $B\subseteq \mathbb {C}$  , $f:B\to \mathbb {C}$  be a function, $O\subseteq B$  be open, $z_{0}\in O$  and $w\in \mathbb {C}$  . If

$\lim _{z\to z_{0} \atop z\in O}f(z)=w$

then for all $B'\subseteq B$  such that $z_{0}\in B'$  :

$\lim _{z\to z_{0} \atop z\in B'}f(z)=w$
Proof

Let $B'\subseteq B$  such that $z_{0}\in B'$  .

First, since $O$  is open, we may choose $\delta _{1}>0$  such that $B(z_{0},\delta _{1})\subseteq O$  .

Let now $\varepsilon >0$  be arbitrary. As

$\lim _{z\to z_{0} \atop z\in O}f(z)=w$

there exists a $\delta _{2}>0$  such that:

$z\in B(z_{0},\delta _{2})\cap O\Rightarrow |f(z)-w|<\varepsilon$

We define $\delta :=\min\{\delta _{1},\delta _{2}\}$  and obtain:

$z\in B(z_{0},\delta )\cap B'\Rightarrow z\in B(z_{0},\delta _{1})\cap B(z_{0},\delta _{2})\Rightarrow z\in B(z_{0},\delta _{2})\cap O\Rightarrow |f(z)-w|<\varepsilon$ $\Box$

## Continuity of complex functions

We recall that a function

$f:M\to M'$

where $M,M'$  are metric spaces, is continuous if and only if

$x_{l}\to x,l\to \infty \Rightarrow f(x_{l})\to f(x)$

for all convergent sequences $(x_{l})_{l\in \mathbb {N} }$  in $M$  .

Theorem 2.2.4:

Let $B\subseteq \mathbb {C}$  and $f:B\to \mathbb {C}$  be a function. Then $f$  is continuous if and only if

$\forall z_{0}\in B:\lim _{z\to z_{0} \atop z\in B}f(z)=f(z_{0})$
Proof

## Exercises

1. Prove that if we define
$f:\mathbb {C} \to \mathbb {C} ,f(z)={\begin{cases}{\dfrac {z^{2}}{|z|^{2}}}&:z\neq 0\\1&:z=0\end{cases}}$
then $f$  is not continuous at $0$  . Hint: Consider the limit with respect to different lines through $0$  and use theorem 2.2.4.

# Complex Derivatives

## Complex differentiability

Let us now define what complex differentiability is.

Definition 2.3.1:

Let $S\subseteq \mathbb {C}$  , let $f:S\to \mathbb {C}$  be a function and let $z_{0}\in S$  . $f$  is called complex differentiable at $z_{0}$  if and only if there exists a $w\in \mathbb {C}$  such that:

$\lim _{z\to z_{0} \atop z\in S}{\frac {f(z)-f(z_{0})}{z-z_{0}}}=w$
Example 2.3.2

The function

$f:\mathbb {C} \to \mathbb {C} ,f(z)={\bar {z}}$

is nowhere complex differentiable.

Proof

Let $z_{0}\in \mathbb {C}$  be arbitrary. Assume that $f$  is complex differentiable at $z_{0}$  , i.e. that

$\lim _{z\to z_{0} \atop z\in \mathbb {C} }{\frac {{\bar {z}}-{\bar {z}}_{0}}{z-z_{0}}}$

exists.

We choose

{\begin{aligned}A:=\{z\in \mathbb {C} |\Re z=\Re z_{0}\}\\B:=\{z\in \mathbb {C} |\Im z=\Im z_{0}\}\end{aligned}}

Due to lemma 2.2.3, which is applicable since of course $\mathbb {C}$  is open, we have:

$\lim _{z\to z_{0} \atop z\in A}{\frac {{\bar {z}}-{\bar {z}}_{0}}{z-z_{0}}}=\lim _{z\to z_{0} \atop z\in \mathbb {C} }{\frac {{\bar {z}}-{\bar {z}}_{0}}{z-z_{0}}}=\lim _{z\to z_{0} \atop z\in B}{\frac {{\bar {z}}-{\bar {z}}_{0}}{z-z_{0}}}$

But

{\begin{aligned}&\lim _{z\to z_{0} \atop z\in A}{\frac {{\bar {z}}-{\bar {z}}_{0}}{z-z_{0}}}=\lim _{z\to z_{0} \atop z\in A}{\frac {\Re (z-z_{0})-i\Im (z-z_{0})}{\Re (z-z_{0})+i\Im (z-z_{0})}}=-1\\\\&\lim _{z\to z_{0} \atop z\in B}{\frac {{\bar {z}}-{\bar {z}}_{0}}{z-z_{0}}}=\lim _{z\to z_{0} \atop z\in B}{\frac {\Re (z-z_{0})-i\Im (z-z_{0})}{\Re (z-z_{0})+i\Im (z-z_{0})}}=1\end{aligned}}

a contradiction.$\Box$
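The two directional limits in this proof can also be seen numerically: evaluating the difference quotient of $f(z)={\bar {z}}$  along the two lines gives $-1$  and $1$ . A small illustrative sketch:

```python
z0 = complex(0.7, -1.2)

def quotient(z):
    # difference quotient for f(z) = conj(z) at z0
    return (z.conjugate() - z0.conjugate()) / (z - z0)

h = 1e-6
vertical = quotient(z0 + complex(0.0, h))    # approach along Re z = Re z0 (the set A)
horizontal = quotient(z0 + complex(h, 0.0))  # approach along Im z = Im z0 (the set B)

assert abs(vertical - (-1)) < 1e-9
assert abs(horizontal - 1) < 1e-9
```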

## The Cauchy–Riemann equations

We can define a natural bijective function from $\mathbb {C}$  to $\mathbb {R} ^{2}$  as follows:

$\Phi (x+yi):=(x,y)$

In fact, $\Phi$  is a vector space isomorphism between $\mathbb {C} ^{1}$  and $\mathbb {R} ^{2}$  .

The inverse of $\Phi$  is given by

$\Phi ^{-1}:\mathbb {R} ^{2}\to \mathbb {C} ,\Phi ^{-1}(x,y)=x+yi$

Theorem and definitions 2.3.3:

Let $O\subseteq \mathbb {C}$  be open, let $f:O\to \mathbb {C}$  be a function and let $z_{0}=x_{0}+y_{0}i\in O$  . If $f$  is complex differentiable at $z_{0}$  , then the functions

{\begin{aligned}&u:\Phi (O)\to \mathbb {R} ,u(x,y)=\Re f(x+yi)\\&v:\Phi (O)\to \mathbb {R} ,v(x,y)=\Im f(x+yi)\end{aligned}}

are well-defined, differentiable at $(x_{0},y_{0})$  and satisfy the equations

{\begin{aligned}&\partial _{x}u(x_{0},y_{0})=\partial _{y}v(x_{0},y_{0})\\&\partial _{y}u(x_{0},y_{0})=-\partial _{x}v(x_{0},y_{0})\end{aligned}}

These equations are called the Cauchy-Riemann equations.

Proof

1. We prove well-definedness of $u,v$  .

Let $(x,y)\in \Phi (O)$  . We apply the inverse function on both sides to obtain:

$x+yi\in \Phi ^{-1}(\Phi (O))=O$

where the last equality holds since $\Phi$  is bijective (for any bijective $f:S_{1}\to S_{2}$  we have $f^{-1}{\bigl (}f(S_{3}){\bigr )}=f{\bigl (}f^{-1}(S_{3}){\bigr )}=S_{3}$  if $S_{3}\subseteq S_{1}$  ; see exercise 1).

2. We prove differentiability of $u$  and $v$  and the Cauchy-Riemann equations.

We define

{\begin{aligned}S_{1}:=\{z\in \mathbb {C} :\Re (z)=\Re (z_{0})\}\cap O\\S_{2}:=\{z\in \mathbb {C} :\Im (z)=\Im (z_{0})\}\cap O\end{aligned}}

Then we have:

{\begin{aligned}\partial _{x}u(x_{0},y_{0})&=\lim _{x\to x_{0}}{\frac {u(x,y_{0})-u(x_{0},y_{0})}{x-x_{0}}}&\\&=\lim _{x\to x_{0}}{\frac {\Re {\bigl (}f(x+y_{0}i){\bigr )}-\Re {\bigl (}f(x_{0}+y_{0}i){\bigr )}}{x-x_{0}}}&\\&=\Re \left(\lim _{x\to x_{0}}{\frac {f(x+y_{0}i)-f(x_{0}+y_{0}i)}{x-x_{0}}}\right)&{\text{continuity of }}\Re \\&=\Re \left(\lim _{z\to z_{0} \atop z\in S_{2}}{\frac {f(z)-f(z_{0})}{z-z_{0}}}\right)&\\&=\Re \left(\lim _{z\to z_{0} \atop z\in S_{1}}{\frac {f(z)-f(z_{0})}{z-z_{0}}}\right)&{\text{lemma 2.2.3}}\\&=\Re \left(\lim _{y\to y_{0}}{\frac {f(x_{0}+yi)-f(x_{0}+y_{0}i)}{yi-y_{0}i}}\right)&\\&=\Re \left((-i)\lim _{y\to y_{0}}{\frac {f(x_{0}+yi)-f(x_{0}+y_{0}i)}{y-y_{0}}}\right)&i^{-1}=-i\\&=\Im \left(\lim _{y\to y_{0}}{\frac {f(x_{0}+yi)-f(x_{0}+y_{0}i)}{y-y_{0}}}\right)&\\&=\partial _{y}v(x_{0},y_{0})\end{aligned}}

From these equations follows the existence of $\partial _{x}u(x_{0},y_{0}),\partial _{y}v(x_{0},y_{0})$  , since for example

$\lim _{z\to z_{0} \atop z\in S_{2}}{\frac {f(z)-f(z_{0})}{z-z_{0}}}$

exists due to lemma 2.2.3.

The proof for

$\partial _{y}u(x_{0},y_{0})=-\partial _{x}v(x_{0},y_{0})$

and the existence of $\partial _{y}u(x_{0},y_{0}),\partial _{x}v(x_{0},y_{0})$  we leave for exercise 2.$\Box$

## Holomorphic functions

Definitions 2.3.4:

Let $S\subseteq \mathbb {C}$  and let $f:S\to \mathbb {C}$  be a function. We call $f$  holomorphic if and only if $f$  is complex differentiable at every $z_{0}\in S$ . In this case, the function

$f':S\to \mathbb {C} ,f'(z_{0})=\lim _{z\to z_{0} \atop z\in S}{\frac {f(z)-f(z_{0})}{z-z_{0}}}$

is called the complex derivative of $f$ . We write $H(S)$  for the set of holomorphic functions defined on $S$  .

## Exercises

1. Let $S_{1},S_{2},S_{3}$  be sets such that $S_{3}\subseteq S_{1}$  , and let $f:S_{1}\to S_{2}$  be a bijective function. Prove that $f^{-1}{\bigl (}f(S_{3}){\bigr )}=f{\bigl (}f^{-1}(S_{3}){\bigr )}=S_{3}$  .
2. Let $O\subseteq \mathbb {C}$  be open, let $f:O\to \mathbb {C}$  be a function and let $z_{0}=x_{0}+y_{0}i\in O$  . Prove that if $f$  is complex differentiable at $z_{0}$  , then $\partial _{y}u(x_{0},y_{0})$  and $\partial _{x}v(x_{0},y_{0})$  exist and satisfy the equation $\partial _{y}u(x_{0},y_{0})=-\partial _{x}v(x_{0},y_{0})$  .

# Holomorphic and Harmonic Functions

From our look at complex derivatives, we now examine analytic functions, the Cauchy-Riemann equations, and harmonic functions.

2.4.1 Holomorphic functions

Note: Holomorphic functions are sometimes referred to as analytic functions. This equivalence will be shown later, though the terms may be used interchangeably until then.

Definition: A complex valued function $f(z)$  is holomorphic on an open set $G$  if it has a derivative at every point in $G$  .

Here, holomorphicity is defined over an open set, whereas differentiability can hold at just a single point. If f(z) is holomorphic over the entire complex plane, we say that f is entire. As an example, all polynomial functions of z are entire. (proof)

2.4.2 The Cauchy-Riemann Equations

The definition of holomorphic suggests a relationship between the real and imaginary parts of the function. Suppose $f(z)=u(x,y)+v(x,y)i$  is differentiable at $z_{0}=x_{0}+y_{0}i$  . Then the limit

$\lim _{\Delta z\to 0}{\frac {f(z_{0}+\Delta z)-f(z_{0})}{\Delta z}}$

can be determined by letting $\Delta z\,(=\Delta x+\Delta y\,i)$  approach zero from any direction in $\mathbb {C}$  .

If it approaches horizontally, we have $f'(z_{0})={\frac {\partial u}{\partial x}}(x_{0},y_{0})+i{\frac {\partial v}{\partial x}}(x_{0},y_{0})$  . Similarly, if it approaches vertically, we have $f'(z_{0})={\frac {\partial v}{\partial y}}(x_{0},y_{0})-i{\frac {\partial u}{\partial y}}(x_{0},y_{0})$  . By equating the real and imaginary parts of these two equations, we arrive at:

${\frac {\partial u}{\partial x}}={\frac {\partial v}{\partial y}},{\frac {\partial v}{\partial x}}=-{\frac {\partial u}{\partial y}}$

These are known as the Cauchy-Riemann Equations, and they lead us to an important theorem.

Theorem: Let a function $f(z)=u(x,y)+v(x,y)i$  be defined on an open set $G$  containing a point $z_{0}$  . If the first partials of $u,v$  exist in $G$ , are continuous at $z_{0}$ , and satisfy the Cauchy-Riemann equations there, then f is differentiable at $z_{0}$  . Furthermore, if these conditions hold throughout $G$ , then $f$  is analytic in $G$  . (proof).
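We can illustrate the Cauchy-Riemann equations numerically for $f(z)=z^{2}$ , whose real and imaginary parts are $u=x^{2}-y^{2}$  and $v=2xy$ . The finite-difference helper below is our own illustrative code, not part of the text:

```python
def u(x, y):
    # real part of z^2
    return x * x - y * y

def v(x, y):
    # imaginary part of z^2
    return 2 * x * y

def partial(g, x, y, wrt, h=1e-6):
    # central-difference approximation of a first partial derivative
    if wrt == "x":
        return (g(x + h, y) - g(x - h, y)) / (2 * h)
    return (g(x, y + h) - g(x, y - h)) / (2 * h)

x0, y0 = 1.5, -0.8
# u_x = v_y and v_x = -u_y at (x0, y0)
assert abs(partial(u, x0, y0, "x") - partial(v, x0, y0, "y")) < 1e-6
assert abs(partial(v, x0, y0, "x") + partial(u, x0, y0, "y")) < 1e-6
```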

2.4.3 Harmonic Functions

Now we move to Harmonic functions. Recall the Laplace equation, $\nabla ^{2}(\phi ):={\frac {\partial ^{2}(\phi )}{\partial x^{2}}}+{\frac {\partial ^{2}(\phi )}{\partial y^{2}}}=0$

Definition: A real valued function $\phi (x,y)$  is harmonic in a domain $D$  if all of its second partials are continuous in $D$  and if at each point in $D$ , $\phi$  satisfies the Laplace equation.

Theorem: If $f(z)=u(x,y)+v(x,y)i$  is analytic in a domain $D$ , then both $u(x,y),v(x,y)$  are harmonic in $D$ . (proof)
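For example, $u(x,y)=x^{2}-y^{2}$ , the real part of the analytic function $z^{2}$ , should be harmonic. A quick finite-difference check of the Laplacian (our own illustrative code, names ours):

```python
def u(x, y):
    # real part of z^2; it should be harmonic
    return x * x - y * y

def laplacian(g, x, y, h=1e-3):
    # second-order central differences for g_xx + g_yy
    gxx = (g(x + h, y) - 2 * g(x, y) + g(x - h, y)) / h ** 2
    gyy = (g(x, y + h) - 2 * g(x, y) + g(x, y - h)) / h ** 2
    return gxx + gyy

# the Laplacian of u vanishes (up to floating-point error)
assert abs(laplacian(u, 0.3, 1.7)) < 1e-6
```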

# Polynomial Functions

## Polynomial Function

A function defined by a polynomial rule is known as a polynomial function. A polynomial function with only one variable looks like this: $f(t)=t^{3}-2t^{2}+3t$ .

## Graphing Polynomial Functions

Graphing such functions can be challenging if you don't know what you are doing. A graphing calculator is very helpful in this process if one is available. To graph polynomial functions on a graphing calculator, follow these steps:

1. Turn on the calculator.

2. Press the Y= button.

3. In the \Y1 area, enter the equation (for example, using the equation above, you would press "X", then the caret button, then 3, then -2, then the "X²" button, then +3X).

4. Press the GRAPH button.

If no graph appears, then the information entered is incorrect, or you need to widen the window, in which case you would press the ZOOM button and then "Zoom Out".
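If no calculator is at hand, the same curve can be traced by tabulating points with a short program; a sketch (the sample window is an arbitrary choice):

```python
# Tabulate the example polynomial f(t) = t^3 - 2t^2 + 3t; plotting these
# points by hand traces the same curve the calculator would draw.

def f(t):
    return t**3 - 2 * t**2 + 3 * t

for t in range(-2, 4):  # an arbitrary sample window
    print(f"f({t}) = {f(t)}")
```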

# Exponential and Trigonometric Functions

Consider the real-valued exponential function $\exp :\mathbb {R} \rightarrow \mathbb {R}$  defined by $\exp(x)=e^{x}$  . It has the following properties:

1) $e^{x}\neq 0\quad \forall x\in \mathbb {R}$

2) $e^{x+y}=e^{x}e^{y}\quad \forall x,y\in \mathbb {R}$

3) $(e^{x})'=e^{x}\quad \forall x\in \mathbb {R}$

We want to extend the exponential function $\exp$  to the complex numbers in such a way that

1) $e^{z}\neq 0\quad \forall z\in \mathbb {C}$

2) $e^{z+w}=e^{z}e^{w}\quad \forall z,w\in \mathbb {C}$

3) $(e^{z})'=e^{z}\quad \forall z\in \mathbb {C}$

But $e^{z}$  has already been defined for $z=i\theta$  and we have $e^{i\theta }=\cos \theta +i\sin \theta$ .
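These requirements can be spot-checked numerically with Python's standard library (a sketch; the sample values of $\theta$ , $z$ , and $w$  are arbitrary):

```python
# Spot-check the desired properties of the complex exponential with
# Python's standard library.
import cmath
import math

theta = 0.73  # an arbitrary angle
# Euler's formula: e^{i*theta} = cos(theta) + i*sin(theta)
lhs = cmath.exp(1j * theta)
rhs = complex(math.cos(theta), math.sin(theta))
assert abs(lhs - rhs) < 1e-12

# Property 2: e^{z+w} = e^z * e^w for arbitrary complex z, w
z, w = 1 + 2j, -0.5 + 0.3j
assert abs(cmath.exp(z + w) - cmath.exp(z) * cmath.exp(w)) < 1e-12

# Property 1: e^z is never zero
assert cmath.exp(z) != 0
```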

# Logarithmic Functions

## Logarithm

A logarithm is the exponent to which a base must be raised to obtain a given value. Exponential equations can be written as logarithmic equations, and vice versa. Exponential equations are of the form $b^{x}=a$ , and logarithmic equations are of the form $\log _{b}a=x$ . When converting from exponential to logarithmic form, and vice versa, there are some key points to keep in mind:

1. The base of the exponential becomes the base of the logarithm.

Example:

$3^{7}=2187$

$\log _{3}2187=7$

2. The exponent is the logarithm.

Example:

$5^{2}=25$

$\log _{5}25=2$

3. Any nonzero base to the 0 power is 1.

$6^{0}=1$

$\log _{6}1=0$

4. An exponent or log can be negative.

$4^{-2}=0.0625$

$\log _{4}0.0625=-2$

5. The exponent and the log can be variables.

$4^{y}=1024$

$\log _{4}1024=y$

A logarithm is also an exponent. This means that the exponent rules apply to logarithms as well.

A common logarithm is a logarithm that has a base of 10. When no base is written for a logarithm, the base is understood to be 10. For example:

$\log 6=\log _{10}6$

Logarithmic functions are inverses of exponential functions, since logarithms are inverses of exponents. For example:

$y=3^{x}$

is the inverse of

$y=\log _{3}x$

And, since these two functions are inverses, their domains and ranges are switched. So, for

$y=3^{x}$

the domain is all real numbers and the range is $y>0$ .

And, for

$y=\log _{3}x$

the domain is $x>0$  and the range is all real numbers.
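The conversion rules above can be checked with Python's standard library, where `math.log(a, b)` computes the base-`b` logarithm of `a` (a sketch reusing the worked examples from this section):

```python
# Check the worked logarithm examples from this section.
import math

assert round(math.log(2187, 3)) == 7      # 3^7 = 2187  <=>  log_3 2187 = 7
assert round(math.log(25, 5)) == 2        # 5^2 = 25    <=>  log_5 25 = 2
assert math.log(1, 6) == 0                # 6^0 = 1     <=>  log_6 1 = 0
assert round(math.log(0.0625, 4)) == -2   # 4^(-2) = 0.0625
assert round(math.log(1024, 4)) == 5      # 4^5 = 1024, solving 4^y = 1024
assert abs(math.log10(6) - math.log(6, 10)) < 1e-12  # "log 6" means base 10
```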

# Inverse Trigonometric Functions

## Solve Equations Using Inverses

Oftentimes, the value of a trigonometric function for an angle is known and the value to be found is the measure of the angle. In order to find the inverse of trigonometric functions, the idea of inverse functions is applied.

The inverse of a function is the relation in which all the values of x and y are reversed. For example, y = sinx has an inverse of x = siny.

When graphed, the inverse x = siny fails the vertical line test, so it is not a function. The same is true of the other trigonometric inverses.

In order to make trigonometric inverses functions, the domain of the original trigonometric function has to be restricted. These are known as principal values. Principal values are those values that are in the restricted domains. In order to differentiate trigonometric functions with restricted domains, capital letters are used.

Principal Values of Sine, Cosine, and Tangent

y = Sinx if and only if y = sinx and -pi/2 ≤ x ≤ pi/2.

y = Cosx if and only if y = cosx and 0 ≤ x ≤ pi.

y = Tanx if and only if y = tanx and -pi/2 < x < pi/2.

The Arcsine function is the inverse of the Sine function. It is symbolized by Sin⁻¹ or Arcsin. These are its characteristics:

1. The set of real numbers from -1 to 1 is its domain.

2. The set of angle measures -pi/2 ≤ x ≤ pi/2 is its range.

3. Sin⁻¹y = x if and only if Sinx = y.

4. (Sin⁻¹ ∘ Sin)(x) = (Sin ∘ Sin⁻¹)(x) = x

Arccosine and Arctangent functions are similar to the above definition of the Arcsine function.

Inverse Sine, Cosine, and Tangent

1. The inverse Sine function is y = Sin⁻¹x or y = Arcsinx given y = Sinx.

2. The inverse Cosine function is y = Cos⁻¹x or y = Arccosx given y = Cosx.

3. The inverse Tangent function is y = Tan⁻¹x or y = Arctanx given y = Tanx.

The expressions in each row below are all equivalent. These can be used to rewrite and/or solve trigonometric equations.

y = Sinx ⇔ x = Sin⁻¹y ⇔ x = Arcsiny

y = Cosx ⇔ x = Cos⁻¹y ⇔ x = Arccosy

y = Tanx ⇔ x = Tan⁻¹y ⇔ x = Arctany

## Example 1

Solve an Equation

Solve Sinx = 1/2 by finding the value of x to the nearest degree.

If Sinx = 1/2, then x is the angle between -90 and 90 degrees whose sine is 1/2. So, x = Arcsin(1/2). Use a calculator to find x.

For a TI-84 Plus Silver Edition:

1. Press 2nd

2. Press SIN⁻¹

3. Enter 1/2

4. Press )

5. Press ENTER

The answer is 30. So, x = 30 degrees.
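The same computation can be done without a calculator; a sketch using Python's standard library (`math.asin` returns radians, so the result is converted to degrees):

```python
# Inverse sine of 1/2, converted from radians to degrees.
import math

x = math.degrees(math.asin(0.5))
assert round(x) == 30  # x = 30 degrees, matching the calculator result
```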

The inverse of trigonometric functions is also used in application problems.

## Example 2

Apply an Inverse to Solve a Problem

The ship Vegas sailed West 25 miles before turning south. When Vegas ran into trouble and radioed for help, the rescue boat found that the fastest way to them covered a distance of 50 miles. The cosine of the angle that the rescue boat should sail at is 0.5. Find the angle, to the nearest hundredth of a degree, at which the rescue boat should travel to give Vegas help.

Cosx = 25/50

Cos⁻¹(25/50) = Cos⁻¹(0.5) = 60

60 degrees south of west

## Trigonometric Values

The values of trigonometric expressions are also found using a calculator.

## Example 3

Find a Trigonometric Value

Find each value. Write angle measures in radians. Round to the nearest hundredth.

ArcTan(1)

For TI-84 Plus Silver Edition:

1. Press 2nd

2. Press TAN⁻¹

3. Enter 1

4. Press ENTER

0.7853981634

So, ArcTan(1) = 0.7853981634, which rounds to 0.79 radians.
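The same value can be reproduced with Python's standard library (a sketch; `math.atan` returns radians, and the exact value is pi/4):

```python
# Arctan(1) in radians; the exact value is pi/4.
import math

value = math.atan(1)
assert abs(value - math.pi / 4) < 1e-12
assert round(value, 2) == 0.79  # rounded to the nearest hundredth
```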

# The Basics

## What is a residue?

When we say we want the Residue of a function at a point, we mean the coefficient of the term of the expanded function that has a simple pole (something that gives a zero in the denominator) at that point. For example, the residue of the function:

$f(x)={\frac {3}{x+1}}$

about $-1$  is 3.

Similarly, the residue of:

$f(x)={\frac {3}{x+1}}+{\frac {3}{x+2}}$

about $-1$  is also 3, since the second term has no pole at $-1$ .
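The residue read off above can be confirmed numerically: by Cauchy's theory (used later in this book), the residue at a point equals $1/(2\pi i)$  times the integral of the function around a small circle about that point. A sketch with the standard library (the circle radius and sample count are arbitrary choices):

```python
# Recover the residue of f(z) = 3/(z+1) + 3/(z+2) at z = -1 numerically:
# residue = (1/(2*pi*i)) * integral of f over a small circle around -1.
import cmath
import math

def f(z):
    return 3 / (z + 1) + 3 / (z + 2)

z0, r, n = -1, 0.25, 2000  # center, circle radius, number of samples
total = 0
for k in range(n):
    theta = 2 * math.pi * k / n
    z = z0 + r * cmath.exp(1j * theta)                        # point on circle
    dz = 1j * r * cmath.exp(1j * theta) * (2 * math.pi / n)   # z'(theta) dtheta
    total += f(z) * dz

residue = total / (2j * math.pi)
assert abs(residue - 3) < 1e-9
```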

Of course, the functions we will be dealing with will be much more complicated: some may have quadratics in the denominator, and some, like $\sin({\frac {1}{z}})$ , may not be well defined at the point in question. Accordingly, there are different types of so-called isolated singularities that we'll run across. These need to be well defined before we continue, because our method of finding the residue varies with the type of the singularity. This is probably the most important point in this chapter.

## Isolated Singularities

There are three types:

1) Removable Singularities

2) Poles of order m

3) Essential Singularities

Which we will cover in detail one-by-one.

### Removable Singularities

The rigorous definition is a function such that $\lim _{z\rightarrow z_{0}}f(z)=k$  where $k$  is some finite constant (you may have to use L'Hôpital's Rule to come to this conclusion).

In layman's terms, this is a function that has a similar term multiplied on the numerator and denominator that can be cancelled.

For example, the following function:

$f(x)={\frac {3(x+1)}{x+1}}$

has a removable singularity at $z_{0}=-1$ .

As for what this has to do with residues: with the rigorous definition, the function's residue at such a point is considered to be 0. If after cancellation some factors remain in the denominator, as in the following function:

$f(x)={\frac {3(x+1)}{(x+1)^{2}}}$

then the singularity is not removable but a pole, the subject of the next section.

### Poles of order m

Again, the rigorous definition: a function f has a pole at $z_{0}$  if $\lim _{z\rightarrow z_{0}}|f(z)|=\infty$ . We classify the order m by the highest negative power appearing in the Laurent series (in more layman's terms, the power left in the denominator after cancellation). Another way of saying this would be:

The order of a pole at $z_{0}$  is the least integer m such that $\lim _{z\to z_{0}}(z-z_{0})^{m}f(z)$  is bounded.

Example:

$f(x)={\frac {3}{x+1}}+{\frac {3}{(x+1)^{2}}}$

has a 2nd order pole about $-1$ . This could be said to follow from the fact that: $(x+1)^{2}f(x)=3(x+1)+3$  for $x$  not equal to $-1$  and thus $\lim _{x\to -1}(x+1)^{2}f(x)=3<\infty$
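The boundedness criterion is easy to see numerically (a sketch added here; the evaluation point close to the pole is an arbitrary choice):

```python
# Illustrate the order-2 pole of f(x) = 3/(x+1) + 3/(x+1)^2 at x = -1:
# (x+1)*f(x) still blows up as x -> -1, but (x+1)^2 * f(x) stays bounded
# and tends to 3, so m = 2 is the least power that tames the pole.

def f(x):
    return 3 / (x + 1) + 3 / (x + 1) ** 2

x = -1 + 1e-6  # a point very close to the pole
assert abs((x + 1) ** 2 * f(x) - 3) < 1e-3   # bounded: the limit is 3
assert abs((x + 1) * f(x)) > 1e5             # unbounded as x -> -1
```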

### Essential Singularity

The rigorous definition is a function such that $\lim _{z\rightarrow z_{0}}f(z)$  is neither bounded nor infinite; the limit simply fails to exist. A good example is one familiar from first-semester calculus classes:

$f(z)=\sin(1/z)$

which has an essential singularity at $0$ .

What typically happens with these functions is that when the Laurent series is examined, it turns out to contain infinitely many negative-power terms. Keeping along the lines of our example, substituting $1/z$  into the Taylor series for sine gives the Laurent expansion:

$f(z)=\sin(1/z)=\sum _{n=0}^{\infty }{\frac {(-1)^{n}}{(2n+1)!\,z^{2n+1}}}$

which indeed has infinitely many negative powers of $z$ .
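A partial sum of this expansion can be compared against $\sin(1/z)$  directly (a sketch; the sample point and the ten-term cutoff are arbitrary choices):

```python
# Compare sin(1/z) against a partial sum of its Laurent expansion
# sum_{n>=0} (-1)^n / ((2n+1)! * z^(2n+1)), which contains infinitely
# many negative powers of z.
import cmath
import math

def laurent_partial_sum(z, terms=10):
    return sum((-1) ** n / (math.factorial(2 * n + 1) * z ** (2 * n + 1))
               for n in range(terms))

z = 2 + 1j  # an arbitrary point away from the singularity at 0
assert abs(cmath.sin(1 / z) - laurent_partial_sum(z)) < 1e-12
```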

This is the only type of isolated singularity where the only known way to determine the residue (the coefficient of the $1/z$  term) is to construct the Laurent series directly and read off the coefficient.

Also, though beyond the scope of this book, there is an interesting theorem regarding functions with essential singularities called Picard's Theorem, which states that in any neighborhood of an essential singularity, the function attains every complex value, with at most one exception.

# Partial Fractions

This is probably the most basic technique, and doesn't require a lot of theory, mainly just algebraic manipulation. However, it does have its limitations: namely, it really only works with quotients of polynomials. It is more of a cookbook method: here's the recipe, follow the steps.

Given two polynomial functions $P(z)$  and $Q(z)$ , where the degree of Q is greater than the degree of P, we define another function to be the quotient of the two polynomials:

$f(z)={\frac {P(z)}{Q(z)}}$

And we note that if we factor Q we obtain:

$Q(z)=(z-z_{0})^{a_{0}}\cdot (z-z_{1})^{a_{1}}\cdot ...$

Then, depending on the multiplicities $a_{j}$ , we can decompose the function into a finite sum of simple and higher-order pole terms:

$f(z)={\frac {P(z)}{Q(z)}}={\frac {A_{0,1}}{z-z_{0}}}+\cdots +{\frac {A_{0,a_{0}}}{(z-z_{0})^{a_{0}}}}+{\frac {A_{1,1}}{z-z_{1}}}+\cdots +{\frac {A_{1,a_{1}}}{(z-z_{1})^{a_{1}}}}+\cdots$

Then the coefficients can be solved for, and the function can be restated in an explicit form with readable residues.

Of course, this is taught a lot better by example and case.

## Case 1, simple one-order factors

We begin with the function:

$f(z)={\frac {1}{z^{2}+3z+2}}$

And note that it can be factored thusly:

$f(z)={\frac {1}{z^{2}+3z+2}}={\frac {1}{(z+1)(z+2)}}$

For this case the correct form we "guess" is like so:

${\frac {1}{(z+1)(z+2)}}={\frac {A_{0}}{z+1}}+{\frac {A_{1}}{z+2}}$

The remaining portion is by algebra; we multiply both sides by $(z+1)(z+2)$ :

$1=A_{0}(z+2)+A_{1}(z+1)=(A_{0}+A_{1})z+(2A_{0}+A_{1})$

Matching coefficients of each power of $z$  gives us two equations:

$0=A_{0}+A_{1}$

$1=2A_{0}+A_{1}$

Thus:

$A_{0}=1$  and $A_{1}=-1$

And our function can be rewritten as:

$f(z)={\frac {1}{z+1}}-{\frac {1}{z+2}}$
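The decomposition just obtained can be spot-checked numerically (a sketch; the sample points are arbitrary choices):

```python
# Spot-check the decomposition 1/(z^2 + 3z + 2) = 1/(z+1) - 1/(z+2)
# at several points, including a complex one.

def original(z):
    return 1 / (z ** 2 + 3 * z + 2)

def decomposed(z):
    return 1 / (z + 1) - 1 / (z + 2)

for z in [0.5, 3.0, -5.25, 2 + 1j]:
    assert abs(original(z) - decomposed(z)) < 1e-12
```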

The remainder of this section discusses suggested fractional forms that aid in the separation; the actual method and theory remain the same.

## The Other Two Cases

Case 2, Unfactorable Terms. Our generic expression above assumed linear factors, but $Q$  may also contain "unfactorable" quadratic terms (i.e., terms that can't be factored using only real numbers; if such terms are factored using complex numbers, the method works as before). To account for these, the "guessed" fraction over an unfactorable term takes a linear numerator $Az+B$ . For example,

$f(z)={\frac {1}{(z^{2}+4)(z-1)}}={\frac {A_{0}x+B_{0}}{z^{2}+4}}+{\frac {A_{1}}{z-1}}$

With $A_{0}$ , $B_{0}$ , and $A_{1}$  to be solved.

Case 3, Term(s) Raised To A Power. The correct "guess" will include a trailing series of decreasing powers of the factor. For example,

$f(z)={\frac {1}{(z+1)^{2}(z-1)}}={\frac {A_{0}}{(z+1)^{2}}}+{\frac {A_{1}}{z+1}}+{\frac {A_{2}}{z-1}}$
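Carrying out the algebra for this example as in Case 1 (multiply through by $(z+1)^{2}(z-1)$  and compare coefficients) gives $A_{0}=-1/2$ , $A_{1}=-1/4$ , $A_{2}=1/4$ ; these values are worked out here, not stated above. A numerical spot-check:

```python
# Verify the decomposition of 1/((z+1)^2 (z-1)) with the solved
# coefficients A0 = -1/2, A1 = -1/4, A2 = 1/4.

def original(z):
    return 1 / ((z + 1) ** 2 * (z - 1))

def decomposed(z):
    return -0.5 / (z + 1) ** 2 - 0.25 / (z + 1) + 0.25 / (z - 1)

for z in [0.5, 2.0, -4.0, 3 + 2j]:
    assert abs(original(z) - decomposed(z)) < 1e-12
```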

## Remark

Again, partial fractioning only really works with quotients of polynomials, and can be a huge hassle for large denominators. The next section looks at a more general way of determining the residue.

# A More "Complex" Solution

There is a much more general, more lovely, all-pole encompassing formula for determining residues. We start off by examining the Laurent series of a function:

$f(z)=\sum _{n=-\infty }^{\infty }a_{n}(z-z_{0})^{n}$

And when examining the expansion we note that the residue at $z_{0}$  is the coefficient $a_{-1}$ , whatever the order of the pole; a second-order pole also contributes an $a_{-2}$  term, and so on. In order to really see what's going on in the formula, it's best to look at the expansion:

$f(z)=...+{\frac {a_{-2}}{(z-z_{0})^{2}}}+{\frac {a_{-1}}{(z-z_{0})}}+a_{0}+a_{1}(z-z_{0})+a_{2}(z-z_{0})^{2}+...$

Say we want the residue at $z_{0}$  of a function with a 2nd-order pole about that point. We first multiply by $(z-z_{0})^{2}$ :

$f(z)={\frac {a_{-2}}{(z-z_{0})^{2}}}+{\frac {a_{-1}}{(z-z_{0})}}+a_{0}+a_{1}(z-z_{0})+a_{2}(z-z_{0})^{2}+...$

$(z-z_{0})^{2}f(z)=a_{-2}+a_{-1}(z-z_{0})+...\,$

We now want to isolate the $a_{-1}$  term, so we take a derivative:

$g(z)={\frac {d}{dz}}((z-z_{0})^{2}f(z))=a_{-1}+...$

Now if we evaluate $g$  at $z_{0}$ , the remaining terms will be zero, thus:

$g(z_{0})={\frac {d}{dz}}{\Big (}(z-z_{0})^{2}f(z){\Big )}{\Bigg |}_{z=z_{0}}=a_{-1}$

gives us the residue. Repeating this same procedure for poles of higher order, the general formula can be obtained quite easily.

## The Residue Formula

$\mathrm {Res} (f,z_{0})=\lim _{z\rightarrow z_{0}}{\frac {1}{(m-1)!}}{\frac {d^{m-1}}{dz^{m-1}}}{\Big (}(z-z_{0})^{m}\cdot f(z){\Big )}$

where $z_{0}$  is the point about which the residue is to be found, $f$  is the function, and $m$  is the order of the pole at $z_{0}$ .

There are some extra pieces in this formula that weren't discussed above: the factorial cancels the extra factors produced by the repeated derivatives, and the limit deals with issues caused by a removable singularity.
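As a sanity check of the formula, here is a numerical sketch for the second-order pole of $f(z)={\frac {1}{(z+1)^{2}(z-1)}}$  at $z_{0}=-1$  (the example function is an arbitrary choice; the derivative is approximated by a central difference):

```python
# Apply the residue formula for the second-order pole of
# f(z) = 1/((z+1)^2 (z-1)) at z0 = -1.  With m = 2 the formula reads
# Res(f, -1) = lim d/dz [ (z+1)^2 f(z) ] = d/dz [ 1/(z-1) ] at z = -1,
# which is -1/(z-1)^2 evaluated at -1, i.e. -1/4.

def g(z):
    # (z+1)^2 * f(z), with the pole factor cancelled analytically
    return 1 / (z - 1)

def derivative(func, z, h=1e-6):
    """Central-difference approximation of a first derivative."""
    return (func(z + h) - func(z - h)) / (2 * h)

residue = derivative(g, -1)
assert abs(residue - (-0.25)) < 1e-9
```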

# Some Consequences

## Simplifying Integrals

Given the following integral:

$\int _{a}^{b}{\frac {1}{x^{2}-1}}dx$

Now with partial fractions (or the residue theorem) we can split this up into a series of mono-pole terms, which would allow us to use substitution and receive logarithmic answers:

$\int _{a}^{b}{\frac {\frac {1}{2}}{x-1}}dx+\int _{a}^{b}{\frac {-{\frac {1}{2}}}{x+1}}dx$

${\frac {1}{2}}\ln |x-1|{\Big |}_{a}^{b}-{\frac {1}{2}}\ln |x+1|{\Big |}_{a}^{b}$

## Cauchy's Residue Theorem

Cauchy's Residue Theorem is a VERY important result from which many other results in complex analysis follow, but more importantly for us, it allows us to calculate integrals using only residues; that is, we can literally integrate without actually 'integrating'. Note: its derivation is given in Complex Analysis, which is listed as a prerequisite for these more advanced tricks.

This is the actual (general) theorem:

Let Γ be a simple closed positively oriented contour. If f is analytic in some simply connected domain D containing Γ and $z_{0}$  is any point inside Γ, then:

$f^{(n)}(z_{0})={\frac {n!}{2\pi i}}\int _{\Gamma }{\frac {f(\gamma )}{(\gamma -z_{0})^{n+1}}}d\gamma$

Upon first look, this has absolutely nothing to do with residues, but mathematicians are very abstract and tricky people.

Take a general function, call it $g(z)$ , just so we don't confuse it with the function in Cauchy's Integral Formula. $g(z)$  can be made into a Laurent Series:

$g(z)=\sum _{n=-\infty }^{\infty }a_{n}(z-z_{0})^{n}$

Now, we integrate over a contour Γ of $g(z)$ , keeping in mind that $g(z)$  has been 'Laurentized':

$\int _{\Gamma }\sum _{n=-\infty }^{\infty }a_{n}(z-z_{0})^{n}dz$

Also by complex analysis, the terms of the series with non-negative powers are analytic and integrate to zero, and thus drop out (which is actually ANOTHER result by Cauchy). This leaves the sum of the integrals of the terms with powers in the denominator. Carrying out the integration term by term and applying the General Cauchy Integral Formula (the proof is tedious, but you can write out the Laurent series and check it yourself), you will arrive at the Cauchy Residue Theorem (Cauchy really did do a lot of this stuff; a running joke in complex analysis classes is, "Isn't every proof done by Cauchy?"):

$\int _{\Gamma }g(z)\,dz=2\pi i\sum _{j=1}^{n}\mathrm {Res} (z_{j})$

Read over that equation a few times and make sure you really grasp what it's saying: to do an integral, you need only calculate the residues. Does it seem useless because you're concerned only with the real number line? You're not being creative enough. Take a contour with one part along the real line and the rest through the complex plane, chosen so that it is easily computable. As a general example:

$\int _{\Gamma }g(z)\,dz=\int _{\text{Real Line}}g(z)\,dz+\int _{\text{A simple curve that closes the loop}}g(z)\,dz=2\pi i\sum _{j=1}^{n}\mathrm {Res} (z_{j})$

Unfortunately, this too borders on a book on Complex Analysis, since these 'simple curves' are discussed therein. But just so as not to leave you hanging, here's a typically simple one so you can try out integration without integrating on your own:

If $f(z)$  is the quotient of two polynomials such that the degree of the denominator is at least two more than the degree of the numerator, then $\lim _{\rho \rightarrow \infty }\int _{C_{\rho ^{+}}}f(z)dz=0$ .

where $C_{\rho ^{+}}$  is the upper half-circle of radius $\rho$  in the plane. This allows you to close loops made with integrals over the real number line and to try out this method for yourself.
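As a concrete instance you can try (the integrand $1/(x^{2}+1)$  is a standard illustrative choice, not from the text): this function has one pole in the upper half-plane, at $z=i$ , with residue $1/(2i)$ , so closing the real line with a large upper semicircle (which contributes nothing, by the degree condition above) predicts $\int _{-\infty }^{\infty }{\frac {dx}{x^{2}+1}}=2\pi i\cdot {\frac {1}{2i}}=\pi$ . A brute-force numerical check:

```python
# "Integrating without integrating": the residue theorem predicts that
# the integral of 1/(x^2+1) over the whole real line equals pi, since the
# only residue enclosed (at z = i) is 1/(2i) and 2*pi*i * 1/(2i) = pi.
# Compare against crude trapezoidal integration over a large interval.
import math

residue_prediction = math.pi

n, a, b = 200000, -1000.0, 1000.0  # arbitrary, but wide, sample window
h = (b - a) / n
total = 0.5 * (1 / (a * a + 1) + 1 / (b * b + 1))
for k in range(1, n):
    x = a + k * h
    total += 1 / (x * x + 1)
total *= h

assert abs(total - residue_prediction) < 0.01
```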

And that's where we leave off!