Circuit Theory/Phasors



Variables are defined the same way as before, with one difference: previously a variable was either "known" or "unknown," but now there is a sort of in-between.

At this point the distinction between a constant function (a number) and a variable function (one that varies with time) needs to be reviewed; see the student-professor dialogue. Knowns are described in terms of functions; unknowns are computed from the knowns and are also functions.

For example:

v(t) = M_v \cos (\omega t + \phi_v) voltage varying with time

Here v(t) is the symbol for a function, built from the symbols M_v, \omega, \phi_v and t. Time is typically never solved for.

Time remains an unknown, and all power, voltage and current expressions become functions of time. Time is not solved for; because it appears everywhere, it can be eliminated from the equations. Integrals and derivatives turn into algebra, and the intermediate answers can be purely numeric (until time is added back in).

At the last moment, time is put back into voltage, current and power and the final solution is a function of time.

Most of the math in this course has these steps:

  1. describe knowns and unknowns in the time domain, describe all equations
  2. change knowns into phasors, eliminate derivatives and integrals in the equations
  3. solve numerically or symbolically for unknowns in the phasor domain
  4. transform unknowns back into the time domain
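The four steps above can be sketched with Python's built-in complex numbers. This is a minimal illustration, not part of the original text: the series R-L circuit and every value in it are hypothetical, and the impedance Z = R + jωL used in step 3 is previewed from later in the chapter.

```python
import cmath
import math

# Hypothetical series R-L circuit driven by v(t) = Mv cos(wt + phi_v)
R, L = 100.0, 0.25                    # ohms, henries (assumed values)
w = 377.0                             # rad/s (about 60 Hz)
Mv, phi_v = 10.0, math.radians(30)    # known source magnitude and phase

# Step 2: transform the known source into a phasor (time drops out)
V = cmath.rect(Mv, phi_v)             # the complex number Mv at angle phi_v

# Step 3: solve algebraically in the phasor domain
Z = R + 1j * w * L                    # series impedance (covered later)
I = V / Z                             # Ohm's law, now pure algebra

# Step 4: transform the unknown back into the time domain
Mi, phi_i = abs(I), cmath.phase(I)
def i_t(t):
    return Mi * math.cos(w * t + phi_i)
```

The recovered i_t satisfies the original time-domain equation v(t) = R i(t) + L di/dt at every instant, which is the whole point of the detour through the phasor domain.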

Passive circuit output is similar to input

If the input to a linear circuit is a sinusoid, then the output from the circuit will be a sinusoid. Specifically, if we have a voltage sinusoid as such:

v(t) = M_v \cos (\omega t + \phi_v)

Then the current through the linear circuit will also be a sinusoid, although its magnitude and phase may be different quantities:

i(t) = M_i \cos (\omega t + \phi_i)

Note that both the voltage and the current are sinusoids with the same radial frequency, but different magnitudes and different phase angles. Passive circuit elements cannot change the frequency of a sinusoid, only the magnitude and the phase. Why then do we need to write \omega in every equation, when it doesn't change? For that matter, why do we need to write out the cos( ) function, if that never changes either? The answer to these questions is that we don't need to write these things every time. Instead, engineers have produced a shorthand way of writing these functions, called "phasors".

Phasor Transform

Phasors are a type of "transform." We are transforming the circuit math so that time disappears. Imagine going to a place where time doesn't exist.

We know that every function can be written as a sum of sine waves of various frequencies and magnitudes (look up Fourier transform animations); the entire world can be constructed from sine waves. Here, one sine wave is considered and its repeating nature (\omega) is stripped away. What's left is a phasor. Since time traces out circles, and if we consider just one of these circles, we can move to a world where time doesn't exist and circles are "things". Instead of the word "world", use the word "domain" or "plane", as in two dimensions.

Math in the Phasor domain is almost the same as DC circuit analysis. What is different is that inductors and capacitors have an impact that needs to be accounted for.

The transform into the phasor plane or domain, and the transform back into time, are based upon Euler's equation. It is the reason you studied imaginary numbers in past math classes.

Euler's Equation

Euler's Formula

Euler started from these three series. Obviously there is a relationship:

\cos x = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} + \cdots
\sin x = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots
e^{x} = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \frac{x^4}{4!} + \frac{x^5}{5!} + \cdots

He did the following:

e^{ix} = 1 + i x + \frac{i^2 x^2}{2!} + \frac{i^ 3 x^3}{3!} + \frac{i^ 4 x^4}{4!} + \frac{i^ 5 x^5}{5!} + \cdots
e^{ix} = 1 + i x - \frac{x^2}{2!} - i\frac{x^3}{3!} + \frac{x^4}{4!} + i\frac{x^5}{5!} - \frac{x^6}{6!} - i\frac{x^7}{7!} \cdots
e^{ix} = (1  - \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} + \cdots) + i (x - \frac{x^3}{3!}  + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots)
e^{ix} = \cos(x) + i \sin(x)

Set x = π and:

e^{i\pi} = -1
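The result above can be spot-checked numerically with Python's standard cmath module; the test angle is arbitrary:

```python
import cmath
import math

# Numeric check of Euler's formula e^{ix} = cos(x) + i sin(x)
x = 0.7                                   # any angle in radians
lhs = cmath.exp(1j * x)
rhs = complex(math.cos(x), math.sin(x))
assert abs(lhs - rhs) < 1e-12

# The special case x = pi gives e^{i pi} = -1
assert abs(cmath.exp(1j * math.pi) + 1) < 1e-12
```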

Euler's formula is ubiquitous in mathematics, physics, and engineering. The physicist Richard Feynman called the equation "our jewel" and "one of the most remarkable, almost astounding, formulas in all of mathematics."

A more general version of Euler's equation is:

M e^{j(\omega t + \phi)} = M \cos (\omega t + \phi) + j M \sin (\omega t + \phi)

This equation allows us to view sinusoids as complex exponential functions. A cyclic function represented as a voltage, current or power given in terms of radial frequency and phase angle turns into an arrow having length M (magnitude) and angle \phi (phase) in the phasor domain/plane, or a point having both a real (X) and imaginary (Y) coordinate in the complex domain/plane.

Generically, the phasor \mathbb{C}, (which could be voltage, current or power) can be written:

\mathbb{C} = X + jY (rectangular coordinates)
\mathbb{C} = M \angle \phi (polar coordinates)

We can graph the point (X, Y) on the complex plane and draw an arrow to it showing the relationship between X,Y,\mathbb{C} and \phi.

Using this fact, we can get the angle from the origin of the complex plane to our point (X, Y), taking care to pick the correct quadrant when X is negative, with the function:

[Angle equation]

\theta_C = \arctan(\frac{Y}{X})

And using the Pythagorean theorem, we can find the magnitude of \mathbb{C} -- the distance from the origin to the point (X, Y) -- as:

[Pythagorean Theorem]

M_C = |\mathbb{C}| = \sqrt{X^2 + Y^2}.
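Python's complex type makes both conversions one-liners; the coordinates below are arbitrary illustration values:

```python
import cmath
import math

# Rectangular form: C = X + jY
X, Y = 3.0, 4.0
C = complex(X, Y)

# Polar form: magnitude and angle
M = abs(C)                  # sqrt(X^2 + Y^2)
theta = cmath.phase(C)      # like arctan(Y/X), but quadrant-correct

assert abs(M - 5.0) < 1e-12
assert abs(theta - math.atan2(Y, X)) < 1e-12

# ...and back from polar to rectangular
assert abs(cmath.rect(M, theta) - C) < 1e-12
```

Note that `cmath.phase` uses the atan2 convention, so it returns the correct angle even when X is negative, where a bare arctan(Y/X) would be off by \pi.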

Phasor Symbols

Phasors don't account for the frequency information, so make sure you write down the frequency some place safe.

Suppose in the time domain:

v(t) = M_v \cos(\omega t + \phi)

In the phasor domain, this voltage is expressed like this:

\mathbb{V} = M_v \angle \phi

The angular frequency \omega disappears from the known functions (but not from the derivative and integral operations) and reappears in the time expression for the unknowns.

Not Vectors

Formally, phasors ("phase vectors") do form a two-dimensional vector space, but the complex numbers also form a field, which gives phasors properties that ordinary spatial vectors lack: most importantly, phasors can be divided. In this book phasors are always written with a large bold letter (as above) and are treated as distinct from spatial vectors. Spatial vectors have two or more independent real axes that are not related by Euler's formula. Phasors and two-dimensional vectors share some math, but the math diverges.

Phasors can be divided, but vectors can not.

Voltage can be divided by current (in the phasor domain), but East can not be divided by North. Vectors lead into the three-or-more-dimensional linear algebra that helps build complicated structures in the real world such as space frames. Phasors lead into more complicated transforms related to differential equation math and electronics.


The math of phasors is the same as ordinary complex-number algebra. Vectors demand new mathematical operations such as the dot product and cross product:

  • The dot product of two vectors finds the shadow of one vector on another.
  • The cross product of two vectors combines them into a third vector perpendicular to both.

Cosine Convention

In this book, all phasors correspond to a cosine function, not a sine function.

It is important to remember which trigonometric function your phasors are mapping to. Since a phasor only includes information on magnitude and phase angle, it is impossible to know whether a given phasor maps to a sin( ) function, or a cos( ) function instead. By convention, this wikibook and most electronic texts/documentation map to the cosine function.

If you end up with an answer that is sin, convert to cos by subtracting 90 degrees:

\sin(\omega t + \phi) = \cos(\omega t + \phi - \frac{\pi}{2})

If your simulator requires the source to be in sin form, but the starting point is cos, then convert to sin by adding 90 degrees:

\cos(\omega t + \phi) = \sin(\omega t + \phi + \frac{\pi}{2})
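Both conversion identities can be verified at an arbitrary instant; the frequency, phase, and time below are arbitrary sample values:

```python
import math

# Check sin(x) = cos(x - pi/2) and cos(x) = sin(x + pi/2)
w, phi, t = 377.0, math.radians(25), 0.004
x = w * t + phi

assert abs(math.sin(x) - math.cos(x - math.pi / 2)) < 1e-12
assert abs(math.cos(x) - math.sin(x + math.pi / 2)) < 1e-12
```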

Phasor Concepts

Inside the phasor domain, concepts appear and are named. Inductors and capacitors can be coupled with their derivative operator transforms and appear as imaginary resistors called "reactance." The combination of resistance and reactance is called "impedance." Impedance can be treated algebraically as a phasor although technically it is not. Power concepts such as real, reactive, apparent and power factor appear in the phasor domain. Numeric math can be done in the phasor domain. Symbols can be manipulated in the phasor domain.

Phasor Math

There is more information about phasors in the Appendix.

Phasor math reduces to complex-number math, which is reviewed below.

Phasor A can be multiplied by phasor B:

[Phasor Multiplication]

\mathbb{A} \times \mathbb{B} = (M_a \times M_b) \angle (\phi_a + \phi_b)

The phase angles add because, in the time domain, they are exponents of two factors being multiplied together.

[Phasor Division]

\mathbb{A} / \mathbb{B} = (M_a / M_b) \angle (\phi_a - \phi_b)

Again the phase angles are treated like exponents ... so they subtract.

The magnitude and angle form of phasors can not be used for addition and subtraction. For this, we need to convert the phasors into rectangular notation:

\mathbb{C} = X + jY

Here is how to convert from polar form (magnitude and angle) to rectangular form (real and imaginary)

X = M \cos (\phi), Y = M \sin (\phi)

Once in rectangular form:

  • Real parts add or subtract
  • Imaginary parts add or subtract

[Phasor Addition]

\mathbb{C} = \mathbb{A} + \mathbb{B} = (X_A + X_B) + j(Y_A + Y_B) = X_C + jY_C

Here is how to convert from rectangular form to polar form:

\mathbb{C} = M_c \angle \phi_c = \sqrt{X^2 + Y^2} \angle \arctan(\frac{Y}{X})

Once in polar phasor form, conversion back into the time domain is easy:

\operatorname{Re}(M e^{j(\omega t + \phi)}) = M \cos (\omega t + \phi)
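All of the rules above fall out of ordinary complex arithmetic, which Python supports natively; the two phasors below are hypothetical values:

```python
import cmath
import math

A = cmath.rect(2.0, math.radians(30))    # the phasor 2 at +30 degrees
B = cmath.rect(4.0, math.radians(-45))   # the phasor 4 at -45 degrees

# Multiplication: magnitudes multiply, angles add
P = A * B
assert abs(abs(P) - 8.0) < 1e-12
assert abs(math.degrees(cmath.phase(P)) - (30 - 45)) < 1e-9

# Addition happens in rectangular form; complex numbers do this natively
S = A + B

# Back to the time domain: Re(S e^{jwt}) = |S| cos(wt + angle(S))
w, t = 100.0, 0.01
g = (S * cmath.exp(1j * w * t)).real
assert abs(g - abs(S) * math.cos(w * t + cmath.phase(S))) < 1e-12
```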

Function transformation Derivation

g(t) represents either voltage, current or power.

g(t)=G_m \cos(\omega t + \phi) starting point
g(t)=G_m \operatorname{Re}(e^{j(\omega t + \phi)}) from Euler's Equation
g(t)=G_m \operatorname{Re}(e^{j\phi}e^{j\omega t}) law of exponents
g(t)=\operatorname{Re}(G_m e^{j\phi}e^{j\omega t}) .... G_m is a real number so it can be moved inside
g(t)=\operatorname{Re}(\mathbb{G} e^{j\omega t})  .... \mathbb{G} is the definition of a phasor, here substituting for G_m e^{j\phi}
g(t) \Leftrightarrow \mathbb{G} where  \mathbb{G} = G_m e^{j\phi}

What happens to the e^{j\omega t} term? It hangs around until it is time to transform back into the time domain. Because it is an exponential, and all the phasor math is the algebra of exponents, the final phasor can simply be multiplied by it; the real part of the resulting expression is the time domain solution.

time domain \Leftrightarrow phasor domain
A \cos(\omega t) \Leftrightarrow A
A \sin(\omega t) \Leftrightarrow -Aj
A \cos(\omega t) + B \sin(\omega t) \Leftrightarrow A - Bj
A \cos(\omega t) - B \sin(\omega t) \Leftrightarrow A + Bj
A \cos(\omega t + \phi) \Leftrightarrow A \cos(\phi) + A \sin(\phi)j
A \sin(\omega t + \phi) \Leftrightarrow A \sin(\phi) - A \cos(\phi)j
A \cos(\omega t - \phi) \Leftrightarrow A \cos(\phi) - A \sin(\phi)j
A \sin(\omega t - \phi) \Leftrightarrow -A \sin(\phi) - A \cos(\phi)j

In all the cases above, remember that \phi is a constant, known in most cases. Thus the phasor is simply a complex number in most calculations.
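Any row of the table can be checked by comparing the time-domain expression against the real part of the claimed phasor times e^{j\omega t}. Here is the A \cos(\omega t) + B \sin(\omega t) \Leftrightarrow A - Bj row, with arbitrary sample values:

```python
import cmath
import math

A, B, w, t = 3.0, 2.0, 50.0, 0.007
G = complex(A, -B)                       # the claimed phasor A - Bj

time_domain = A * math.cos(w * t) + B * math.sin(w * t)
from_phasor = (G * cmath.exp(1j * w * t)).real
assert abs(time_domain - from_phasor) < 1e-12
```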

There is another transform associated with derivatives that is discussed in "phasor calculus."

Transforming calculus operators into phasors

When sinusoids are represented as phasors, differential equations become algebra. This result follows from the fact that the complex exponential is an eigenfunction of the derivative operation:

\frac{d}{dt}(e^{j \omega t}) = j \omega e^{j \omega t}

That is, only the complex amplitude is changed by the derivative operation. Taking the real part of both sides of the above equation gives the familiar result:

\frac{d}{dt} \cos{\omega t} = - \omega \sin{\omega t}\,

Thus, a time derivative of a sinusoid becomes, when transformed into the phasor domain, algebra:

{d \over dt}i(t)\rightarrow j\omega\mathbb{I} where j = \sqrt{-1} is the imaginary unit

In a similar way, the time integral, when transformed into the phasor domain, is:

\int V(t) dt \rightarrow \frac{\mathbb{V}}{j\omega}

There is an integration constant that will have to be dealt with when translating back into the time domain. It doesn't disappear.

The above is true of voltage, current, and power.
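The d/dt \Leftrightarrow j\omega rule can be checked numerically by comparing a finite-difference derivative of i(t) = \operatorname{Re}(\mathbb{I} e^{j\omega t}) against \operatorname{Re}(j\omega \mathbb{I} e^{j\omega t}); the phasor and frequency below are arbitrary:

```python
import cmath
import math

w = 250.0                                # rad/s (assumed)
I = cmath.rect(1.5, math.radians(40))    # an arbitrary current phasor

def i_t(t):
    # i(t) = Re(I e^{jwt})
    return (I * cmath.exp(1j * w * t)).real

t, h = 0.003, 1e-7
numeric = (i_t(t + h) - i_t(t - h)) / (2 * h)          # central difference
phasor = (1j * w * I * cmath.exp(1j * w * t)).real     # Re(jw I e^{jwt})
assert abs(numeric - phasor) < 1e-3
```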

The question is: why does this work? Where is the proof? Let's do this three times: once for a resistor, then an inductor, then a capacitor. The symbols for the current and voltage at the terminals are: V_m \cos(\omega t + \phi_V) and I_m \cos(\omega t + \phi_I)

Resistor Terminal Equation

V=R I . terminal relationship
V_m \cos(\omega t + \phi_V) = R I_m \cos(\omega t + \phi_I) .. substituting example functions
V_m e^{j(\omega t + \phi_V)} = R I_m e^{j(\omega t + \phi_I)} .. Euler's version of the terminal relationship
V_m e^{j\omega t} e^{j \phi_V} = R I_m e^{j\omega t} e^{j \phi_I} .. law of exponents
V_m \cancel{e^{j\omega t}} e^{j \phi_V} = R I_m \cancel{e^{j\omega t}} e^{j \phi_I} .. cancel the common factor on both sides of the equal sign
V_m e^{j \phi_V} = R I_m e^{j \phi_I} .. result
\mathbb{V} = R \mathbb{I} .. phasor expression

Just put the voltage and current in phasor form and substitute to migrate the equation into the phasor domain.

Inductor Terminal Equation

V = L\frac{d}{dt}I ... terminal relationship
V_m \cos(\omega t + \phi_V) = L \frac{d}{dt} (I_m \cos(\omega t + \phi_I)) .. substitution of a generic sinusoid
V_m \cos(\omega t + \phi_V) = -\omega L I_m \sin(\omega t + \phi_I) .. taking the derivative
- \sin(\omega t + \phi_I) = \cos(\omega t + \phi_I + \frac{\pi}{2}) .. trig identity
V_m \cos(\omega t + \phi_V) = \omega L I_m \cos(\omega t + \phi_I + \frac{\pi}{2}) .. substitution
V_m \operatorname{Re}(e^{j(\omega t + \phi_V)}) = \omega L I_m \operatorname{Re}(e^{j(\omega t + \phi_I + \frac{\pi}{2})}) .. from Euler's equation
V_m \operatorname{Re}(e^{j\omega t} e^{j\phi_V}) = \omega L I_m \operatorname{Re}(e^{j\omega t}e^{j\phi_I}e^{j\frac{\pi}{2}}) .. law of exponents
\operatorname{Re}(V_m e^{j\phi_V} e^{j\omega t}) = \operatorname{Re}(e^{j\frac{\pi}{2}} \omega L I_m e^{j\phi_I} e^{j\omega t}) .... real numbers can be moved inside
e^{j\frac{\pi}{2}} = \cos(\frac{\pi}{2}) + j\sin(\frac{\pi}{2}) = j ... substitute in above
\mathbb{I} = I_m e^{j\phi_I} and \mathbb{V} = V_m e^{j\phi_V} .. substitute in above
\operatorname{Re}(\mathbb{V}e^{j\omega t}) = \operatorname{Re}(j \omega L \mathbb{I}e^{j\omega t}) .... definition of phasors
cancel the common e^{j\omega t} factor on both sides:
\mathbb{V} = j \omega L \mathbb{I} .... equation transformed into the phasor domain

Conclusion: put the voltage and current in phasor form and replace \frac{d}{dt} with j\omega to translate the equation into the phasor domain.

Capacitor Terminal Equation

The capacitor derivation has the same form: V and I switch sides, and C replaces L.

I = C\frac{d}{dt}V ... terminal relationship
I_m \cos(\omega t + \phi_I) = C \frac{d}{dt} (V_m \cos(\omega t + \phi_V)) .. substitution of a generic sinusoid
I_m \cos(\omega t + \phi_I) = -\omega C V_m \sin(\omega t + \phi_V) .. taking the derivative
- \sin(\omega t + \phi_V) = \cos(\omega t + \phi_V + \frac{\pi}{2}) .. trig identity
I_m \cos(\omega t + \phi_I) = \omega C V_m \cos(\omega t + \phi_V + \frac{\pi}{2}) .. substitution
I_m \operatorname{Re}(e^{j(\omega t + \phi_I)}) = \omega C V_m \operatorname{Re}(e^{j(\omega t + \phi_V + \frac{\pi}{2})}) .. from Euler's equation
I_m \operatorname{Re}(e^{j\omega t} e^{j\phi_I}) = \omega C V_m \operatorname{Re}(e^{j\omega t}e^{j\phi_V}e^{j\frac{\pi}{2}}) .. law of exponents
\operatorname{Re}(I_m e^{j\phi_I} e^{j\omega t}) = \operatorname{Re}(e^{j\frac{\pi}{2}} \omega C V_m e^{j\phi_V} e^{j\omega t}) .... real numbers can be moved inside
e^{j\frac{\pi}{2}} = \cos(\frac{\pi}{2}) + j\sin(\frac{\pi}{2}) = j ... substitute in above equation
\mathbb{V} = V_m e^{j\phi_V} and \mathbb{I} = I_m e^{j\phi_I} .. substitute in above
\operatorname{Re}(\mathbb{I}e^{j\omega t}) = \operatorname{Re}(j \omega C \mathbb{V}e^{j\omega t}) .... definition of phasors
cancel the common e^{j\omega t} factor on both sides:
\mathbb{I} = j \omega C \mathbb{V} .... equation transformed into the phasor domain

Conclusion: put the voltage and current in phasor form and replace \frac{d}{dt} with j\omega to translate the equation into the phasor domain.

In summary, all the terminal relations have e^{j \omega t} terms that cancel:

V_m e^{j\phi_V}\cancel{e^{j\omega t}} = I_m e^{j\phi_I}\cancel{e^{j\omega t}} * R
\mathbb{V} = \mathbb{I}R
V_m e^{j\phi_V}\cancel{e^{j\omega t}} = I_m e^{j\phi_I}\cancel{e^{j\omega t}} * j\omega L
\mathbb{V} = \mathbb{I}j\omega L
I_m e^{j\phi_I}\cancel{e^{j\omega t}} = V_m e^{j\phi_V}\cancel{e^{j\omega t}} * j\omega C
\mathbb{I} = \mathbb{V}j\omega C

What is interesting about this path of inquiry/logic/thought is a new concept emerges:

Device       \frac{\mathbb{V}}{\mathbb{I}}    \frac{\mathbb{I}}{\mathbb{V}}
Resistor     R                                \frac{1}{R}
Capacitor    \frac{1}{j\omega C}              j\omega C
Inductor     j\omega L                        \frac{1}{j\omega L}
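The table can be evaluated at a specific frequency with a few lines of Python; the component values and \omega below are hypothetical:

```python
# Evaluate V/I for each device at a chosen frequency (assumed values)
w = 1000.0                  # rad/s
R, L, C = 50.0, 0.1, 1e-6   # ohms, henries, farads

Z_R = complex(R)            # resistor: R
Z_L = 1j * w * L            # inductor: jwL
Z_C = 1 / (1j * w * C)      # capacitor: 1/(jwC)

# Devices in series simply add, just like resistors in DC analysis
Z_series = Z_R + Z_L + Z_C

assert abs(Z_L - 100j) < 1e-9
assert abs(Z_C + 1000j) < 1e-6          # 1/(jwC) = -1000j here
assert abs(Z_series - (50 - 900j)) < 1e-6
```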

The j\omega terms that do not cancel come from the derivative terms in the terminal relations. These derivatives belong to the capacitors and inductors themselves, not the sources. Although the derivative is applied to a source, the device the derivative originates from (a capacitor or inductor) keeps its j\omega factor after the transform. So if we write the driving forces as output/input ratios on one side of the equal sign, we can consider the other side of the equal sign as a function. These functions have a name: transfer functions. When we analyze the voltage/current ratios in terms of R, L and C, we can sweep \omega through a variety of driving source frequencies, or keep the frequency constant and sweep through a variety of inductor values, and in this way analyze the circuit response.

Note: transfer functions are an entire section of this course. They also come up in mechanical engineering control system classes, and there are similarities: driving over a bump is like a surge or spike; driving over a curb is like turning on a circuit. When mechanical engineers study vibrations, they too deal with sinusoidal driving functions, but applied to three-dimensional objects rather than the one-dimensional objects of this course.

Phasor Domain to Time Domain

Getting back into the time domain is just about as simple. After working through the equations in the phasor domain and finding \mathbb{V} and \mathbb{I}, the goal is to convert them to V and I.

The phasor solutions will have the form \mathbb{G} = A + Bj = G_m e^{j\phi}; you should now be able to convert between the two forms of the solution. Then:

G = \operatorname{Re}(\mathbb{G} e^{j\omega t})= \operatorname{Re}(G_m e^{j\phi}e^{j\omega t}) = \operatorname{Re}(G_m e^{j(\omega t + \phi)}) = G_m \cos(\omega t + \phi)

If there was an integral involved in the phasor math, then an integration constant needs to be added onto the time domain solution; it is calculated from the initial conditions. If the solution doesn't involve a differential equation, the constant can be computed immediately. Otherwise the phasor solution is treated as the particular solution, and the constant is computed after the homogeneous solution's magnitude is found. See the phasor examples for more detail.

What is not covered

There is another way of thinking about circuits where inductors and capacitors are complex resistances. The idea is:

impedance = resistance + j * reactance

Or symbolically

Z = R + j*X

Here the derivative is attached to the inductance and capacitance, rather than to the terminal equation as we have done. This spreads the math of solving circuit problems into smaller pieces that are more easily checked, but it makes symbolic solutions more complex and can let numeric errors accumulate through intermediate calculations.

The phasor concept is found everywhere. Some day it will be necessary to study this, if you get involved in microwave projects that involve "stubs" or antenna projects that involve a "loading coil" ... the list is huge.

The goal here is to avoid the concepts of conductance, reactance, impedance, susceptance, and admittance ... and avoid the confusion of relating these concepts while comparing phasor math with calculus and Laplace transforms.

Phasor Notation

Remember, a phasor represents a single value that can be displayed in multiple ways.
\mathbb{C} = M \angle \phi "Polar Notation"
C = M e^{j(\omega t + \phi)} "Exponential Notation"
\mathbb{C} = A + jB "Rectangular Notation"
C = M \cos (\omega t + \phi) + j M \sin (\omega t + \phi) "time domain notation"

These 4 notations are all just different ways of writing the same exact thing.

Phasor symbols

When writing on a board or on paper, use hats \hat{V} to denote phasors. Expect variations in books and online:

  • \mathbb{V} (the large bold block-letters we use in this wikibook)
  • \bar{V} ("bar" notation, used by Wikipedia)
  • \vec{V} (bad ... save for vectors ... vector arrow notation)
  • \tilde{V} (some text books)
  • \hat{V} (some text books)

Differential Equations

Phasors Generate the Particular Solution

Phasors can replace calculus, Laplace transforms, and trig identities. But there is one thing they can not do: supply initial conditions/integration constants. When doing problems with both phasors and Laplace, or phasors and calculus, the difference in the answers will be an integration constant.

Differential equations are solved in this course in three steps:

  • finding the particular solution ... particular to the driving function ... particular to the voltage or current source
  • finding the homogeneous solution ... the solution that is the same no matter what the driving function is ... the solution that explores how an initial energy imbalance in the circuit is balanced
  • determining the coefficients, the constants of integration from initial conditions

Phasors Don't Generate Integration Constants

The integration constant doesn't appear in phasor solutions, but it will appear in the Laplace and calculus alternatives to phasor solutions. If the full differential equation is to be solved, it is necessary to see where phasors fail to create a symbol for the unknown integration constant, which is calculated in the third step.

Phasors are the technique used to find the particular AC solution. Integration constants document the initial DC bias or energy difference in the circuit. Finding these constants requires first finding the homogeneous solution which deals with the fact that capacitors may or may not be charged when a circuit is first turned on. Phasors don't completely replace the steps of Differential Equations. Phasors just replace the first step: finding the particular solution.

Differential Equations Review

The goal is to solve Ordinary Differential Equations (ODEs) of the first and second order with phasors, calculus, and Laplace transforms. This way the phasor solution can be compared with content of prerequisite or corequisite math courses. The goal is to do these problems with numeric and symbolic tools such as MATLAB and MuPAD/Mathematica/WolframAlpha. If you have already had the differential equations course, this is a quick review.

The most important thing to understand is the nature of a function. Trig, Calculus, and Laplace transforms and phasors are all associated with functions, not algebra. If you don't understand the difference between algebra and a function, maybe this student professor dialogue will help.

We start with equations from terminal definitions, loops and junctions. Each of the symbols in these algebraic equations is a function. We are not transforming the equations. We are transforming the functions in these equations. All sorts of operators appear in these equations including + - * / and \frac{d}{dt}. The first table focuses on transforming these operators. The second focuses on transforming the functions themselves.

The real power of the Laplace transform is that it eliminates the integral and differential operators. Then the functions themselves can be transformed, unknowns can be found with just algebra, and the functions can be transformed back into time domain functions.

Here are some of the properties and theorems needed to transform the typical sinusoidal voltages, powers and currents in this class.

Laplace Operator Transforms

Properties of the unilateral Laplace transform:

Time scaling: f(at) \Leftrightarrow \frac{1}{|a|} F \left ( {s \over a} \right ) ... for figuring out how \omega affects the equation
Time shifting: f(t - a) u(t - a) \Leftrightarrow e^{-as} F(s) ... u(t) is the unit step function; for figuring out the \phi phase angle
Linearity: a f(t) + b g(t) \Leftrightarrow a F(s) + b G(s) ... can be proved using basic rules of integration
Differentiation: f'(t) \Leftrightarrow s F(s) - f(0) ... f is assumed differentiable, with derivative of exponential type; obtained by integration by parts
Integration: \int_0^t f(\tau)\, d\tau = (u * f)(t) \Leftrightarrow {1 \over s} F(s) ... a constant pops out at the end of this too

Laplace Function Transform

Here are some of the transforms needed in this course:

With f(t) = \mathcal{L}^{-1} \left\{ F(s) \right\} (time domain) and F(s) = \mathcal{L}\left\{ f(t) \right\} (Laplace s-domain), each pair below is listed with its region of convergence:

exponential decay: e^{-\alpha t} \cdot u(t) \Leftrightarrow { 1 \over s+\alpha }, Re(s) > -\alpha (frequency shift of unit step)
exponential approach: (1-e^{-\alpha t}) \cdot u(t) \Leftrightarrow \frac{\alpha}{s(s+\alpha)}, Re(s) > 0 (unit step minus exponential decay)
sine: \sin(\omega t) \cdot u(t) \Leftrightarrow { \omega \over s^2 + \omega^2 }, Re(s) > 0
cosine: \cos(\omega t) \cdot u(t) \Leftrightarrow { s \over s^2 + \omega^2 }, Re(s) > 0
exponentially decaying sine wave: e^{-\alpha t} \sin(\omega t) \cdot u(t) \Leftrightarrow { \omega \over (s+\alpha )^2 + \omega^2 }, Re(s) > -\alpha
exponentially decaying cosine wave: e^{-\alpha t} \cos(\omega t) \cdot u(t) \Leftrightarrow { s+\alpha \over (s+\alpha )^2 + \omega^2 }, Re(s) > -\alpha
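One table entry can be spot-checked by evaluating the defining integral \int_0^\infty e^{-st} f(t)\, dt numerically. The sketch below uses a plain trapezoidal sum truncated at T; the values of \alpha, s, T, and the step count are arbitrary choices, not from the text:

```python
import math

def laplace_numeric(f, s, T=20.0, n=20000):
    # Trapezoidal approximation of the unilateral Laplace integral,
    # truncated at t = T (valid when the integrand has decayed by then)
    h = T / n
    total = 0.5 * (f(0.0) + f(T) * math.exp(-s * T))
    for k in range(1, n):
        t = k * h
        total += f(t) * math.exp(-s * t)
    return total * h

# Check the first table row: L{e^{-alpha t}} = 1/(s + alpha), Re(s) > -alpha
alpha, s = 2.0, 3.0
approx = laplace_numeric(lambda t: math.exp(-alpha * t), s)
exact = 1.0 / (s + alpha)
assert abs(approx - exact) < 1e-6
```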