# Real Analysis/Applications of Integration


Integrals are used primarily in tandem with derivatives. Even though this chapter is titled "Applications of Integration", the term "application" does not mean that this chapter will discuss how integration is used in everyday life. Instead, the theorems we present are focused on illustrating methods of calculating integrals and on defining their properties.

## Theorems on Computation

This heading will deal with deriving computation formulas for integration. Although the Fundamental Theorem of Calculus yields a method of calculating integrals, it requires knowing the primitive beforehand, and it is by no means the only way to compute integrals—especially if the integrand is anything more complex than a power function.

### The Primitive

The Definition of the Primitive of f
Given a function f, a primitive of f is any function F satisfying $F'=f$. If two functions F and G both satisfy $F'=G'=f$, then F and G differ by a constant, so the primitive is determined only up to an additive constant C.

Note that a primitive is identified by the differentiation relation $F'=f$, not by the output of any particular differentiation. Thus, "the primitive of f" is a statement about the function one differentiates, not about the result of that differentiation.

A primitive is also called an antiderivative.

We will first take a small detour into the nature of what a primitive is. Recall from the Fundamental Theorem of Calculus that, essentially, a function $f$ reappears when the derivative of ${\textstyle \int _{a}^{x}{f}}$ is taken.

If we use different variables from the theorem (which uses F and f), we can display this process through the following steps

$u=\int _{a}^{x}{f}\Longleftrightarrow u'={\frac {\operatorname {d} }{\operatorname {d} \!x}}\left[\int _{a}^{x}{f}\right]\implies u'=f$

We use the implication arrow for the last step to highlight a point. The final step is logically an equivalence, so an iff sign would be fine in that position, but that does not mean the reverse direction is easy. In fact, only mathematics beyond real analysis rigorously shows why reversing that step can be far more complex than moving forward. Thus, most first-year real analysis courses will only ask you to move forward.
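The forward direction can be checked numerically. The following Python sketch (our own illustration; the helper names `integral` and `derivative` are not from the text) approximates $\int_0^x \cos$ with Simpson's rule and then differentiates the result with a central difference, recovering $\cos(x)$ at a sample point:

```python
import math

def integral(f, a, x, n=2000):
    # Composite Simpson approximation of the definite integral of f over [a, x].
    h = (x - a) / n
    s = f(a) + f(x)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3

def derivative(F, x, h=1e-5):
    # Central-difference approximation of F'(x).
    return (F(x + h) - F(x - h)) / (2 * h)

# d/dx [ integral of cos from 0 to x ] should recover cos(x).
F = lambda x: integral(math.cos, 0.0, x)
print(abs(derivative(F, 1.2) - math.cos(1.2)) < 1e-6)  # True
```

This only illustrates the easy direction; recovering a primitive from f (the reverse direction) has no such mechanical recipe.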

Yet, there are some functions whose primitives are easily found. Almost by design of the educators of mathematics, every function studied in elementary mathematics—including those rigorously defined in the Functions section, the Trigonometry chapter, and the Exponential and Logarithmic headings—has a primitive that can be derived through the difficult mental exercise of reading the Table of Derivatives in reverse (with some leniency offered for the power functions and the trigonometric functions; the trigonometric functions are among the hardest on this list). Despite that, we provide a table below, so that you do not have to work through the mental exercise.

Note
Tables like this make it look as if integration and differentiation simply cancel each other out, and as if all primitives are equal. They don't, and they aren't! Tables like this are often the basis of such confusions. If you are confused, make sure to read the entire material to understand the nuances!
Conversion Table for Primitives

| Function (derivative) | Primitive (indefinite integral / antiderivative) |
|---|---|
| $a$ | $ax+C$ |
| $e^{x}$ | $e^{x}+C$ |
| ${\frac {1}{x}}$ | $\log {\lvert x\rvert }+C$ |
| $x^{n}$ ($n\neq -1$) | ${\frac {x^{n+1}}{n+1}}+C$ |
| $a^{x}$ | ${\frac {a^{x}}{\log a}}+C$ |
| $\sin x$ | $-\cos x+C$ |
| $\cos x$ | $\sin x+C$ |
| $\tan x$ | $-\log {\lvert \cos x\rvert }+C$ |
| $\sec ^{2}x$ | $\tan x+C$ |
| $\sec x\tan x$ | $\sec x+C$ |
| ${\frac {1}{1+x^{2}}}$ | $\arctan x+C$ |
| ${\frac {1}{\sqrt {1-x^{2}}}}$ | $\arcsin x+C$ |
| $\sin ^{2}x$ | ${\frac {x}{2}}-{\frac {\sin 2x}{4}}+C$ |
| $\cos ^{2}x$ | ${\frac {x}{2}}+{\frac {\sin 2x}{4}}+C$ |
| $\arctan x$ | $x\arctan x-{\frac {1}{2}}\log {(1+x^{2})}+C$ |
| $\sec x$ | $\log {\lvert \sec x+\tan x\rvert }+C$ |
| $\csc x$ | $-\log {\lvert \csc x+\cot x\rvert }+C$ |
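The table entries can be sanity-checked by differentiating each claimed primitive numerically and comparing against the left-hand column. A short Python sketch (our own illustration; the helper name `ddx` is hypothetical) for a few rows:

```python
import math

def ddx(F, x, h=1e-5):
    # Central-difference approximation of F'(x); checks that each primitive
    # in the table differentiates back to the left-hand column.
    return (F(x + h) - F(x - h)) / (2 * h)

# (function, claimed primitive with C = 0), checked at the sample point x = 0.5
pairs = [
    (lambda x: math.exp(x),      lambda x: math.exp(x)),
    (lambda x: 1 / x,            lambda x: math.log(x)),
    (lambda x: x ** 3,           lambda x: x ** 4 / 4),
    (lambda x: math.sin(x),      lambda x: -math.cos(x)),
    (lambda x: 1 / (1 + x * x),  lambda x: math.atan(x)),
    (lambda x: math.sin(x) ** 2, lambda x: x / 2 - math.sin(2 * x) / 4),
]
ok = all(abs(ddx(F, 0.5) - f(0.5)) < 1e-6 for f, F in pairs)
print(ok)  # True
```

A check at a single point does not prove the identity, of course; it merely catches transcription errors in the table.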

Hah! If you noticed, every primitive carries a constant C. Coupled with the rather indirect definition of a primitive, the concept of the primitive itself warrants an explanation. Case in point: because of one major consequence of applying the Fundamental Theorem of Calculus together with a differentiation theorem (specifically the one that states that if $f'=g'$, then $f=g+c$), a constant C must be added in order for the conversion to be algebraically correct. As a consequence of this requirement, although differentiating an integral, to use a layman's term, "cancels" both operations, it logically is not a full cancellation—especially when the primitive of f is not known.

However, there are two facts that, when accepted, make the constant C issue immensely more manageable when working with integrals.

1. For the intents and purposes of integral computation (a trick with indefinite integral definitions), one does not need to worry about constants until the final result is computed.
2. Computing the constant C is easy if the function's properties provide easy numerical values (such as its roots) to work with.
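As a tiny illustration of the second fact (our own example, not from the text): if a particular primitive $F_{0}$ (the one with C = 0) is known, then a single condition $F(x_{0})=y_{0}$ pins C down:

```python
def solve_constant(F0, x0, y0):
    # F0 is a particular primitive (C = 0); the condition F(x0) = y0
    # determines C, since F(x) = F0(x) + C.
    return y0 - F0(x0)

# Primitive of 3x^2 is x^3 + C; requiring F(2) = 10 forces C = 10 - 8 = 2.
C = solve_constant(lambda x: x ** 3, 2, 10)
print(C)  # 2
```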

### The Indefinite Integral

This will act as a small aside, but is essential to understand the rest of this chapter.

What if one does not wish to write out all those steps above? One wishes instead to simply acknowledge the multiple interpretations of what a primitive is, without being bogged down by explicitly declaring a function F as an antiderivative, while still writing the primitive in the form $F+C$. Simple: mathematicians agreed on the definition of the indefinite integral, which is defined as

$\int {f}:=\left\{F+C:C\in \mathbb {R} ,\;F'=f\right\}$

which, simply put, means the set of all primitives of $f$. Curiously, this definition is implied whenever the primitive is discussed, and, more importantly, this set is the technically correct output when computing an indefinite integral.

### Integration by Parts

Theorem
Given differentiable functions u and v, if $u'$  and $v'$  are continuous, then $\int {uv'}=uv-\int {u'v}$  is a valid equation

Why are the variables u and v used instead of the more traditional f and g? Well, u and v is the traditional naming convention for integration by parts! Aside from tradition, it serves a pragmatic purpose as well: a given function f can often be written as a product of two functions u and v′, which can be placed into this equation to yield a computed answer for the integral of f.

This is an important theorem that is also extremely easy to derive (it involves only algebra and equations). Assuming one knows the product rule for derivatives, the proof is as follows.

Given the product rule for derivatives, we know firstly that we can rearrange the equation in the following manner, and secondly that the functions involved are continuous, since differentiability of a function implies that the function is continuous to begin with.

${\begin{aligned}(uv)'&=u'v+uv'\\u'v&=(uv)'-uv'\end{aligned}}$

The functions u′ and v′ are assumed to be continuous. Thus, now that everything is continuous, we can apply an integral over the entire equation and still have a valid statement.

$\int {u'v}=\int {(uv)'-uv'}$

After some algebraic manipulation, which includes using the Fundamental Theorem of Calculus on $\int {(uv)'}$, the final result is as shown.

${\begin{aligned}\int {u'v}&=\int {(uv)'-uv'}\\&=\int {(uv)'}-\int {uv'}\\&=uv-\int {uv'}\end{aligned}}$

$\blacksquare$

#### Notation
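The identity just proved can be checked numerically on a concrete pair of functions. In this Python sketch (our own illustration; the `simpson` helper is not from the text), we take u = x and v = sin x on [0, 1] and compare both sides of the definite-integral form of the theorem:

```python
import math

def simpson(f, a, b, n=1000):
    # Composite Simpson's rule on [a, b] with n (even) subintervals.
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3

# u = x, v = sin x on [0, 1]; check  int u v' = [uv] - int u' v
u, du = (lambda x: x), (lambda x: 1.0)
v, dv = math.sin, math.cos

lhs = simpson(lambda x: u(x) * dv(x), 0.0, 1.0)
rhs = u(1.0) * v(1.0) - u(0.0) * v(0.0) - simpson(lambda x: du(x) * v(x), 0.0, 1.0)
print(abs(lhs - rhs) < 1e-9)  # True
```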

One may have noticed, especially if one has read other literature on mathematics, a notational format used when discussing integration. For example, the full notation for integration by parts is

$\int u(x)v'(x)\,\operatorname {d} \!x=u(x)v(x)-\int v(x)\,u'(x)\operatorname {d} \!x$

but is often written as

$\int u\,\operatorname {d} \!v=uv-\int v\,\operatorname {d} \!u$

The symbols $\operatorname {d} \!u$  and $\operatorname {d} \!v$  are not meant in the usual sense discussed in the section on integrals, namely that the variable after the d is the variable of integration while every other variable is treated as a constant. The variable here instead refers to a function. Thus, the symbol is being used in a new manner, which can be succinctly described by defining the specific cases

$\operatorname {d} \!v:=v'(x)\operatorname {d} \!x$
$\operatorname {d} \!u:=u'(x)\operatorname {d} \!x$

Note that this new definition does not conflict with the original definition of $\operatorname {d} \!x$ , since it applies when the symbol on the right is a function instead of a variable.

### Integration by Substitution

Theorem
Given a differentiable function g, if the function $f$  and the derivative $g'$  are continuous, then $\int _{g(a)}^{g(b)}{f}=\int _{a}^{b}{f\circ g\cdot g'}$  is a valid equation.

Unlike Integration by Parts, no reference to a variable u is made. This is because of the complex relationship between the variables u, g, and f in this theorem, which will be explained in another heading. For now, let's focus on making sure that this statement is valid to begin with.

This formula is, luckily, also easy to validate. Like Integration by Parts, it too does not use a lot of theorems, only relying on the Second Fundamental Theorem of Calculus and the Chain Rule.

Define the function Λ as a primitive of f, which is valid given that a continuous f implies that $\int _{g(a)}^{g(b)}{f}$ is subject to the First Fundamental Theorem of Calculus. (Note that this primitive can be taken to be an actual primitive—a function whose derivative equals the function being integrated—instead of a primitive with an arbitrary constant, since the First Fundamental Theorem of Calculus shows that such a primitive does indeed exist.) We can then apply the Second Fundamental Theorem of Calculus to derive a computable equation.

$\exists \Lambda :\Lambda '=f\;\land \;\int _{g(a)}^{g(b)}{f}=\Lambda (g(b))-\Lambda (g(a))=(\Lambda \circ g)(b)-(\Lambda \circ g)(a)$

However, let's instead imagine that we input the function g into Λ instead of the usual input variable x. Now, since Λ ∘ g is a composite function whose derivative should still involve f (after all, we only changed the input, not the function), we can apply the Chain Rule to it.

$(\Lambda \circ g)'=\Lambda '\circ g\cdot g'=f\circ g\cdot g'$

Now, if we take an integral of $(\Lambda \circ g)'$ , what interval makes things interesting? Clearly the interval [a, b], since by the Second Fundamental Theorem of Calculus

$\int _{a}^{b}(\Lambda \circ g)'=(\Lambda \circ g)(b)-(\Lambda \circ g)(a)$

which makes the two statements equal.

$\therefore \int _{a}^{b}{f\circ g\cdot g'}=\int _{g(a)}^{g(b)}{f}.\;\;\blacksquare$

As mentioned earlier, this equation by itself says little about how it can be applied to compute integrals.
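The change-of-variables identity can likewise be checked numerically. In this Python sketch (our own illustration; the `simpson` helper is not from the text), we take f = cos and g(x) = x² on [0, 1] and compare both sides of the theorem:

```python
import math

def simpson(f, a, b, n=1000):
    # Composite Simpson's rule on [a, b] with n (even) subintervals.
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3

f = math.cos
g, dg = (lambda x: x * x), (lambda x: 2 * x)
a, b = 0.0, 1.0

lhs = simpson(f, g(a), g(b))                    # integral of f from g(a) to g(b)
rhs = simpson(lambda x: f(g(x)) * dg(x), a, b)  # integral of (f o g) * g' from a to b
print(abs(lhs - rhs) < 1e-9)  # True
```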
Given that much of the content on how integration by substitution is performed appears on other websites or in other wikibooks (take Wikipedia's page on Integration by Substitution, which provides detailed examples of how to use this method, or the Calculus wikibook, whose page discusses the method of Integration by Substitution as well), the remainder of this heading will discuss how the steps they teach relate to this theorem.

First and foremost, this theorem stands out from what is normally taught because it uses definite integrals, unlike Integration by Parts, which uses indefinite integrals. This is because the theorem, and its explanation, are best illustrated using definite integrals, as they highlight where the functions go in order to equate both statements. However, the bounds of the definite integral can easily be "canceled" through differentiation to create the easier-to-work-with version of the theorem that elementary mathematics teaches.

Theorem
Given a differentiable function g, if the function $f$  and $g'$  are continuous, then $\int {f}=\int {f\circ g\cdot g'}$  is a valid equation.

Also notated as $\int {f(x)}\operatorname {d} \!x=\int {(f\circ u)(x)\operatorname {d} \!u}{\text{ or as }}\int {(f\circ u)(x)\;{\frac {\operatorname {d} \!u}{\operatorname {d} \!x}}\;\operatorname {d} \!x}$

Also known as u-substitution.

One may be interested to note that when Leibniz's notation is used here, it almost appears as if the algebraic operation of division is being applied to the $\operatorname {d} \!x$  operator. This is a possible motivation for why this notation continues to exist today, as certain theorems can be easily expressed by adhering to algebra-like properties.

#### Usage of Integration by Substitution

Unlike Integration by Parts, whose explanation is easily bundled into the product rule and the indefinite integral, this theorem requires plenty of explanation in order to understand how it's used.

Aside from why it uses definite integrals, which was brushed over earlier and will be implied throughout this heading, we will first discuss what the functions f, g, and u represent. Note that in nearly all cases (unless one defines $g(x)=x$ , which doesn't result in an easier calculation), the function being integrated—which masquerades as f on the left side of the theorem statement—will actually be built from a composition of f and g, where g is the function one wishes to substitute and f is the remaining part of the overall function being integrated. So, if the example is

$\int {(2x+5)(x^{2}+5x)^{7}}$

then the integrand $(2x+5)(x^{2}+5x)^{7}$  can be decomposed as

$f=x^{7}\land g=x^{2}+5x$

so that $g'=2x+5$ . So for most cases, the method of Integration by Substitution is actually sussing out the function g from the overall integrand $f\circ g\cdot g'=(2x+5)(x^{2}+5x)^{7}$ , defining u = g, and reducing the overall function into one whose integral can be simply computed. So for this example,

$f\circ g\cdot g'=g'\cdot g^{7}$

which can easily be integrated, since now the only focus is on g7—i.e., apply Integration by Substitution to integrate $u^{7}$  with $u=g$ .

So, the variable u used in most explanations will simply equal g in simple circumstances. In more complex situations, u will not equal g—a situation explained in the next heading.
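For the worked example above, the substitution u = x² + 5x gives the primitive $(x^{2}+5x)^{8}/8+C$. This can be verified numerically over [0, 1] in a Python sketch (our own illustration; the `simpson` helper is not from the text):

```python
def simpson(f, a, b, n=4000):
    # Composite Simpson's rule on [a, b] with n (even) subintervals.
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3

g, dg = (lambda x: x * x + 5 * x), (lambda x: 2 * x + 5)
integrand = lambda x: dg(x) * g(x) ** 7    # (2x+5)(x^2+5x)^7 = g' * g^7
antiderivative = lambda x: g(x) ** 8 / 8   # substitution gives u^8/8 with u = g(x)

exact = antiderivative(1.0) - antiderivative(0.0)   # 6^8 / 8 = 209952
print(abs(simpson(integrand, 0.0, 1.0) - exact) < 1e-2)  # True
```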

#### Inverting Integration by Substitution

The usual method of applying Integration by Substitution, explained further in other wikibooks, is to find some function u = g(x), built from the input variable x in the overall statement, whose derivative can also be found in the overall statement; substitute it in, making sure that this new function can replace all instances of the input variable x along with the other terms; and apply the theorem. However, it is possible to do the inverse. What do we mean? Instead of finding a function u = g(x) that can replace the input variable x, one can find the inverse function $x=g^{-1}(u)$  that cancels out sections of the overall statement as well as the input variable x, and then integrate with respect to u.

In other words, if one ever needs to "move" a function from the dx side over to the du side—which involves inverting the function—this corollary is being used. We know that for the variables x and u, inverting is as easy as inverting the function g and using x and u as the respective inputs. However, do we know whether we can "move" functions from one dx to the other du by replacing them with the inverse? After all, dx and du are not variables, but operators. The following corollary proves that we can.

We are aware from the previous heading that the integrand in ${\textstyle \int {h}}$ actually refers to a function composed of f and g—the h in the indefinite integral is not the same as f. As such, we can expand

$\int {h}=\int {f\circ g}$

Let's write out the equation in full notation.

$\int {f\circ g}=\int {f\circ g(x)\operatorname {d} \!x}$

Now let's check what happens to the equation when we define $x=g^{-1}(u)$ and $\operatorname {d} \!x=[g^{-1}]'(u)\operatorname {d} \!u$ .

$\int {f\circ g(x)\operatorname {d} \!x}=\int {f(u)\cdot [g^{-1}]'(u)\;\operatorname {d} \!u}$

Returning to the original full notation, let's multiply by 1, written as ${\tfrac {1}{g'(x)}}\cdot g'(x)$ . Note that g′(x) must never equal 0. But how does this help?

$\int {f\circ g}=\int {f\circ g(x)\cdot {\frac {1}{g'(x)}}\cdot g'(x)\operatorname {d} \!x}$

Easy! Suppose $u=g(x)$ and $\operatorname {d} \!u=g'(x)\operatorname {d} \!x$ . Note that we also know that $(g^{-1}\circ u)(x)=x$ , which is used in the equation.

$\int {f\circ g(x)\operatorname {d} \!x}=\int {f(u)\cdot {\frac {1}{g'\circ g^{-1}(u)}}\operatorname {d} \!u}$

Apply the formula for the derivative of an inverse function, $[g^{-1}]'(u)={\tfrac {1}{g'\circ g^{-1}(u)}}$ .

$\int {f\circ g(x)\operatorname {d} \!x}=\int {f(u)\cdot [g^{-1}]'(u)\;\operatorname {d} \!u}$

Both derivations, although substituting different expressions for different variables, arrive at the same equation on both sides. This implies that whichever variable one chooses to integrate with respect to, integration by substitution can still be used. $\blacksquare$
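The corollary can also be illustrated numerically, say with f(u) = u² and g(x) = x³ on [1, 2], so that $g^{-1}(u)=u^{1/3}$ and $[g^{-1}]'(u)={\tfrac {1}{3}}u^{-2/3}$ (a Python sketch of our own; the `simpson` helper is not from the text):

```python
def simpson(f, a, b, n=2000):
    # Composite Simpson's rule on [a, b] with n (even) subintervals.
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3

f = lambda u: u * u                            # f(u) = u^2
g = lambda x: x ** 3                           # u = g(x) = x^3 on [1, 2]
ginv_prime = lambda u: (1 / 3) * u ** (-2 / 3)  # derivative of the inverse g^{-1}

lhs = simpson(lambda x: f(g(x)), 1.0, 2.0)                     # integral of f o g dx
rhs = simpson(lambda u: f(u) * ginv_prime(u), g(1.0), g(2.0))  # integral of f * (g^{-1})' du
print(abs(lhs - rhs) < 1e-6)  # True
```

Both sides approximate $\int _{1}^{2}x^{6}\operatorname {d} \!x=127/7$, matching the claim that the integrand can be "moved" to the du side via the inverse.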