# Calculus/Product Rule


When we wish to differentiate a more complicated expression such as:

${\displaystyle h(x)=(x^{2}+5x+7)\cdot (x^{3}+2x-4)}$

our only way (up to this point) to differentiate the expression is to expand it and get a polynomial, and then differentiate that polynomial. This method becomes very complicated and is particularly error prone when doing calculations by hand. A beginner might guess that the derivative of a product is the product of the derivatives, similar to the sum and difference rules, but this is not true. To take the derivative of a product, we use the product rule.

 Derivatives of products (Product rule)${\displaystyle {\frac {d}{dx}}\left[f(x)\cdot g(x)\right]=f(x)\cdot g'(x)+f'(x)\cdot g(x)\,\!}$

It may also be stated as

${\displaystyle (f\cdot g)'=f'\cdot g+f\cdot g'\,\!}$

or in the Leibniz notation as

${\displaystyle {\dfrac {d}{dx}}(u\cdot v)=u\cdot {\dfrac {dv}{dx}}+v\cdot {\dfrac {du}{dx}}}$.

The derivative of the product of three functions is:

${\displaystyle {\dfrac {d}{dx}}(u\cdot v\cdot w)={\dfrac {du}{dx}}\cdot v\cdot w+u\cdot {\dfrac {dv}{dx}}\cdot w+u\cdot v\cdot {\dfrac {dw}{dx}}}$.

Since the product of two or more functions occurs in many mathematical models of physical phenomena, the product rule has broad application in Physics, Chemistry, and Engineering.
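The rule is easy to check numerically. The sketch below (function names are illustrative, not from the text) compares the product-rule derivative of the opening example against a central-difference approximation:

```python
# Numerical check of the product rule for h(x) = (x^2 + 5x + 7)(x^3 + 2x - 4).

def f(x):
    return x**2 + 5*x + 7

def g(x):
    return x**3 + 2*x - 4

def f_prime(x):
    return 2*x + 5

def g_prime(x):
    return 3*x**2 + 2

def product_rule(x):
    # f(x) * g'(x) + f'(x) * g(x)
    return f(x) * g_prime(x) + f_prime(x) * g(x)

def numerical_derivative(func, x, h=1e-6):
    # central-difference approximation of func'(x)
    return (func(x + h) - func(x - h)) / (2 * h)

x0 = 1.5
approx = numerical_derivative(lambda x: f(x) * g(x), x0)
exact = product_rule(x0)
print(abs(approx - exact) < 1e-4)  # True
```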

## Examples

• Suppose one wants to differentiate ƒ(x) = x² sin(x). By using the product rule, one gets the derivative ƒ′(x) = 2x sin(x) + x² cos(x) (since the derivative of x² is 2x and the derivative of sin(x) is cos(x)).
• One special case of the product rule is the constant multiple rule, which states: if c is a real number and ƒ(x) is a differentiable function, then (c × ƒ)(x) is also differentiable, and its derivative is (c × ƒ)'(x) = c × ƒ '(x). This follows from the product rule since the derivative of any constant is zero. This, combined with the sum rule for derivatives, shows that differentiation is linear.
• The rule for integration by parts is derived from the product rule, as is (a weak version of) the quotient rule. (It is a "weak" version in that it does not prove that the quotient is differentiable, but only says what its derivative is if it is differentiable.)
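The first example above can be verified the same way. A minimal sketch (names are illustrative) comparing the product-rule answer for ƒ(x) = x² sin(x) to a difference quotient:

```python
import math

# Check of the example f(x) = x^2 sin(x): the product rule gives
# f'(x) = 2x sin(x) + x^2 cos(x).

def f(x):
    return x**2 * math.sin(x)

def f_prime(x):
    return 2*x*math.sin(x) + x**2*math.cos(x)

def central_diff(func, x, h=1e-6):
    # central-difference approximation of func'(x)
    return (func(x + h) - func(x - h)) / (2 * h)

x0 = 0.7
print(abs(central_diff(f, x0) - f_prime(x0)) < 1e-6)  # True
```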

### Physics Example I: Electromagnetic induction

Faraday's law of electromagnetic induction states that the induced electromotive force is the negative time rate of change of magnetic flux through a conducting loop.

${\displaystyle {\mathcal {E}}=-{{d\Phi _{B}} \over dt},}$

where ${\displaystyle {\mathcal {E}}}$  is the electromotive force (emf) in volts and ΦB is the magnetic flux in webers. For a loop of area A in a magnetic field B, the magnetic flux is given by

${\displaystyle \Phi _{B}=B\cdot A\cdot \cos(\theta ),}$

where θ is the angle between the normal to the current loop and the magnetic field direction.

Taking the negative derivative of the flux with respect to time, and applying the product rule to the three factors, yields the electromotive force:

${\displaystyle {\begin{aligned}{\mathcal {E}}&=-{\frac {d}{dt}}\left(B\cdot A\cdot \cos(\theta )\right)\\&=-{\frac {dB}{dt}}\cdot A\cos(\theta )-B\cdot {\frac {dA}{dt}}\cos(\theta )-B\cdot A{\frac {d}{dt}}\cos(\theta )\end{aligned}}}$

In many cases of practical interest, only one variable (A, B, or θ) is changing so two of the three above terms are often zero.
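The common special case where only the field strength changes can be sketched as follows; the numerical values here are made up purely for illustration:

```python
import math

# Faraday's law via the product rule when only B changes:
# emf = -dB/dt * A * cos(theta); the dA/dt and dtheta/dt terms vanish.

A = 0.02               # loop area in m^2 (illustrative value)
theta = math.pi / 6    # angle between loop normal and field, in radians
dB_dt = 0.5            # field changing at 0.5 T/s; dA/dt = dtheta/dt = 0

emf = -dB_dt * A * math.cos(theta)
print(emf)  # about -0.00866 V
```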

## Proof

Proving this rule is relatively straightforward. First, let us state the limit definition of the derivative applied to the product:

${\displaystyle {\frac {d}{dx}}\left[f(x)\cdot g(x)\right]=\lim _{h\to 0}{\frac {f(x+h)\cdot g(x+h)-f(x)\cdot g(x)}{h}}}$

We will then apply one of the oldest tricks in the book—adding a term that cancels itself out to the middle:

${\displaystyle {\frac {d}{dx}}\left[f(x)\cdot g(x)\right]=\lim _{h\to 0}{\frac {f(x+h)\cdot g(x+h)\mathbf {-f(x+h)\cdot g(x)+f(x+h)\cdot g(x)} -f(x)\cdot g(x)}{h}}}$

Notice that those terms sum to zero, and so all we have done is add 0 to the equation. Now we can split the equation up into forms that we already know how to solve:

${\displaystyle {\frac {d}{dx}}\left[f(x)\cdot g(x)\right]=\lim _{h\to 0}\left[{\frac {f(x+h)\cdot g(x+h)-f(x+h)\cdot g(x)}{h}}+{\frac {f(x+h)\cdot g(x)-f(x)\cdot g(x)}{h}}\right]}$

Looking at this, we see that we can separate the common terms out of the numerators to get:

${\displaystyle {\frac {d}{dx}}\left[f(x)\cdot g(x)\right]=\lim _{h\to 0}\left[f(x+h){\frac {g(x+h)-g(x)}{h}}+g(x){\frac {f(x+h)-f(x)}{h}}\right]}$

Taking the limit as ${\displaystyle h\to 0}$, and noting that ${\displaystyle f(x+h)\to f(x)}$ because differentiable functions are continuous, we obtain:

${\displaystyle {\frac {d}{dx}}\left[f(x)\cdot g(x)\right]=f(x)\cdot g'(x)+g(x)\cdot f'(x)}$, which may be remembered by the mnemonic "one D-two plus two D-one".

This can be extended to three functions:

${\displaystyle {\frac {d}{dx}}[fgh]=f(x)g(x)h'(x)+f(x)g'(x)h(x)+f'(x)g(x)h(x)\,}$

For any number of functions, the derivative of their product is the sum, for each function, of its derivative times each other function.
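The three-function version can be spot-checked numerically. A sketch (the functions u, v, w here are chosen arbitrarily for illustration):

```python
import math

# Check of (u*v*w)' = u'*v*w + u*v'*w + u*v*w'
# on u = x, v = sin(x), w = e^x.

def product(x):
    return x * math.sin(x) * math.exp(x)

def product_prime(x):
    # one term per factor, differentiating that factor only
    return (math.sin(x) * math.exp(x)
            + x * math.cos(x) * math.exp(x)
            + x * math.sin(x) * math.exp(x))

def central_diff(func, x, h=1e-6):
    # central-difference approximation of func'(x)
    return (func(x + h) - func(x - h)) / (2 * h)

print(abs(central_diff(product, 0.9) - product_prime(0.9)) < 1e-5)  # True
```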

Back to our original example of a product, ${\displaystyle h(x)=(x^{2}+5x+7)\cdot (x^{3}+2x-4)}$ , we find the derivative by the product rule is

${\displaystyle h'(x)=(x^{2}+5x+7)(3x^{2}+2)+(2x+5)(x^{3}+2x-4)=5x^{4}+20x^{3}+27x^{2}+12x-6\,}$

Note that its derivative is not

${\displaystyle {\color {red}(2x+5)\cdot (3x^{2}+2)=3x^{3}+15x^{2}+4x+10}}$

which is what you would get if you assumed the derivative of a product is the product of the derivatives.
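As a sanity check, the product-rule answer can be compared against term-by-term differentiation of the expanded polynomial, since ${\displaystyle h(x)}$ expands to ${\displaystyle x^{5}+5x^{4}+9x^{3}+6x^{2}-6x-28}$:

```python
# Cross-check: differentiate the expansion of h(x) = (x^2+5x+7)(x^3+2x-4)
# term by term; the result should agree with the product rule everywhere.

def h_prime_product_rule(x):
    return (x**2 + 5*x + 7)*(3*x**2 + 2) + (2*x + 5)*(x**3 + 2*x - 4)

def h_prime_expanded(x):
    # derivative of x^5 + 5x^4 + 9x^3 + 6x^2 - 6x - 28
    return 5*x**4 + 20*x**3 + 27*x**2 + 12*x - 6

print(all(h_prime_product_rule(x) == h_prime_expanded(x) for x in range(-5, 6)))  # True
```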

To apply the product rule, we multiply the first function by the derivative of the second and add to that the derivative of the first function multiplied by the second. It sometimes helps to memorize the phrase "First times the derivative of the second plus the second times the derivative of the first."

## Application, proof of the power rule

The product rule can be used to give a proof of the power rule for whole numbers. The proof proceeds by mathematical induction. We begin with the base case ${\displaystyle n=1}$ . If ${\displaystyle f_{1}(x)=x}$  then from the definition it is easy to see that

${\displaystyle f_{1}'(x)=\lim _{h\rightarrow 0}{\frac {x+h-x}{h}}=1}$

Next we suppose that for a fixed value of ${\displaystyle N}$ , we know that for ${\displaystyle f_{N}(x)=x^{N}}$ , ${\displaystyle f_{N}'(x)=Nx^{N-1}}$ . Consider the derivative of ${\displaystyle f_{N+1}(x)=x^{N+1}}$ ,

${\displaystyle f_{N+1}'(x)=(x\cdot x^{N})'=(x)'x^{N}+x\cdot (x^{N})'=x^{N}+x\cdot N\cdot x^{N-1}=(N+1)x^{N}.}$

We have shown that the statement ${\displaystyle f_{n}'(x)=n\cdot x^{n-1}}$  is true for ${\displaystyle n=1}$  and that if this statement holds for ${\displaystyle n=N}$ , then it also holds for ${\displaystyle n=N+1}$ . Thus by the principle of mathematical induction, the statement must hold for ${\displaystyle n=1,2,\dots }$ .
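The conclusion can be sanity-checked numerically for the first few exponents. A minimal sketch comparing ${\displaystyle n\cdot x^{n-1}}$ to a difference quotient:

```python
# Numerical check of the power rule f_n'(x) = n * x^(n-1)
# for whole-number exponents n = 1, ..., 5.

def central_diff(func, x, h=1e-6):
    # central-difference approximation of func'(x)
    return (func(x + h) - func(x - h)) / (2 * h)

x0 = 1.3
ok = all(
    abs(central_diff(lambda x, n=n: x**n, x0) - n * x0**(n - 1)) < 1e-5
    for n in range(1, 6)
)
print(ok)  # True
```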