# Calculus Optimization Methods/Lagrange Multipliers

The method of Lagrange multipliers solves a constrained optimization problem by transforming it into an unconstrained optimization problem of the form:

• ${\displaystyle \operatorname {\mathcal {L}} (x_{1},x_{2},\ldots ,x_{n},\lambda )=\operatorname {f} (x_{1},x_{2},\ldots ,x_{n})+\lambda \,(k-g(x_{1},x_{2},\ldots ,x_{n}))}$

where ${\displaystyle g(x_{1},x_{2},\ldots ,x_{n})=k}$ is the constraint.

Then finding the gradient and Hessian, as was done above, will determine any optimum values of ${\displaystyle \operatorname {\mathcal {L}} (x_{1},x_{2},\ldots ,x_{n},\lambda )}$.

Suppose we now want to find the optimum values of ${\displaystyle f(x,y)=2x^{2}+y^{2}}$ subject to ${\displaystyle x+y=1}$ (example from [2]).

The Lagrangian method then yields the unconstrained function

• ${\displaystyle \operatorname {\mathcal {L}} (x,y,\lambda )=2x^{2}+y^{2}+\lambda (1-x-y)}$

The gradient of this new function is

• ${\displaystyle {\frac {\partial {\mathcal {L}}}{\partial x}}(x,y,\lambda )=4x-\lambda =0}$
• ${\displaystyle {\frac {\partial {\mathcal {L}}}{\partial y}}(x,y,\lambda )=2y-\lambda =0}$
• ${\displaystyle {\frac {\partial {\mathcal {L}}}{\partial \lambda }}(x,y,\lambda )=1-x-y=0}$
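The partial derivatives above can be checked symbolically, for example with SymPy (a sketch; the symbol names are our own choice):

```python
import sympy as sp

# Symbols for the two variables and the multiplier
x, y, lam = sp.symbols('x y lambda')

# The Lagrangian L(x, y, lambda) = 2x^2 + y^2 + lambda(1 - x - y)
L = 2*x**2 + y**2 + lam*(1 - x - y)

# Gradient of the Lagrangian: one equation per variable
grad = [sp.diff(L, v) for v in (x, y, lam)]
# grad is [4x - lambda, 2y - lambda, 1 - x - y]

# Setting the gradient to zero and solving gives the stationary point
sol = sp.solve(grad, (x, y, lam), dict=True)[0]
```

Solving the three equations recovers the stationary point found below.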

The stationary points can be found by writing the above equations in matrix form:

${\displaystyle {\begin{bmatrix}4&0&-1\\0&2&-1\\-1&-1&0\end{bmatrix}}{\begin{bmatrix}x\\y\\\lambda \end{bmatrix}}={\begin{bmatrix}0\\0\\-1\end{bmatrix}}}$

This results in ${\displaystyle x=1/3,y=2/3,\lambda =4/3}$.
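This solution can be verified numerically by solving the linear system above, for example with NumPy (a sketch):

```python
import numpy as np

# Coefficient matrix and right-hand side of the system above
A = np.array([[ 4.0,  0.0, -1.0],
              [ 0.0,  2.0, -1.0],
              [-1.0, -1.0,  0.0]])
b = np.array([0.0, 0.0, -1.0])

# Solve A [x, y, lambda]^T = b
x, y, lam = np.linalg.solve(A, b)
# x = 1/3, y = 2/3, lam = 4/3
```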

Next we use the Hessian, as before, to determine the type of this stationary point. With the constraint included, the Hessian of ${\displaystyle {\mathcal {L}}}$ is a bordered Hessian:

${\displaystyle H({\mathcal {L}})={\begin{bmatrix}4&0&-1\\0&2&-1\\-1&-1&0\end{bmatrix}}}$

The determinant of this bordered Hessian is ${\displaystyle \det H({\mathcal {L}})=-6}$. For a problem with two variables and one constraint, a negative determinant indicates a constrained minimum, so the solution ${\displaystyle (x,y)=(1/3,2/3)}$ minimizes ${\displaystyle f(x,y)=2x^{2}+y^{2}}$ subject to ${\displaystyle x+y=1}$, with ${\displaystyle f(1/3,2/3)=2/3}$.
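The determinant check and the minimum value can be confirmed numerically (a sketch using NumPy):

```python
import numpy as np

# Bordered Hessian of the Lagrangian (constant here, since L is quadratic)
H = np.array([[ 4.0,  0.0, -1.0],
              [ 0.0,  2.0, -1.0],
              [-1.0, -1.0,  0.0]])

# Determinant is -6; the negative sign indicates a constrained minimum
det_H = np.linalg.det(H)

# Value of f at the stationary point (x, y) = (1/3, 2/3)
f_min = 2*(1/3)**2 + (2/3)**2
```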