The method of Lagrange multipliers solves the constrained optimization problem by transforming it into an unconstrained optimization problem of the form:

$\operatorname {\mathcal {L}} (x_{1},x_{2},\ldots ,x_{n},\lambda )=f(x_{1},x_{2},\ldots ,x_{n})+\lambda \,(g(x_{1},x_{2},\ldots ,x_{n})-c),$

where $f$ is the objective function and $g(x_{1},x_{2},\ldots ,x_{n})=c$ is the constraint. Finding the gradient and Hessian, as was done above, will then determine any optimum values of $\operatorname {\mathcal {L}} (x_{1},x_{2},\ldots ,x_{n},\lambda )$.

Suppose we now want to find optimum values for $f(x,y)=2x^{2}+y^{2}$ subject to the constraint $x+y=1$, following [2].

The Lagrangian method then yields the unconstrained function

$\operatorname {\mathcal {L}} (x,y,\lambda )=2x^{2}+y^{2}+\lambda \,(x+y-1).$
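Setting the partial derivatives of $\operatorname {\mathcal {L}}$ with respect to $x$, $y$, and $\lambda$ to zero and solving the resulting linear system can be sketched as follows (a minimal sketch; the variable names and the use of exact fractions are illustrative choices, not part of the source):

```python
from fractions import Fraction

# Lagrangian: L(x, y, λ) = 2x² + y² + λ(x + y − 1)
# Stationarity conditions (∇L = 0):
#   ∂L/∂x = 4x + λ = 0        →  x = −λ/4
#   ∂L/∂y = 2y + λ = 0        →  y = −λ/2
#   ∂L/∂λ = x + y − 1 = 0     (the original constraint)
# Substituting into the constraint: −λ/4 − λ/2 = 1  →  λ = −4/3
lam = Fraction(-4, 3)
x = -lam / 4                 # x = 1/3
y = -lam / 2                 # y = 2/3

print(x, y, lam)             # 1/3 2/3 -4/3
print(2 * x**2 + y**2)       # objective value at the optimum: 2/3
```

Note that the constraint $x+y=1$ is recovered automatically as the stationarity condition in $\lambda$, which is why the unconstrained problem reproduces the solution of the constrained one.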