# Calculus of Variations/CHAPTER IX

CHAPTER IX: CONJUGATE POINTS.

• 122 The second variation of the differential equation ${\displaystyle J=0}$.
• 123,124 The solutions of the equations ${\displaystyle G=0}$ and ${\displaystyle J=0}$. The second variation derived from the first variation.
• 125 Variations of the constants in the solutions of ${\displaystyle G=0}$.
• 126 The solutions ${\displaystyle u_{1}}$ and ${\displaystyle u_{2}}$ of the differential equation ${\displaystyle J=0}$.
• 127 These solutions are independent of each other.
• 128 The function ${\displaystyle \Theta (t,t')}$. Conjugate points.
• 129 The relative position of conjugate points on a curve.
• 130 Graphical representation of the ratio ${\displaystyle {\frac {u_{1}}{u_{2}}}}$.
• 131 Summary.
• 132 Points of intersection of the curves ${\displaystyle G=0}$ and ${\displaystyle \delta G=0}$.
• 133 The second variation when two conjugate points are the limits of integration, and when a pair of conjugate points are situated between these limits.

Article 122.
The condition given in the preceding Chapter is not sufficient to establish the existence of a maximum or a minimum. Under the assumption that ${\displaystyle F_{1}}$ is neither zero nor infinite within the interval ${\displaystyle t_{0}\ldots t_{1}}$, suppose that two functions ${\displaystyle \phi _{1}(t)}$ and ${\displaystyle \phi _{2}(t)}$ can be found which satisfy the differential equation 13) of the last Chapter, so that, consequently,

${\displaystyle u=c_{1}\phi _{1}(t)+c_{2}\phi _{2}(t)}$

is the general solution of ${\displaystyle J=0}$. Then, even if within the limits of integration it can be shown that ${\displaystyle u}$ is not infinite, it may still happen that, however the constants ${\displaystyle c_{1}}$ and ${\displaystyle c_{2}}$ be chosen, the function ${\displaystyle u}$ vanishes, so that the transformation of the ${\displaystyle v}$-equation into the ${\displaystyle u}$-equation is not admissible ; consequently nothing can be determined regarding the appearance of a maximum or a minimum. We are thus led again to the necessity of studying more closely the function ${\displaystyle u}$ defined by the equation ${\displaystyle J=0}$, in order that we may determine under what conditions this function does not vanish within the interval ${\displaystyle t_{0}\ldots t_{1}}$.
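The situation may be illustrated by the simplest example (added here for concreteness; it is not in the original text). For the integrand ${\displaystyle F={\sqrt {x'^{2}+y'^{2}}}}$ of the shortest-line problem one finds ${\displaystyle F_{1}=(x'^{2}+y'^{2})^{-3/2}}$, so that, the arc itself being taken as parameter, ${\displaystyle F_{1}=1}$ and ${\displaystyle F_{2}=0}$:

```latex
% Shortest line, arc as parameter t:  F_1 = 1, F_2 = 0, and the equation J = 0,
%   F_2 u - d/dt ( F_1 du/dt ) = 0 ,
% reduces to
\frac{\mathrm{d}^{2}u}{\mathrm{d}t^{2}} = 0 ,
\qquad\text{whence}\qquad
u = c_{1} + c_{2}t .
% Such a u vanishes for at most one value of t, so that c_1 and c_2 can always
% be chosen to keep u from vanishing within t_0 ... t_1 .
```

For the straight line, then, the difficulty described above never arises.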

It is seen that if ${\displaystyle u}$ is a solution of the equation ${\displaystyle J=0}$ and we write

${\displaystyle u_{1}=-F_{1}u'\qquad }$ [see Art. 118, equation 11)],

then

${\displaystyle v={\frac {u_{1}}{u}}=-F_{1}{\frac {u'}{u}}}$

is a solution of the equation in ${\displaystyle v}$.

The integral 10) of the last Chapter may be then written

${\displaystyle \delta ^{2}I=\int _{t_{0}}^{t_{1}}F_{1}w^{2}\left({\frac {w'}{w}}-{\frac {u'}{u}}\right)^{2}~{\text{d}}t+\left[R+w^{2}F_{1}{\frac {u'}{u}}\right]_{t_{0}}^{t_{1}}}$

From this we see that if ${\displaystyle {\frac {w'}{w}}={\frac {u'}{u}}}$, or ${\displaystyle w=Cu}$, then the second variation is free from the sign of integration; in other words, the second variation is free from the integral sign if we make any deformation normal to the curve [Art. 113, equation 5)] such that the displacement is proportional to the value of any integral of the differential equation ${\displaystyle J=0}$.

Again, if we deform any one of the family of curves ${\displaystyle G=0}$ into a neighboring curve belonging to the family, we have an expression which is also free from the integral sign. For (see Arts. 79 and 81), if we write ${\displaystyle p={\sqrt {x'^{2}+y'^{2}}}={\frac {{\text{d}}s}{{\text{d}}t}}}$, we have

${\displaystyle \delta F=Gpw_{N}+{\frac {\text{d}}{{\text{d}}t}}\left(\xi {\frac {\partial F}{\partial x'}}+\eta {\frac {\partial F}{\partial y'}}\right)}$,

and consequently,

${\displaystyle \delta ^{2}F=pw_{N}\delta G+G\delta (pw_{N})+{\frac {\text{d}}{{\text{d}}t}}\delta \left(\xi {\frac {\partial F}{\partial x'}}+\eta {\frac {\partial F}{\partial y'}}\right)}$.

Hence, if ${\displaystyle \delta G=0}$, integration from ${\displaystyle t_{0}}$ to ${\displaystyle t_{1}}$ gives here also

${\displaystyle \delta ^{2}I=\left[\delta \left(\xi {\frac {\partial F}{\partial x'}}+\eta {\frac {\partial F}{\partial y'}}\right)\right]_{t_{0}}^{t_{1}}}$.

It may be shown as follows that the curve ${\displaystyle \delta G=0}$ is one of the family of curves ${\displaystyle G=0}$. The curves belonging to the family of curves ${\displaystyle G=0}$ are given (Art. 90) by

${\displaystyle x=\phi (t,\alpha ,\beta ),\qquad y=\psi (t,\alpha ,\beta )}$,

where ${\displaystyle \alpha }$ and ${\displaystyle \beta }$ are arbitrary constants. We have a neighboring curve of the family when for ${\displaystyle \alpha }$, ${\displaystyle \beta }$ we write ${\displaystyle \alpha +\epsilon \alpha '}$, ${\displaystyle \beta +\epsilon \beta '}$. Then the function ${\displaystyle G}$ becomes

${\displaystyle G+\Delta G=G+\epsilon \delta G+\epsilon ^{2}(~~)+\ldots }$

Hence, when ${\displaystyle \epsilon }$ is taken very small, it follows that

${\displaystyle x=\phi (t,\alpha +\epsilon \alpha ',\beta +\epsilon \beta '),\qquad y=\psi (t,\alpha +\epsilon \alpha ',\beta +\epsilon \beta ')}$

is a solution of ${\displaystyle \delta G=0}$; for it satisfies ${\displaystyle G+\Delta G=0}$, while the original curve satisfies ${\displaystyle G=0}$, so that ${\displaystyle \Delta G=0}$, and consequently, to the first order in ${\displaystyle \epsilon }$, ${\displaystyle \delta G=0}$.

Now we may always choose normal displacements ${\displaystyle {\frac {w}{p}}}$ which will take us from one of the curves ${\displaystyle G=0}$ to a neighboring curve ${\displaystyle \delta G=0}$. From this it appears that there is a relation between the differential equations ${\displaystyle \delta G=0}$ and ${\displaystyle J=0}$.

Article 123.
In this connection a discovery made by Jacobi (Crelle's Journal, bd. 17, p. 68) is of great use. He showed that when the differential equation ${\displaystyle G=0}$ has been integrated, the integration of the differential equation ${\displaystyle J=0}$ is also accomplished. We are then able to derive the general expression for ${\displaystyle u}$, and may determine completely whether and when ${\displaystyle u=0}$. We shall next derive the general solution of the equation ${\displaystyle J=0}$, it being presupposed that the differential equation ${\displaystyle G=0}$ admits of a general solution. We derived the first variation in the form

${\displaystyle \delta I=\int _{t_{0}}^{t_{1}}Gw~{\text{d}}t+{\Big [}~~{\Big ]}_{t_{0}}^{t_{1}}}$.

We may form the second variation from this expression by causing first ${\displaystyle G}$ alone to vary, then ${\displaystyle w}$ alone, and adding the results.

It follows that

${\displaystyle \delta ^{2}I=\int _{t_{0}}^{t_{1}}(\delta Gw+G\delta w)~{\text{d}}t+{\Big [}~~{\Big ]}_{t_{0}}^{t_{1}}}$. ${\displaystyle \qquad }$ (i)

Since the differential equation ${\displaystyle G=0}$ is supposed satisfied, we have

${\displaystyle \delta ^{2}I=\int _{t_{0}}^{t_{1}}\delta Gw~{\text{d}}t+{\Big [}~~{\Big ]}_{t_{0}}^{t_{1}}}$. ${\displaystyle \qquad }$ (a)

Further, we have

${\displaystyle G_{1}={\frac {\partial F}{\partial x}}-{\frac {\text{d}}{{\text{d}}t}}\left({\frac {\partial F}{\partial x'}}\right)}$, ${\displaystyle \qquad G_{2}={\frac {\partial F}{\partial y}}-{\frac {\text{d}}{{\text{d}}t}}\left({\frac {\partial F}{\partial y'}}\right)}$,

and also

${\displaystyle G_{1}=y'G}$, ${\displaystyle \qquad G_{2}=-x'G}$.

When in the expression for ${\displaystyle G_{1}}$ the substitutions

${\displaystyle x\rightarrow x+\epsilon \xi }$, ${\displaystyle \qquad y\rightarrow y+\epsilon \eta }$

are made, it becomes

${\displaystyle G_{1}+\Delta G_{1}=(y'+\epsilon \eta ')(G+\Delta G)}$;

and since

${\displaystyle \Delta G_{1}=\epsilon \delta G_{1}+\epsilon ^{2}(~~)+\cdots }$,
${\displaystyle \Delta G=\epsilon \delta G+\epsilon ^{2}(~~)+\cdots }$,

it follows that

${\displaystyle \delta G_{1}=y'\delta G+G\eta '}$,

and similarly

${\displaystyle \delta G_{2}=-x'\delta G-G\xi '}$.

Article 124.
When ${\displaystyle G}$ is eliminated from the last two expressions, we have

${\displaystyle \delta G_{1}\xi '+\delta G_{2}\eta '=(y'\xi '-x'\eta ')\delta G}$. ${\displaystyle \qquad (ii)}$

On the other hand, it is seen that

${\displaystyle \delta G_{1}={\frac {\partial ^{2}F}{\partial x^{2}}}\xi +{\frac {\partial ^{2}F}{\partial x\partial y}}\eta +{\frac {\partial ^{2}F}{\partial x\partial x'}}\xi '+{\frac {\partial ^{2}F}{\partial x\partial y'}}\eta '-{\frac {\text{d}}{{\text{d}}t}}\left({\frac {\partial ^{2}F}{\partial x\partial x'}}\xi +{\frac {\partial ^{2}F}{\partial x'^{2}}}\xi '+{\frac {\partial ^{2}F}{\partial x'\partial y}}\eta +{\frac {\partial ^{2}F}{\partial x'\partial y'}}\eta '\right)}$,

an expression which, owing to 2), 3) and 4) of the last Chapter, may be written in the following form :

${\displaystyle \delta G_{1}={\frac {\partial ^{2}F}{\partial x^{2}}}\xi +{\frac {\partial ^{2}F}{\partial x\partial y}}\eta +{\frac {\partial ^{2}F}{\partial x\partial x'}}\xi '+{\frac {\partial ^{2}F}{\partial x\partial y'}}\eta '-{\frac {{\text{d}}L}{{\text{d}}t}}\xi -{\frac {{\text{d}}M}{{\text{d}}t}}\eta -L\xi '-M\eta '-{\frac {\text{d}}{{\text{d}}t}}\left(F_{1}y'{\frac {{\text{d}}w}{{\text{d}}t}}\right)}$;

and if we take into consideration 3), 4), 6) and 7) of the last Chapter, we may write the above result in the form:

${\displaystyle \delta G_{1}=-y'{\frac {\text{d}}{{\text{d}}t}}\left(F_{1}{\frac {{\text{d}}w}{{\text{d}}t}}\right)+y'F_{2}w}$.

In an analogous manner, we have

${\displaystyle \delta G_{2}=x'{\frac {\text{d}}{{\text{d}}t}}\left(F_{1}{\frac {{\text{d}}w}{{\text{d}}t}}\right)-x'F_{2}w}$.

When these values are substituted in ${\displaystyle (ii)}$, we have

${\displaystyle \delta G=-{\frac {\text{d}}{{\text{d}}t}}\left(F_{1}{\frac {{\text{d}}w}{{\text{d}}t}}\right)+F_{2}w}$. ${\displaystyle \qquad (b)}$

Hence from (a) we have

${\displaystyle \delta ^{2}I=\int _{t_{0}}^{t_{1}}\left(-{\frac {\text{d}}{{\text{d}}t}}\left(F_{1}{\frac {{\text{d}}w}{{\text{d}}t}}\right)w+F_{2}w^{2}\right)~{\text{d}}t+{\Big [}~~{\Big ]}_{t_{0}}^{t_{1}}}$.

By the previous method we found the second variation to be [see formula 8) of the last Chapter]

${\displaystyle \delta ^{2}I=\int _{t_{0}}^{t_{1}}\left(F_{1}\left({\frac {{\text{d}}w}{{\text{d}}t}}\right)^{2}+F_{2}w^{2}\right)~{\text{d}}t+{\Big [}~~{\Big ]}_{t_{0}}^{t_{1}}}$.

These two expressions can differ only by terms which are free from the integral sign. The difference of the integrals is

${\displaystyle D=\int _{t_{0}}^{t_{1}}-{\frac {\text{d}}{{\text{d}}t}}\left(F_{1}{\frac {{\text{d}}w}{{\text{d}}t}}\right)w~{\text{d}}t-\int _{t_{0}}^{t_{1}}F_{1}\left({\frac {{\text{d}}w}{{\text{d}}t}}\right)^{2}~{\text{d}}t}$;

but since

${\displaystyle \int {\frac {\text{d}}{{\text{d}}t}}\left(F_{1}{\frac {{\text{d}}w}{{\text{d}}t}}\right)w~{\text{d}}t=wF_{1}{\frac {{\text{d}}w}{{\text{d}}t}}-\int F_{1}\left({\frac {{\text{d}}w}{{\text{d}}t}}\right)^{2}~{\text{d}}t}$,

it is seen that

${\displaystyle D=\left[-wF_{1}{\frac {{\text{d}}w}{{\text{d}}t}}\right]_{t_{0}}^{t_{1}}}$.

The formula (b) is

${\displaystyle \delta G=F_{2}w-{\frac {\text{d}}{{\text{d}}t}}\left(F_{1}{\frac {{\text{d}}w}{{\text{d}}t}}\right)}$.

When we compare this with ${\displaystyle 12^{a})}$ of the preceding Chapter, the differential equation for ${\displaystyle u}$, viz.:

${\displaystyle 0=F_{2}u-{\frac {\text{d}}{{\text{d}}t}}\left(F_{1}{\frac {{\text{d}}u}{{\text{d}}t}}\right)}$,

it is seen that as soon as we find a quantity ${\displaystyle w}$ for which ${\displaystyle \delta G=0}$, we have a corresponding integral of the differential equation for ${\displaystyle u}$.

Article 125.
The total variation of ${\displaystyle G}$ is

${\displaystyle \Delta G=G\left(x+\epsilon \xi _{1}+{\frac {\epsilon ^{2}}{2!}}\xi _{2}+\cdots ,y+\epsilon \eta _{1}+{\frac {\epsilon ^{2}}{2!}}\eta _{2}+\cdots ,x'+\epsilon \xi '_{1}+{\frac {\epsilon ^{2}}{2!}}\xi '_{2}+\cdots ,y'+\epsilon \eta _{1}'+{\frac {\epsilon ^{2}}{2!}}\eta _{2}'+\cdots ,x''+\epsilon \xi _{1}''+{\frac {\epsilon ^{2}}{2!}}\xi ''_{2}+\cdots ,y''+\epsilon \eta _{1}''+{\frac {\epsilon ^{2}}{2!}}\eta _{2}''+\cdots \right)-G(x,y,x',y',x'',y'')=\epsilon \delta G+{\frac {\epsilon ^{2}}{2!}}\delta ^{2}G+\cdots }$,

where ${\displaystyle \delta G}$, as found in the preceding article, has the value

${\displaystyle \delta G=-{\frac {\text{d}}{{\text{d}}t}}\left(F_{1}{\frac {{\text{d}}w}{{\text{d}}t}}\right)+F_{2}w}$.

Suppose that the equation ${\displaystyle G=0}$ is integrable, and let

${\displaystyle x=\phi (t,\alpha ,\beta )\qquad y=\psi (t,\alpha ,\beta )}$

be general expressions which satisfy it, where ${\displaystyle \alpha }$, ${\displaystyle \beta }$ are arbitrary constants of integration. The differential equation ${\displaystyle G=0}$ will still be satisfied, if we suppose that ${\displaystyle \alpha }$ and ${\displaystyle \beta }$, having arbitrarily fixed values, are increased by two arbitrarily small quantities ${\displaystyle \epsilon \delta \alpha }$ and ${\displaystyle \epsilon \delta \beta }$; that is, the functions

${\displaystyle {\bar {x}}=\phi (t,\alpha +\epsilon \delta \alpha ,\beta +\epsilon \delta \beta )=\phi (t,\alpha ,\beta )+\epsilon \left({\frac {\partial \phi }{\partial \alpha }}\delta \alpha +{\frac {\partial \phi }{\partial \beta }}\delta \beta \right)+\epsilon ^{2}(~~)}$,
${\displaystyle {\bar {y}}=\psi (t,\alpha +\epsilon \delta \alpha ,\beta +\epsilon \delta \beta )=\psi (t,\alpha ,\beta )+\epsilon \left({\frac {\partial \psi }{\partial \alpha }}\delta \alpha +{\frac {\partial \psi }{\partial \beta }}\delta \beta \right)+\epsilon ^{2}(~~)}$

are also solutions of ${\displaystyle G=0}$.

Article 126.
Now choose the variation of the curve (Art. 111) in such a way that

${\displaystyle {\bar {x}}=x+\epsilon \xi _{1}+{\frac {\epsilon ^{2}}{2!}}\xi _{2}+\cdots \qquad {\bar {y}}=y+\epsilon \eta _{1}+{\frac {\epsilon ^{2}}{2!}}\eta _{2}+\cdots }$;

and, whatever be the values of ${\displaystyle \delta \alpha }$ and ${\displaystyle \delta \beta }$, we determine ${\displaystyle \xi _{1}}$,${\displaystyle \xi _{2}}$,${\displaystyle \eta _{1}}$,${\displaystyle \eta _{2}}$, etc., by the relations:

${\displaystyle \xi _{1}={\frac {\partial \phi }{\partial \alpha }}\delta \alpha +{\frac {\partial \phi }{\partial \beta }}\delta \beta \qquad \eta _{1}={\frac {\partial \psi }{\partial \alpha }}\delta \alpha +{\frac {\partial \psi }{\partial \beta }}\delta \beta }$. ${\displaystyle \qquad (iii)}$

For all values of ${\displaystyle \alpha }$ and ${\displaystyle \beta }$ the differential equation ${\displaystyle G=0}$ is satisfied; hence the values of ${\displaystyle \xi _{1}}$, ${\displaystyle \eta _{1}}$, etc., just written, when substituted in ${\displaystyle \Delta G}$ above, must make the right-hand side of that equation vanish identically, and consequently also ${\displaystyle \delta G}$. Hence the corresponding normal displacement ${\displaystyle w=y'\xi _{1}-x'\eta _{1}}$ transforms one of the system of curves ${\displaystyle G=0}$ into another of the same system.

Since ${\displaystyle \delta \alpha }$ and ${\displaystyle \delta \beta }$ are entirely arbitrary, the coefficients of ${\displaystyle \delta \alpha }$ and ${\displaystyle \delta \beta }$ must each vanish in the expansion of ${\displaystyle \Delta G}$ above. Owing to (iii), ${\displaystyle w=y'\xi _{1}-x'\eta _{1}}$ becomes

${\displaystyle w=\left(y'{\frac {\partial \phi }{\partial \alpha }}-x'{\frac {\partial \psi }{\partial \alpha }}\right)\delta \alpha +\left(y'{\frac {\partial \phi }{\partial \beta }}-x'{\frac {\partial \psi }{\partial \beta }}\right)\delta \beta }$.

Writing this value of ${\displaystyle w}$ in the equation ${\displaystyle \delta G=0}$, we have

${\displaystyle -{\frac {\text{d}}{{\text{d}}t}}\left(F_{1}{\frac {\text{d}}{{\text{d}}t}}\left[\left(y'{\frac {\partial \phi }{\partial \alpha }}-x'{\frac {\partial \psi }{\partial \alpha }}\right)\delta \alpha +\left(y'{\frac {\partial \phi }{\partial \beta }}-x'{\frac {\partial \psi }{\partial \beta }}\right)\delta \beta \right]\right)+F_{2}\left[\left(y'{\frac {\partial \phi }{\partial \alpha }}-x'{\frac {\partial \psi }{\partial \alpha }}\right)\delta \alpha +\left(y'{\frac {\partial \phi }{\partial \beta }}-x'{\frac {\partial \psi }{\partial \beta }}\right)\delta \beta \right]=0}$.

By equating the coefficients of ${\displaystyle \delta \alpha }$ and ${\displaystyle \delta \beta }$ respectively to zero, we have the two equations:

${\displaystyle 1)\qquad -{\frac {\text{d}}{{\text{d}}t}}\left(F_{1}{\frac {\text{d}}{{\text{d}}t}}\theta _{v}(t)\right)+F_{2}\theta _{v}(t)=0,\qquad (v=1,2)}$

where, for brevity, we have written

${\displaystyle {\frac {\partial \phi (t)}{\partial t}}=\phi '(t),\quad {\frac {\partial \phi }{\partial \alpha }}=\phi _{1}(t),\quad {\frac {\partial \phi }{\partial \beta }}=\phi _{2}(t),\qquad {\frac {\partial \psi (t)}{\partial t}}=\psi '(t),\quad {\frac {\partial \psi }{\partial \alpha }}=\psi _{1}(t),\quad {\frac {\partial \psi }{\partial \beta }}=\psi _{2}(t)}$,
${\displaystyle 2)\qquad \theta _{1}(t)=\psi '(t)\phi _{1}(t)-\phi '(t)\psi _{1}(t),\qquad \theta _{2}(t)=\psi '(t)\phi _{2}(t)-\phi '(t)\psi _{2}(t)}$.

It is seen at once that ${\displaystyle \theta _{1}(t)}$ and ${\displaystyle \theta _{2}(t)}$ are solutions of the differential equation

${\displaystyle {\frac {\text{d}}{{\text{d}}t}}\left(F_{1}{\frac {{\text{d}}u}{{\text{d}}t}}\right)-F_{2}u=0}$.

Hence it is seen that the general solution of the differential equation for ${\displaystyle u}$ is had from the integrals of the differential equation ${\displaystyle G=0}$, through simple differentiation.
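The result may be verified on the straight lines ${\displaystyle y=\alpha x+\beta }$, the extremals of ${\displaystyle \int {\sqrt {1+y'^{2}}}~{\text{d}}x}$ (an example supplied here, with ${\displaystyle t=x}$):

```latex
% Family of extremals:  phi(t,a,b) = t ,  psi(t,a,b) = a t + b .
\phi' = 1,\quad \phi_{1} = \frac{\partial\phi}{\partial\alpha} = 0,\quad
\phi_{2} = \frac{\partial\phi}{\partial\beta} = 0,\qquad
\psi' = \alpha,\quad \psi_{1} = t,\quad \psi_{2} = 1 ;
% hence
\theta_{1}(t) = \psi'\phi_{1} - \phi'\psi_{1} = -t ,\qquad
\theta_{2}(t) = \psi'\phi_{2} - \phi'\psi_{2} = -1 .
% Along such a line F_1 = (1+\alpha^2)^{-3/2} is constant and F_2 = 0, so that
% the differential equation for u reduces to u'' = 0, which -t and -1
% evidently satisfy, and they are independent.
```

The general solution ${\displaystyle u=-c_{1}t-c_{2}}$ accordingly vanishes for at most one value of ${\displaystyle t}$.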

Article 127.
We have next to prove that the two solutions ${\displaystyle \theta _{1}(t)}$ and ${\displaystyle \theta _{2}(t)}$ are independent of each other. In order to make this proof as simple as possible, let ${\displaystyle x}$ be written for the arbitrary quantity ${\displaystyle t}$.

Then the expressions ${\displaystyle x=\phi (t,\alpha ,\beta )}$, ${\displaystyle y=\psi (t,\alpha ,\beta )}$, etc., become

${\displaystyle x=x,\qquad y=\psi (x,\alpha ,\beta ),}$
${\displaystyle \phi '=1,\qquad \phi _{1}=0,\phi _{2}=0,\qquad \psi '={\frac {{\text{d}}y}{{\text{d}}x}},}$
${\displaystyle \theta _{1}=-\psi _{1},\theta _{2}=-\psi _{2}}$.

If ${\displaystyle \theta _{1}}$ and ${\displaystyle \theta _{2}}$ are linearly dependent upon each other, we must have

${\displaystyle \theta _{2}={\text{const.}}\,\theta _{1}}$,

from which it follows, at once, that

${\displaystyle \theta _{1}\theta _{2}'-\theta _{2}\theta _{1}'=0}$,

where the accents denote differentiation with respect to ${\displaystyle x}$; or,

${\displaystyle \psi _{1}\psi _{2}'-\psi _{2}\psi _{1}'=0}$.

On the other hand, ${\displaystyle y=\psi (x,\alpha ,\beta )}$ is the complete solution of the differential equation, which arises out of ${\displaystyle G_{2}=-x'G=0}$, when ${\displaystyle x}$ is written for ${\displaystyle t}$; that is, of

${\displaystyle {\frac {\text{d}}{{\text{d}}x}}\left({\frac {\partial F}{\partial {\frac {{\text{d}}y}{{\text{d}}x}}}}\right)-{\frac {\partial F}{\partial y}}=0}$;

but here ${\displaystyle \alpha }$ and ${\displaystyle \beta }$ are two arbitrary independent constants, and consequently ${\displaystyle \psi }$ and ${\displaystyle \psi '={\frac {{\text{d}}\psi }{{\text{d}}x}}}$ are independent of each other with respect to ${\displaystyle \alpha }$ and ${\displaystyle \beta }$, so that the determinant

${\displaystyle \psi _{1}\psi _{2}'-\psi _{2}\psi _{1}'}$

is different from zero. Consequently ${\displaystyle \theta _{1}}$ and ${\displaystyle \theta _{2}}$ are independent of each other, since the contrary assumption stands in contradiction to the result just established. Hence the general solution of the differential equation ${\displaystyle J=0}$ is of the form

${\displaystyle u=c_{1}\theta _{1}(t)+c_{2}\theta _{2}(t)}$,

where ${\displaystyle c_{1}}$ and ${\displaystyle c_{2}}$ are arbitrary constants.

Article 128.
Following the methods of Weierstrass, we have just proved the assertion of Jacobi; for, as soon as we have the complete integral of ${\displaystyle G=0}$, it is easy to express the complete solution of the differential equation ${\displaystyle J=0}$.

The constants ${\displaystyle c_{1}}$ and ${\displaystyle c_{2}}$ may be so determined that ${\displaystyle u}$ vanishes on a definite position ${\displaystyle t'}$, which may lie somewhere on the curve before we get to ${\displaystyle t_{1}}$. This may be effected by writing

${\displaystyle c_{1}=-\theta _{2}(t'),\qquad c_{2}=\theta _{1}(t')}$.

The solution of the equation ${\displaystyle J=0}$ becomes

${\displaystyle 3)\qquad u=\theta _{1}(t')\theta _{2}(t)-\theta _{2}(t')\theta _{1}(t)=\Theta (t,t')}$.

It may turn out that ${\displaystyle \Theta (t,t')}$ vanishes for no other value of ${\displaystyle t}$; but it may also happen that there are other positions than ${\displaystyle t'}$ at which ${\displaystyle \Theta (t,t')}$ becomes zero. If ${\displaystyle t''}$ is the first zero position of ${\displaystyle \Theta (t,t')}$ which follows ${\displaystyle t'}$, then ${\displaystyle t''}$ is called the conjugate point to ${\displaystyle t'}$.
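A case in which conjugate points actually occur may serve to fix the definition (the example is added here; suppose that along the curve ${\displaystyle F_{1}=1}$ and ${\displaystyle F_{2}=-1}$, as happens, for instance, for arcs of great circles on the unit sphere referred to the arc ${\displaystyle t}$):

```latex
% With F_1 = 1, F_2 = -1 the equation J = 0 reads  u'' + u = 0 , and we may take
\theta_{1}(t) = \cos t ,\qquad \theta_{2}(t) = \sin t .
% Formula 3) then gives
\Theta(t,t') = \cos t' \sin t - \sin t' \cos t = \sin(t - t') ,
% whose zeros following t' are t' + \pi, t' + 2\pi, \dots ; the first of them,
%   t'' = t' + \pi ,
% is the conjugate point to t'.
```

On the sphere this expresses the familiar fact that the point conjugate to a point of a great circle is its antipode, at arc-distance ${\displaystyle \pi }$.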

Since ${\displaystyle t'}$ has been arbitrarily chosen, we may associate with every point of the curve a second point, its conjugate. This being premised, we come to the following theorem, also due to Jacobi :

If within the interval ${\displaystyle t_{0}\ldots t_{1}}$ there are no two points which are conjugate to each other in the above sense, then it is possible so to determine ${\displaystyle u}$ that it satisfies the differential equation ${\displaystyle J=0}$, and nowhere vanishes within the interval ${\displaystyle t_{0}\ldots t_{1}}$.

Article 129.
Let the point ${\displaystyle t=t'}$ be a zero position of the function

${\displaystyle u=\Theta (t,t')}$,

and let ${\displaystyle t''}$ be a conjugate point to ${\displaystyle t'}$; then ${\displaystyle \Theta (t,t')}$ will not again vanish within the interval ${\displaystyle t'\ldots t''}$. Take in the neighborhood of the point ${\displaystyle t'}$ a point ${\displaystyle t'+\tau }$, where ${\displaystyle \tau >0}$; then the point which is conjugate to ${\displaystyle t'+\tau }$ can lie only on the other side of ${\displaystyle t''}$. This may be shown as follows:

If ${\displaystyle u=\Theta (t,t')}$ is a solution of the equation

${\displaystyle F_{1}{\frac {{\text{d}}^{2}u}{{\text{d}}t^{2}}}+{\frac {{\text{d}}F_{1}}{{\text{d}}t}}{\frac {{\text{d}}u}{{\text{d}}t}}-F_{2}u=0}$,

then is

${\displaystyle {\bar {u}}=\Theta (t,t'+\tau )}$

a solution of the same equation ; that is, of

${\displaystyle F_{1}{\frac {{\text{d}}^{2}{\bar {u}}}{{\text{d}}t^{2}}}+{\frac {{\text{d}}F_{1}}{{\text{d}}t}}{\frac {{\text{d}}{\bar {u}}}{{\text{d}}t}}-F_{2}{\bar {u}}=0}$,

since ${\displaystyle {\bar {u}}}$ differs from ${\displaystyle u}$ only through another choice of the arbitrary constants ${\displaystyle c_{1}}$ and ${\displaystyle c_{2}}$.

If ${\displaystyle \tau }$ is chosen sufficiently small, then ${\displaystyle \Theta (t'+\tau ,t')}$ is different from zero, and consequently also ${\displaystyle \Theta (t',t'+\tau )\neq 0}$.

Eliminate ${\displaystyle F_{2}}$ from the two equations above, and we have

${\displaystyle 4)\qquad F_{1}\left(u{\frac {{\text{d}}^{2}{\bar {u}}}{{\text{d}}t^{2}}}-{\bar {u}}{\frac {{\text{d}}^{2}u}{{\text{d}}t^{2}}}\right)+{\frac {{\text{d}}F_{1}}{{\text{d}}t}}\left(u{\frac {{\text{d}}{\bar {u}}}{{\text{d}}t}}-{\bar {u}}{\frac {{\text{d}}u}{{\text{d}}t}}\right)=0}$.

Now write

${\displaystyle 5)\qquad u{\frac {{\text{d}}{\bar {u}}}{{\text{d}}t}}-{\bar {u}}{\frac {{\text{d}}u}{{\text{d}}t}}=v}$,

and the above equation becomes

${\displaystyle 6)\qquad {\frac {{\text{d}}v}{v}}=-{\frac {{\text{d}}F_{1}}{F_{1}}}}$,

which, when integrated, is

${\displaystyle 7)\qquad v=u{\frac {{\text{d}}{\bar {u}}}{{\text{d}}t}}-{\bar {u}}{\frac {{\text{d}}u}{{\text{d}}t}}={\frac {C}{F_{1}}}}$.
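The passage from 4) to 7) may be made in a single step by observing that 4) asserts the vanishing of an exact derivative (a remark added here):

```latex
% With  v = u \bar{u}' - \bar{u} u'  we have  v' = u \bar{u}'' - \bar{u} u'' ,
% so that 4) states precisely
\frac{\mathrm{d}}{\mathrm{d}t}\left(F_{1}v\right)
  = F_{1}\left(u\,\frac{\mathrm{d}^{2}\bar{u}}{\mathrm{d}t^{2}}
      - \bar{u}\,\frac{\mathrm{d}^{2}u}{\mathrm{d}t^{2}}\right)
  + \frac{\mathrm{d}F_{1}}{\mathrm{d}t}\left(u\,\frac{\mathrm{d}\bar{u}}{\mathrm{d}t}
      - \bar{u}\,\frac{\mathrm{d}u}{\mathrm{d}t}\right) = 0 ,
% whence  F_1 v = C , which is formula 7).
```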

The constant ${\displaystyle C}$ in this expression cannot vanish, for, in that case,

${\displaystyle u={\text{const.}}{\bar {u}}}$,

or

${\displaystyle \Theta (t,t')={\text{const.}}\Theta (t,t'+\tau )}$.

Since, however, ${\displaystyle \Theta (t,t')}$ vanishes for ${\displaystyle t=t'}$, it results from the above that ${\displaystyle \Theta (t',t'+\tau )=0}$, which is contrary to the hypothesis, and consequently ${\displaystyle C}$ cannot vanish.

It is further assumed that ${\displaystyle F_{1}}$ does not change its sign or become zero within the interval ${\displaystyle t_{0}\ldots t_{1}}$. If ${\displaystyle F_{1}}$ vanishes without a transition from the positive to the negative or vice versa within the stretch ${\displaystyle t_{0}\ldots t_{1}}$ then in general no further deductions can be drawn, and a special investigation has to be made for each particular case.

In the first case, however, ${\displaystyle v}$ has a finite value, and the equation 7), when divided through by ${\displaystyle u^{2}}$ becomes

${\displaystyle {\frac {u{\frac {{\text{d}}{\bar {u}}}{{\text{d}}t}}-{\bar {u}}{\frac {{\text{d}}u}{{\text{d}}t}}}{u^{2}}}={\frac {{\text{d}}{\frac {\bar {u}}{u}}}{{\text{d}}t}}={\frac {C}{F_{1}u^{2}}}}$,

an expression, which, when integrated, is

${\displaystyle {\bar {u}}=Cu\int _{t'+\tau }^{t}{\frac {{\text{d}}t}{F_{1}u^{2}}}}$.

Since the function ${\displaystyle u}$ does not vanish between ${\displaystyle t'}$ and ${\displaystyle t''}$, it follows from the last expression that ${\displaystyle {\bar {u}}}$ cannot vanish between the limits ${\displaystyle t'+\tau }$ and ${\displaystyle t''}$. Accordingly, if there is a point conjugate to ${\displaystyle t'+\tau }$, it cannot lie before ${\displaystyle t''}$. If, therefore, we choose a point ${\displaystyle t'''}$ before ${\displaystyle t''}$ and as close to it as we wish, then ${\displaystyle {\bar {u}}}$ will certainly not vanish within the interval ${\displaystyle t'+\tau \ldots t'''}$.
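The conclusion may be tested on the special case ${\displaystyle F_{1}=1}$, ${\displaystyle F_{2}=-1}$ (added here for verification), in which the equation ${\displaystyle J=0}$ is ${\displaystyle u''+u=0}$ and ${\displaystyle \Theta (t,t')=\sin(t-t')}$:

```latex
% Here  u = \sin(t-t') ,  \bar{u} = \sin(t-t'-\tau) , and the constant of 7) is
%   C = u \bar{u}' - \bar{u} u' = \sin\tau .
% The formula for \bar{u} gives
\bar{u} = \sin\tau \,\sin(t-t') \int_{t'+\tau}^{t} \frac{\mathrm{d}s}{\sin^{2}(s-t')}
        = \sin\tau \,\sin(t-t') \left[ \cot\tau - \cot(t-t') \right]
        = \sin(t-t'-\tau) ,
% as it should.  The first zero of \bar{u} after t'+\tau is t'+\tau+\pi, which
% lies beyond t'' = t'+\pi : the conjugate point of t'+\tau falls beyond t''.
```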

If ${\displaystyle t'}$ is a point situated immediately before ${\displaystyle t_{0}}$, and if we determine the point ${\displaystyle t''}$ conjugate to ${\displaystyle t'}$, and choose a point ${\displaystyle t_{1}}$ before ${\displaystyle t''}$ and as near to it as we wish, then from the preceding it is clear that no points conjugate to each other lie within the interval ${\displaystyle t_{0}\ldots t_{1}}$, the boundaries excluded. We may then, as shown above, find a function ${\displaystyle u}$, which satisfies the differential equation ${\displaystyle J=0}$ and which vanishes neither on the limits nor within the interval ${\displaystyle t_{0}\ldots t_{1}}$. The transformation of Art. 117 is therefore admissible, and the sign of ${\displaystyle \delta ^{2}I}$ depends only upon the sign of ${\displaystyle F_{1}}$.

Article 130.
We may investigate a little more closely the relation of Art. 120, where

${\displaystyle u_{2}{\frac {{\text{d}}u_{1}}{{\text{d}}t}}-u_{1}{\frac {{\text{d}}u_{2}}{{\text{d}}t}}={\frac {C}{F_{1}}}}$.

In the interval under consideration, boundaries included, we assume that ${\displaystyle F_{1}}$ does not become zero or infinite, and consequently retains the same sign. Further, the constant ${\displaystyle C}$ has always the same value and is different from zero, since ${\displaystyle u_{1}}$ and ${\displaystyle u_{2}}$ are linearly independent.

It follows at once that ${\displaystyle {\frac {{\text{d}}u_{1}}{{\text{d}}t}}}$ cannot be zero at the same time that ${\displaystyle u_{1}}$ is zero; for then ${\displaystyle C}$ would be zero contrary to our hypothesis.

Owing to the form

${\displaystyle {\frac {\text{d}}{{\text{d}}t}}\left({\frac {u_{1}}{u_{2}}}\right)={\frac {1}{u_{2}^{2}}}{\frac {C}{F_{1}}}}$,

it is clear that ${\displaystyle {\frac {\text{d}}{{\text{d}}t}}\left({\frac {u_{1}}{u_{2}}}\right)}$ has the same sign as ${\displaystyle {\frac {C}{F_{1}}}}$. We may take this sign positive, since otherwise owing to the expression

${\displaystyle u_{1}{\frac {{\text{d}}u_{2}}{{\text{d}}t}}-u_{2}{\frac {{\text{d}}u_{1}}{{\text{d}}t}}=-{\frac {C}{F_{1}}}}$

we would have ${\displaystyle {\frac {\text{d}}{{\text{d}}t}}\left({\frac {u_{2}}{u_{1}}}\right)}$ positive. We may assume, then, that the indices have been so placed upon the ${\displaystyle u}$'s that ${\displaystyle {\frac {u_{1}}{u_{2}}}}$ is always increasing with increasing ${\displaystyle t}$.

The ratio ${\displaystyle {\frac {u_{1}}{u_{2}}}}$ will become infinite for the zero values of ${\displaystyle u_{2}}$ (see Art. 120). Since this quotient is always increasing with increasing values of ${\displaystyle t}$, the trace of the corresponding curve must pass through ${\displaystyle +\infty }$, and return again (if it does return) from ${\displaystyle -\infty }$. Values of ${\displaystyle t}$, for which this quotient has the same value, may be called congruent.

It is evident, as shown in the accompanying figure, that such values are equi-distant from two values of ${\displaystyle t}$, say ${\displaystyle t_{0}}$ and ${\displaystyle t_{1}}$, which make ${\displaystyle u_{2}=0}$. The abscissae are values of ${\displaystyle t}$, and the ordinates are the corresponding values of the ratio ${\displaystyle {\frac {u_{1}}{u_{2}}}}$.
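The behavior just described may be followed explicitly in the special case ${\displaystyle F_{1}=1}$, ${\displaystyle F_{2}=-1}$ (an illustration added here):

```latex
% Take  u_1 = -\cos t ,  u_2 = \sin t ; then
%   u_2 u_1' - u_1 u_2' = \sin^2 t + \cos^2 t = 1 = C/F_1 > 0 ,
% and the ratio is always increasing:
\frac{u_{1}}{u_{2}} = -\cot t ,\qquad
\frac{\mathrm{d}}{\mathrm{d}t}\left(\frac{u_{1}}{u_{2}}\right)
  = \frac{1}{\sin^{2}t} > 0 .
% As t runs from 0 to \pi the ratio rises from -\infty through 0 (at t = \pi/2)
% to +\infty, and then resumes from -\infty.  Values of t giving the same value
% of the ratio differ by exactly \pi, the distance between consecutive zeros of u_2.
```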

Article 131.
To summarize: We have supposed the cases excluded in which ${\displaystyle F_{1}}$ is zero along the curve under consideration. If this function were zero at an isolated point of the curve, it would be a limiting case of what we have considered. If it were zero along a stretch of this curve, we should have to consider variations of the third order, and would have, in general, neither a maximum nor a minimum value unless this variation also vanished, leaving us to investigate variations of the fourth order. We exclude these cases from the present treatment, and suppose also that ${\displaystyle F_{1}}$ and ${\displaystyle F_{2}}$ are everywhere finite along our curve (otherwise the expression for the second variation, viz.

${\displaystyle \int (F_{1}w'^{2}+F_{2}w^{2})~{\text{d}}t}$,

would have no meaning).

We also derived in Art. 124 the variation of ${\displaystyle G}$ in the form

${\displaystyle \delta G=F_{2}w-{\frac {\text{d}}{{\text{d}}t}}\left(F_{1}{\frac {{\text{d}}w}{{\text{d}}t}}\right)}$,

and when this is compared with the differential equation

${\displaystyle 12^{a})\qquad 0=F_{2}u-{\frac {\text{d}}{{\text{d}}t}}\left(F_{1}{\frac {{\text{d}}u}{{\text{d}}t}}\right)\qquad }$ (see Art. 118),

it is seen that if an integral ${\displaystyle u}$ of the differential equation ${\displaystyle 12^{a})}$ vanishes for any value of ${\displaystyle t}$, the corresponding integral ${\displaystyle w}$ of the equation ${\displaystyle \delta G=0}$ vanishes for the same value of ${\displaystyle t}$.

It was shown in Art. 126 that

${\displaystyle w=y'\xi _{1}-x'\eta _{1}=\delta \alpha \theta _{1}(t)+\delta \beta \theta _{2}(t)}$,

where the displacement ${\displaystyle \xi _{1}}$, ${\displaystyle \eta _{1}}$ takes us from a point of the curve ${\displaystyle G=0}$ to a point of the curve ${\displaystyle \delta G=0}$. Consequently the normal displacement ${\displaystyle w_{N}}$ can be zero only at a point where the curves ${\displaystyle G=0}$ and ${\displaystyle \delta G=0}$ intersect.

At such a point we must have

${\displaystyle \delta \alpha \,\theta _{1}(t)+\delta \beta \,\theta _{2}(t)=0}$.

When one of the family of curves ${\displaystyle G=0}$ has been selected, the two associated constants ${\displaystyle \alpha }$ and ${\displaystyle \beta }$ are fixed. These are the constants that occur in ${\displaystyle \theta _{1}(t)}$ and ${\displaystyle \theta _{2}(t)}$. If, further, the curve passes through a fixed point ${\displaystyle P_{0}}$, the variable ${\displaystyle t}$ is determined, and consequently the functions ${\displaystyle \theta _{1}(t)}$ and ${\displaystyle \theta _{2}(t)}$ are definitely determined, so that the ratio ${\displaystyle \delta \alpha :\delta \beta }$ is definitely known from the above relation. There may be a second point at which the curves ${\displaystyle G=0}$ and ${\displaystyle \delta G=0}$ intersect. This point is the point conjugate to ${\displaystyle P_{0}}$ (see Art. 128).

Article 132.
The geometrical significance of these conjugate points is more fully considered in Chapter XI. Writing the second variation in the form

${\displaystyle \delta ^{2}I=\int _{t_{0}}^{t_{1}}F_{1}w^{2}\left({\frac {w'}{w}}-{\frac {u'}{u}}\right)^{2}~{\text{d}}t}$,

we see that ${\displaystyle {\frac {w'}{w}}-{\frac {u'}{u}}=0}$ is possible only when ${\displaystyle u=Cw}$. Now ${\displaystyle w}$ is zero at both of the end-points of the curve, since at these points there is no variation, but ${\displaystyle u}$ is equal to zero at ${\displaystyle P_{1}}$ only when ${\displaystyle P_{1}}$ is conjugate to ${\displaystyle P_{0}}$. Hence, unless the two curves ${\displaystyle G=0}$ and ${\displaystyle \delta G=0}$ intersect again at ${\displaystyle P_{1}}$, ${\displaystyle u}$ is not equal to zero at ${\displaystyle P_{1}}$, and consequently

${\displaystyle \left({\frac {w'}{w}}-{\frac {u'}{u}}\right)^{2}\neq 0}$.

In this case, if ${\displaystyle F_{1}}$ has a positive sign throughout the interval ${\displaystyle t_{0}\ldots t_{1}}$, there is a possibility of a minimum value of the integral ${\displaystyle I}$, and there is a possibility of a maximum value when ${\displaystyle F_{1}}$ has a negative sign throughout this interval.
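These sign statements can be checked on a concrete case. The sketch below is an illustration of my own, not from the text: it takes ${\displaystyle F_{1}=1}$, ${\displaystyle F_{2}=-1}$, for which the point conjugate to ${\displaystyle t_{0}=0}$ lies at ${\displaystyle \pi }$, and evaluates ${\displaystyle \delta ^{2}I=\int _{0}^{b}(F_{1}w'^{2}+F_{2}w^{2})~{\text{d}}t}$ for the admissible variation ${\displaystyle w=\sin(\pi t/b)}$, which vanishes at both ends of the interval ${\displaystyle 0\ldots b}$.

```python
# Second variation  d2I = integral_0^b (F1 w'^2 + F2 w^2) dt
# for the illustrative data (not from the text) F1 = 1, F2 = -1,
# with the variation w = sin(pi t / b), zero at t = 0 and t = b.
import math

def second_variation(b, n=20000):
    h = b / n
    total = 0.0
    for i in range(n + 1):
        t = i * h
        w = math.sin(math.pi * t / b)
        wp = (math.pi / b) * math.cos(math.pi * t / b)
        f = wp * wp - w * w                          # F1 w'^2 + F2 w^2
        total += f * (h / 2 if i in (0, n) else h)   # trapezoidal rule
    return total

print(second_variation(2.0))   # positive: no conjugate point inside [0, 2]
print(second_variation(4.0))   # negative: [0, 4] contains the conjugate point pi
```

The closed form here is ${\displaystyle {\frac {b}{2}}\left({\frac {\pi ^{2}}{b^{2}}}-1\right)}$: positive while ${\displaystyle b<\pi }$, zero when ${\displaystyle b=\pi }$ (the end-point conjugate to the origin, cf. Art. 133), and negative once ${\displaystyle b>\pi }$.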

Article 133.
Next, let ${\displaystyle P_{1}}$ be conjugate to ${\displaystyle P_{0}}$, so that at both of the limits of integration we have ${\displaystyle u=0=w}$. We may then take ${\displaystyle u=w}$ at all other points of the curve, so that consequently

${\displaystyle \delta ^{2}I=\int _{t_{0}}^{t_{1}}F_{1}w^{2}\left({\frac {w'}{w}}-{\frac {u'}{u}}\right)^{2}~{\text{d}}t=0}$.

We cannot then say anything regarding a maximum or a minimum until we have investigated the variations of a higher order.[1]

Next, suppose that a pair of conjugate points are situated between ${\displaystyle P_{0}}$ and ${\displaystyle P_{1}}$, and let these points be ${\displaystyle P'}$ and ${\displaystyle P''}$. We may then make a displacement ${\displaystyle v}$ of the curve so that

${\displaystyle v=kw}$ from ${\displaystyle P_{0}}$ to ${\displaystyle P'}$,
${\displaystyle v=u+kw}$ from ${\displaystyle P'}$ to ${\displaystyle P''}$, and
${\displaystyle v=kw}$ from ${\displaystyle P''}$ to ${\displaystyle P_{1}}$,

where ${\displaystyle k}$ is an indeterminate constant. The quantity ${\displaystyle w}$ is subjected only to the condition that it must be zero at ${\displaystyle P_{0}}$ and ${\displaystyle P_{1}}$; the function ${\displaystyle u}$ must be a solution of the differential equation ${\displaystyle J=0}$, and is zero at the conjugate points ${\displaystyle P'}$ and ${\displaystyle P''}$.

The second variation takes the form

${\displaystyle \delta ^{2}I=k^{2}\int _{t_{0}}^{t'}(F_{1}w'^{2}+F_{2}w^{2})~{\text{d}}t+\int _{t'}^{t''}[(F_{1}u'^{2}+F_{2}u^{2})+2k(F_{1}u'w'+F_{2}uw)+k^{2}(F_{1}w'^{2}+F_{2}w^{2})]~{\text{d}}t+k^{2}\int _{t''}^{t_{1}}(F_{1}w'^{2}+F_{2}w^{2})~{\text{d}}t}$.
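The three integrals arise simply from squaring the displacement segment by segment. On the middle arc, where the displacement is ${\displaystyle u+kw}$, the integrand expands as

```latex
F_1(u'+kw')^2 + F_2(u+kw)^2
  = \left(F_1 u'^2 + F_2 u^2\right)
  + 2k\left(F_1 u'w' + F_2 u w\right)
  + k^2\left(F_1 w'^2 + F_2 w^2\right),
```

while on the two outer arcs, where the displacement is ${\displaystyle kw}$, only the term in ${\displaystyle k^{2}}$ appears.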

In the preceding article we saw (cf. also Art. 117) that

${\displaystyle \int _{t'}^{t''}(F_{1}u'^{2}+F_{2}u^{2})~{\text{d}}t=0}$,

and we may therefore write ${\displaystyle \delta ^{2}I}$ in the form

${\displaystyle \delta ^{2}I=2k\int _{t'}^{t''}(F_{1}u'w'+F_{2}uw)~{\text{d}}t+k^{2}M}$,

where ${\displaystyle M}$ is a finite quantity.

The integral

${\displaystyle \int _{t'}^{t''}(F_{1}u'w'+F_{2}uw)~{\text{d}}t}$

may be written

${\displaystyle \int _{t'}^{t''}\left(-{\frac {\text{d}}{{\text{d}}t}}(F_{1}u')+F_{2}u\right)w~{\text{d}}t+{\Big [}F_{1}u'w{\Big ]}_{t'}^{t''}}$

and since, in virtue of the formula ${\displaystyle 12^{a})}$ of Art. 118, the expression under this latter integral sign is zero, it follows that

${\displaystyle \delta ^{2}I=2k{\Big [}F_{1}u'w{\Big ]}_{t'}^{t''}+k^{2}M}$.

Further, by hypothesis, ${\displaystyle F_{1}}$ retains the same sign within the interval ${\displaystyle t'\ldots t''}$ and does not become zero within or at these limits. The function ${\displaystyle u'}$ is different from zero at the limits (Arts. 130 and 152), and is of opposite signs there, since ${\displaystyle u}$, always retaining the same sign, leaves the value zero at one limit and approaches it at the other. Consequently ${\displaystyle F_{1}u'}$ is finite and of opposite signs at the two points ${\displaystyle P'}$ and ${\displaystyle P''}$, and it remains only to choose ${\displaystyle w}$ finite and of the same sign at both points, so that ${\displaystyle {\Big [}F_{1}u'w{\Big ]}_{t'}^{t''}}$ is different from zero. Hence by the proper choice of ${\displaystyle k}$ we may effect displacements for which ${\displaystyle \delta ^{2}I}$ is positive, and also those for which it is negative.

Hence, when our interval includes a pair of conjugate points (not, however, both as extremities), we have definitely established that the curve in question can give rise to neither a maximum nor a minimum.
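The sign argument can be made concrete numerically. In the sketch below, an illustration of my own and not from the text, we take ${\displaystyle F_{1}=1}$, ${\displaystyle F_{2}=-1}$ on the interval ${\displaystyle 0\ldots T}$, so that ${\displaystyle u=\sin(t-t')}$ vanishes at the conjugate pair ${\displaystyle t'}$ and ${\displaystyle t''=t'+\pi }$; the displacement equal to ${\displaystyle kw}$ on the outer arcs and ${\displaystyle u+kw}$ between the conjugate points then gives ${\displaystyle \delta ^{2}I=2k{\Big [}F_{1}u'w{\Big ]}_{t'}^{t''}+k^{2}M}$, and evaluating the integral for small ${\displaystyle k}$ of either sign exhibits both a negative and a positive second variation.

```python
# Second variation for the piecewise displacement of Art. 133 with the
# illustrative data (not from the text): F1 = 1, F2 = -1 on [0, T],
# u = sin(t - t1) vanishing at the conjugate pair t1 and t2 = t1 + pi,
# and w = sin(pi t / T) vanishing at the end-points 0 and T.
import math

T, t1 = 3.6, 0.2
t2 = t1 + math.pi

def trapz(f, a, b, n=20000):
    """Trapezoidal rule for integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + i * h) * (h / 2 if i in (0, n) else h)
               for i in range(n + 1))

w  = lambda t: math.sin(math.pi * t / T)
wp = lambda t: (math.pi / T) * math.cos(math.pi * t / T)
u  = lambda t: math.sin(t - t1)
up = lambda t: math.cos(t - t1)

def d2I(k):
    # integrand F1 v'^2 + F2 v^2, with v = kw outside and v = u + kw inside
    outer = lambda t: (k * wp(t))**2 - (k * w(t))**2
    inner = lambda t: (up(t) + k * wp(t))**2 - (u(t) + k * w(t))**2
    return trapz(outer, 0, t1) + trapz(inner, t1, t2) + trapz(outer, t2, T)

print(d2I(0.05))    # negative second variation
print(d2I(-0.05))   # positive second variation
```

Since ${\displaystyle {\Big [}u'w{\Big ]}_{t'}^{t''}}$ is here a fixed negative number, the term linear in ${\displaystyle k}$ dominates for small ${\displaystyle k}$ and carries either sign with it, exactly as in the argument above.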

The above semi-geometrical proof is due to a note given by Prof. Schwarz at Berlin (1898-99); see also Leçon V of a course of lectures given by Prof. Picard at Paris (1899-1900) on Équations aux dérivées partielles.

1. It is sometimes possible to establish the existence or the non-existence of a maximum or a minimum by other methods; for example, the non-existence of a minimum is seen in Case II of Art. 58. In a very instructive paper (Trans. of the Am. Math. Soc., Vol. II, p. 166) Prof. Osgood has shown that there is a minimum in the case of the geodesics on an ellipsoid of revolution (due to the fact that the curve must lie on the ellipsoid). Prof. Osgood says (p. 166) that Kneser's Theorem "to the effect that there is not a minimum" is in general true. It seems that each separate case must be examined for itself, and in general nothing can be said regarding a maximum or a minimum.