Let $A$ be a real symmetric matrix of order $n$ with $n$ distinct eigenvalues, and let $v_1 \in \mathbb{R}^n$ be such that $\|v_1\|_2 = 1$ and the inner product $(v_1, u) \neq 0$ for every eigenvector $u$ of $A$. Let $P_n$ denote the space of polynomials of degree at most $n-1$. Show that

$$\langle p, q \rangle \equiv (p(A)v_1, q(A)v_1)$$

defines an inner product on $P_n$, where the expression on the right above is the Euclidean inner product in $\mathbb{R}^n$.
Symmetry

$$\begin{aligned}\langle p,q\rangle &=(p(A)v_{1},q(A)v_{1})\\&=(q(A)v_{1},p(A)v_{1})\\&=\langle q,p\rangle\end{aligned}$$
Linearity of 1st Argument

Let $\alpha \in \mathbb{R}$.
$$\begin{aligned}\langle \alpha p,q\rangle &=(\alpha p(A)v_{1},q(A)v_{1})\\&=\alpha (p(A)v_{1},q(A)v_{1})\\&=\alpha \langle p,q\rangle\end{aligned}$$
$$\begin{aligned}\langle p+r,q\rangle &=((p(A)+r(A))v_{1},q(A)v_{1})\\&=(p(A)v_{1},q(A)v_{1})+(r(A)v_{1},q(A)v_{1})\\&=\langle p,q\rangle +\langle r,q\rangle\end{aligned}$$
Positive Definiteness

$$\begin{aligned}\langle p,p\rangle &=(\underbrace{p(A)v_{1}}_{\hat{v}},\underbrace{p(A)v_{1}}_{\hat{v}})\quad\text{where }\hat{v}\in \mathbb{R}^{n}\\&=(\hat{v},\hat{v})\\&=\sum_{i=1}^{n}\hat{v}_{i}^{2}\geq 0\end{aligned}$$
We also need to show that $\langle p,p\rangle = 0$ if and only if $p = 0$.
Forward Direction (alt)

Suppose $p \neq 0$. It suffices to show $\langle p,p\rangle \neq 0$. However, this is a direct consequence of the fact that $p(A)v_1 \neq 0$, which is clear since $p(A) \neq 0$ for $p \neq 0$ of degree less than $n$, and since $v_1$ does not lie in the orthogonal complement of any of the $n$ distinct eigenvectors of $A$.
Claim: If $\langle p,p\rangle = 0$, then $p = 0$.
By hypothesis, $v_1 = \sum_{i=1}^{n} \alpha_i u_i$, where the $u_i$ are the orthogonal eigenvectors of $A$ and all $\alpha_i$ are non-zero.
$$\begin{aligned}\langle p,p\rangle &=(p(A)v_{1},p(A)v_{1})\\&=\left(p(A)\sum_{i=1}^{n}\alpha_{i}u_{i},\,p(A)\sum_{i=1}^{n}\alpha_{i}u_{i}\right)\\&=\left(\sum_{i=1}^{n}\beta_{i}u_{i},\,\sum_{i=1}^{n}\beta_{i}u_{i}\right)\quad\text{(since the }u_{i}\text{ are eigenvectors)}\\&=0\quad\text{(by hypothesis)}\end{aligned}$$
Notice that $\beta_i = \alpha_i\, p(m_i)$, where $m_i$ is the eigenvalue associated with $u_i$, i.e. $Au_i = m_i u_i$; in particular, $\beta_i$ depends on $\alpha_i$, the coefficients of the polynomial $p$, and the eigenvalue $m_i$.
Since the $u_i$ are non-zero and pairwise orthogonal, $\langle p,p\rangle = 0$ forces $\beta_i = \alpha_i\, p(m_i) = 0$ for every $i$. Since $\alpha_i \neq 0$, $p$ vanishes at the $n$ distinct eigenvalues $m_i$; a polynomial of degree at most $n-1$ with $n$ distinct roots is identically zero, so $p = 0$.
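The eigen-expansion used above can be checked numerically. The sketch below is a minimal illustration assuming NumPy; the matrix, polynomial, and random seed are arbitrary test choices, not part of the problem. It verifies that $p(A)v_1 = \sum_i \alpha_i\, p(m_i)\, u_i$, i.e. $\beta_i = \alpha_i\, p(m_i)$.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4

# Illustrative symmetric A with distinct eigenvalues m_i and orthonormal
# eigenvectors u_i (the columns of U).
m = np.array([1.0, 2.0, 3.0, 4.0])
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = U @ np.diag(m) @ U.T

# v1 = sum_i alpha_i u_i; a random unit vector has all alpha_i nonzero
# almost surely.
v1 = rng.standard_normal(n)
v1 /= np.linalg.norm(v1)
alpha = U.T @ v1

# An arbitrary polynomial p of degree 2 < n, highest coefficient first.
coeffs = [0.5, -1.0, 2.0]

def p_matvec(A, v):
    """Compute p(A) v by Horner's rule."""
    w = np.zeros_like(v)
    for c in coeffs:
        w = A @ w + c * v
    return w

beta = alpha * np.polyval(coeffs, m)      # beta_i = alpha_i * p(m_i)
assert np.allclose(p_matvec(A, v1), U @ beta)
```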
If $p = 0$, then

$$\langle p,p\rangle = (\underbrace{p(A)}_{0}v_{1},\underbrace{p(A)}_{0}v_{1}) = 0$$
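The inner-product axioms just verified can also be sanity-checked numerically. The sketch below is a minimal illustration assuming NumPy; the matrix, polynomials, and seed are arbitrary test choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# Illustrative symmetric matrix with distinct eigenvalues.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag([1.0, 2.0, 3.0, 4.0]) @ Q.T

# Unit vector v1; a random direction has a nonzero component along every
# eigenvector almost surely.
v1 = rng.standard_normal(n)
v1 /= np.linalg.norm(v1)

def apply_poly(coeffs, v):
    """Evaluate p(A) v by Horner's rule; coeffs are highest degree first."""
    w = np.zeros_like(v)
    for c in coeffs:
        w = A @ w + c * v
    return w

def ip(p, q):
    """<p, q> = (p(A) v1, q(A) v1), the Euclidean inner product in R^n."""
    return apply_poly(p, v1) @ apply_poly(q, v1)

p = [1.0, -2.0, 0.5]                     # p(t) = t^2 - 2t + 0.5
q = [0.0, 3.0, 1.0]                      # q(t) = 3t + 1
r = [2.0, 0.0, -1.0]                     # r(t) = 2t^2 - 1
p_plus_r = [a + b for a, b in zip(p, r)]

assert np.isclose(ip(p, q), ip(q, p))                         # symmetry
assert np.isclose(ip([2 * c for c in p], q), 2 * ip(p, q))    # homogeneity
assert np.isclose(ip(p_plus_r, q), ip(p, q) + ip(r, q))       # additivity
assert ip(p, p) > 0                                           # p != 0
```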
By induction.

$$\begin{aligned}\beta_{2}v_{2}&=Av_{1}-\alpha_{1}v_{1}-\beta_{1}\underbrace{v_{0}}_{0}\\v_{2}&=\frac{1}{\beta_{2}}[Av_{1}-\alpha_{1}v_{1}]\\v_{2}&=p_{1}(A)v_{1}\quad p_{1}(t)=\frac{1}{\beta_{2}}[t-\alpha_{1}]\end{aligned}$$
$$\begin{aligned}\beta_{3}v_{3}&=Av_{2}-\alpha_{2}v_{2}-\beta_{2}v_{1}\\v_{3}&=\frac{1}{\beta_{3}}[\underbrace{Av_{2}}_{\in p_{2}(A)v_{1}}-\underbrace{\alpha_{2}v_{2}}_{\in p_{1}(A)v_{1}}-\underbrace{\beta_{2}v_{1}}_{\in p_{0}(A)v_{1}}]\\&\in p_{2}(A)v_{1}\end{aligned}$$
Claim: $v_{j}=p_{j-1}(A)v_{1}$
Hypothesis: Suppose

$$v_{j}=p_{j-1}(A)v_{1},\qquad v_{j-1}=p_{j-2}(A)v_{1},$$

where $p_{j-1}$ (respectively $p_{j-2}$) has degree $j-1$ (respectively $j-2$). Then for $\beta_{j+1}\neq 0$,
$$v_{j+1}=\frac{Ap_{j-1}(A)-\alpha_{j}p_{j-1}(A)-\beta_{j}p_{j-2}(A)}{\beta_{j+1}}\,v_{1},$$

which is a polynomial of degree $j$ in $A$ applied to $v_1$, as desired.
Since $v_{j}=p_{j-1}(A)v_{1}$ and $\langle p,q\rangle \equiv (p(A)v_{1},q(A)v_{1})$, it is equivalent to show that $(v_{i},v_{j})=0$ for $i\neq j$.
Since

$$v_{j+1}=\frac{1}{\beta_{j+1}}\left[Av_{j}-\alpha_{j}v_{j}-\beta_{j}v_{j-1}\right],$$

it is then sufficient to show that

$$(v_{j+1},v_{j})=(v_{j+1},v_{j-1})=0$$
Claim: $(v_{j+1},v_{j})=0\quad\forall j$
By induction.

$$\begin{aligned}v_{2}&=\frac{1}{\beta_{2}}(Av_{1}-\alpha_{1}v_{1})\\(v_{2},v_{1})&=\frac{1}{\beta_{2}}[(Av_{1},v_{1})-\underbrace{\alpha_{1}}_{(Av_{1},v_{1})}\underbrace{(v_{1},v_{1})}_{1}]\\&=0\end{aligned}$$
Assume: $(v_{j},v_{j-1})=0$

Claim: $(v_{j+1},v_{j})=0$
$$\begin{aligned}v_{j+1}&=\frac{1}{\beta_{j+1}}[Av_{j}-\alpha_{j}v_{j}-\beta_{j}v_{j-1}]\\(v_{j+1},v_{j})&=\frac{1}{\beta_{j+1}}[(Av_{j},v_{j})-\underbrace{\alpha_{j}}_{(Av_{j},v_{j})}\underbrace{(v_{j},v_{j})}_{1}-\beta_{j}\underbrace{(v_{j-1},v_{j})}_{0}]\\&=0\end{aligned}$$
Claim: $(v_{j+1},v_{j-1})=0$
By induction.

$$\begin{aligned}v_{3}&=\frac{1}{\beta_{3}}[Av_{2}-\alpha_{2}v_{2}-\beta_{2}v_{1}]\\(v_{3},v_{1})&=\frac{1}{\beta_{3}}[(Av_{2},v_{1})-\alpha_{2}\underbrace{(v_{2},v_{1})}_{0}-\beta_{2}\underbrace{(v_{1},v_{1})}_{1}]\\&=\frac{1}{\beta_{3}}[(Av_{2},v_{1})-\beta_{2}]\\&=\frac{1}{\beta_{3}}[(v_{2},Av_{1})-\beta_{2}]\quad\text{(because $A$ is symmetric)}\\&=\frac{1}{\beta_{3}}[\beta_{2}-\beta_{2}]=0\quad\text{(see below)}\end{aligned}$$
$$\begin{aligned}\beta_{2}v_{2}&=Av_{1}-\alpha_{1}v_{1}\\\beta_{2}\underbrace{(v_{2},v_{2})}_{1}&=(Av_{1},v_{2})-\alpha_{1}\underbrace{(v_{1},v_{2})}_{0}\\\beta_{2}&=(Av_{1},v_{2})\end{aligned}$$
Assume: $(v_{j},v_{j-2})=0$

Claim: $(v_{j+1},v_{j-1})=0$
$$\begin{aligned}(v_{j+1},v_{j-1})&=\frac{1}{\beta_{j+1}}[(Av_{j},v_{j-1})-\alpha_{j}\underbrace{(v_{j},v_{j-1})}_{0}-\beta_{j}\underbrace{(v_{j-1},v_{j-1})}_{1}]\\&=\frac{1}{\beta_{j+1}}[(Av_{j},v_{j-1})-\beta_{j}]\\&=\frac{1}{\beta_{j+1}}[(v_{j},Av_{j-1})-\beta_{j}]\quad\text{(because $A$ is symmetric)}\\&=\frac{1}{\beta_{j+1}}[\beta_{j}-\beta_{j}]=0\quad\text{(see below)}\end{aligned}$$
$$\begin{aligned}\beta_{j}v_{j}&=Av_{j-1}-\alpha_{j-1}v_{j-1}-\beta_{j-1}v_{j-2}\\\beta_{j}\underbrace{(v_{j},v_{j})}_{1}&=(Av_{j-1},v_{j})-\alpha_{j-1}\underbrace{(v_{j-1},v_{j})}_{0}-\beta_{j-1}\underbrace{(v_{j-2},v_{j})}_{0}\\\beta_{j}&=(Av_{j-1},v_{j})\end{aligned}$$
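The three-term recurrence above is the Lanczos iteration for a symmetric matrix. A minimal numerical sketch (assuming NumPy; the matrix and starting vector are arbitrary test choices) confirms that the vectors it generates are mutually orthonormal, not just locally orthogonal:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6

# Illustrative symmetric A with well-separated, distinct eigenvalues.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(np.arange(1.0, n + 1.0)) @ Q.T

v_prev = np.zeros(n)                 # v_0 = 0
v = rng.standard_normal(n)
v /= np.linalg.norm(v)               # v_1 with ||v_1||_2 = 1

V = [v]
beta = 0.0                           # beta_1 multiplies v_0 = 0
for _ in range(n - 1):
    alpha = v @ (A @ v)                       # alpha_j = (A v_j, v_j)
    w = A @ v - alpha * v - beta * v_prev     # beta_{j+1} v_{j+1}
    beta = np.linalg.norm(w)                  # normalization, beta_{j+1} > 0
    v_prev, v = v, w / beta
    V.append(v)

V = np.array(V)
# The recurrence enforces orthogonality only against v_j and v_{j-1},
# but, as shown above, it propagates to all earlier vectors.
assert np.allclose(V @ V.T, np.eye(n), atol=1e-8)
```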
Rewrite given equation on specific interval

For a specific interval $[t_{k},t_{k+1}]$, we have from hypothesis

$$\left(\frac{t_{k+1}-t_{k}}{2}\right)(f(t_{k})+f(t_{k+1}))-\int_{t_{k}}^{t_{k+1}}f(t)\,dt=\int_{t_{k}}^{t_{k+1}}G(t)f'(t)\,dt.$$
Distributing and rearranging terms gives

$$(1)\qquad \left(\frac{t_{k+1}-t_{k}}{2}\right)f(t_{k})+\left(\frac{t_{k+1}-t_{k}}{2}\right)f(t_{k+1})-\int_{t_{k}}^{t_{k+1}}f(t)\,dt=\int_{t_{k}}^{t_{k+1}}G(t)f'(t)\,dt$$
Starting with the hint and applying the product rule, we get

$$\int_{t_{k}}^{t_{k+1}}(f(t)G(t))'\,dt=\int_{t_{k}}^{t_{k+1}}f'(t)G(t)\,dt+\int_{t_{k}}^{t_{k+1}}f(t)G'(t)\,dt.$$
Also, we know from the Fundamental Theorem of Calculus that

$$\int_{t_{k}}^{t_{k+1}}(f(t)G(t))'\,dt=f(t_{k+1})G(t_{k+1})-f(t_{k})G(t_{k}).$$
Setting the above two equations equal to each other and solving for $\int_{t_{k}}^{t_{k+1}}f'(t)G(t)\,dt$ yields
$$(2)\qquad -G(t_{k})f(t_{k})+G(t_{k+1})f(t_{k+1})-\int_{t_{k}}^{t_{k+1}}G'(t)f(t)\,dt=\int_{t_{k}}^{t_{k+1}}G(t)f'(t)\,dt$$
Let $G'(t)=1$. Then $G(t)$ is linear, so

$$(3)\qquad G(t)=t+b$$
By comparing equations (1) and (2), we see that

$$G(t_{k+1})=\frac{t_{k+1}-t_{k}}{2}\quad\text{and}\quad G(t_{k})=-\frac{t_{k+1}-t_{k}}{2}.$$
Plugging either $G(t_{k})$ or $G(t_{k+1})$ into equation (3), we get

$$b=-\frac{t_{k+1}+t_{k}}{2}$$
Hence

$$G(t)=t-\frac{t_{k+1}+t_{k}}{2}$$
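The kernel just derived can be sanity-checked numerically on a single interval. The sketch below (assuming NumPy; $f(t)=e^{t}$ on $[0,1]$ is an arbitrary test case) compares the trapezoid-rule error against $\int_{t_k}^{t_{k+1}} G(t)f'(t)\,dt$:

```python
import numpy as np

tk, tk1 = 0.0, 1.0                        # one arbitrary interval
f = np.exp                                # test integrand, with f' = f
G = lambda t: t - (tk1 + tk) / 2.0        # kernel derived above

# Left side of the identity: trapezoid rule minus the exact integral.
trap = (tk1 - tk) / 2.0 * (f(tk) + f(tk1))
exact = np.exp(tk1) - np.exp(tk)
lhs = trap - exact

# Right side: integral of G(t) f'(t), via a fine composite trapezoid sum.
t = np.linspace(tk, tk1, 200_001)
y = G(t) * f(t)                           # f'(t) = e^t = f(t) here
rhs = np.sum((y[1:] + y[:-1]) / 2.0 * np.diff(t))

assert np.isclose(lhs, rhs, atol=1e-9)
```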
Apply the previous result to $f(x)=x^{\alpha}$, $0<\alpha <1$, to obtain a rate of convergence.
Let $C[a,b]$ denote the set of all real-valued continuous functions defined on the closed interval $[a,b]$, and let $\rho \in C[a,b]$ be positive everywhere in $[a,b]$.
Let $\{Q_{n}\}_{n=0}^{\infty}$ be a system of polynomials with $\deg Q_{n}=n$ for each $n$, orthogonal with respect to the inner product

$$\langle g,h\rangle =\int_{a}^{b}\rho (x)g(x)h(x)\,dx,\quad \forall g,h\in C[a,b]$$
For a fixed integer $n\geq 2$, let $x_{1},\ldots ,x_{n}$ be the $n$ distinct roots of $Q_{n}$ in $(a,b)$.
Let

$$r_{k}(x)=\frac{(x-x_{1})\cdots (x-x_{k-1})(x-x_{k+1})\cdots (x-x_{n})}{(x_{k}-x_{1})\cdots (x_{k}-x_{k-1})(x_{k}-x_{k+1})\cdots (x_{k}-x_{n})},\quad k=1,2,\ldots ,n$$

be polynomials of degree $n-1$. Show that
$$\int_{a}^{b}\rho (x)r_{j}(x)r_{k}(x)\,dx=0,\quad \forall j\neq k$$
and that

$$\sum_{k=1}^{n}\int_{a}^{b}\rho (x)(r_{k}(x))^{2}\,dx=\int_{a}^{b}\rho (x)\,dx$$
Hint: Use orthogonality to simplify

$$\int_{a}^{b}\rho (x)\left(\sum_{k=1}^{n}r_{k}(x)\right)^{2}dx$$
$$\begin{aligned}\int_{a}^{b}\rho (x)r_{j}(x)r_{k}(x)\,dx&=\int_{a}^{b}\rho (x)\prod_{i\neq j}\frac{x-x_{i}}{x_{j}-x_{i}}\prod_{i\neq k}\frac{x-x_{i}}{x_{k}-x_{i}}\,dx\\&=\int_{a}^{b}\rho (x)\overbrace{\frac{\prod_{i\neq j,\,i\neq k}(x-x_{i})}{\prod_{i\neq j}(x_{j}-x_{i})}}^{\text{degree }n-2}\;\overbrace{\frac{\prod_{i=1}^{n}(x-x_{i})}{\prod_{i\neq k}(x_{k}-x_{i})}}^{\text{multiple of }Q_{n}}\,dx\\&=0\end{aligned}$$

The last step follows since $Q_{n}$ is orthogonal to every polynomial of degree less than $n$.
$$\begin{aligned}\sum_{k=1}^{n}\int_{a}^{b}\rho (x)(r_{k}(x))^{2}\,dx&=\int_{a}^{b}\rho (x)\sum_{k=1}^{n}(r_{k}(x))^{2}\,dx\\&=\int_{a}^{b}\rho (x)\left(\sum_{k=1}^{n}r_{k}(x)\right)^{2}dx\quad\text{(from part a)}\\&=\int_{a}^{b}\rho (x)(1)^{2}\,dx\quad\text{(from claim below)}\\&=\int_{a}^{b}\rho (x)\,dx\end{aligned}$$
Claim:

$$\sum_{k=1}^{n}r_{k}(x)=1$$
Since $r_{k}$ is a polynomial of degree $n-1$ for all $k$, $\sum_{k=1}^{n}r_{k}(x)$ is a polynomial of degree at most $n-1$.
Notice that $\sum_{k=1}^{n}r_{k}(x_{i})=1$ for $i=1,2,\ldots ,n$, where the $x_{i}$ are the $n$ distinct roots of $Q_{n}$. Since $\sum_{k=1}^{n}r_{k}(x)$ is a polynomial of degree at most $n-1$ that takes the value $1$ at $n$ distinct points, the difference $\sum_{k=1}^{n}r_{k}(x)-1$ has $n$ distinct roots but degree at most $n-1$, hence is identically zero:

$$\sum_{k=1}^{n}r_{k}(x)=1$$
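All three facts above (pairwise orthogonality of the $r_k$, the partition of unity, and the weight-integral identity) can be verified numerically in the Legendre case $\rho \equiv 1$ on $[-1,1]$, where the roots of $Q_n$ are the Gauss–Legendre nodes. A minimal sketch assuming NumPy; $n=4$ is an arbitrary test choice:

```python
import numpy as np

n = 4
# Roots of the degree-n Legendre polynomial (rho = 1 on [-1, 1]).
nodes, _ = np.polynomial.legendre.leggauss(n)

def r(k, x):
    """Lagrange basis polynomial r_k evaluated at x."""
    out = np.ones_like(x)
    for i, xi in enumerate(nodes):
        if i != k:
            out *= (x - xi) / (nodes[k] - xi)
    return out

# Fine grid for composite-trapezoid integration over [-1, 1].
x = np.linspace(-1.0, 1.0, 400_001)

def integrate(y):
    return np.sum((y[1:] + y[:-1]) / 2.0 * np.diff(x))

# Pairwise orthogonality: integral of r_j r_k vanishes for j != k.
assert abs(integrate(r(0, x) * r(1, x))) < 1e-8

# Partition of unity: sum_k r_k(x) = 1 identically.
assert np.allclose(sum(r(k, x) for k in range(n)), 1.0)

# Sum of integrals of r_k^2 equals the integral of rho, here 2.
total = sum(integrate(r(k, x) ** 2) for k in range(n))
assert np.isclose(total, 2.0, atol=1e-6)
```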