Gamma distribution
[Plots: probability density function and cumulative distribution function]
Parameters: shape $k > 0$, scale $\theta > 0$
Support: $x \in (0, \infty)$
PDF: $\frac{1}{\Gamma(k)\,\theta^{k}}\, x^{k-1} e^{-x/\theta}$
CDF: $\frac{1}{\Gamma(k)}\,\gamma\!\left(k, \frac{x}{\theta}\right)$
Mean: $\operatorname{E}[X] = k\theta$; $\operatorname{E}[\ln X] = \psi(k) + \ln(\theta)$ (see digamma function)
Median: no simple closed form
Mode: $(k-1)\theta$ for $k > 1$
Variance: $\operatorname{Var}[X] = k\theta^{2}$; $\operatorname{Var}[\ln X] = \psi_{1}(k)$ (see trigamma function)
Skewness: $\frac{2}{\sqrt{k}}$
Excess kurtosis: $\frac{6}{k}$
Entropy: $k + \ln\theta + \ln[\Gamma(k)] + (1-k)\psi(k)$
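The log-moment entries in the table above can be checked by simulation against the digamma and trigamma functions. The sketch below is illustrative only; the values $k = 2.5$ and $\theta = 2.0$ are assumptions, not taken from the text.

```python
import math
import numpy as np
from scipy.special import digamma, polygamma

k, theta = 2.5, 2.0  # assumed illustrative values
rng = np.random.default_rng(0)

# Draw a large Gamma(k, theta) sample (NumPy uses the same shape/scale convention).
x = rng.gamma(shape=k, scale=theta, size=1_000_000)
logx = np.log(x)

print(logx.mean(), digamma(k) + math.log(theta))  # E[ln X] = psi(k) + ln(theta)
print(logx.var(), polygamma(1, k))                # Var[ln X] = psi_1(k) (trigamma)
```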
The Gamma distribution is important for technical reasons: it is the parent of the exponential distribution, and many other distributions can be derived from it.
The probability density function is:

$$f_X(x) = \begin{cases} \dfrac{1}{a^{p}\Gamma(p)}\, x^{p-1} e^{-x/a}, & \text{if } x \geq 0 \\[4pt] 0, & \text{if } x < 0 \end{cases} \qquad a, p > 0$$
where

$$\Gamma(p) = \int_{0}^{\infty} t^{p-1} e^{-t}\, dt$$

is the Gamma function. The cumulative distribution function has no simple closed form in general; for $p = 1$ the Gamma distribution reduces to the exponential distribution, whose CDF is $1 - e^{-x/a}$. The Gamma distribution of the random variable $X$ is denoted by $X \in \Gamma(p, a)$.
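As a quick numerical sanity check (a sketch added here, not part of the original text), the density can be evaluated directly and the CDF expressed through SciPy's regularized lower incomplete gamma function, which matches the form $\gamma(k, x/\theta)/\Gamma(k)$ in the summary table; for $p = 1$ it agrees with the exponential CDF. The values of $p$, $a$ and $x$ below are assumptions chosen for illustration.

```python
import math
from scipy.special import gammainc  # regularized lower incomplete gamma P(p, x)

def gamma_pdf(x, p, a):
    """Density f_X(x) in the (p, a) parameterization used above."""
    if x < 0:
        return 0.0
    return x**(p - 1) * math.exp(-x / a) / (a**p * math.gamma(p))

def gamma_cdf(x, p, a):
    """CDF written with the regularized lower incomplete gamma function."""
    return gammainc(p, x / a)

# Illustrative values (not from the text).
p, a, x = 1.0, 2.0, 3.0
print(gamma_cdf(x, p, a))      # CDF of Gamma(1, a) at x
print(1 - math.exp(-x / a))    # exponential CDF: the two agree for p = 1
```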
Alternatively, the gamma distribution can be parameterized in terms of a shape parameter $\alpha = k$ and an inverse scale parameter $\beta = 1/\theta$, called a rate parameter:
$$g(x;\alpha,\beta) = K\, x^{\alpha-1} e^{-\beta x} \quad \text{for } x > 0,$$
where the constant $K$ is determined by requiring the density to integrate to 1:
$$\int_{-\infty}^{+\infty} g(x;\alpha,\beta)\,\mathrm{d}x = \int_{0}^{+\infty} K\, x^{\alpha-1} e^{-\beta x}\,\mathrm{d}x = 1.$$
Hence
$$K \int_{0}^{+\infty} x^{\alpha-1} e^{-\beta x}\,\mathrm{d}x = 1$$
$$K = \frac{1}{\int_{0}^{+\infty} x^{\alpha-1} e^{-\beta x}\,\mathrm{d}x}$$
and, with the change of variable $y = \beta x$:
$$\begin{aligned} K &= \frac{1}{\int_{0}^{+\infty} \frac{y^{\alpha-1}}{\beta^{\alpha-1}}\, e^{-y}\, \frac{\mathrm{d}y}{\beta}} \\ &= \frac{1}{\frac{1}{\beta^{\alpha}} \int_{0}^{+\infty} y^{\alpha-1} e^{-y}\,\mathrm{d}y} \\ &= \frac{\beta^{\alpha}}{\int_{0}^{+\infty} y^{\alpha-1} e^{-y}\,\mathrm{d}y} \\ &= \frac{\beta^{\alpha}}{\Gamma(\alpha)} \end{aligned}$$
which gives:
$$g(x;\alpha,\beta) = x^{\alpha-1}\, \frac{\beta^{\alpha}\, e^{-\beta x}}{\Gamma(\alpha)} \quad \text{for } x > 0.$$
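The normalizing constant and the resulting density can be checked numerically. The sketch below (not from the original text, with assumed values for $\alpha$ and $\beta$) compares $K = \beta^{\alpha}/\Gamma(\alpha)$ against the reciprocal of the numerically evaluated integral, and compares $g$ against SciPy's gamma PDF; SciPy uses the shape/scale convention, so the rate $\beta$ enters as scale $= 1/\beta$.

```python
import math
from scipy.integrate import quad
from scipy.stats import gamma

# Illustrative parameters (assumed values, not from the text).
alpha, beta = 3.0, 2.0

# K from the closed form versus K from direct numerical integration.
K_formula = beta**alpha / math.gamma(alpha)
integral, _ = quad(lambda x: x**(alpha - 1) * math.exp(-beta * x), 0, math.inf)
K_numeric = 1.0 / integral
print(K_formula, K_numeric)  # both ~ 4.0 for alpha = 3, beta = 2

# The rate-parameterized density matches SciPy's gamma(a=alpha, scale=1/beta).
x = 1.3
g = K_formula * x**(alpha - 1) * math.exp(-beta * x)
print(g, gamma.pdf(x, a=alpha, scale=1.0 / beta))
```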
Probability Density Function
We first check that the total integral of the probability density function is 1.
$$\int_{-\infty}^{\infty} f_X(x)\,dx = \int_{0}^{\infty} \frac{1}{a^{p}\Gamma(p)}\, x^{p-1} e^{-x/a}\,dx,$$

since the density is zero for $x < 0$.
Now we let $y = x/a$, which means that $dy = dx/a$:
$$\frac{1}{\Gamma(p)} \int_{0}^{\infty} y^{p-1} e^{-y}\,dy$$
$$\frac{1}{\Gamma(p)}\,\Gamma(p) = 1$$
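As a numerical illustration of this normalization argument (a sketch with assumed parameter values, not taken from the text), both the original integral and the substituted one can be evaluated with SciPy's quad:

```python
import math
from scipy.integrate import quad

p, a = 2.5, 1.7  # assumed illustrative values

# Total mass of the density in the (p, a) parameterization.
mass, _ = quad(lambda x: x**(p - 1) * math.exp(-x / a) / (a**p * math.gamma(p)),
               0, math.inf)

# The same integral after the substitution y = x/a, divided by Gamma(p).
mass_sub, _ = quad(lambda y: y**(p - 1) * math.exp(-y) / math.gamma(p), 0, math.inf)

print(round(mass, 6), round(mass_sub, 6))  # both ~ 1.0
```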
We next compute the mean:

$$\operatorname{E}[X] = \int_{0}^{\infty} x \cdot \frac{1}{a^{p}\Gamma(p)}\, x^{p-1} e^{-x/a}\,dx$$
Now we let $y = x/a$, which means that $dy = dx/a$:
$$\operatorname{E}[X] = \int_{0}^{\infty} a y \cdot \frac{1}{\Gamma(p)}\, y^{p-1} e^{-y}\,dy$$
$$\operatorname{E}[X] = \frac{a}{\Gamma(p)} \int_{0}^{\infty} y^{p}\, e^{-y}\,dy$$
$$\operatorname{E}[X] = \frac{a}{\Gamma(p)}\,\Gamma(p+1)$$
We now use the fact that $\Gamma(z+1) = z\,\Gamma(z)$:
$$\operatorname{E}[X] = \frac{a}{\Gamma(p)}\, p\,\Gamma(p) = ap$$
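A quick Monte Carlo check of this result (a sketch with assumed parameter values; NumPy's gamma sampler uses the same shape/scale convention as the $(p, a)$ parameterization here):

```python
import math
import numpy as np

p, a = 3.0, 0.5  # assumed illustrative values
rng = np.random.default_rng(0)

# The recurrence Gamma(p + 1) = p * Gamma(p) used in the last step.
print(math.gamma(p + 1), p * math.gamma(p))  # both 6.0 for p = 3

# Sample mean versus the closed form E[X] = a * p.
samples = rng.gamma(shape=p, scale=a, size=1_000_000)
print(samples.mean(), a * p)  # both ~ 1.5
```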
To compute the variance we first calculate $\operatorname{E}[X^{2}]$:
$$\operatorname{E}[X^{2}] = \int_{0}^{\infty} x^{2} \cdot \frac{1}{a^{p}\Gamma(p)}\, x^{p-1} e^{-x/a}\,dx$$
Now we let $y = x/a$, which means that $dy = dx/a$:
$$\operatorname{E}[X^{2}] = \int_{0}^{\infty} a^{2}y^{2} \cdot \frac{1}{a\,\Gamma(p)}\, y^{p-1} e^{-y}\, a\,dy$$
$$\operatorname{E}[X^{2}] = \frac{a^{2}}{\Gamma(p)} \int_{0}^{\infty} y^{p+1} e^{-y}\,dy$$
$$\operatorname{E}[X^{2}] = \frac{a^{2}}{\Gamma(p)}\,\Gamma(p+2) = p\,a^{2}(p+1)$$
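The second moment can also be confirmed by direct numerical integration (a sketch with assumed values for $p$ and $a$, not taken from the text):

```python
import math
from scipy.integrate import quad

p, a = 2.0, 3.0  # assumed illustrative values

# E[X^2] as the integral of x^2 * f_X(x) over (0, inf).
second_moment, _ = quad(
    lambda x: x**2 * x**(p - 1) * math.exp(-x / a) / (a**p * math.gamma(p)),
    0, math.inf)

print(second_moment, p * a**2 * (p + 1))  # both ~ 54.0 for p = 2, a = 3
```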
Now we can calculate the variance:
$$\operatorname{Var}(X) = \operatorname{E}[X^{2}] - (\operatorname{E}[X])^{2}$$
$$\operatorname{Var}(X) = p\,a^{2}(p+1) - (ap)^{2} = p\,a^{2}$$
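To close the loop with the summary table at the top (a numerical sketch with assumed parameters; in that table's notation $k = p$ and $\theta = a$), SciPy reports mean, variance, skewness and excess kurtosis that match $k\theta$, $k\theta^{2}$, $2/\sqrt{k}$ and $6/k$, and the variance also matches the $p a^{2}$ derived here:

```python
import math
from scipy.stats import gamma

p, a = 4.0, 1.5  # assumed illustrative values (k = p, theta = a)

mean, var, skew, ex_kurt = gamma.stats(p, scale=a, moments='mvsk')
print(mean, p * a)               # k*theta
print(var, p * a**2)             # k*theta^2 (the variance derived above)
print(skew, 2 / math.sqrt(p))    # 2/sqrt(k)
print(ex_kurt, 6 / p)            # excess kurtosis 6/k
```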