Gamma Gamma Hey!


One of the frustrating things about mathematics is that there are many important and popular functions that are easy to formulate but are extremely difficult to evaluate. We've already come across just such a function in the normal CDF, \(\Phi\)[1]
\[ \Phi_{\mu,\sigma}(x) = \int_{-\infty}^x \frac{1}{\sqrt{2\pi}\sigma} e^{-\tfrac{(y-\mu)^2}{2\sigma^2}} \mathrm{d}y \]
for which we were forced to resort to an arcane numerical approximation, filled to the brim with magic numbers. In this post, we shall take a look at a whole family of such functions: the gamma function and its relatives.

So without further ado; hey ho, let's go!

The Gamma Function

Derivation 1: Two Properties Of Γ(z)
Firstly, applying integration by parts, we have
\[ \begin{align*} \Gamma(z+1) &= \int_0^\infty x^z e^{-x} \mathrm{d}x\\ & = \left[-x^z e^{-x}\right]_0^\infty - \int_0^\infty -z x^{z-1}e^{-x} \mathrm{d}x\\ & = 0 + z \int_0^\infty x^{z-1} e^{-x} \mathrm{d}x = z \; \Gamma(z) \end{align*} \]
Secondly, setting \(z\) to one yields
\[ \begin{align*} \Gamma(1) &= \int_0^\infty x^0 e^{-x} \mathrm{d}x = \int_0^\infty e^{-x} \mathrm{d}x\\ &= \left[-e^{-x}\right]_0^\infty = 1 \end{align*} \]
Like the normal CDF, the gamma function is defined by an integral expression
\[ \Gamma(z) = \int_0^\infty x^{z-1} e^{-x} \mathrm{d}x \]
and, again like the normal CDF, it cannot be expressed with a simple formula. It does, however, have the interesting properties that
\[ \begin{align*} \Gamma(z+1) &= z \; \Gamma(z)\\ \Gamma(1) & = 1 \end{align*} \]
as shown by derivation 1, implying that for non-negative integer \(n\)
\[ \Gamma(n+1) = n! \]
where the exclamation mark stands for the factorial.
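We can put these two properties to a quick numerical test by approximating the defining integral directly. The following sketch is not part of the ak library; it simply applies the midpoint rule to the integrand over a range wide enough that the exponential decay makes the remainder negligible.

```javascript
// Crude quadrature for the gamma function: midpoint rule on [0, 60],
// beyond which the integrand x^(z-1) e^-x is vanishingly small for
// the modest z we test here. Purely a sanity check, not an ak function.
function gammaByQuadrature(z) {
  var n = 600000, h = 60/n, sum = 0, x, i;
  for (i = 0; i < n; ++i) {
    x = (i + 0.5) * h;                     // midpoint of the ith strip
    sum += Math.pow(x, z - 1) * Math.exp(-x);
  }
  return sum * h;
}

console.log(gammaByQuadrature(1));                      // close to 1
console.log(gammaByQuadrature(4), 3*gammaByQuadrature(3)); // both close to 3! = 6
```

Crude as it is, it is more than enough to see \(\Gamma(4)\) agree with both \(3!\) and \(3 \times \Gamma(3)\).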
The integral is well defined for both real and complex \(z\), but only if the real part is positive. However, we can extend the gamma function to numbers with a negative real part using the relation
Derivation 2: Relating Γ And Φ
Firstly, we have
\[ \begin{align*} \Phi_{0,1}(\infty) &= \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}} e^{-\frac{1}{2}y^2} \mathrm{d}y\\ &= 2 \times \int_{0}^{\infty} \frac{1}{\sqrt{2\pi}} e^{-\frac{1}{2}y^2} \mathrm{d}y\\ &= \sqrt{\frac{2}{\pi}} \times \int_{0}^{\infty} e^{-\frac{1}{2}y^2} \mathrm{d}y \end{align*} \]
Substituting \(x=\frac{1}{2}y^2\) yields
\[ \mathrm{d}x = y \; \mathrm{d}y \implies \mathrm{d}y = \frac{\mathrm{d}x}{y} = \frac{\mathrm{d}x}{\sqrt{2x}} \]
and hence
\[ \begin{align*} \Phi_{0,1}(\infty) &= \sqrt{\frac{2}{\pi}} \times \int_{0}^{\infty} e^{-x} \frac{\mathrm{d}x}{\sqrt{2x}}\\ &= \frac{1}{\sqrt{\pi}} \times \int_{0}^{\infty} x^{-\frac{1}{2}} \; e^{-x} \; \mathrm{d}x\\ &= \frac{1}{\sqrt{\pi}} \times \Gamma\left(\tfrac{1}{2}\right) = 1 \end{align*} \]
\[ \Gamma(1-z) \; \Gamma(z) = \frac{\pi}{\sin(\pi z)} \]
known as Euler's reflection formula. Note that this implies that the gamma function is infinite for non-positive integers, which makes some sense if we define the factorial as
\[ n! = n \times (n-1)! \]
so that it recurses off to infinity for negative \(n\). It also implies that
\[ \begin{align*} \Gamma\left(1-\tfrac{1}{2}\right) \; \Gamma\left(\tfrac{1}{2}\right) &= \frac{\pi}{\sin \tfrac{\pi}{2}}\\ \Gamma\left(\tfrac{1}{2}\right)^2 &= \pi\\ \Gamma\left(\tfrac{1}{2}\right) &= \sqrt{\pi} \end{align*} \]
which, rather satisfyingly in my opinion, can be used to prove that \(\Phi_{0,1}(\infty) = 1\), as shown by derivation 2.


We shall implement the gamma function with a Lanczos approximation[2], which takes the form
\[ \Gamma(z) = \sqrt{2\pi} \left(z+g-\tfrac{1}{2}\right)^{z-\frac{1}{2}}e^{-\left(z+g-\tfrac{1}{2}\right)}\left(a + \sum_{k=0}^N\frac{b_k}{z+k}\right) \]
where the magic numbers \(a\) and \(b_k\) depend on the choices of \(g\) and \(N\), which in turn determine the accuracy of the approximation, and which we shall take from the GNU Scientific Library's implementation[3]. Since this approximation works for real and complex arguments, we shall use our dispatch mechanism to support both, with the real argument overload given in listing 1.

Listing 1: Real ak.gamma
var RT2PI = Math.sqrt(2*ak.PI);

function gammaR(z) {
 var t, x;
 if(z<0.5) return ak.PI / (Math.sin(ak.PI*z) * gammaR(1-z));
 t  = z + 6.5;
 x  = 9.9999999999980993e-1;
 x += 6.7652036812188510e+2 / z;
 x -= 1.2591392167224028e+3 / (z+1);
 x += 7.7132342877765313e+2 / (z+2);
 x -= 1.7661502916214059e+2 / (z+3);
 x += 1.2507343278686905e+1 / (z+4);
 x -= 1.3857109526572012e-1 / (z+5);
 x += 9.9843695780195716e-6 / (z+6);
 x += 1.5056327351493116e-7 / (z+7);

 return RT2PI * Math.pow(t, z-0.5) * Math.exp(-t) * x;
}

ak.gamma = function(z) {return ak.gamma[ak.type(z)](z);};
ak.overload(ak.gamma, ak.NUMBER_T, gammaR);

Note that we're using the reflection formula to calculate the gamma function for arguments less than one half, including negative arguments.
Program 1 plots the gamma function over a range of negative and positive values, clearly showing the infinite results for non-positive integer arguments.

Program 1: The Real Gamma Function

The complex overload takes a similar form, albeit using ak overloaded arithmetic operators, as shown by listing 2.

Listing 2: Complex ak.gamma
function gammaC(z) {
 var t, x;
 if(z.re()<0.5) {
  return ak.div(ak.PI, ak.mul(ak.sin(ak.mul(ak.PI,z)), gammaC(ak.sub(1, z))));
 }
 t = ak.add(z, 6.5);
 x = 9.9999999999980993e-1;
 x = ak.add(x, ak.div(6.7652036812188510e+2, z));
 x = ak.sub(x, ak.div(1.2591392167224028e+3, ak.add(z,1)));
 x = ak.add(x, ak.div(7.7132342877765313e+2, ak.add(z,2)));
 x = ak.sub(x, ak.div(1.7661502916214059e+2, ak.add(z,3)));
 x = ak.add(x, ak.div(1.2507343278686905e+1, ak.add(z,4)));
 x = ak.sub(x, ak.div(1.3857109526572012e-1, ak.add(z,5)));
 x = ak.add(x, ak.div(9.9843695780195716e-6, ak.add(z,6)));
 x = ak.add(x, ak.div(1.5056327351493116e-7, ak.add(z,7)));

 return ak.mul(RT2PI, ak.mul(ak.pow(t, ak.sub(z,0.5)), ak.div(x, ak.exp(t))));
}

ak.overload(ak.gamma, ak.COMPLEX_T, gammaC);

Program 2 generates a contour plot of the magnitude of the complex gamma function with the brightness of each point on a logarithmic scale.

Program 2: The Complex Gamma Function

again clearly showing infinite values for non-positive integer arguments.


A common use of the gamma function is the efficient calculation of factorials, and a common use of factorials is the calculation of permutations and combinations, given by
\[ \begin{align*} ^nP_r &= \frac{n!}{(n-r)!}\\ ^nC_r &= \frac{n!}{r! \times (n-r)!} \end{align*} \]
respectively. The factorial terms in these expressions can easily overflow floating point arithmetic even when they would eventually cancel out to yield a representable result. For such calculations it is often better to work with logarithms instead; for example
\[ \begin{align*} \ln ^nC_r &= \ln \frac{n!}{r! \times (n-r)!}\\ &= \ln \frac{\Gamma(n+1)}{\Gamma(r+1) \times \Gamma(n-r+1)}\\ &= \ln \Gamma(n+1) - \ln \Gamma(r+1) - \ln \Gamma(n-r+1) \end{align*} \]
This is much less likely to overflow and we can trivially recover the value of the combination with
\[ ^nC_r = e^{\ln ^nC_r} \]
To approximate the logarithm of the gamma function we can simply take the logarithm of the Lanczos approximation
\[ \begin{align*} \ln \Gamma(z) &= \ln \left(\sqrt{2\pi} \left(z+g-\tfrac{1}{2}\right)^{z-\frac{1}{2}}e^{-\left(z+g-\tfrac{1}{2}\right)}\left(a + \sum_{k=0}^N\frac{b_k}{z+k}\right)\right)\\ &= \ln \left(\sqrt{2\pi}\right) + \left(z-\tfrac{1}{2}\right) \ln \left(z+g-\tfrac{1}{2}\right) - \left(z+g-\tfrac{1}{2}\right) + \ln \left(a + \sum_{k=0}^N\frac{b_k}{z+k}\right) \end{align*} \]
as is done by ak.logGamma in listing 3.

Listing 3: ak.logGamma
var LOG_PI = Math.log(ak.PI);
var LOG_RT2PI = Math.log(RT2PI);
ak.logGamma = function(z) {
 var t, x;
 if(z<0)   return ak.NaN;
 if(z===0) return ak.INFINITY;
 if(z<0.5) return LOG_PI - Math.log(Math.sin(ak.PI*z)) - ak.logGamma(1-z);
 t  = z + 6.5;
 x  = 9.9999999999980993e-1;
 x += 6.7652036812188510e+2 / z;
 x -= 1.2591392167224028e+3 / (z+1);
 x += 7.7132342877765313e+2 / (z+2);
 x -= 1.7661502916214059e+2 / (z+3);
 x += 1.2507343278686905e+1 / (z+4);
 x -= 1.3857109526572012e-1 / (z+5);
 x += 9.9843695780195716e-6 / (z+6);
 x += 1.5056327351493116e-7 / (z+7);

 return LOG_RT2PI + (z-0.5)*Math.log(t) - t + Math.log(x);
};
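As a quick standalone check of the log-space route to combinations, we can strip the Lanczos coefficients out of listing 3 and use them directly. Note that logGammaSketch and choose are illustrative names rather than ak functions, and that this sketch skips the reflection and corner case handling since the arguments of interest here are no smaller than one.

```javascript
// The same Lanczos coefficients as listing 3, valid for z >= 0.5.
function logGammaSketch(z) {
  var t = z + 6.5;
  var x = 9.9999999999980993e-1
        + 6.7652036812188510e+2 / z
        - 1.2591392167224028e+3 / (z+1)
        + 7.7132342877765313e+2 / (z+2)
        - 1.7661502916214059e+2 / (z+3)
        + 1.2507343278686905e+1 / (z+4)
        - 1.3857109526572012e-1 / (z+5)
        + 9.9843695780195716e-6 / (z+6)
        + 1.5056327351493116e-7 / (z+7);
  return 0.5*Math.log(2*Math.PI) + (z-0.5)*Math.log(t) - t + Math.log(x);
}

// ln nCr = ln Gamma(n+1) - ln Gamma(r+1) - ln Gamma(n-r+1), exponentiated
// and rounded to recover the integer result.
function choose(n, r) {
  return Math.round(Math.exp(
    logGammaSketch(n+1) - logGammaSketch(r+1) - logGammaSketch(n-r+1)));
}

console.log(choose(52, 5)); // the number of five card poker hands, 2598960
```

The point of working in logs is that, say, choose(1000, 500) comes out as a perfectly representable number of order \(10^{299}\), even though \(1000!\) itself is far beyond the reach of floating point.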

The Incomplete Gamma Functions

It turns out that it's often useful to integrate the gamma function's integrand over ranges other than from zero to infinity. In particular, we shall be interested in the integrals
Derivation 3: Factorial-Like Properties Of γ(s, x) And Γ(s, x)
Using integration by parts again yields
\[ \begin{align*} \gamma(s+1, x) &= \int_0^x t^s e^{-t} \mathrm{d}t = \left[-t^s e^{-t}\right]_0^x - \int_0^x -s t^{s-1}e^{-t} \mathrm{d}t\\ &= -x^s e^{-x} + s \int_0^x t^{s-1} e^{-t} \mathrm{d}t\\ &= s \; \gamma(s, x) - x^s e^{-x}\\ \\ \Gamma(s+1, x) &= \int_x^\infty t^s e^{-t} \mathrm{d}t = \left[-t^s e^{-t}\right]_x^\infty - \int_x^\infty -s t^{s-1}e^{-t} \mathrm{d}t\\ &= x^s e^{-x} + s \int_x^\infty t^{s-1} e^{-t} \mathrm{d}t\\ &= s \; \Gamma(s, x) + x^s e^{-x} \end{align*} \]
\[ \begin{align*} \gamma(s, x) &= \int_0^x t^{s-1}e^{-t}\mathrm{d}t\\ \Gamma(s, x) &= \int_x^\infty t^{s-1}e^{-t}\mathrm{d}t \end{align*} \]
known as the lower and upper incomplete gamma functions respectively.

Note that these functions also have factorial-like properties with
\[ \begin{align*} \gamma(s+1, x) &= s \; \gamma(s, x) - x^s e^{-x}\\ \Gamma(s+1, x) &= s \; \Gamma(s, x) + x^s e^{-x} \end{align*} \]
as shown by derivation 3, and trivially satisfy

\(\displaystyle{ \gamma(s, \infty) = \Gamma(s, 0) = \gamma(s, x) + \Gamma(s, x) = \Gamma(s) }\)

Unfortunately they cannot be simply approximated with a predetermined number of terms, but they do at least admit relatively simple series and continued fraction approximations with
\[ \begin{align*} \gamma(s, x) &= e^{-x}x^s \sum_{n=0}^\infty \frac{\Gamma(s)}{\Gamma(s+1+n)}x^n\\ \Gamma(s, x) &= e^{-x}x^s \dfrac{1}{x+1-s-\dfrac{1 \times (1-s)}{x+3-s - \dfrac{2 \times (2-s)}{x+5-s-\dots}}} \end{align*} \]
Derivation 4: Iterating Through A Continued Fraction
Firstly, let us assume that the rule holds for some \(n>1\)
\[ \frac{A_{n}}{B_{n}} = \frac{b_n A_{n-1} + a_n A_{n-2}}{b_n B_{n-1} + a_n B_{n-2}} \]
Next, note that when we add another term we replace \(b_n\) with
\[ b_n + \frac{a_{n+1}}{b_{n+1}} \]
and hence
\[ \begin{align*} \frac{A_{n+1}}{B_{n+1}} &= \frac{\left(b_n + \frac{a_{n+1}}{b_{n+1}}\right) A_{n-1} + a_n A_{n-2}}{\left(b_n + \frac{a_{n+1}}{b_{n+1}}\right) B_{n-1} + a_n B_{n-2}}\\ &= \frac{\left(b_{n+1} b_n + a_{n+1}\right)A_{n-1} + b_{n+1} a_n A_{n-2}}{\left(b_{n+1} b_n + a_{n+1}\right)B_{n-1} + b_{n+1} a_n B_{n-2}}\\ &= \frac{b_{n+1}\left(b_n A_{n-1} + a_n A_{n-2}\right) + a_{n+1} A_{n-1}}{b_{n+1}\left(b_n B_{n-1} + a_n B_{n-2}\right) + a_{n+1} B_{n-1}}\\ &= \frac{b_{n+1} A_n + a_{n+1} A_{n-1}}{b_{n+1} B_n + a_{n+1} B_{n-1}} \end{align*} \]
Finally, we have
\[ \begin{align*} \frac{A_0}{B_0} &= \frac{b_0}{1} = b_0\\ \frac{A_1}{B_1} &= \frac{b_1 A_0 + a_1 A_{-1}}{b_1 B_0 + a_1 B_{-1}} = \frac{b_1 b_0 + a_1}{b_1} = b_0 + \frac{a_1}{b_1} \end{align*} \]
Following the advice of Press et al[4], we shall use the former for \(x < s+1\) and the latter for \(x \geqslant s+1\) since these are roughly the conditions under which they converge most rapidly.
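The series is particularly easy to test since for \(s=1\) the lower incomplete gamma function has the closed form \(\gamma(1, x) = 1 - e^{-x}\). The following standalone sketch, with the illustrative name lowerGammaSeries rather than anything from the ak library, sums the series until the terms no longer affect the total.

```javascript
// Series for the unregularised lower incomplete gamma function:
// gamma(s, x) = e^-x x^s sum_n x^n / (s (s+1) ... (s+n)),
// with each term obtained from the last by multiplying by x/(s+n).
function lowerGammaSeries(s, x) {
  var term = 1/s, sum = term, n = 1;
  while (sum + term !== sum && n < 10000) { // stop once terms are lost to rounding
    term *= x/(s+n);
    sum  += term;
    ++n;
  }
  return Math.exp(-x + s*Math.log(x)) * sum;
}

console.log(lowerGammaSeries(1, 2), 1 - Math.exp(-2)); // should agree
```

A second closed form, \(\gamma(2, x) = 1 - (1+x)e^{-x}\), gives another easy point of comparison.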

Whilst it is trivial to iterate over the terms of the series, updating the sum until it converges, it is less clear how to do the same for the continued fraction. The trick is to note that if the value resulting from the first \(n\) fractions is
\[ \frac{A_n}{B_n} = b_0 + \dfrac{a_1}{b_1 + \dfrac{a_2}{b_2 + \dots \dfrac{a_n}{b_n}}} \]
then its numerator and denominator satisfy the recurrences
\[ \begin{align*} A_{-1} &= 1\\ A_0 &= b_0\\ A_{n+1} &= b_{n+1} A_n + a_{n+1} A_{n-1}\\ \\ B_{-1} &= 0\\ B_0 &= 1\\ B_{n+1} &= b_{n+1} B_n + a_{n+1} B_{n-1} \end{align*} \]
as shown in derivation 4.
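To see the recurrences in action we can apply them to a continued fraction whose value is known in closed form: with every \(a_n\) and \(b_n\) equal to one, the fraction equals the golden ratio \(\tfrac{1}{2}\left(1+\sqrt{5}\right)\), as this sketch confirms.

```javascript
// Evaluate b0 + a1/(b1 + a2/(b2 + ...)) with every a_n = b_n = 1 using
// A(n+1) = b A(n) + a A(n-1), B(n+1) = b B(n) + a B(n-1); the successive
// A(n)/B(n) are ratios of consecutive Fibonacci numbers.
function goldenByRecurrence(steps) {
  var Aprev = 1, A = 1;   // A_-1 = 1, A_0 = b_0 = 1
  var Bprev = 0, B = 1;   // B_-1 = 0, B_0 = 1
  var Anext, Bnext, n;
  for (n = 1; n <= steps; ++n) {
    Anext = A + Aprev;    // b_n A_{n-1} + a_n A_{n-2} with a_n = b_n = 1
    Bnext = B + Bprev;
    Aprev = A; A = Anext;
    Bprev = B; B = Bnext;
  }
  return A/B;
}

console.log(goldenByRecurrence(40), 0.5*(1 + Math.sqrt(5))); // should agree
```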

Again following the advice of Press et al, we shan't use this directly but shall instead use Lentz's method[5] in which
\[ \begin{align*} C_n &= \frac{A_n}{A_{n-1}}\\ D_n &= \frac{B_n}{B_{n-1}}\\ f_0 &= \frac{A_0}{B_0}\\ f_{n+1} &= \frac{C_{n+1}}{D_{n+1}} f_n = \frac{A_{n+1}}{A_n} \frac{B_n}{B_{n+1}} \frac{A_n}{B_n} = \frac{A_{n+1}}{B_{n+1}}\\ \end{align*} \]
The rules for \(C_n\) and \(D_n\) are easily derived from those for \(A_n\) and \(B_n\)
\[ \begin{align*} C_0 &= \frac{A_0}{A_{-1}} = b_0\\ C_{n+1} &= \frac{b_{n+1} A_n + a_{n+1} A_{n-1}}{A_n} = b_{n+1} + \frac{a_{n+1}}{C_n}\\ \\ D_0 &= \frac{B_0}{B_{-1}} = \infty\\ D_{n+1} &= \frac{b_{n+1} B_n + a_{n+1} B_{n-1}}{B_n} = b_{n+1} + \frac{a_{n+1}}{D_n} \end{align*} \]
One thing that we must be mindful of when implementing the incomplete gamma function with Lentz's method is that if \(C_n\) or \(D_n\) are close to zero then we may be faced with updates that are, numerically speaking, a little bit dodgy. We can work around this by setting a lower bound on the magnitude of these terms; specifically the square of the floating point epsilon.
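As a minimal sketch of Lentz's method, here it is applied to the same all-ones continued fraction for the golden ratio, complete with the lower bound on the magnitudes of \(C_n\) and \(D_n\); the guard never actually triggers for this particular fraction, but it shows where it belongs.

```javascript
// Lentz's method: maintain C_n = A_n/A_{n-1} and D_n = B_n/B_{n-1} so that
// f_n = A_n/B_n is updated multiplicatively by C_n/D_n at each step.
var TINY = Math.pow(2, -104); // of the order of the squared floating point epsilon

function goldenByLentz(steps) {
  var c = 1;         // C_0 = b_0 = 1
  var d = Infinity;  // D_0 is formally infinite, so a_1/D_0 vanishes
  var f = 1;         // f_0 = b_0 = 1
  var n;
  for (n = 1; n <= steps; ++n) {
    c = 1 + 1/c;     // C_{n+1} = b_{n+1} + a_{n+1}/C_n with a = b = 1
    d = 1 + 1/d;     // D_{n+1} = b_{n+1} + a_{n+1}/D_n

    // guard against numerically dodgy near-zero denominators
    if (Math.abs(c) < TINY) c = c >= 0 ? TINY : -TINY;
    if (Math.abs(d) < TINY) d = d >= 0 ? TINY : -TINY;

    f *= c/d;
  }
  return f;
}

console.log(goldenByLentz(60), 0.5*(1 + Math.sqrt(5))); // should agree
```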

Finally, the incomplete gamma functions are just as susceptible to overflow as the gamma function was, but rather than taking logs we shall simply express them as fractions of it
\[ \begin{align*} P(s, x) &= \frac{\gamma(s, x)}{\Gamma(s)}\\ Q(s, x) &= \frac{\Gamma(s, x)}{\Gamma(s)} \end{align*} \]
known as the regularised incomplete gamma functions.

ak.gammaP And ak.gammaQ

Listing 4 gives the implementations of \(P\) and \(Q\) which, after handling corner cases, simply forward to implementations of the more rapidly converging of the series and continued fraction approximations, gammaSeries and gammaFraction, according to the arguments s and x.

Listing 4: ak.gammaP And ak.gammaQ
ak.gammaP = function(s, x) {
 if(!(s>=0) || !(x>=0)) return ak.NaN;
 if(x===0)              return 0;
 if(x===ak.INFINITY)    return 1;
 if(s===0)              return 1;
 if(s===ak.INFINITY)    return 0;
 if(x<s+1)              return gammaSeries(s, x);
 return 1-gammaFraction(s, x);
};

ak.gammaQ = function(s, x) {
 if(!(s>=0) || !(x>=0)) return ak.NaN;
 if(x===0)              return 1;
 if(x===ak.INFINITY)    return 0;
 if(s===0)              return 0;
 if(s===ak.INFINITY)    return 1;
 if(x<s+1)              return 1-gammaSeries(s, x);
 return gammaFraction(s, x);
};

The implementation of gammaSeries is given in listing 5

Listing 5: gammaSeries
var GAMMA_STEPS = 10000;
function gammaSeries(s, x) {
 var gamma = (s===0.5) ? 0.5*LOG_PI : ak.logGamma(s);
 var scale = Math.exp(-x + s*Math.log(x) - gamma);
 var n  = 1;   
 var tn = 1/s;
 var sn = tn;
 do {
  tn *= x/(s+n);
  sn += tn;
  if(++n===GAMMA_STEPS) {
   throw new Error('failure to converge in ak.gammaP/ak.gammaQ');
  }
 }
 while(sn+tn!==sn);

 return scale*sn;
}

which simply adds up terms of the series expansion, tn, until they become too small to affect the sum, sn.
Listing 6 shows the implementation of gammaFraction which similarly updates the continued fraction, fn, until the multiplication by cn/dn falls within rounding error.

Listing 6: gammaFraction
var GAMMA_EPS = ak.EPSILON*ak.EPSILON;

function gammaFraction(s, x) {
 var gamma = (s===0.5) ? 0.5*LOG_PI : ak.logGamma(s);
 var scale = Math.exp(-x + s*Math.log(x) - gamma);  
 var n  = 1;
 var an = 1;
 var bn = x+1-s;
 var cn = an/GAMMA_EPS;
 var dn = bn;
 var fn = 1/bn;
 do {
  an  = -n*(n-s);
  bn += 2;
  cn  = bn + an/cn;
  dn  = bn + an/dn;

  if(Math.abs(cn)<GAMMA_EPS) cn = cn>=0 ? GAMMA_EPS : -GAMMA_EPS;
  if(Math.abs(dn)<GAMMA_EPS) dn = dn>=0 ? GAMMA_EPS : -GAMMA_EPS;

  fn *= cn/dn;
  if(++n===GAMMA_STEPS) {
   throw new Error('failure to converge in ak.gammaP/ak.gammaQ');
  }
 }
 while(Math.abs(cn/dn-1)>=ak.EPSILON);

 return scale*fn;
}

Note how we're constraining the magnitudes of cn and dn to be no smaller than GAMMA_EPS, the square of the floating point epsilon, as described above.
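To see how the pieces fit together outside of the ak library, here is a self-contained sketch of the regularised lower incomplete gamma function along the lines of listings 4 to 6, reusing the Lanczos coefficients for the scale factor; every name in it is illustrative and the corner case handling of listing 4 is omitted.

```javascript
var TINY = Math.pow(2, -104); // of the order of the squared floating point epsilon

// The same Lanczos coefficients as listing 3, valid for z >= 0.5.
function lnGamma(z) {
  var t = z + 6.5;
  var x = 9.9999999999980993e-1
        + 6.7652036812188510e+2 / z
        - 1.2591392167224028e+3 / (z+1)
        + 7.7132342877765313e+2 / (z+2)
        - 1.7661502916214059e+2 / (z+3)
        + 1.2507343278686905e+1 / (z+4)
        - 1.3857109526572012e-1 / (z+5)
        + 9.9843695780195716e-6 / (z+6)
        + 1.5056327351493116e-7 / (z+7);
  return 0.5*Math.log(2*Math.PI) + (z-0.5)*Math.log(t) - t + Math.log(x);
}

// P(s, x) = gamma(s, x)/Gamma(s) for s >= 0.5, x >= 0, switching between the
// series and the continued fraction at x = s+1.
function regularisedP(s, x) {
  var scale = Math.exp(-x + s*Math.log(x) - lnGamma(s));
  var n, an, bn, cn, dn, fn, tn, sn, delta;

  if (x < s+1) {
    // sum the series for the lower function until the terms are lost to rounding
    tn = 1/s; sn = tn;
    for (n = 1; sn + tn !== sn && n < 10000; ++n) {
      tn *= x/(s+n);
      sn += tn;
    }
    return scale*sn;
  }

  // Lentz's method on the continued fraction for the upper function
  bn = x+1-s; cn = 1/TINY; dn = bn; fn = 1/bn;
  for (n = 1; n < 10000; ++n) {
    an  = -n*(n-s);
    bn += 2;
    cn  = bn + an/cn;
    dn  = bn + an/dn;
    if (Math.abs(cn) < TINY) cn = cn >= 0 ? TINY : -TINY;
    if (Math.abs(dn) < TINY) dn = dn >= 0 ? TINY : -TINY;
    delta = cn/dn;
    fn *= delta;
    if (Math.abs(delta-1) < Number.EPSILON) break;
  }
  return 1 - scale*fn;
}

console.log(regularisedP(1, 3), 1 - Math.exp(-3)); // should agree
```

The closed forms \(P(1, x) = 1 - e^{-x}\) and \(P(3, x) = 1 - \tfrac{1}{2}\left(x^2+2x+2\right)e^{-x}\) exercise both branches, and \(P\left(\tfrac{1}{2}, x\right)\) should match the error function \(\operatorname{erf}\left(\sqrt{x}\right)\).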

We shall have to wait for examples of why the incomplete gamma functions are useful, but in the meantime their implementations can be found together with those of the complete gamma functions in GammaFunction.js.


[1] Define Normal, 2014.

[2] Lanczos, C., A Precision Approximation of the Gamma Function, Journal of the Society for Industrial and Applied Mathematics, Series B (Numerical Analysis), Vol. 1, pp. 86-96, Society for Industrial and Applied Mathematics, 1964.

[3] GNU Scientific Library.

[4] Press, W.H. et al, Numerical Recipes in C (2nd ed.), Cambridge University Press, 1992.

[5] Lentz, W.J., Generating Bessel Functions in Mie Scattering Calculations Using Continued Fractions, Applied Optics, Vol. 15, pp. 668-671, 1976.
