Beta Animals


Several years ago we took a look at the gamma function \(\Gamma\)[1], which is a generalisation of the factorial to non-integers, being equal to the factorial of a non-negative integer \(n\) when passed an argument of \(n+1\) and smoothly interpolating between the factorials. Like the normal cumulative distribution function \(\Phi\)[2], it and its related functions are examples of special functions, so named because they frequently crop up in the solutions to interesting mathematical problems but can't be expressed as simple formulae, forcing us to resort to numerical approximation.
This time we shall take a look at another family of special functions derived from the beta function \(\mathrm{B}\)[3].

The Beta Function

Similarly to how the gamma function is defined by the integral
\[ \Gamma(z) = \int_0^\infty x^{z-1} e^{-x} \,\mathrm{d}x \]
the beta function is defined by
\[ \mathrm{B}(a, b) = \int_0^1 t^{a-1} (1-t)^{b-1} \,\mathrm{d}t \]
for \(a\) and \(b\) greater than zero. Derivation 1 shows that we can express it in terms of the gamma function with
\[ \mathrm{B}(a, b) = \frac{\Gamma(a) \times \Gamma(b)}{\Gamma(a+b)} \]
Derivation 1: The Relationship Between B and Γ
Firstly, we have
\[ \Gamma(a) \times \Gamma(b) = \int_0^\infty x^{a-1} e^{-x} \,\mathrm{d}x \times \int_0^\infty y^{b-1} e^{-y} \,\mathrm{d}y = \int_0^\infty \int_0^\infty x^{a-1} y^{b-1} e^{-(x+y)}\,\mathrm{d}x\,\mathrm{d}y \]
The substitutions \(x=tz\) and \(y=(1-t)z\) mean that the regions
\[ \begin{align*} (x, y) &\in [0, \infty] \times [0, \infty]\\ (z, t) &\in [0, \infty] \times [0, 1] \end{align*} \]
where \(\in\) means within, are equivalent and yield the Jacobian matrix and determinant
\[ \begin{align*} \mathbf{J} &= \begin{pmatrix} \frac{\partial x}{\partial t} & \frac{\partial x}{\partial z}\\ \frac{\partial y}{\partial t} & \frac{\partial y}{\partial z} \end{pmatrix} = \begin{pmatrix} z & t\\ -z & 1-t \end{pmatrix}\\ \left|\mathbf{J}\right| &= z(1-t) + zt = z \end{align*} \]
and so using integration by substitution
\[ \begin{align*} \Gamma(a) \times \Gamma(b) &= \int_0^\infty \int_0^1 (tz)^{a-1} ((1-t)z)^{b-1} e^{-(tz+(1-t)z)} |z| \,\mathrm{d}t\,\mathrm{d}z\\ &= \int_0^\infty z^{a-1} z^{b-1} e^{-z} |z| \,\mathrm{d}z \times \int_0^1 t^{a-1} (1-t)^{b-1} \,\mathrm{d}t\\ &= \int_0^\infty z^{a+b-1} e^{-z} \,\mathrm{d}z \times \int_0^1 t^{a-1} (1-t)^{b-1} \,\mathrm{d}t = \Gamma(a+b) \times \mathrm{B}(a, b) \end{align*} \]
since \(z\) is non-negative and so is equal to its magnitude \(|z|\).

Derivation 2: The Symmetry Of B
Given the substitution
\[ \begin{align*} t &= 1-u\\ u &= 1-t\\ \frac{\mathrm{d}t}{\mathrm{d}u} &= -1 \end{align*} \]
we have
\[ \begin{align*} \mathrm{B}(a, b) &= \int_0^1 t^{a-1} (1-t)^{b-1} \,\mathrm{d}t\\ &= \int_{1-0}^{1-1} (1-u)^{a-1} u^{b-1} \,\frac{\mathrm{d}t}{\mathrm{d}u}\mathrm{d}u\\ &= \int_1^0 -u^{b-1} (1-u)^{a-1} \,\mathrm{d}u\\ &= \int_0^1 u^{b-1} (1-u)^{a-1} \,\mathrm{d}u = \mathrm{B}(b, a) \end{align*} \]
implying that
\[ \mathrm{B}(b, a) = \mathrm{B}(a, b) \]
which can be confirmed directly from the integral with the substitution \(t = 1-u\), as shown by derivation 2.
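For positive integer arguments we can confirm the relationship between the beta and gamma functions numerically, since \(\Gamma(n+1)=n!\) makes the right hand side a ratio of factorials. The following is a standalone sketch rather than part of the ak library, approximating the defining integral with the midpoint rule:

```javascript
// Confirm B(a,b) = Γ(a)Γ(b)/Γ(a+b) for integer a and b by comparing a
// midpoint-rule approximation of the defining integral with the
// factorial form of the gamma functions.
function betaIntegral(a, b, steps) {
 var sum = 0, h = 1/steps;
 for (var i = 0; i < steps; ++i) {
  var t = (i + 0.5) * h;
  sum += Math.pow(t, a-1) * Math.pow(1-t, b-1);
 }
 return sum * h;
}

function factorial(n) { var f = 1; while (n > 1) f *= n--; return f; }

var lhs = betaIntegral(2, 3, 100000);
var rhs = factorial(1) * factorial(2) / factorial(4); // Γ(2)Γ(3)/Γ(5) = 1/12
console.log(lhs, rhs); // both approximately 1/12
```

For non-integer arguments the same integral could instead be compared against a numerical approximation of the gamma function.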

More Properties Of The Beta Function

We can also therefore use the properties of the gamma function to find some more properties of the beta function. For example, from
\[ \Gamma(1) = 1 \]
\[ \Gamma(z+1) = z \Gamma(z) \]
we get
\[ \mathrm{B}(a, 1) = \frac{\Gamma(a) \times \Gamma(1)}{\Gamma(a+1)} = \frac{\Gamma(a) \times \Gamma(1)}{a\Gamma(a)} = \frac{1}{a} \]
\[ \mathrm{B}(a+1, b) = \frac{\Gamma(a+1) \times \Gamma(b)}{\Gamma(a+1+b)} = \frac{a \Gamma(a) \times \Gamma(b)}{(a+b)\Gamma(a+b)} = \frac{a}{a+b} \times \mathrm{B}(a, b) \]
which, by the symmetry of the beta function, means that
\[ \mathrm{B}(a+1, b) + \mathrm{B}(a, b+1) = \frac{a}{a+b} \times \mathrm{B}(a, b) + \frac{b}{a+b} \times \mathrm{B}(a, b) = \frac{a+b}{a+b} \times \mathrm{B}(a, b) = \mathrm{B}(a, b) \]
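For positive integer arguments these identities can be checked directly, since \(\mathrm{B}(a,b)\) then equals \(\frac{(a-1)! \times (b-1)!}{(a+b-1)!}\); a standalone sketch, not part of the ak library:

```javascript
// Check B(a,1) = 1/a, B(a+1,b) = a/(a+b) × B(a,b) and the sum identity
// B(a+1,b) + B(a,b+1) = B(a,b) using B(a,b) = (a-1)!(b-1)!/(a+b-1)!
// for positive integer arguments.
function factorial(n) { var f = 1; while (n > 1) f *= n--; return f; }
function betaInt(a, b) { return factorial(a-1)*factorial(b-1)/factorial(a+b-1); }

console.log(betaInt(4, 1), 1/4);                           // B(a,1) = 1/a
console.log(betaInt(3, 5), 2/7 * betaInt(2, 5));           // B(a+1,b) = a/(a+b) × B(a,b)
console.log(betaInt(3, 5) + betaInt(2, 6), betaInt(2, 5)); // sum identity
```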
Given Euler's reflection formula
\[ \Gamma(1-z)\Gamma(z) = \frac{\pi}{\sin(\pi z)} \]
we also have
\[ \mathrm{B}(a, 1-a) = \frac{\Gamma(a) \times \Gamma(1-a)}{\Gamma(a+1-a)} = \frac{\Gamma(1-a) \times \Gamma(a)}{\Gamma(1)} = \frac{\pi}{\sin(\pi a)} \]
\[ \begin{align*} \mathrm{B}(a, b) \times \mathrm{B}(a+b, 1-b) &= \frac{\Gamma(a) \times \Gamma(b)}{\Gamma(a+b)} \times \frac{\Gamma(a+b) \times \Gamma(1-b)}{\Gamma(a+b+1-b)}\\ &= \frac{\Gamma(a) \times \Gamma(a+b)}{\Gamma(a+b) \times \Gamma(a+1)} \times \Gamma(1-b) \Gamma(b)\\ &= \frac{\Gamma(a)}{a\Gamma(a)} \times \frac{\pi}{\sin(\pi b)} = \frac{\pi}{a\sin(\pi b)} \end{align*} \]
Furthermore, for a non-negative integer \(z\)
\[ \Gamma(z+1) = z! \]
where the exclamation mark stands for the factorial, so that if \(a\) and \(b\) are non-negative integers
\[ \begin{align*} \frac{1}{(a+b+1) \times \mathrm{B}(a+1, b+1)} &= \frac{\Gamma(a+1+b+1)}{(a+b+1) \times \Gamma(a+1) \times \Gamma(b+1)}\\ &= \frac{(a+b+1)!}{(a+b+1) \times a! \times b!} = \frac{(a+b)!}{a! \times b!} = {}^{a+b}C_a \end{align*} \]
where
\[ {}^nC_r = \frac{n!}{r! \times (n-r)!} \]
is the combination which counts the number of ways that we can choose \(r\) from \(n\) objects when their order doesn't matter.
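Rearranged, this gives a way to evaluate combinations via the beta function with \({}^nC_r = \frac{1}{(n+1) \times \mathrm{B}(r+1, n-r+1)}\), as illustrated by this standalone sketch, not part of the ak library:

```javascript
// Evaluate nCr via the identity nCr = 1/((n+1) × B(r+1, n-r+1)), with
// B computed from factorials for integer arguments.
function factorial(n) { var f = 1; while (n > 1) f *= n--; return f; }
function beta(a, b) { return factorial(a-1)*factorial(b-1)/factorial(a+b-1); }
function choose(n, r) { return 1/((n+1) * beta(r+1, n-r+1)); }

console.log(choose(5, 2)); // approximately 10
```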

Generalising The Beta Function

Using Euler's reflection formula, the gamma function has been generalised to non-positive as well as positive arguments, as demonstrated by program 1 using our ak.gamma function.

Program 1: The Real Gamma Function

Whilst the definition of the beta function requires its arguments to be positive, since we're using gamma functions rather than the integral from that definition to calculate it, we can easily generalise it to non-positive arguments too. Of course we must keep in mind the fact that the gamma function isn't finite for non-positive integer arguments and so the beta function won't be either.

The gamma function is also valid for complex arguments with positive real parts and has been generalised to the entire complex plane in the same way that it was for non-positive numbers, with the same caveat that it is not finite when its argument's real part is a non-positive integer and its imaginary part is zero. We may consequently just as easily generalise the beta function to the complex plane as well.

Implementing The Beta Function

To support both real and complex arguments, the ak.beta function defined in listing 1 uses our overloading mechanism to dispatch to betaR for numbers and to betaC for ak.complex objects.

Listing 1: ak.beta
function betaR(a, b) {
 return a>=0 && b>=0 ? Math.exp(ak.logGamma(a)+ak.logGamma(b)-ak.logGamma(a+b))
                     : ak.gamma(a)*ak.gamma(b)/ak.gamma(a+b);
}

function betaC(a, b) {
 return ak.div(ak.mul(ak.gamma(a), ak.gamma(b)), ak.gamma(ak.add(a, b)));
}

ak.beta = function(a, b) {return ak.beta[ak.type(a)][ak.type(b)](a, b);};

ak.overload(ak.beta, [ak.COMPLEX_T, ak.COMPLEX_T], betaC);
ak.overload(ak.beta, [ak.COMPLEX_T, ak.NUMBER_T],  betaC);
ak.overload(ak.beta, [ak.NUMBER_T,  ak.COMPLEX_T], betaC);
ak.overload(ak.beta, [ak.NUMBER_T,  ak.NUMBER_T],  betaR);

In the former we use ak.logGamma to compute the logarithm of the beta function and then return its exponential to minimise the chance of overflow when both \(a\) and \(b\) are non-negative. Note that ak.logGamma returns infinity for an argument of zero and so ak.beta will return infinity if one of them is zero and NaN if both are. When either \(a\) or \(b\) is negative we must use ak.gamma instead and accept the risk of overflow since ak.logGamma returns NaN. In the latter we use our overloaded arithmetic operators to calculate the complex beta function from the results of the ak.complex overload of ak.gamma.
Whilst we're about it we might as well implement the logarithm of the beta function too and so listing 2 does so with ak.logBeta.

Listing 2: ak.logBeta
ak.logBeta = function(a, b) {
 return ak.logGamma(a)+ak.logGamma(b)-ak.logGamma(a+b);
};

Note that since ak.logGamma will return NaN when either \(a\) or \(b\) is negative, ak.logBeta is only defined for non-negative arguments.

Program 2 plots the result of ak.beta for arguments from minus four to plus four, using the arctangent to transform results in the interval \([-\infty,\infty]\) to \([-\frac12\pi,\frac12\pi]\) whilst preserving their relative sizes, in the sense that
\[ x < y \rightarrow \arctan(x) < \arctan(y) \]
where \(\rightarrow\) means implies, with the brightness of the points set logarithmically away from grey at zero.

Program 2: The Beta Function

Here you can see that the beta function smoothly falls toward zero quite quickly for non-negative \(a\) and \(b\), alternately grows toward plus or minus infinity between consecutive non-positive integers when one of them is negative, and behaves ever more extremely as both arguments become increasingly negative, which is hardly surprising given the behaviour of the gamma function for negative arguments.

Sadly, we don't have enough dimensions to plot the entire complex beta function and so program 3 takes a slice through it by setting the imaginary parts of the arguments to fixed values, contained in ai and bi, and plotting the real and imaginary parts of the results for real parts of the arguments from minus to plus four using the same transformation and brightness scale as program 2.

Program 3: The Complex Beta Function

The reason that it is so much better behaved than the real beta function is that its infinities lie on the real axis where the imaginary part is zero; if you decrease the values in ai and bi by an order of magnitude you'll see the misbehaviour return.
Note that unless ai and bi contain the same value, the graphs will no longer display the symmetry of the beta function since equivalent points on the two axes represent different complex numbers.

The Incomplete Beta Function

Derivation 3: Reflecting Bx
From the definition of the incomplete beta function we have
\[ \begin{align*} \mathrm{B}_{1-x}(a, b) &= \int_0^{1-x} t^{a-1} (1-t)^{b-1} \,\mathrm{d}t\\ &= \int_0^1 t^{a-1} (1-t)^{b-1} \,\mathrm{d}t - \int_{1-x}^1 t^{a-1} (1-t)^{b-1} \,\mathrm{d}t\\ &= \mathrm{B}(a, b) - \int_{1-x}^1 t^{a-1} (1-t)^{b-1} \,\mathrm{d}t \end{align*} \]
Integrating the second term by substitution with \(t = 1-u\) yields
\[ \begin{align*} \int_{1-x}^1 t^{a-1} (1-t)^{b-1} \,\mathrm{d}t &= \int_{1-(1-x)}^{1-1} (1-u)^{a-1} u^{b-1} \,\frac{\mathrm{d}t}{\mathrm{d}u}\mathrm{d}u\\ &= \int_x^0 -u^{b-1} (1-u)^{a-1} \,\mathrm{d}u\\ &= \int_0^x u^{b-1} (1-u)^{a-1} \,\mathrm{d}u = \mathrm{B}_x(b, a) \end{align*} \]
as required.
Just as is done with the gamma function, we can create an incomplete version of the beta function[4] by integrating from zero up to some \(x\) rather than over the whole interval from zero to one
\[ \mathrm{B}_x(a, b) = \int_0^x t^{a-1} (1-t)^{b-1} \,\mathrm{d}t \]
We can also regularise this by dividing by the complete beta function so that the result lies between zero and one
\[ I_x(a, b) = \frac{\mathrm{B}_x(a, b)}{\mathrm{B}(a, b)} \]
We can reflect \(x\) about \(\frac12\) with
\[ \mathrm{B}_{1-x}(a, b) = \mathrm{B}(a, b)-\mathrm{B}_{x}(b, a) \]
as proven in derivation 3, and consequently
\[ I_{1-x}(a, b) = 1-I_x(b, a) \]
by exploiting the symmetry of \(\mathrm{B}\).
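The reflection formula is easily verified numerically; a standalone sketch, not part of the ak library, using midpoint-rule integration of the incomplete beta integral:

```javascript
// Check the reflection B_{1-x}(a,b) = B(a,b) - B_x(b,a) by midpoint-rule
// integration of the incomplete beta integral; B(a,b) is B_x(a,b) at x=1.
function incBeta(a, b, x, steps) {
 var sum = 0, h = x/steps;
 for (var i = 0; i < steps; ++i) {
  var t = (i + 0.5) * h;
  sum += Math.pow(t, a-1) * Math.pow(1-t, b-1);
 }
 return sum * h;
}

var a = 2, b = 3, x = 0.3;
var lhs = incBeta(a, b, 1-x, 100000);
var rhs = incBeta(a, b, 1, 100000) - incBeta(b, a, x, 100000);
console.log(lhs, rhs); // the two sides agree
```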

A property that the incomplete beta function shares with the complete beta function is
\[ \mathrm{B}_x(a+1, b) + \mathrm{B}_x(a, b+1) = \mathrm{B}_x(a, b) \]
which follows from another pair of similar properties
\[ \begin{align*} \mathrm{B}_x(a+1, b) &= \frac{a}{a+b}\mathrm{B}_x(a, b) - \frac{x^a (1-x)^b}{a+b}\\ \\ \mathrm{B}_x(a, b+1) &= \frac{b}{a+b}\mathrm{B}_x(a, b) + \frac{x^a (1-x)^b}{a+b} \end{align*} \]
as proven by derivation 4.

Derivation 4: Bx(a+1, b) And Bx(a, b+1)
Defining
\[ f(x) = \frac{x^a (1-x)^b}{a+b} \]
we have
\[ \begin{align*} \frac{a}{a+b}\mathrm{B}_x(a, b) - \frac{x^a (1-x)^b}{a+b} &= \frac{a}{a+b} \int_0^x t^{a-1} (1-t)^{b-1} \,\mathrm{d}t - \int_0^x f^\prime(t) \,\mathrm{d}t\\ &= \int_0^x \frac{a t^{a-1} (1-t)^{b-1}}{a+b} - \left(\frac{a t^{a-1} (1-t)^b}{a+b} - \frac{b t^a (1-t)^{b-1}}{a+b}\right) \,\mathrm{d}t\\ &= \int_0^x \frac{t^{a-1} (1-t)^{b-1}}{a+b} \times \left(a - a(1-t) + bt\right)\,\mathrm{d}t\\ &= \int_0^x \frac{t^{a-1} (1-t)^{b-1}}{a+b} \times \left(a+b\right)t\,\mathrm{d}t\\ &= \int_0^x t^a (1-t)^{b-1}\,\mathrm{d}t = \mathrm{B}_x(a+1, b) \end{align*} \]
and similarly
\[ \begin{align*} \frac{b}{a+b}\mathrm{B}_x(a, b) + \frac{x^a (1-x)^b}{a+b} &= \int_0^x \frac{b t^{a-1} (1-t)^{b-1}}{a+b} + \left(\frac{a t^{a-1} (1-t)^b}{a+b} - \frac{b t^a (1-t)^{b-1}}{a+b}\right) \,\mathrm{d}t\\ &= \int_0^x \frac{t^{a-1} (1-t)^{b-1}}{a+b} \times \left(b + a(1-t) - bt\right)\,\mathrm{d}t\\ &= \int_0^x \frac{t^{a-1} (1-t)^{b-1}}{a+b} \times (a+b) \times(1-t)\,\mathrm{d}t = \mathrm{B}_x(a, b+1) \end{align*} \]
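These recurrences are also straightforward to confirm numerically; a standalone sketch, not part of the ak library:

```javascript
// Check B_x(a+1,b) = a/(a+b) × B_x(a,b) - x^a(1-x)^b/(a+b) with a
// midpoint-rule approximation of the incomplete beta integral.
function incBeta(a, b, x, steps) {
 var sum = 0, h = x/steps;
 for (var i = 0; i < steps; ++i) {
  var t = (i + 0.5) * h;
  sum += Math.pow(t, a-1) * Math.pow(1-t, b-1);
 }
 return sum * h;
}

var a = 2, b = 3, x = 0.4;
var lhs = incBeta(a+1, b, x, 100000);
var rhs = a/(a+b) * incBeta(a, b, x, 100000) - Math.pow(x, a)*Math.pow(1-x, b)/(a+b);
console.log(lhs, rhs); // the two sides agree
```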

The incomplete beta function can't be expressed in terms of the incomplete gamma function but fortunately it can be calculated with the continued fraction
\[ \frac{a \mathrm{B}_x(a, b)}{x^a (1-x)^b} = \dfrac{1}{1-\dfrac{d_1}{1+\dfrac{d_2}{1-\dfrac{d_3}{1+\dfrac{d_4}{1-\dots}}}}} \]
\[ \begin{align*} d_{2m} &= \frac{m \times (b-m) \times x}{(a+2m-1) \times (a+2m)}\\ \\ d_{2m+1} &= \frac{(a+m) \times (a+b+m) \times x}{(a+2m) \times (a+2m+1)} \end{align*} \]
This converges rapidly for
\[ x \leqslant \frac{a+1}{a+b+2} \]
and otherwise we can exploit the fact that
\[ x \geqslant \frac{a+1}{a+b+2} \rightarrow 1-x \leqslant 1-\frac{a+1}{a+b+2} = \frac{(a+b+2)-(a+1)}{a+b+2} = \frac{b+1}{a+b+2} \]
so that it will do so for \(\mathrm{B}_{1-x}(b, a)\) and we can therefore rearrange the reflection formula to efficiently evaluate it with
\[ \mathrm{B}_{x}(a, b) = \mathrm{B}(b, a)-\mathrm{B}_{1-x}(b, a) \]
To calculate it we shall use Lentz's method[5] which iteratively produces the results of the first \(n\) terms of a continued fraction
\[ f_n = b_0 + \dfrac{a_1}{b_1 + \dfrac{a_2}{b_2 + \dots \dfrac{a_n}{b_n}}} \]
\[ \begin{align*} C_0 &= b_0 & D_0 &= \infty & f_0 &= b_0\\ C_{n+1} &= b_{n+1} + \frac{a_{n+1}}{C_n} & D_{n+1} &= b_{n+1} + \frac{a_{n+1}}{D_n} & f_{n+1} &= \frac{C_{n+1}}{D_{n+1}} \times f_n \end{align*} \]
which we proved to be correct when we looked at the incomplete gamma function[1]. Since \(b_0\) is zero in this case we would set \(f_0\) to zero and then multiply it by infinity, so we must start at \(n\) equal to one instead
\[ \begin{align*} C_1 &= \infty & D_1 &= 1 & f_1 &= 1\\ C_{n+1} &= 1 + (-1)^n \times \frac{d_n}{C_n} & D_{n+1} &= 1 + (-1)^n \times \frac{d_n}{D_n} & f_{n+1} &= \frac{C_{n+1}}{D_{n+1}} \times f_n \end{align*} \]
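As a concrete illustration of Lentz's method, the following standalone sketch applies the \(C_n\) and \(D_n\) recurrences above to the continued fraction for the golden ratio \(\phi\), in which every \(a_n\) and \(b_n\) equals one; the function names and the convergence threshold are choices of this sketch rather than part of the ak library.

```javascript
// Lentz's method for f = b0 + a1/(b1 + a2/(b2 + ...)) using the
// recurrences Cn+1 = bn+1 + an+1/Cn, Dn+1 = bn+1 + an+1/Dn and
// fn+1 = fn × Cn+1/Dn+1. Here b0 is non-zero so we can start at n
// equal to zero, unlike for the beta function's continued fraction.
function lentz(a, b, eps, steps) {
 var cn = b(0);     // C0 = b0
 var dn = Infinity; // D0 = infinity
 var fn = b(0);     // f0 = b0

 for (var n = 1; n <= steps; ++n) {
  cn = b(n) + a(n)/cn;
  dn = b(n) + a(n)/dn;
  fn *= cn/dn;
  if (Math.abs(cn/dn - 1) < eps) return fn;
 }
 throw new Error('failure to converge');
}

// the golden ratio: phi = 1 + 1/(1 + 1/(1 + ...))
var phi = lentz(function(n){return 1;}, function(n){return 1;}, 1e-12, 100);
console.log(phi); // approximately 1.6180339887
```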

Implementing The Incomplete Beta Function

Just as we did for the incomplete gamma function, we shall only be implementing the regularised version of the incomplete beta function which we can recover from the continued fraction with
\[ I_x(a, b) = \frac{a \mathrm{B}_x(a, b)}{x^a (1-x)^b} \times \frac{x^a (1-x)^b}{a \mathrm{B}(a,b)} \]
The betaFraction function defined in listing 3 does so, restricting the magnitudes of \(C_n\) and \(D_n\) to be no smaller than BETA_EPS in order to avoid the production of NaNs during the calculation, terminating the iteration when the ratio of \(C_n\) and \(D_n\) equals one to within rounding error and finally using logarithms to perform the multiplication in order to minimise overflow and underflow.

Listing 3: betaFraction
var BETA_STEPS = 10000;
var BETA_EPS   = Math.pow(2, -52); // assumed: the floor for |Cn| and |Dn| and the convergence threshold

function betaFraction(a, b, x) {
 var scale = a*Math.log(x) + b*Math.log(1-x) - Math.log(a) - ak.logBeta(a, b);
 var n  = 1;
 var an = 1;
 var cn = 1/BETA_EPS;
 var dn = 1;
 var fn = 1;
 var m;

 do {
  if(n%2===0) {m = n/2; an = m*(b-m)*x/((a+n-1)*(a+n));}
  else {m = (n-1)/2; an = -(a+m)*(a+b+m)*x/((a+n-1)*(a+n));}
  cn = 1 + an/cn;
  dn = 1 + an/dn;

  if(Math.abs(cn)<BETA_EPS) cn = cn>=0 ? BETA_EPS : -BETA_EPS;
  if(Math.abs(dn)<BETA_EPS) dn = dn>=0 ? BETA_EPS : -BETA_EPS;

  fn *= cn/dn;
  if(++n===BETA_STEPS) throw new Error('failure to converge in ak.betaP/betaQ');
 }
 while(Math.abs(cn-dn)>BETA_EPS*Math.abs(dn));

 return Math.exp(scale + Math.log(fn));
}

In order to be consistent with our implementation of the regularised incomplete gamma function ak.gammaP our implementation of \(I_x\) is called ak.betaP, as shown by listing 4.

Listing 4: ak.betaP
ak.betaP = function(a, b, x) {
 a = Number(a);
 b = Number(b);
 if(!(a>0) || !(b>0)) return ak.NaN;

 x = Number(x);
 if(!(x>=0) || !(x<=1)) return ak.NaN;
 if(x===0) return 0;
 if(x===1) return 1;

 return x<(a+1)/(a+b+2) ? betaFraction(a, b, x) : 1-betaFraction(b, a, 1-x);
};

Similarly, the implementation of the complement of \(I_x\), which is the result of subtracting it from one, is given by ak.betaQ in listing 5.

Listing 5: ak.betaQ
ak.betaQ = function(a, b, x) {
 a = Number(a);
 b = Number(b);
 if(!(a>0) || !(b>0)) return ak.NaN;

 x = Number(x);
 if(!(x>=0) || !(x<=1)) return ak.NaN;
 if(x===0) return 1;
 if(x===1) return 0;

 return x<(a+1)/(a+b+2) ? 1-betaFraction(a, b, x) : betaFraction(b, a, 1-x);
};

In both cases we return NaN if either \(a\) or \(b\) is non-positive or if \(x\) is outside of the interval \([0,1]\), treating its limits as special cases.
Note that when \(x\) is greater than or equal to \(\frac{a+1}{a+b+2}\) we're calculating the complement with
\[ 1-I_x(a, b) = 1-\left(1-I_{1-x}(b, a)\right) = I_{1-x}(b, a) \]
which means that it won't be subject to significant rounding and cancellation errors, unlike subtracting the result of ak.betaP from one.

Finally, program 4 plots the results of ak.betaP in green and ak.betaQ in red for \(x\) from zero to one and a range of values of \(a\) and \(b\)

Program 4: The Incomplete Beta Functions

and you can find the implementations of all of these functions in BetaFunction.js.


[1] Gamma Gamma Hey!, 2015.

[2] Define Normal, 2014.

[3] Askey, R.A. & Roy, R., Beta Function, Ch. 5, Sec. 12, Digital Library of Mathematical Functions, National Institute of Standards and Technology.

[4] Paris, R.B., Incomplete Beta Functions, Ch. 8, Sec. 17, Digital Library of Mathematical Functions, National Institute of Standards and Technology.

[5] Lentz, W.J., Generating Bessel Functions in Mie Scattering Calculations Using Continued Fractions, Applied Optics, Vol. 15, pp. 668-671, 1976.
