What's The Lucky Number?


Over the last few months we have been looking at Bernoulli processes[1], which are sequences of Bernoulli trials, being observations of a Bernoulli distributed random variable with a success probability of \(p\). We have seen that the number of failures before the first success follows the geometric distribution[2] and the number of failures before the \(r^\mathrm{th}\) success follows the negative binomial distribution[3], which are the discrete analogues of the exponential[4] and gamma[5] distributions respectively.
This time we shall take a look at the binomial distribution, which governs the number of successes out of \(n\) trials and is the Bernoulli process's counterpart of the Poisson distribution[6], converging to it as the number of trials grows large and the success probability shrinks with their product \(n \times p\) held fixed.

The Binomial Distribution

Since the successes of Bernoulli trials are indistinguishable, the number of ways that we can have \(k\) of them out of \(n\) trials is the combination
\[ {}^{n}C_k = \frac{n!}{k! \times (n-k)!} \]
where the exclamation mark stands for the factorial. The probability mass function, or PMF, of the binomial distribution is consequently
\[ f_{n,p}(k) = \frac{n!}{k! \times (n-k)!} \times (1-p)^{n-k} \times p^k \]
and gives the distribution its name from the expansion of the binomial function
\[ (x+y)^n = \sum_{k=0}^n {}^{n}C_k \times y^{n-k} \times x^k \]
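For example, with four trials and a success probability of one half, the chance of observing exactly two successes is
\[ f_{4,\frac{1}{2}}(2) = \frac{4!}{2! \times 2!} \times \left(1-\tfrac{1}{2}\right)^{2} \times \left(\tfrac{1}{2}\right)^{2} = 6 \times \tfrac{1}{16} = \tfrac{3}{8} \]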
The characteristic function, or CF, is defined as the expectation
\[ \hat{f}_{n,p}(t) = E\left[e^{itK}\right] = \sum_{k=0}^n f_{n,p}(k) \times e^{itk} \]
where \(i\) is the imaginary unit and \(\sum\) is the summation sign. This is simply the expansion of another binomial function
\[ \begin{align*} \left(p\times e^{it} + (1-p)\right)^n &= \sum_{k=0}^n {}^{n}C_k \times (1-p)^{n-k} \times \left(p \times e^{it}\right)^k\\ &= \sum_{k=0}^n {}^{n}C_k \times (1-p)^{n-k} \times p^k \times e^{itk} = \sum_{k=0}^n f_{n,p}(k) \times e^{itk} \end{align*} \]
The cumulative distribution function, or CDF, is given by
\[ F_{n,p}(k) = I_{1-p}\left(n - \lfloor k \rfloor, \lfloor k \rfloor + 1\right) = 1 - I_p\left(\lfloor k \rfloor + 1, n - \lfloor k \rfloor\right) \]
where \(I_x(a, b)\) is the incomplete beta function[7], as proven in derivation 1.
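As a quick sanity check, for two trials with a success probability of one half this yields
\[ F_{2,\frac{1}{2}}(1) = 1 - I_{\frac{1}{2}}(2, 1) = 1 - \frac{\Gamma(3)}{\Gamma(2) \times \Gamma(1)} \times \int_0^{\frac{1}{2}} t \,\mathrm{d}t = 1 - 2 \times \tfrac{1}{8} = \tfrac{3}{4} \]
which agrees with summing the PMF directly, since \(f_{2,\frac{1}{2}}(0) + f_{2,\frac{1}{2}}(1) = \tfrac{1}{4} + \tfrac{1}{2} = \tfrac{3}{4}\).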

Derivation 1: The CDF
Firstly, we have
\[ I_x(a, b) = \frac{B_x(a,b)}{B(a,b)} = \frac{\int_0^x t^{a-1} \times (1-t)^{b-1} \,\mathrm{d}t}{\tfrac{\Gamma(a) \times \Gamma(b)}{\Gamma(a+b)}} = \frac{\Gamma(a+b) \times \int_0^x t^{a-1} \times (1-t)^{b-1} \,\mathrm{d}t}{\Gamma(a) \times \Gamma(b)} \]
where \(\Gamma\) is the gamma function, and so for positive integer \(k\)
\[ \begin{align*} 1-F_{n,p}(k) &= \frac{n!}{k! \times (n-k-1)!} \times \int_{t=0}^p t^k \times (1-t)^{n-k-1} \,\mathrm{d}t\\ 1-F_{n,p}(k-1) &= \frac{n!}{(k-1)! \times (n-k)!} \times \int_{t=0}^p t^{k-1} \times (1-t)^{n-k} \,\mathrm{d}t \end{align*} \]
Next, we have
\[ F_{n,p}(k) - F_{n,p}(k-1) = \frac{n!}{k! \times (n-k)!} \times \int_0^p f(t) \, \mathrm{d}t \]
where
\[ f(t) = k \times t^{k-1} \times (1-t)^{n-k} - (n-k) \times t^k \times (1-t)^{n-k-1} \]
By the chain and product rules
\[ \frac{\mathrm{d}}{\mathrm{d}p} (1-p)^{n-k} \times p^k = -(n-k) \times (1-p)^{n-k-1} \times p^k + (1-p)^{n-k} \times k \times p^{k-1} = f(p) \]
and so
\[ F_{n,p}(k) - F_{n,p}(k-1) = \frac{n!}{k! \times (n-k)!} \times (1-p)^{n-k} \times p^k = f_{n,p}\,(k) \]
Finally
\[ \begin{align*} F_{n,p}(0) &= 1 - I_p\left(1, n\right) = 1 - \frac{\Gamma(n+1)}{\Gamma(1) \times \Gamma(n)} \times \int_{t=0}^p (1-t)^{n-1} \,\mathrm{d}t\\ &= 1 - \frac{\Gamma(n+1)}{\Gamma(n)} \times \left[-\frac{(1-t)^n}{n}\right]_0^p = 1 - (1 - (1-p)^n) = (1-p)^n = f_{n,p}(0) \end{align*} \]

The ak Binomial Distribution Functions

To calculate the PMF, the pmf helper function used in listing 1 first calculates its logarithm with
\[ \begin{align*} \ln f_{n,p}(k) &= (n-k) \times \ln (1-p) +k \times \ln p + \ln n! - \ln k! - \ln (n-k)!\\ &= (n-k) \times \ln (1-p) +k \times \ln p + \ln \Gamma(n+1) - \ln \Gamma(k+1) - \ln \Gamma(n-k+1) \end{align*} \]
and then exponentiates it. Note that the PMF is only non-zero for integer \(k\) in the range zero to \(n\).

Listing 1: ak.binomialPMF
function pmf(n, p, lnn, lnp, lnq, k) {
 if(k>=0 && k<=n && k===ak.floor(k)) {
  return Math.exp(k*lnp + (n-k)*lnq + lnn - ak.logGamma(k+1) - ak.logGamma(n-k+1));
 }
 return isNaN(k) ? ak.NaN : 0;
}

ak.binomialPMF = function() {
 var state = {n: 1, p: 0.5};
 var arg0  = arguments[0];
 var lnn, lnp, lnq, f;

 constructors[ak.nativeType(arg0)](state, arg0, arguments);

 lnn = ak.logGamma(state.n+1);
 lnp = Math.log(state.p);
 lnq = Math.log(1-state.p);

 f = function(k){return pmf(state.n, state.p, lnn, lnp, lnq, k);};
 f.n = function(){return state.n;};
 f.p = function(){return state.p;};
 return Object.freeze(f);
};

var constructors = {};

This is then used by ak.binomialPMF, which pre-calculates those terms that don't depend on \(k\). As usual, we're initialising its state by dispatching to the methods of a constructors object given in listing 2.

Listing 2: The Constructors
constructors[ak.UNDEFINED_T] = function() {
};

constructors[ak.NUMBER_T] = function(state, n, args) {
 var arg1 = args[1];
 constructors[ak.NUMBER_T][ak.nativeType(arg1)](state, arg1, args);

 state.n = Number(n);
 if(state.n<0 || state.n!==ak.floor(state.n) || !isFinite(state.n)) {
  throw new Error('invalid n in ak.binomial distribution');
 }
};

constructors[ak.NUMBER_T][ak.UNDEFINED_T] = function(state) {
};

constructors[ak.NUMBER_T][ak.NUMBER_T] = function(state, p, args) {
 var arg2 = args[2];
 constructors[ak.NUMBER_T][ak.NUMBER_T][ak.nativeType(arg2)](state, arg2, args);

 state.p = Number(p);
 if(!(state.p>0 && state.p<1)) {
  throw new Error('invalid p in ak.binomial distribution');
 }
};

constructors[ak.NUMBER_T][ak.NUMBER_T][ak.UNDEFINED_T] = function(state) {
};

Here we allow a non-negative integer number of trials \(n\) and a success probability \(p\) that is greater than zero and less than one.
It's demonstrated by program 1 for a fixed number of trials and a range of success probabilities.

Program 1: The Binomial PMF
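If you'd rather poke at it from a console than through the interactive program, a minimal sketch of how it might be exercised, assuming that the ak library has been loaded, is

var pmf0 = ak.binomialPMF();         // defaults to n=1, p=0.5
var pmf1 = ak.binomialPMF(10);       // ten trials, default p=0.5
var pmf2 = ak.binomialPMF(10, 0.25); // ten trials, p=0.25

console.log(pmf2(3));                // probability of exactly three successes
console.log(pmf2.n(), pmf2.p());     // the distribution's parameters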

The ak.binomialCDF implementation of the CDF given in listing 3 uses our ak.betaQ function to calculate the complement of the incomplete beta function

Listing 3: ak.binomialCDF
function cdf(n, p, k) {
 if(k<0)  return 0;
 if(k>=n) return 1;
 return ak.betaQ(ak.floor(k)+1, n-ak.floor(k), p);
}

ak.binomialCDF = function() {
 var state = {n: 1, p: 0.5};
 var arg0  = arguments[0];
 var f;

 constructors[ak.nativeType(arg0)](state, arg0, arguments);

 f = function(k){return cdf(state.n, state.p, k);};
 f.n = function(){return state.n;};
 f.p = function(){return state.p;};
 return Object.freeze(f);
};

and is demonstrated in program 2 by comparing the differences between its values at \(k\) and \(k-1\) to the PMF at \(k\).

Program 2: The PMF Versus The CDF
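A minimal console version of that check, again assuming that the ak library has been loaded, might look something like

var n = 10, p = 0.25;
var pmf = ak.binomialPMF(n, p);
var cdf = ak.binomialCDF(n, p);
var k;

// the difference between consecutive CDF values should equal the PMF
for(k=0;k<=n;++k) console.log(k, cdf(k)-cdf(k-1), pmf(k));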

The ak.binomialCF function implemented in listing 4 exploits the identity
\[ e^{i\theta} = \cos \theta + i \times \sin \theta \]
to directly initialise an ak.complex object with
\[ p \times e^{it} + (1-p) = (1-p) + p\times\cos t + i \times p\times\sin t \]
before raising it to the power of \(n\).

Listing 4: ak.binomialCF
function cf(n, p, t) {
 var z = ak.complex(1-p + p*Math.cos(t), p*Math.sin(t));
 return ak.pow(z, n);
}

ak.binomialCF = function() {
 var state = {n: 1, p: 0.5};
 var arg0  = arguments[0];
 var f;

 constructors[ak.nativeType(arg0)](state, arg0, arguments);

 f = function(t){return cf(state.n, state.p, t);};
 f.n = function(){return state.n;};
 f.p = function(){return state.p;};
 return Object.freeze(f);
};

Program 3 plots its results with their real parts in black and their imaginary parts in red.

Program 3: The Binomial CF
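If you'd like to convince yourself of the equivalence of the closed form and the defining sum without relying upon the ak library at all, a self-contained sketch in plain JavaScript that evaluates both sides for small \(n\), using an explicit factorial rather than ak.logGamma, is

function factorial(n) {
 var f = 1, i;
 for(i=2;i<=n;++i) f *= i;
 return f;
}

// the CF as the sum of f(k) * exp(itk), returned as [real, imaginary]
function cfSum(n, p, t) {
 var re = 0, im = 0, k, f;
 for(k=0;k<=n;++k) {
  f = factorial(n)/(factorial(k)*factorial(n-k)) * Math.pow(1-p, n-k) * Math.pow(p, k);
  re += f*Math.cos(k*t);
  im += f*Math.sin(k*t);
 }
 return [re, im];
}

// the closed form (1-p + p*exp(it))^n by repeated complex multiplication
function cfClosed(n, p, t) {
 var zr = 1-p + p*Math.cos(t), zi = p*Math.sin(t);
 var re = 1, im = 0, i, r;
 for(i=0;i<n;++i) {
  r  = re*zr - im*zi;
  im = re*zi + im*zr;
  re = r;
 }
 return [re, im];
}

console.log(cfSum(8, 0.3, 1.5));
console.log(cfClosed(8, 0.3, 1.5));

which should print the same pair of numbers, to within rounding error, on both lines.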

The binomial CDF doesn't have a closed-form expression for its inverse and so the ak.binomialInvCDF function given in listing 5 uses a simple bisection search.

Listing 5: ak.binomialInvCDF
function invCDF(n, p, km, cm, c) {
 var k0, k1, c0, c1;

 if(isNaN(c)) return ak.NaN;
 if(c>=1)     return n;
 if(c<=0)     return 0;

 if(c<cm) {k0=0;  k1=km;}
 else     {k0=km; k1=n;}
 km = ak.floor(k0/2 + k1/2);

 while(km!==k0 && km!==k1) {
  if(ak.betaQ(km+1, n-km, p)<c) k0 = km;
  else                          k1 = km;
  km = ak.floor(k0/2 + k1/2);
 }
 return ak.betaQ(k0+1, n-k0, p)<c ? k1 : k0;
}

ak.binomialInvCDF = function() {
 var state = {n: 1, p: 0.5};
 var arg0  = arguments[0];
 var km, cm, f;

 constructors[ak.nativeType(arg0)](state, arg0, arguments);

 km = ak.floor(state.n*state.p);
 cm = ak.betaQ(km+1, state.n-km, state.p);

 f = function(c){return invCDF(state.n, state.p, km, cm, c);};
 f.n = function(){return state.n;};
 f.p = function(){return state.p;};
 return Object.freeze(f);
};

Note that since the distribution is bounded above by \(n\) we can use the mean, trivially given by \(n \times p\), to make a first guess at a bracket for the inverse; either from zero to the floor of the mean or from the latter to \(n\).
Program 4 demonstrates its correctness by calculating the inverse of the value of the CDF for \(k\) from zero to \(n\).

Program 4: The Binomial Inverse CDF
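A console equivalent of that round trip, assuming once more that the ak library has been loaded, could be as simple as

var n = 20, p = 0.3;
var cdf = ak.binomialCDF(n, p);
var inv = ak.binomialInvCDF(n, p);
var k;

// applying the inverse CDF to the CDF should recover each k
for(k=0;k<=n;++k) console.log(k, inv(cdf(k)));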

The simplest way to make observations of a binomial random variable is to generate \(n\) standard uniform random numbers and count how many of them are less than \(p\), which is reasonable if \(n\) is small. Only slightly more complicated is applying the inverse CDF to a standard uniform observation, which is also reasonable if \(n\) isn't too large. Finally, we can use rejection sampling, which we have previously used for the gamma and Poisson distributions, drawing an observation \(K\) from a different distribution and keeping it with probability
\[ \frac{f_{n,p}(K)}{c \times f(K)} \]
where \(f\) is the PMF of the distribution that \(K\) was drawn from and \(c\) is the maximum value of the ratio of \(f_{n,p}(k)\) and \(f(k)\) for \(k\) from zero to \(n\).
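Putting the choice of that distribution aside for a moment, the scheme itself boils down to something like the following sketch, in which the target and proposal PMFs, the proposal's generator and the bound \(c\) are hypothetical arguments supplied by the caller and rnd is a standard uniform generator

function rejectionSample(targetPMF, proposalPMF, proposalRnd, c, rnd) {
 var k;
 // keep a proposed k with probability targetPMF(k)/(c*proposalPMF(k))
 do {k = proposalRnd();}
 while(c*rnd()*proposalPMF(k) >= targetPMF(k));
 return k;
}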
Given the relationship between the binomial and Poisson distributions it's reasonable to choose the latter for the rejection. With a rate of \(n \times p\) it has the same mean as the binomial and the ratio of PMFs is given by
\[ \frac{\dfrac{n!}{k! \times (n-k)!} \times (1-p)^{n-k} \times p^k}{\dfrac{(n \times p)^k \times e^{- n \times p}}{k!}} = \frac{n! \times (1-p)^{n-k}}{(n-k)! \times n^k \times e^{- n \times p}} = \frac{n! \times (1-p)^n \times e^{n \times p}}{(1-p)^k \times (n-k)! \times n^k} \]
Since the numerator doesn't involve \(k\) this will have a single local maximum if the denominator has a single local minimum, which derivation 2 proves to be true if \(p\) is no greater than one half.

Derivation 2: A Unique Minimum
Representing the denominator with \(a_k\)
\[ \begin{align*} a_k &= (1-p)^k \times (n-k)! \times n^k = (n-k) \times (1-p)^k \times (n-k-1)! \times n^k\\ a_{k+1} &= (1-p)^{k+1} \times (n-k-1)! \times n^{k+1} = (1-p) \times n \times (1-p)^k \times (n-k-1)! \times n^k\\ \end{align*} \]
and the difference between \(a_{k}\) and \(a_{k+1}\) with \(b_k\)
\[ \begin{align*} b_k &= a_k - a_{k+1} = \left(n-k - (1-p) \times n\right) \times (1-p)^k \times (n-k-1)! \times n^k\\ &= \left(p \times n - k\right) \times (1-p)^k \times (n-k-1)! \times n^k\\ &= \left(p \times n - k\right) \times (n-k-1) \times (1-p)^k \times (n-k-2)! \times n^k\\ b_{k+1} &= a_{k+1} - a_{k+2} = \left(p \times n - k-1\right) \times (1-p)^{k+1} \times (n-k-2)! \times n^{k+1}\\ &= \left(p \times n - k-1\right) \times (1-p) \times n \times (1-p)^k \times (n-k-2)! \times n^k \end{align*} \]
If \(b_k\) is non-increasing for \(k\) from zero to \(n-2\), which is where \(b_{k+1}\) is defined, the denominator will have a single local minimum, albeit possibly occupied by multiple values of \(k\) if it ever equals zero. Noting that the common factor is positive and defining
\[ \begin{align*} b^\prime_k &= \left(p \times n - k\right) \times (n-k-1)\\ b^\prime_{k+1} &= \left(p \times n - k-1\right) \times (1-p) \times n = \left(p \times n - k\right) \times (1-p) \times n - (1-p) \times n \end{align*} \]
we therefore require the difference between \(b^\prime_k\) and \(b^\prime_{k+1}\) to be non-negative. Given
\[ \begin{align*} b^\prime_k - b^\prime_{k+1} &= \left(p \times n - k\right) \times (n-k-1) - \left(p \times n - k\right) \times (1-p) \times n + (1-p) \times n\\ &= \left(p \times n - k\right) \times \left(n-k-1 - (1-p) \times n\right) + (1-p) \times n\\ &= \left(p \times n - k\right) \times \left(p \times n-k-1\right) + (1-p) \times n\\ &= \left(p \times n - k\right) \times \left(p \times n-k\right) - (p \times n - k)+ (1-p) \times n\\ &= \left(p \times n - k\right) \times \left(p \times n-k\right) + k + (1 - 2 \times p) \times n \end{align*} \]
the result holds if \(p\) is less than or equal to one half.

Finally, if \(a_k\) equals \(a_{k+1}\) we have
\[ \begin{align*} (1-p)^k \times (n-k)! \times n^k &= (1-p)^{k+1} \times (n-k-1)! \times n^{k+1}\\ (n-k) &= (1-p) \times n\\ k &= p \times n \end{align*} \]
which must be the minimum values since the difference must be strictly decreasing for all other \(k\).

A reasonable first guess for the maximum of the ratio is the neighbourhood of the common mean of the PMFs. Starting from its floor and the next integer above, we can perform a simple search by moving the pair in the direction of whichever has the larger ratio until its value is no longer greater than the other's, at which point the larger of the two must be the maximum.

The directRnd and rejectRnd helper functions defined in listing 6 implement the first and last of these schemes.

Listing 6: The Random Variable Helper Functions
function directRnd(n, p, rnd) {
 var k = 0;
 var i;
 for(i=0;i<n;++i) if(rnd()<p) ++k;
 return k;
}

function rejectRatio(n, p) {
 var logn = Math.log(n);
 var logq = Math.log(1-p);
 var lognum = ak.logGamma(n+1) + n*logq + n*p;

 return function(k) {
  var logden = k*logq + ak.logGamma(n-k+1) + k*logn;
  return Math.exp(lognum-logden);
 };
}

function rejectRnd(n, p, rnd) {
 var lambda = n*p;
 var prnd = ak.poissonRnd(lambda, rnd);
 var ratio = rejectRatio(n, p);
 var k0 = ak.floor(lambda);
 var k1 = k0+1;
 var r0 = ratio(k0);
 var r1 = ratio(k1);
 var c;
 if(r0>r1) {while(k0>0 && r0>r1) {k1=k0;r1=r0;r0=ratio(--k0);}}
 else      {while(k1<n && r0<r1) {k0=k1;r0=r1;r1=ratio(++k1);}}
 c = Math.max(r0, r1);

 return function() {
  var k;
  do {k=prnd();}
  while(k>n || c*rnd()>=ratio(k));
  return k;
 };
}

Note that the rejectRatio helper function calculates the ratio of the PMFs by exponentiating the difference between the logarithms of its numerator and denominator.
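Specifically, taking logarithms of the last form of the ratio given above, the term that it pre-computes and the term that depends upon \(k\) are
\[ \begin{align*} \ln\left(n! \times (1-p)^n \times e^{n \times p}\right) &= \ln \Gamma(n+1) + n \times \ln(1-p) + n \times p\\ \ln\left((1-p)^k \times (n-k)! \times n^k\right) &= k \times \ln(1-p) + \ln \Gamma(n-k+1) + k \times \ln n \end{align*} \]
so that the ratio is recovered by exponentiating their difference.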

The ak.binomialRnd function defined in listing 7 switches between these schemes, using the direct method for small \(n\), the inverse CDF for moderately sized \(n\) and otherwise the rejection method, applied to successes if \(p\) is less than or equal to one half and to failures otherwise.

Listing 7: ak.binomialRnd
ak.binomialRnd = function() {
 var state = {n: 1, p: 0.5, rnd: Math.random};
 var arg0  = arguments[0];
 var km, cm, f, g;

 constructors[ak.nativeType(arg0)](state, arg0, arguments);

 if(state.n<=8) {
  f = function(){return directRnd(state.n, state.p, state.rnd);};
 }
 else if(state.n<=64) {
  km = ak.floor(state.n*state.p);
  cm = ak.betaQ(km+1, state.n-km, state.p);
  f = function() {return invCDF(state.n, state.p, km, cm, state.rnd());};
 }
 else if(state.p<=0.5) {
  f = rejectRnd(state.n, state.p, state.rnd);
 }
 else {
  g = rejectRnd(state.n, 1-state.p, state.rnd);
  f = function() {return state.n-g();};
 }

 f.n = function(){return state.n;};
 f.p = function(){return state.p;};
 f.rnd = function(){return state.rnd;};
 return Object.freeze(f);
};
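Note that generating failures and subtracting them from \(n\) is justified by the symmetry of the binomial PMF
\[ f_{n,1-p}(n-k) = \frac{n!}{(n-k)! \times k!} \times \left(1-(1-p)\right)^{n-(n-k)} \times (1-p)^{n-k} = \frac{n!}{k! \times (n-k)!} \times (1-p)^{n-k} \times p^k = f_{n,p}(k) \]
so that if \(K\) counts the successes of a binomial distribution with parameters \(n\) and \(1-p\) then \(n-K\) is governed by one with parameters \(n\) and \(p\).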

Listing 8 gives the additional constructors used by ak.binomialRnd to allow the specification of alternative standard uniform random number generators.

Listing 8: The Additional Constructors
constructors[ak.FUNCTION_T] = function(state, rnd) {
 state.rnd = rnd;
};

constructors[ak.NUMBER_T][ak.FUNCTION_T] = function(state, rnd) {
 state.rnd = rnd;
};

constructors[ak.NUMBER_T][ak.NUMBER_T][ak.FUNCTION_T] = function(state, rnd) {
 state.rnd = rnd;
};

Finally, program 5 illustrates it by comparing a histogram of its results to an appropriately scaled graph of its PMF.

Program 5: ak.binomialRnd
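A rough and ready console version of that comparison, assuming yet again that the ak library has been loaded, might count the observations at each \(k\) and set them against the expected counts with

var n = 16, p = 0.35, samples = 100000;
var rnd = ak.binomialRnd(n, p);
var pmf = ak.binomialPMF(n, p);
var counts = new Array(n+1);
var i, k;

for(k=0;k<=n;++k) counts[k] = 0;
for(i=0;i<samples;++i) ++counts[rnd()];

// observed versus expected counts at each k
for(k=0;k<=n;++k) console.log(k, counts[k], samples*pmf(k));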

With that, and with the implementation of these functions in BinomialDistribution.js, our exploration of Bernoulli processes is complete.

References

[1] One Thing Or Another, www.thusspakeak.com, 2020.

[2] If At First You Don't Succeed, www.thusspakeak.com, 2020.

[3] Bad Luck Comes In Ks, www.thusspakeak.com, 2020.

[4] How Long Must I Wait?, www.thusspakeak.com, 2015.

[5] Kth Time's The Charm, www.thusspakeak.com, 2015.

[6] ...And Then Three Came Along At Once!, www.thusspakeak.com, 2015.

[7] Beta Animals, www.thusspakeak.com, 2020.
