As programmers, we typically use random numbers to add novelty to games, such as shuffling a deck of cards in a game of solitaire. Most languages provide a pseudorandom number generator for this purpose; JavaScript has
Math.random, which returns a value greater than or equal to zero and strictly less than one. Such generators create a sequence of values according to some formula that, whilst not actually random, muddles them up sufficiently to be good enough for most purposes.
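For instance, shuffling a deck of cards can be done with a Fisher-Yates shuffle driven by Math.random; the shuffle function here is a minimal illustrative sketch rather than code from any particular library.

```javascript
// A minimal Fisher-Yates shuffle: working backwards through the
// deck, swap each element with one chosen uniformly at random from
// those at or before it.
function shuffle(deck) {
 var i, j, t;
 for(i=deck.length-1; i>0; --i) {
  j = Math.floor(Math.random()*(i+1));
  t = deck[i];
  deck[i] = deck[j];
  deck[j] = t;
 }
 return deck;
}

// Build and shuffle a 52 card deck, represented as the numbers 0 to 51.
var deck = [];
for(var c=0; c<52; ++c) deck.push(c);
shuffle(deck);
```

Every arrangement of the deck is equally likely, provided the underlying numbers are uniformly distributed, which is the property we shall now make precise.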
The Uniform Distribution
Mathematically speaking, such generators approximate random variables that have a uniform distribution. What this means is that observed values are spread out uniformly over the variable's range in the sense that the probability of observing a value within some subset of the range depends only upon the size of the subset and not its particular members. For example, if the variable has a range of \([0, 1]\) then the probability that an observation will fall within \(\left[\tfrac{1}{4}, \tfrac{3}{4}\right]\) is one half, the same probability that it will fall within \(\left[0, \tfrac{1}{2}\right]\).

Program 1 illustrates this by plotting a bar chart of the number of values generated by Math.random that fall within equal divisions of its range.
Program 1: A Histogram of Math.random



As the number of observations grows we see that the regions become more and more evenly populated. If we divide the number of observations in each region by the total number of observations we get the probability that one of those observations, picked at random, will fall in that region. As the number of observations increases these ratios become better and better approximations of the probability that new observations will fall in each region, a fact known as Borel's law of large numbers. For a uniform random variable these probabilities are all equal, hence the behaviour of the bar chart.
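The counting behind such a bar chart can be sketched as follows; the histogram function and the choice of divisions are illustrative rather than program 1's actual code.

```javascript
// Count how many of n observations of Math.random fall into each of
// a number of equal divisions of its range [0,1).
function histogram(n, divisions) {
 var counts = [];
 var i;
 for(i=0; i<divisions; ++i) counts.push(0);
 for(i=0; i<n; ++i) counts[Math.floor(Math.random()*divisions)] += 1;
 return counts;
}
```

Dividing each count by n approximates the probability of falling within its division which, for ten divisions, should settle close to one tenth.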
Now, since the regions don't overlap an observation must fall into one and only one of them. This means that we can calculate the probability that an observation will fall into one of a number of them by adding the probabilities that it will fall into each of them. The more regions we have, the more finely we can calculate the probabilities of observations falling into subsets of the variable's range. If we take this to the limit of an infinite number of infinitesimally small regions and replace the sum with an integral, we can calculate such probabilities for any interval.
We can represent this limit with a function, known as the probability density function, or PDF for short. For example, a uniform random variable has a PDF that is equal to one over the size of its range for every value in its range and zero everywhere else. Denoting a uniform distribution with range \([a, b]\) as \(U(a, b)\) and its PDF as \(u_{a,b}\) we have
\[
\begin{align*}
x &\sim U(a, b)\\
u_{a,b}(x) &=
\begin{cases}
0\quad & x < a\\
1/(b-a)\quad & a \leqslant x \leqslant b\\
0\quad & x > b
\end{cases}
\end{align*}
\]
Here the symbol \(\sim\) is read "drawn from" and means that the value to its left is an observation of a random variable with the distribution to its right.

If we integrate this function over the interval \([x_0, x_1]\), assuming it lies within \([a, b]\), we recover the probability that an observed value lies within it
\[
\begin{align*}
\int_{x_0}^{x_1} u_{a,b}(x) \mathrm{d}x &= \int_{x_0}^{x_1} \frac{1}{b-a} \mathrm{d}x\\
&= \left[\frac{x}{b-a}\right]_{x_0}^{x_1}\\
&= \frac{x_1}{b-a} - \frac{x_0}{b-a}\\
&= \frac{x_1-x_0}{b-a}
\end{align*}
\]
If we set \(x_0\) to \(a\) and \(x_1\) to \(b\) we trivially get the result that the probability of observing a value in the entire range of the distribution is one.

Rather more confusingly, if we set \(x_0\) to \(x_1\) we find that the probability of observing any particular value is zero, even though we will most assuredly observe some of them. This apparent contradiction can be resolved with the formal mathematics of probability theory, but unfortunately that's beyond the scope of this blog. My best advice is to not think too much about it.
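We can check the integral's result empirically; for Math.random, which is approximately distributed as \(U(0, 1)\), the fraction of a large sample falling within \([x_0, x_1]\) should settle close to \(x_1-x_0\). The intervalFraction helper here is purely illustrative.

```javascript
// Estimate the probability of an observation falling in [x0, x1] as
// the fraction of n samples of Math.random that land there.
function intervalFraction(x0, x1, n) {
 var hits = 0;
 for(var i=0; i<n; ++i) {
  var x = Math.random();
  if(x>=x0 && x<=x1) hits += 1;
 }
 return hits/n;
}
```

For \([\tfrac{1}{4}, \tfrac{3}{4}]\) this should hover around one half, in agreement with the integral.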
The integral of the PDF from \(-\infty\) to \(x\) yields the probability that an observation will be less than or equal to \(x\) and is known as the cumulative distribution function, or CDF. For the uniform distribution we'll call it \(U_{a,b}\) and it takes the values
\[
U_{a,b}(x) =
\begin{cases}
0\quad & x < a\\
(x-a)/(b-a)\quad & a \leqslant x \leqslant b\\
1\quad & x > b
\end{cases}
\]
The probability that an observation falls within our interval \([x_0, x_1]\) is then trivially given by \(U_{a,b}(x_1) - U_{a,b}(x_0)\).
Managing Expectations
Another use of the PDF is the calculation of average, or expected, values of functions of random variables. Just as we would calculate the expected payoff from a dice game by multiplying the payoff of each outcome of the dice by the probability of that outcome and adding the results, so we can calculate the expected value of a function by integrating the product of it and the PDF.

For example, the expected value of the square of a random variable with the distribution \(U(a, b)\) is given by
\[
\begin{align*}
\int_{-\infty}^{+\infty} x^2 u_{a,b}(x) \mathrm{d}x &= \int_{a}^{b} \frac{x^2}{b-a} \mathrm{d}x\\
&= \left[\frac{x^3}{3(b-a)}\right]_a^b\\
&= \frac{b^3}{3(b-a)} - \frac{a^3}{3(b-a)}\\
&= \frac{b^3 - a^3}{3(b-a)}\\
&= \tfrac{1}{3}\left(a^2 + ab + b^2\right)
\end{align*}
\]
as demonstrated by program 2 for Math.random, which is approximately distributed as \(U(0, 1)\).
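A quick standalone check of this result averages the squares of a large sample; for \(a=0\) and \(b=1\) the formula gives \(\tfrac{1}{3}\). The expectedSquare helper is again just for illustration.

```javascript
// Average the squares of n samples of Math.random; this should
// approach (a*a + a*b + b*b)/3 = 1/3 for a=0 and b=1.
function expectedSquare(n) {
 var sum = 0;
 for(var i=0; i<n; ++i) {
  var x = Math.random();
  sum += x*x;
 }
 return sum/n;
}
```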
Program 2: Uniform Expectation



ak Uniform Distribution Functions
Now it might seem tempting to implement all things uniform distribution related as methods of some uniform distribution object, but I have something of a philosophical objection to that approach. In some sense we can think of the CDF as being the distribution (more accurately it's a complete definition of it) and the PDF as just a transformation of the CDF, specifically its derivative. Treating a random variable as a property of a distribution gets the relationship the wrong way round, since the distribution is a property of the random variable.

For these reasons, I'm more comfortable with implementing the functions associated with a distribution as plain old functions. We'll start with the uniform random number generator, given in listing 1.
Listing 1: ak.uniformRnd
ak.uniformRnd = function() {
 var state = {a: 0, b: 1, rnd: Math.random};
 var arg0 = arguments[0];
 var f;

 constructors[ak.nativeType(arg0)](state, arg0, arguments);

 f = function(){return state.a + state.rnd()*(state.b-state.a);};
 f.a = function(){return state.a;};
 f.b = function(){return state.b;};
 f.rnd = function(){return state.rnd;};

 return Object.freeze(f);
};

var constructors = {};
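Stripped of the constructors dispatch and the accessor methods, the heart of the generator is just a linear transformation of the underlying source; this cut-down sketch (not the ak implementation itself) shows the idea.

```javascript
// Map a source of values in [0,1) onto [a,b) by scaling and shifting.
function simpleUniformRnd(a, b, rnd) {
 if(typeof rnd === 'undefined') rnd = Math.random;
 return function() { return a + rnd()*(b-a); };
}

// A generator of values uniformly distributed over [2,5).
var rnd25 = simpleUniformRnd(2, 5);
```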
Note that we've added methods to access the distribution parameters \(a\) and \(b\) and the underlying random number source (remember, in JavaScript functions are objects too). In code I generally prefer to use descriptive names, say lowerBound rather than a, but for distributions I shall adopt the conventions of mathematics and name the parameters with letters, and usually Greek ones at that. For one thing, that's what mathematicians would expect and, for another, it's not always obvious what they should be called instead.

We'll be using our usual constructors dispatch approach to initialise the parameters. The first pair of constructors, shown in listing 2, leave the parameters at their default values of 0 and 1, with the second enabling an alternative source for the underlying random numbers, assuming we have one of course (we shall return to this in another post).

Listing 2: ak.uniformRnd Default Parameter Constructors
constructors[ak.UNDEFINED_T] = function() {
};

constructors[ak.FUNCTION_T] = function(state, rnd) {
 state.rnd = rnd;
};
If we pass a single number to ak.uniformRnd the bounds of the random variable are set to it and zero, as illustrated in listing 3.

Listing 3: ak.uniformRnd One Parameter Constructors
constructors[ak.NUMBER_T] = function(state, x, args) {
 var arg1 = args[1];
 constructors[ak.NUMBER_T][ak.nativeType(arg1)](state, x, arg1, args);

 if(!isFinite(state.a) || !isFinite(state.b) || state.a===state.b) {
  throw new Error('invalid bounds in ak.uniform distribution');
 }
};

constructors[ak.NUMBER_T][ak.UNDEFINED_T] = function(state, x) {
 state.a = Math.min(x, 0);
 state.b = Math.max(x, 0);
};

constructors[ak.NUMBER_T][ak.FUNCTION_T] = function(state, x, rnd) {
 state.a = Math.min(x, 0);
 state.b = Math.max(x, 0);
 state.rnd = rnd;
};
A pair of numbers initialises both bounds, as shown in listing 4.
Listing 4: ak.uniformRnd Two Parameter Constructors
constructors[ak.NUMBER_T][ak.NUMBER_T] = function(state, x0, x1, args) {
 var arg2 = args[2];
 state.a = Math.min(x0, x1);
 state.b = Math.max(x0, x1);
 constructors[ak.NUMBER_T][ak.NUMBER_T][ak.nativeType(arg2)](state, arg2);
};

constructors[ak.NUMBER_T][ak.NUMBER_T][ak.UNDEFINED_T] = function() {
};

constructors[ak.NUMBER_T][ak.NUMBER_T][ak.FUNCTION_T] = function(state, rnd) {
 state.rnd = rnd;
};
Note that in all cases we treat infinite, NaN or equal bounds as errors.
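The shape of this dispatch scheme is perhaps easier to see in a stripped-down sketch that uses JavaScript's typeof in place of ak.nativeType; the ctors and bounds names here are invented for illustration.

```javascript
// Dispatch on the types of the arguments by indexing into nested
// objects of handlers, one level per argument.
var ctors = {};

ctors['undefined'] = function(state) {};

ctors['number'] = function(state, x, args) {
 ctors['number'][typeof args[1]](state, x, args[1]);
};

ctors['number']['undefined'] = function(state, x) {
 state.a = Math.min(x, 0);
 state.b = Math.max(x, 0);
};

ctors['number']['number'] = function(state, x0, x1) {
 state.a = Math.min(x0, x1);
 state.b = Math.max(x0, x1);
};

// Initialise the bounds from however many arguments were supplied.
function bounds() {
 var state = {a: 0, b: 1};
 ctors[typeof arguments[0]](state, arguments[0], arguments);
 return state;
}
```

Adding support for a new argument type is then just a matter of registering another handler, without touching the existing ones.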
The uniform CDF and PDF are given in listing 5.
Listing 5: ak.uniformCDF And ak.uniformPDF
function cdf(a, b, x) {
 var c = (x-a)/(b-a);
 if(c<0) c = 0;
 else if(c>1) c = 1;
 return c;
}

ak.uniformCDF = function() {
 var state = {a: 0, b: 1};
 var arg0 = arguments[0];
 var f;

 constructors[ak.nativeType(arg0)](state, arg0, arguments);

 f = function(x){return cdf(state.a, state.b, x);};
 f.a = function(){return state.a;};
 f.b = function(){return state.b;};

 return Object.freeze(f);
};

ak.uniformPDF = function() {
 var state = {a: 0, b: 1};
 var arg0 = arguments[0];
 var f;

 constructors[ak.nativeType(arg0)](state, arg0, arguments);

 f = function(x){return x>=state.a && x<=state.b ? 1/(state.b-state.a) : 0;};
 f.a = function(){return state.a;};
 f.b = function(){return state.b;};

 return Object.freeze(f);
};
Note that we're reusing the ak.uniformRnd constructors to initialise the parameters for both functions.

Another useful function for a distribution is the inverse of the CDF, which returns the value that we have a given probability of observing values less than or equal to. For the uniform distribution we have ak.uniformInvCDF, as shown in listing 6.

Listing 6: ak.uniformInvCDF
function invCDF(a, b, c) {
 var x = a + c*(b-a);
 if(x<a) x = a;
 else if(x>b) x = b;
 return x;
}

ak.uniformInvCDF = function() {
 var state = {a: 0, b: 1};
 var arg0 = arguments[0];
 var f;

 constructors[ak.nativeType(arg0)](state, arg0, arguments);

 f = function(c){return invCDF(state.a, state.b, c);};
 f.a = function(){return state.a;};
 f.b = function(){return state.b;};

 return Object.freeze(f);
};
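One handy property for testing is that the CDF and its inverse undo one another for probabilities between zero and one; here's a quick consistency check using standalone copies of the two formulae (renamed to avoid clashing with the listings).

```javascript
// Standalone copies of the uniform CDF and inverse CDF formulae,
// which should round-trip: cdf(a, b, invCDF(a, b, c)) recovers c.
function uniformCdf(a, b, x) {
 var c = (x-a)/(b-a);
 if(c<0) c = 0;
 else if(c>1) c = 1;
 return c;
}

function uniformInvCdf(a, b, c) {
 var x = a + c*(b-a);
 if(x<a) x = a;
 else if(x>b) x = b;
 return x;
}
```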
All of these functions are defined in UniformDistribution.js. Pay no mind to ak.uniformCF, which we shall cover in the next post.
Using The ak Uniform Distribution Functions
In program 3 we compare the number of values generated with ak.uniformRnd falling within some region with the number predicted by ak.uniformCDF, and their average value with that predicted by ak.uniformInvCDF.
Program 3: ak Uniform Distribution Functions



Note that the differences between the observed and predicted values are a consequence of using a sample of values; if you run the program a number of times you'll find that the predictions are equally likely to be slightly smaller or slightly larger than the observations.