How Long Must I Wait?


Whilst we're back on the subject of probability distributions[1][2] I should like to cover the one that first drew my interest to probability and statistics. Back in my undergraduate days I was chiefly interested in pure mathematics, in deriving mathematical truths from first principles, and my impression of statistics was, rather unfairly in retrospect, that it involved the rote learning of formulae and how to use them rather than proving why they should be used in the first place.
Then I chanced upon a description of a waiting time problem and learned the error of my ways.

The Waiting Time Problem

Imagine that we have some process in which events occur at random and which is, crucially, memoryless, meaning that the probability that an event occurs at any given time in no way depends upon how long we have been waiting for it. Like taxis. When it's raining.

Figure 1: The Exponential PDF
Under such, in my opinion not particularly restrictive, conditions, the waiting times for such events must be exponentially distributed, with the probability that an event will occur no later than time \(x\) being given by the cumulative distribution function
\[ P_\lambda(x) = \begin{cases} 1 - e^{-\lambda x} & x \geqslant 0\\ 0 & x < 0 \end{cases} \]
where \(\lambda\) is the average rate at which the events occur. We can easily recover its probability density function, shown in figure 1, by differentiating the CDF
\[ p_\lambda(x) = \begin{cases} \lambda e^{-\lambda x} & x \geqslant 0\\ 0 & x < 0 \end{cases} \]
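Before getting to the proof, a quick numerical sanity check may be reassuring. The following snippet, in plain JavaScript with function names of my own choosing rather than the ak library's, confirms that a midpoint-rule integral of the PDF agrees with the CDF.

// Plain JavaScript sanity check: the integral of the PDF from zero to x
// should agree with the CDF at x. The names here are mine, not ak's.
function expPDF(lambda, x) {return x<0 ? 0 : lambda*Math.exp(-lambda*x);}
function expCDF(lambda, x) {return x<0 ? 0 : 1-Math.exp(-lambda*x);}

// Midpoint-rule approximation of the integral of the PDF over [0, x].
function integratePDF(lambda, x, n) {
 var h = x/n, sum = 0, i;
 for(i=0;i<n;++i) sum += expPDF(lambda, (i+0.5)*h);
 return sum*h;
}

console.log(expCDF(2, 1.5));             // 0.9502...
console.log(integratePDF(2, 1.5, 1000)); // approximately the same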
Now, what struck me as interesting about this result is just how simple it is to prove. So much so, in fact, that I shan't relegate the proof to a sidebar! Well, at least not entirely...

The Inevitable Wait

The trick to solving this problem is to show that the probability that we must wait longer than \(x\) for an event to occur is given by
\[ S_\lambda(x) = 1 - P_\lambda(x) = e^{-\lambda x} \]
Because the process is memoryless, the probability that we must wait longer than \(x+c\) given that we have already waited \(c\) must be the same as the probability that we must wait longer than \(x\), which we write as
\[ \Pr(X > x+c \;|\; X > c) = \Pr(X > x) \]
where the left-hand side is a conditional probability in which the vertical bar means given that. We calculate such probabilities with
\[ \Pr(X \in A \;|\; X \in B) = \frac{\Pr(X \in A \cap B)}{\Pr(X \in B)} \]
where \(A\) and \(B\) are sets, \(\in\) means within and \(A \cap B\) is the intersection of \(A\) and \(B\).

Figure 2: The Conditional Probability

Figure 2 shows why this is so with a Venn diagram in which the set of all possible values of \(X\) is represented by the tan rectangle, \(A\) by the (mostly) green circle and \(B\) by the (mostly) red circle, and the probabilities of observing values in these sets are represented by the areas of these shapes. If we know that an observation falls within the red circle then we must rescale the diagram so that that circle has an area of one, to reflect that certainty. The probability that the observation also falls within the green circle is then represented by the rescaled area of the yellow intersection of \(A\) and \(B\), which is clearly given by the above formula.
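To make the formula concrete, here's a little Monte Carlo confirmation of it in plain JavaScript, entirely of my own devising: for a standard uniform \(X\) with \(A\) being the event \(X > 0.5\) and \(B\) the event \(X > 0.25\), the conditional probability should be \(0.5/0.75\), or two thirds.

// Monte Carlo check of Pr(A|B) = Pr(A and B)/Pr(B) for uniform X,
// with A meaning X>0.5 and B meaning X>0.25; expect roughly 2/3.
var n = 1000000, inB = 0, inBoth = 0, i, x;
for(i=0;i<n;++i) {
 x = Math.random();
 if(x>0.25) {
  ++inB;
  if(x>0.5) ++inBoth;
 }
}
console.log(inBoth/inB); // approximately 0.6667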

Now for our problem, if \(X\) is greater than \(x+c\) then it is trivially greater than \(c\) since \(x\) is non-negative, so we have
\[ \Pr(X > x+c \;\wedge\; X > c) = \Pr(X > x+c) \]
where \(\wedge\) means and, and therefore
\[ \begin{align*} \frac{\Pr(X > x+c)}{\Pr(X > c)} &= \Pr(X > x)\\ \Pr(X > x+c) &= \Pr(X > x) \times \Pr(X > c) \end{align*} \]
This is exactly the relationship that we have when multiplying exponentials and, since \(\Pr(X > x)\) must be monotonically non-increasing in \(x\), exponentials are the only functions that can satisfy it for every non-negative \(x\) and \(c\), so we can conclude that
\[ \Pr(X > x) = e^{ax} \]
for some constant \(a\). Since the probability that we must wait longer than \(x\) must be less than or equal to one, \(a\) cannot be positive and hence
\[ \Pr(X > x) = e^{-\lambda x} \]
for some non-negative \(\lambda\). Finally, the mean waiting time is \(1/\lambda\), as proven in derivation 1, so \(\lambda\) is the average rate at which the events occur and should therefore be both strictly positive and finite.

Derivation 1: The Mean Waiting Time
By definition, the mean is the expected value
\[ \begin{align*} \mathrm{E}[X] &= \int_0^\infty x \times p_\lambda(x) \mathrm{d}x\\ &= \int_0^\infty \lambda x e^{-\lambda x} \mathrm{d}x \end{align*} \]
Substituting \(y = \lambda x\) yields, for positive \(\lambda\)
\[ \mathrm{E}[X] = \int_0^\infty ye^{-y} \frac{\mathrm{d}y}{\lambda} = \frac{1}{\lambda} \int_0^\infty ye^{-y} \mathrm{d}y \]
Finally, integration by parts yields
\[ \begin{align*} \int_0^\infty ye^{-y} \mathrm{d}y &= \left[-ye^{-y}\right]_0^\infty - \int_0^\infty -e^{-y} \mathrm{d}y\\ &= 0 + \int_0^\infty e^{-y} \mathrm{d}y\\ &= \left[-e^{-y}\right]_0^\infty\\ &=1 \end{align*} \]
and so \(\mathrm{E}[X] = 1/\lambda\), the mean waiting time.
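If you'd like to see the result emerge for yourself, a rough simulation in plain JavaScript, again entirely of my own construction, makes the point: in each tiny time step an event occurs with probability \(\lambda \mathrm{d}t\), no matter how long we've waited, and the average wait comes out at roughly \(1/\lambda\).

// A memoryless process simulated directly: at each step of length dt an
// event occurs with probability lambda*dt regardless of the time already
// spent waiting. The sample mean should come out close to 1/lambda.
function waitForEvent(lambda, dt) {
 var t = 0;
 while(Math.random() >= lambda*dt) t += dt;
 return t;
}

var lambda = 2, dt = 0.001, n = 10000, sum = 0, i;
for(i=0;i<n;++i) sum += waitForEvent(lambda, dt);
console.log(sum/n); // roughly 0.5, being 1/lambda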

Quod erat flippin' demonstrandum baby!

Now you may not be as impressed by this as I was, and that's fine, but I should point out that this distribution is as fundamental to the study of queues and traffic flow as the normal distribution is to population statistics. Not a bad result for a little basic algebra!

ak Exponential Distribution Functions

The implementations of the PDF and CDF are given in listing 1.

Listing 1: ak.exponentialPDF And ak.exponentialCDF
ak.exponentialPDF = function() {
 var state = {lambda: 1};
 var arg0  = arguments[0];
 var f;

 constructors[ak.nativeType(arg0)](state, arg0, arguments);

 f = function(x){return x<0 ? 0 : state.lambda*Math.exp(-state.lambda*x);};
 f.lambda = function(){return state.lambda;};
 return Object.freeze(f);
};

ak.exponentialCDF = function() {
 var state = {lambda: 1};
 var arg0  = arguments[0];
 var f;

 constructors[ak.nativeType(arg0)](state, arg0, arguments);

 f = function(x){return x<=0 ? 0 : 1 - Math.exp(-state.lambda*x);};
 f.lambda = function(){return state.lambda;};
 return Object.freeze(f);
};
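Assuming that ak.js and ExponentialDistribution.js have been loaded, using these is straightforward; a brief sketch:

// Assuming ak.js and ExponentialDistribution.js have been loaded.
var pdf = ak.exponentialPDF();  // default rate of one
var cdf = ak.exponentialCDF(2); // rate of two

console.log(pdf(0));       // 1, the density at zero is non-zero
console.log(pdf(-1));      // 0, waiting times cannot be negative
console.log(cdf(0));       // 0, a zero wait has probability zero
console.log(cdf.lambda()); // 2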

Note that since waiting times cannot be negative we return zero densities and probabilities for negative arguments. The density at zero is non-zero, however, unlike the probability of observing a waiting time of exactly zero, which, being an integral over an empty interval, must equal zero.
Furthermore, we are using our usual technique of dispatching to a constructors object, given in listing 2, to initialise the state of the distribution functions, to which we add property access methods.

Listing 2: The constructors Object
var constructors = {};

constructors[ak.UNDEFINED_T] = function() {
};

constructors[ak.FUNCTION_T] = function(state, rnd) {
 state.rnd = rnd;
};

constructors[ak.NUMBER_T] = function(state, lambda, args) {
 var arg1 = args[1];
 constructors[ak.NUMBER_T][ak.nativeType(arg1)](state, arg1);

 state.lambda = Number(lambda);
 if(state.lambda<=0 || !isFinite(state.lambda)) {
  throw new Error('invalid lambda in ak.exponential distribution');
 }
};

constructors[ak.NUMBER_T][ak.UNDEFINED_T] = function(state) {
};

constructors[ak.NUMBER_T][ak.FUNCTION_T] = function(state, rnd) {
 state.rnd = rnd;
};

Here we enforce that state.lambda is strictly positive and finite, thus representing a valid rate, and provide a facility for passing a custom random number generator to the implementation of the exponential random variable.
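For example, under the assumption that the listings above have been loaded, an invalid rate is rejected at construction:

// Rates that aren't strictly positive finite numbers are rejected.
try {
 ak.exponentialPDF(-1);
}
catch(e) {
 console.log(e.message); // invalid lambda in ak.exponential distribution
}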

We shall implement it by applying the inverse CDF, which we can derive with a little more basic algebra, to the standard uniform random numbers returned by that generator.
\[ \begin{align*} P_\lambda(x) &= 1 - e^{-\lambda x} = c\\ e^{-\lambda x} &= 1-c\\ -\lambda x &= \ln(1-c)\\ x &= -\frac{\ln(1-c)}{\lambda} \end{align*} \]
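A quick round trip in plain JavaScript, with names of my own choosing, confirms the algebra:

// Applying the CDF to the inverse CDF's result should recover c exactly,
// up to floating point rounding.
function cdf(lambda, x)    {return 1-Math.exp(-lambda*x);}
function invCDF(lambda, c) {return -Math.log(1-c)/lambda;}

console.log(cdf(2, invCDF(2, 0.75))); // 0.75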
The implementations of the exponential inverse CDF and random variable are given in listing 3.

Listing 3: ak.exponentialInvCDF And ak.exponentialRnd
function invCDF(l, c) {
 if(c<=0) return 0;
 if(c>=1) return ak.INFINITY;
 return -Math.log(1-c)/l;
}

ak.exponentialInvCDF = function() {
 var state = {lambda: 1};
 var arg0  = arguments[0];
 var f;

 constructors[ak.nativeType(arg0)](state, arg0, arguments);

 f = function(c){return invCDF(state.lambda, c);};
 f.lambda = function(){return state.lambda;};
 return Object.freeze(f);
};

ak.exponentialRnd = function() {
 var state = {lambda: 1, rnd: Math.random};
 var arg0  = arguments[0];
 var f;

 constructors[ak.nativeType(arg0)](state, arg0, arguments);

 f = function(){return invCDF(state.lambda, state.rnd());};
 f.lambda = function(){return state.lambda;};
 f.rnd = function(){return state.rnd;};
 return Object.freeze(f);
};
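By way of illustration, and again assuming that the above listings have been loaded, the median waiting time falls out of the inverse CDF as \(\ln 2 / \lambda\), and a custom standard uniform generator may be passed as a second argument:

// The median waiting time is invCDF(0.5), which equals ln(2)/lambda.
var inv = ak.exponentialInvCDF(2);
console.log(inv(0.5));   // 0.3465...
console.log(Math.LN2/2); // the same

// An explicitly supplied standard uniform generator.
var rnd = ak.exponentialRnd(2, Math.random);
console.log(rnd()); // a non-negative observation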

A bar chart of observations of ak.exponentialRnd is contrasted with a line graph of the expected observations, calculated using ak.exponentialCDF, in program 1.

Program 1: ak.exponentialRnd

A pretty good match, I'm sure you'll agree!
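For those reading without the interactive program to hand, a rough text-mode version of the same comparison, assuming the listings above have been loaded, is easily thrown together:

// Bin observations of ak.exponentialRnd and set the counts against those
// expected from differences of ak.exponentialCDF.
var lambda = 1, n = 100000, buckets = 10, width = 0.5;
var rnd = ak.exponentialRnd(lambda);
var cdf = ak.exponentialCDF(lambda);
var counts = [], expected, i, b;

for(b=0;b<buckets;++b) counts[b] = 0;
for(i=0;i<n;++i) {
 b = Math.floor(rnd()/width);
 if(b<buckets) ++counts[b];
}

for(b=0;b<buckets;++b) {
 expected = n*(cdf((b+1)*width)-cdf(b*width));
 console.log(counts[b] + ' vs ' + Math.round(expected));
}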

All that's left is the characteristic function, which is given by the expectation
\[ \mathrm{E}\left[e^{itX}\right] = \int_0^\infty e^{itx} \lambda e^{-\lambda x} \mathrm{d}x = \lambda \int_0^\infty e^{(it-\lambda)x} \mathrm{d}x = \lambda \left[\frac{e^{(it-\lambda)x}}{it-\lambda}\right]_0^\infty =-\lambda\frac{1}{it-\lambda} =\left(1-\frac{it}{\lambda}\right)^{-1} \]
and is implemented in listing 4 using the identity
\[ \frac{1}{x + iy} = \frac{x - iy}{x^2 + y^2} \]
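Specifically, setting \(x = 1\) and \(y = -t/\lambda\) gives
\[ \left(1-\frac{it}{\lambda}\right)^{-1} = \frac{1 + \frac{it}{\lambda}}{1 + \frac{t^2}{\lambda^2}} \]
whose real and imaginary parts are the 1/tl2 and t/(tl2*l) returned by the cf helper function below.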

Listing 4: ak.exponentialCF
function cf(l, t) {
 var tl = t/l;
 var tl2 = 1+tl*tl;

 if(!isFinite(tl2)) return isNaN(t) ? ak.complex(ak.NaN, ak.NaN) : ak.complex(0);
 return ak.complex(1/tl2, t/(tl2*l));
}

ak.exponentialCF = function() {
 var state = {lambda: 1};
 var arg0  = arguments[0];
 var f;

 constructors[ak.nativeType(arg0)](state, arg0, arguments);

 f = function(t){return cf(state.lambda, t);};
 f.lambda = function(){return state.lambda;};
 return Object.freeze(f);
};
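As a final sanity check, and one requiring nothing but plain JavaScript, we can numerically integrate the real and imaginary parts of the expectation and compare them with the closed form used by cf:

// Numerically integrate cos(tx) and sin(tx) against the exponential PDF
// and compare with the closed form's real and imaginary parts.
function cfNumeric(lambda, t) {
 var re = 0, im = 0, dx = 0.0001, x;
 for(x=0.5*dx;x<20/lambda;x+=dx) {
  re += Math.cos(t*x)*lambda*Math.exp(-lambda*x)*dx;
  im += Math.sin(t*x)*lambda*Math.exp(-lambda*x)*dx;
 }
 return [re, im];
}

var lambda = 2, t = 2, tl = t/lambda, tl2 = 1+tl*tl;
console.log(cfNumeric(lambda, t));    // approximately [0.5, 0.5]
console.log([1/tl2, t/(tl2*lambda)]); // [0.5, 0.5]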

There's a lot more to say about the statistical properties of memoryless waiting time processes, but until next time all of these functions can be found in ExponentialDistribution.js.

References

[1] The Product Distribution Business, www.thusspakeak.com, 2015.

[2] Characteristically Pathological, www.thusspakeak.com, 2015.
