# Into The Nth Dimension

A few years ago we took a look at some of the properties of uniformly distributed random variables, whose values have equal probabilities of falling within intervals of equal width within their ranges. A simple generalisation of these is the multivariate uniform distribution, which governs random vectors that fall within regions of equal volume with equal probability.
We can easily achieve this by drawing the elements of vectors from uniformly distributed random variables. For example, we can generate a vector $$\mathbf{v}$$ with
$v_i \sim U_i\left(a_i, b_i\right)$
where $$v_i$$ are its elements, $$U_i$$ are independent uniform distributions with bounds of $$a_i$$ and $$b_i$$, and $$\sim$$ means drawn from.
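Stripped of the ak types that we shall use below, drawing such a vector takes just a few lines of plain JavaScript; drawUniformVector is an illustrative name rather than an ak function:

```javascript
// Draw a vector whose ith element is uniformly distributed on
// [a[i], b[i]] by drawing each element independently. A sketch in
// plain JavaScript; ak.multiUniformRnd below wraps the result in an
// ak.vector rather than returning a plain array.
function drawUniformVector(a, b) {
 return a.map(function(ai, i) { return ai + Math.random()*(b[i]-ai); });
}

var v = drawUniformVector([0, -1, 10], [1, 1, 20]);
```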

### Basic Properties Of Multivariate Uniform Distributions

It trivially follows that the probability that each element of $$\mathbf{v}$$ is no greater than the corresponding element of another vector $$\mathbf{w}$$ is simply the product of the probabilities that values drawn from the uniform random variables associated with each element are no greater than those corresponding elements
$\prod_i \Pr\left(v_i \leqslant w_i\right)$
where $$\prod$$ is the product sign. This is the multivariate generalisation of the cumulative distribution function, or CDF
$P(\mathbf{w}) = \prod_i \Pr\left(v_i \leqslant w_i\right) = \prod_i P_i\left(w_i\right)$
where $$P_i$$ are the CDFs of the uniform distributions of the elements.

Recall that we originally defined the CDF as the integral of the probability density function, or PDF, and we can generalise this to the multivariate case too with
$p(\mathbf{w}) = \prod_i p_i\left(w_i\right)$
where $$p_i$$ are the PDFs of the uniform distributions of the elements. We can now define the multivariate CDF in terms of this with
$P(\mathbf{w}) = \int_{v_1=-\infty}^{w_1} \dots \int_{v_n=-\infty}^{w_n} p(\mathbf{v}) \; \mathrm{d}v_1 \dots \mathrm{d}v_n = \prod_i P_i\left(w_i\right)$
Note that this only works because the elements' distributions are independent, and so each of their PDFs remains constant as the other elements change, allowing us to move them outside of each other's integrals, as illustrated by derivation 1.

Derivation 1: The Multivariate Uniform CDF
By the rules of integration we have
\begin{align*} \int \int f(x) \times g(y) \; \mathrm{d}x \, \mathrm{d}y &= \int \left(\int f(x) \times g(y) \; \mathrm{d}x\right) \; \mathrm{d}y\\ &= \int \left(g(y) \times \int f(x) \; \mathrm{d}x\right) \; \mathrm{d}y\\ &= \int g(y) \times \left(\int f(x) \; \mathrm{d}x\right) \; \mathrm{d}y\\ &= \int f(x) \; \mathrm{d}x \times \int g(y) \; \mathrm{d}y \end{align*}
and so
\begin{align*} &\int_{v_1=-\infty}^{w_1} \dots \int_{v_n=-\infty}^{w_n} p(\mathbf{v}) \; \mathrm{d}v_1 \dots \mathrm{d}v_n\\ &\quad= \int_{v_1=-\infty}^{w_1} \dots \int_{v_n=-\infty}^{w_n} p_1\left(v_1\right) \times \dots \times p_n\left(v_n\right) \; \mathrm{d}v_1 \dots \mathrm{d}v_n\\ &\quad= \int_{v_1=-\infty}^{w_1} p_1\left(v_1\right) \; \mathrm{d}v_1 \times \int_{v_2=-\infty}^{w_2} \dots \int_{v_n=-\infty}^{w_n} p_2\left(v_2\right) \times \dots \times p_n\left(v_n\right) \; \mathrm{d}v_2 \dots \mathrm{d}v_n\\ &\quad= \int_{v_1=-\infty}^{w_1} p_1(v_1) \; \mathrm{d}v_1 \times \dots \times \int_{v_n=-\infty}^{w_n} p_n(v_n) \; \mathrm{d}v_n\\ &\quad= \prod_i P_i\left(w_i\right)\\ &\quad= P(\mathbf{w}) \end{align*}
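We can convince ourselves of this factorisation numerically with a quick Monte Carlo sketch in plain JavaScript; the bounds and the point $$\mathbf{w}$$ are arbitrary choices:

```javascript
// Estimate the joint probability that v1<=0.5 and v2<=1 for
// independent v1 ~ U(0,1) and v2 ~ U(0,2), and compare it with the
// product of the marginal CDFs, which equals 0.5 * 0.5 = 0.25.
function cdf1(w, a, b) { return Math.max(0, Math.min(1, (w-a)/(b-a))); }

var n = 100000, hits = 0, i;
for(i=0;i<n;++i) {
 if(Math.random() <= 0.5 && 2*Math.random() <= 1) ++hits;
}
var sampled = hits/n;                     // Monte Carlo estimate
var product = cdf1(0.5,0,1)*cdf1(1,0,2);  // exactly 0.25
```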

Much as we can use univariate PDFs to calculate the expected values of functions of random variables, multivariate PDFs can be used to calculate the expected values of functions of random vectors. Specifically
$\mathrm{E}\left[f(\mathbf{V})\right] = \int \dots \int f(\mathbf{v}) \times p(\mathbf{v}) \; \mathrm{d}v_1 \dots \mathrm{d}v_n$
where $$\mathbf{V}$$ represents the set of possible outcomes of the random vector.

Of particular interest is the characteristic function, or CF, defined as the expectation
$\hat{f}(\mathbf{t}) = \mathrm{E}\left[e^{i\mathbf{t}\mathbf{V}}\right] = \int \dots \int e^{i\mathbf{t}\mathbf{v}} \times p(\mathbf{v}) \; \mathrm{d}v_1 \dots \mathrm{d}v_n$
Here $$i$$ is the imaginary unit, equal to the square root of minus one, and $$\mathbf{t}\mathbf{v}$$ is the product of the vectors $$\mathbf{t}$$ and $$\mathbf{v}$$, given by
$\mathbf{t}\mathbf{v} = \sum_j t_j \times v_j$
where $$\sum$$ is the summation sign. Rearranging the integrand yields
$e^{i\mathbf{t}\mathbf{v}} \times p(\mathbf{v}) = e^{\sum_j i \times t_j \times v_j} \times \prod_j p_j(v_j) = \left(\prod_j e^{i \times t_j \times v_j}\right) \times \left(\prod_j p_j(v_j)\right) = \prod_j e^{i \times t_j \times v_j} \times p_j(v_j)$
and so we can pull the same trick that we did for the integral of the PDF to get
$\hat{f}(\mathbf{t}) = \prod_i \hat{f_i}(t_i)$
where $$\hat{f_i}$$ are the characteristic functions of the elements' uniform distributions.

### ak.multiUniformRnd

Having defined these functions, it is but a simple matter of programming to implement them and we shall begin with the random vector itself. Listing 1 gives the implementation of ak.multiUniformRnd which follows our usual scheme of dispatching to a constructors object to initialise its state.

Listing 1: ak.multiUniformRnd
ak.multiUniformRnd = function() {
 var state = {a: [0], b: [1], rnd: Math.random};
 var arg0  = arguments[0];
 var f;

 constructors[ak.type(arg0)](state, arg0, arguments);

 f = function(){return rnd(state.a, state.b, state.rnd);};
 f.a = function(){return ak.vector(state.a);};
 f.b = function(){return ak.vector(state.b);};
 f.rnd = function(){return state.rnd;};

 return Object.freeze(f);
};

var constructors = {};


After construction, the a and b members contain arrays representing the lower and upper bounds of the elements respectively and the rnd member contains a pseudo random number generator that we'll use to fill the random vectors. Note that the resulting function forwards to the rnd helper function to generate the vectors and the a, b and rnd methods attached to it provide access to the bounds, wrapped in ak.vector objects, and the pseudo random number generator respectively.
Listing 2 gives the implementation of the helper function which simply iterates over the bounds using the pseudo random number generator to fill an array with random values that lie between them, finally constructing an ak.vector with it.

Listing 2: The rnd Helper Function
function rnd(a, b, rnd) {
 var n = a.length;
 var x = new Array(n);
 var i;

 for(i=0;i<n;++i) x[i] = a[i] + rnd()*(b[i]-a[i]);
 return ak.vector(x);
}


Rather more tiresome is the implementation of the constructors object. By default ak.multiUniformRnd is essentially a univariate standard uniform variate, albeit wrapping up its values in one dimensional vectors. We should like to be able to initialise it as a standard multivariate uniform, having lower bounds of zero and upper bounds of one, as a multivariate uniform for which every element has the same bounds and as a multivariate uniform for which every element has different bounds.
We should also like to be able to use our own pseudo random number generators, such as ak.mtRnd, as enabled for the default bounds by the constructors given in listing 3.

Listing 3: The Default Bounds Constructors
constructors[ak.UNDEFINED_T] = function() {
};

constructors[ak.FUNCTION_T] = function(state, rnd) {
 state.rnd = rnd;
};
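To see the dispatch scheme in miniature, here's a stripped-down sketch outside of the ak library, keyed on typeof rather than ak's type codes; makeState is an illustrative stand-in for ak.multiUniformRnd's initialisation, not an ak function:

```javascript
// A stripped-down sketch of the dispatch scheme: the constructors
// object is keyed by the type of each successive argument so that
// every overload gets its own initialiser.
var constructors = {};

constructors['undefined'] = function() {};       // default: 1D standard uniform

constructors['number'] = function(state, n, args) {
 state.a.length = n;                             // n-dimensional...
 state.b.length = n;
 constructors['number'][typeof args[1]](state, n, args[1]);
};

constructors['number']['undefined'] = function(state, n) {
 while(n--) { state.a[n] = 0; state.b[n] = 1; }  // ...standard bounds
};

constructors['number']['function'] = function(state, n, rnd) {
 while(n--) { state.a[n] = 0; state.b[n] = 1; }
 state.rnd = rnd;                                // caller's generator
};

function makeState() {
 var state = {a: [0], b: [1], rnd: Math.random};
 constructors[typeof arguments[0]](state, arguments[0], arguments);
 return state;
}
```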


Next, listing 4 gives the constructors for standard multivariate uniforms.

Listing 4: The Standard Multivariate Uniform Constructors
constructors[ak.NUMBER_T] = function(state, n, args) {
 var arg1 = args[1];

 if(n!==ak.floor(n)) {
  throw new Error('invalid dimension in ak.multiUniform distribution');
 }
 state.a.length = n;
 state.b.length = n;

 constructors[ak.NUMBER_T][ak.nativeType(arg1)](state, n, arg1, args);
};

constructors[ak.NUMBER_T][ak.UNDEFINED_T] = function(state, n) {
 while(n--) {
  state.a[n] = 0;
  state.b[n] = 1;
 }
};

constructors[ak.NUMBER_T][ak.FUNCTION_T] = function(state, n, rnd) {
 while(n--) {
  state.a[n] = 0;
  state.b[n] = 1;
 }
 state.rnd = rnd;
};


Note that we're only checking that the dimension n is an integer since the bounds' arrays will throw an error for us if we try to give them negative lengths.
Listing 5 adds constructors with a single specified bound, in which case the other bound is set to zero.

Listing 5: The Single Bound Constructors
constructors[ak.NUMBER_T][ak.NUMBER_T] = function(state, n, x, args) {
 var arg2 = args[2];
 constructors[ak.NUMBER_T][ak.NUMBER_T][ak.nativeType(arg2)](state, n, x, arg2, args);
};

constructors[ak.NUMBER_T][ak.NUMBER_T][ak.UNDEFINED_T] = function(state, n, x) {
 var a = Math.min(x, 0);
 var b = Math.max(x, 0);

 if(!isFinite(x) || x===0) {
  throw new Error('invalid bounds in ak.multiUniform distribution');
 }

 while(n--) {
  state.a[n] = a;
  state.b[n] = b;
 }
};

constructors[ak.NUMBER_T][ak.NUMBER_T][ak.FUNCTION_T] = function(state, n, x, rnd) {
 var a = Math.min(x, 0);
 var b = Math.max(x, 0);

 if(!isFinite(x) || x===0) {
  throw new Error('invalid bounds in ak.multiUniform distribution');
 }

 while(n--) {
  state.a[n] = a;
  state.b[n] = b;
 }
 state.rnd = rnd;
};


The last of the constructors to give every element the same bounds specify both the lower and the upper and are given in listing 6.

Listing 6: The Twin Bound Constructors
constructors[ak.NUMBER_T][ak.NUMBER_T][ak.NUMBER_T] = function(state, n, x0, x1, args) {
 var arg3 = args[3];
 var a = Math.min(x0, x1);
 var b = Math.max(x0, x1);

 if(!isFinite(a) || !isFinite(b) || a===b) {
  throw new Error('invalid bounds in ak.multiUniform distribution');
 }

 while(n--) {
  state.a[n] = a;
  state.b[n] = b;
 }
 constructors[ak.NUMBER_T][ak.NUMBER_T][ak.NUMBER_T][ak.nativeType(arg3)](state, arg3);
};

constructors[ak.NUMBER_T][ak.NUMBER_T][ak.NUMBER_T][ak.UNDEFINED_T] = function() {
};

constructors[ak.NUMBER_T][ak.NUMBER_T][ak.NUMBER_T][ak.FUNCTION_T] = function(state, rnd) {
 state.rnd = rnd;
};


Finally we have the per element bounds constructors; firstly for a single specified vector of bounds, given in listing 7

Listing 7: The Single Vector Bound Constructors
constructors[ak.VECTOR_T] = function(state, x0, args) {
 var arg1 = args[1];
 constructors[ak.VECTOR_T][ak.type(arg1)](state, x0, arg1, args);
};

constructors[ak.VECTOR_T][ak.UNDEFINED_T] = function(state, x) {
 var n = x.dims();

 state.a.length = n;
 state.b.length = n;
 x = x.toArray();

 while(n--) {
  if(!isFinite(x[n]) || x[n]===0) {
   throw new Error('invalid bounds in ak.multiUniform distribution');
  }
  state.a[n] = Math.min(x[n], 0);
  state.b[n] = Math.max(x[n], 0);
 }
};

constructors[ak.VECTOR_T][ak.FUNCTION_T] = function(state, x, rnd) {
 var n = x.dims();

 state.a.length = n;
 state.b.length = n;
 x = x.toArray();

 while(n--) {
  if(!isFinite(x[n]) || x[n]===0) {
   throw new Error('invalid bounds in ak.multiUniform distribution');
  }
  state.a[n] = Math.min(x[n], 0);
  state.b[n] = Math.max(x[n], 0);
 }
 state.rnd = rnd;
};


and secondly for both vector bounds specified, given in listing 8.

Listing 8: The Twin Vector Bound Constructors
constructors[ak.VECTOR_T][ak.VECTOR_T] = function(state, x0, x1, args) {
 var arg2 = args[2];
 var n = x0.dims();

 if(x1.dims()!==n) {
  throw new Error('dimension mismatch in ak.multiUniform distribution');
 }

 state.a.length = n;
 state.b.length = n;
 x0 = x0.toArray();
 x1 = x1.toArray();

 while(n--) {
  if(!isFinite(x0[n]) || !isFinite(x1[n]) || x0[n]===x1[n]) {
   throw new Error('invalid bounds in ak.multiUniform distribution');
  }
  state.a[n] = Math.min(x0[n], x1[n]);
  state.b[n] = Math.max(x0[n], x1[n]);
 }
 constructors[ak.VECTOR_T][ak.VECTOR_T][ak.nativeType(arg2)](state, arg2);
};

constructors[ak.VECTOR_T][ak.VECTOR_T][ak.UNDEFINED_T] = function() {
};

constructors[ak.VECTOR_T][ak.VECTOR_T][ak.FUNCTION_T] = function(state, rnd) {
 state.rnd = rnd;
};


Note that in every case we are ensuring that the elements of state.a are less than or equal to the corresponding elements of state.b, using Math.min and Math.max where necessary.

With the tedious business of initialisation behind us, we're finally ready to actually use our shiny new uniform random vector generator, as done by program 1.

Program 1: Using ak.multiUniformRnd

Now that was admittedly a somewhat anti-climactic demonstration given just how much code we had to put in place to get there, but at least it shows that it works!

### And The Rest!

Of the functions associated with the multivariate uniform distribution the PDF is the simplest, returning a constant when its argument is within the distribution's bounds and zero otherwise, and listing 9 gives its implementation in ak.multiUniformPDF.

Listing 9: ak.multiUniformPDF
function pdf(a, b, p, x) {
 var n = a.length;
 var i;

 if(ak.type(x)!==ak.VECTOR_T) {
  throw new Error('invalid argument in ak.multiUniformPDF');
 }

 if(x.dims()!==n) {
  throw new Error('dimension mismatch in ak.multiUniformPDF');
 }

 x = x.toArray();

 for(i=0;i<n && x[i]>=a[i] && x[i]<=b[i];++i);
 return i<n ? 0 : p;
}

ak.multiUniformPDF = function() {
 var state = {a: [0], b: [1], p: 1};
 var arg0  = arguments[0];
 var i, n, f;

 constructors[ak.type(arg0)](state, arg0, arguments);

 n = state.a.length;
 for(i=0;i<n;++i) state.p /= state.b[i] - state.a[i];

 f = function(x){return pdf(state.a, state.b, state.p, x);};
 f.a = function(){return ak.vector(state.a);};
 f.b = function(){return ak.vector(state.b);};
 return Object.freeze(f);
};


We're reusing the constructors object to initialise the state of the PDF, after which we update its member p to equal the product of the elements' densities, each of which is the reciprocal of the distance between its bounds. The pdf helper function simply checks that its x argument is an ak.vector with the correct number of dimensions before returning zero if it's outside of the distribution's bounds and p otherwise.

So far, so trivial.
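Indeed, stripped of the ak types and with illustrative names, the density is simply the reciprocal of the box's volume:

```javascript
// The multivariate uniform PDF as plain JavaScript: the product of the
// elements' densities, each the reciprocal of the distance between its
// bounds, inside the box and zero outside. An illustrative sketch, not
// the ak implementation.
function uniformPDF(a, b, x) {
 var p = 1, i;
 for(i=0;i<a.length;++i) {
  if(x[i] < a[i] || x[i] > b[i]) return 0;
  p /= b[i] - a[i];
 }
 return p;
}

var inside  = uniformPDF([0,0], [2,4], [1,1]);  // 1/(2*4) = 0.125
var outside = uniformPDF([0,0], [2,4], [3,1]);  // 0
```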

Barely any less trivial is the CDF, as implemented by ak.multiUniformCDF in listing 10.

Listing 10: ak.multiUniformCDF
function cdf(a, b, x) {
 var n = a.length;
 var c = 1;
 var i, ci;

 if(ak.type(x)!==ak.VECTOR_T) {
  throw new Error('invalid argument in ak.multiUniformCDF');
 }

 if(x.dims()!==n) {
  throw new Error('dimension mismatch in ak.multiUniformCDF');
 }

 x = x.toArray();

 for(i=0;i<n;++i) {
  ci = (x[i]-a[i])/(b[i]-a[i]);
  if(ci<0)      ci = 0;
  else if(ci>1) ci = 1;
  c *= ci;
 }
 return c;
}

ak.multiUniformCDF = function() {
 var state = {a: [0], b: [1]};
 var arg0  = arguments[0];
 var f;

 constructors[ak.type(arg0)](state, arg0, arguments);

 f = function(x){return cdf(state.a, state.b, x);};
 f.a = function(){return ak.vector(state.a);};
 f.b = function(){return ak.vector(state.b);};

 return Object.freeze(f);
};


This proceeds in much the same fashion as ak.multiUniformPDF, albeit with the helper function cdf calculating the product of the elements' CDFs
\begin{align*} P_i\left(x_i\right) &= \begin{cases} 0 & x_i < a_i\\ \dfrac{x_i - a_i}{b_i - a_i} & a_i \leqslant x_i \leqslant b_i\\ 1 & x_i > b_i \end{cases}\\ &= \max\left(0, \min\left(1, \dfrac{x_i - a_i}{b_i - a_i}\right)\right) \end{align*}
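It's easy to verify that the clamped closed form agrees with the piecewise definition; here with arbitrary bounds of one and three:

```javascript
// Check that max(0, min(1, (x-a)/(b-a))) reproduces the piecewise
// per-element CDF at points below, between and above the bounds
// a=1, b=3.
function piecewise(x, a, b) {
 if(x < a) return 0;
 if(x > b) return 1;
 return (x-a)/(b-a);
}
function clamped(x, a, b) {
 return Math.max(0, Math.min(1, (x-a)/(b-a)));
}

var agree = [0, 1, 2, 2.5, 3, 4].every(function(x) {
 return piecewise(x, 1, 3) === clamped(x, 1, 3);
});
```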
Program 2 demonstrates ak.multiUniformCDF by comparing its result to the proportion of a random sample whose elements are all no greater than those of its argument.

Program 2: ak.multiUniformCDF

A pretty good match, I'd say!

Turning up the difficulty level to fiendishly simple, we come to the CF. Once again, this is just the product of the elements' CFs, which you will recall from our treatment of the uniform distribution are given by
$\hat{f_j}\left(t_j\right) = \frac{e^{i t_j b_j} - e^{i t_j a_j}}{i t_j (b_j - a_j)}$
As before, these are undefined at zero since the numerator and denominator are both zero, but if we define
$\delta_j = \frac{t_j b_j - t_j a_j}{2}$
then, when it is close to zero, we have the approximation
$\hat{f_j}\left(t_j\right) \approx \frac{e^{i t_j b_j} + e^{i t_j a_j}}{2}$
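To see why, note that with the midpoint $$m_j = \tfrac{1}{2}\left(t_j a_j + t_j b_j\right)$$, so that $$t_j b_j = m_j + \delta_j$$ and $$t_j a_j = m_j - \delta_j$$, we can factor both expressions as
\begin{align*} \hat{f_j}\left(t_j\right) &= \frac{e^{i m_j}\left(e^{i \delta_j} - e^{-i \delta_j}\right)}{2 i \delta_j} = e^{i m_j} \times \frac{\sin \delta_j}{\delta_j}\\ \frac{e^{i t_j b_j} + e^{i t_j a_j}}{2} &= e^{i m_j} \times \cos \delta_j \end{align*}
and both $$\frac{\sin \delta_j}{\delta_j}$$ and $$\cos \delta_j$$ tend to one as $$\delta_j$$ tends to zero.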
Finally, we can avoid the expense of using our ak.complex type to do the arithmetic by exploiting the fact that, for real $$\theta$$
$e^{i\theta} = \cos \theta + i \sin \theta$
and keeping track of the real and imaginary parts of the result explicitly, constructing an ak.complex object with them at the end of the function, as done by ak.multiUniformCF in listing 11.

Listing 11: ak.multiUniformCF
var cf_eps = Math.sqrt(ak.EPSILON);

function cf(a, b, t) {
 var n = a.length;
 var zr = 1;
 var zi = 0;
 var i, ta, tb, di, re, im, tr, ti;

 if(ak.type(t)!==ak.VECTOR_T) {
  throw new Error('invalid argument in ak.multiUniformCF');
 }

 if(t.dims()!==n) {
  throw new Error('dimension mismatch in ak.multiUniformCF');
 }

 t = t.toArray();

 for(i=0;i<n;++i) {
  ta = t[i]*a[i];
  tb = t[i]*b[i];
  di = tb-ta;

  if(!isFinite(di)) {
   return isNaN(t[i]) ? ak.complex(ak.NaN, ak.NaN) : ak.complex(0);
  }

  if(Math.abs(di)>cf_eps) {
   re = (Math.sin(tb)-Math.sin(ta))/di;
   im = (Math.cos(ta)-Math.cos(tb))/di;
  }
  else {
   re = (Math.cos(tb)+Math.cos(ta))/2;
   im = (Math.sin(tb)+Math.sin(ta))/2;
  }

  tr = zr;
  ti = zi;

  zr = tr*re - ti*im;
  zi = tr*im + ti*re;
 }
 return ak.complex(zr, zi);
}

ak.multiUniformCF = function() {
 var state = {a: [0], b: [1]};
 var arg0  = arguments[0];
 var f;

 constructors[ak.type(arg0)](state, arg0, arguments);

 f = function(t){return cf(state.a, state.b, t);};
 f.a = function(){return ak.vector(state.a);};
 f.b = function(){return ak.vector(state.b);};

 return Object.freeze(f);
};


Note that if any element of t is NaN then we return an ak.complex with NaN real and imaginary parts, and that if one is infinite then we return a zero ak.complex value, avoiding the NaNs that would otherwise result from calling cos and sin with infinite arguments.
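As a standalone sanity check of the univariate formula that the loop multiplies together, we can compare it with a Monte Carlo average of $$\cos(t v) + i \sin(t v)$$ in plain JavaScript; the bounds and t are arbitrary choices:

```javascript
// The univariate uniform CF on [a,b], tracking real and imaginary
// parts explicitly, compared with the sample means of cos(t*v) and
// sin(t*v) over draws of v ~ U(a,b). A sketch of the per-element term,
// not the ak implementation.
function uniformCF(t, a, b) {
 var ta = t*a, tb = t*b, di = tb-ta;
 if(Math.abs(di) > 1e-8) {
  return {re: (Math.sin(tb)-Math.sin(ta))/di,
          im: (Math.cos(ta)-Math.cos(tb))/di};
 }
 return {re: (Math.cos(tb)+Math.cos(ta))/2,
         im: (Math.sin(tb)+Math.sin(ta))/2};
}

var n = 200000, re = 0, im = 0, t = 1.5, a = -1, b = 2, i, v;
for(i=0;i<n;++i) {
 v = a + Math.random()*(b-a);
 re += Math.cos(t*v)/n;
 im += Math.sin(t*v)/n;
}
var exact = uniformCF(t, a, b);
```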

We can test this in the same way that we tested the CDF in program 2, by comparing the average of a sample of the integrand to the result of ak.multiUniformCF, as done by program 3.

Program 3: ak.multiUniformCF

Two for two!

We can get a feel for how the CF behaves overall with a contour plot of the magnitude of its values, as shown by program 4 in which the brighter points represent larger magnitudes.

Program 4: Oooh, Pretty!

The grid of black lines is a consequence of the fact that this multivariate CF is the product of two univariate CFs; if either of the latter equals zero then so must the former.

### One Of Our Properties Is Missing

Conspicuous in its absence is the inverse of the CDF, which we have defined for every distribution that we have covered so far. The reason for this is that it's extremely difficult to define the inverse CDF for multivariate distributions, at least in general, because there are an infinite number of points that would yield any given value of the CDF.

Now, a common use of the inverse CDFs of univariate distributions is to map numbers between zero and one to values of their random variables; specifically those for which their CDFs return those numbers. We can achieve something that is at least similar by mapping vectors of values between zero and one to values of multivariate random variables.
For the multivariate uniform distribution this simply means applying the relevant univariate inverse CDF to each element of those vectors, given by
$P_i^{-1}(u_i) = a_i + u_i \times \left(b_i - a_i\right)$
as done by ak.multiUniformMap in listing 12.

Listing 12: ak.multiUniformMap
function map(a, b, c) {
 var n = a.length;
 var x, i;

 if(ak.type(c)!==ak.VECTOR_T) {
  throw new Error('invalid argument in ak.multiUniformMap');
 }

 if(c.dims()!==n) {
  throw new Error('dimension mismatch in ak.multiUniformMap');
 }

 c = c.toArray();

 x = new Array(n);
 for(i=0;i<n;++i) {
  if(c[i]<=0)      x[i] = a[i];
  else if(c[i]>=1) x[i] = b[i];
  else             x[i] = a[i] + c[i]*(b[i]-a[i]);
 }
 return ak.vector(x);
}

ak.multiUniformMap = function() {
 var state = {a: [0], b: [1]};
 var arg0  = arguments[0];
 var f;

 constructors[ak.type(arg0)](state, arg0, arguments);

 f = function(c){return map(state.a, state.b, c);};
 f.a = function(){return ak.vector(state.a);};
 f.b = function(){return ak.vector(state.b);};

 return Object.freeze(f);
};


Note that we're replacing elements of the argument that are less than zero with zero and those that are greater than one with one, just as we have done for the arguments of our univariate inverse CDFs.

One way that we can use such maps is to evenly sample multivariate random variables using quasi random number generators, such as our ak.haltonRnd generator, as demonstrated by program 5.

Program 5: ak.multiUniformMap

Neat, huh?
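A stand-in for that demonstration without the ak types: vanDerCorput below is the one-dimensional radical-inverse sequence underlying generators like ak.haltonRnd, and map applies the per-element inverse CDFs; both are illustrative sketches rather than ak functions.

```javascript
// Map a two dimensional Halton-style quasi random sequence through the
// per-element inverse CDFs so that the points evenly cover the box
// [0,2]x[10,20].
function vanDerCorput(i, base) {
 var x = 0, f = 1/base;
 while(i > 0) { x += (i%base)*f; i = Math.floor(i/base); f /= base; }
 return x;
}

function map(a, b, c) {
 return a.map(function(ai, j) {
  var cj = Math.max(0, Math.min(1, c[j]));  // clamp to [0,1] as listing 12 does
  return ai + cj*(b[j]-ai);
 });
}

var points = [], i;
for(i=1;i<=8;++i) {
 points.push(map([0,10], [2,20], [vanDerCorput(i,2), vanDerCorput(i,3)]));
}
```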

### And Another Is Required

There's another property of univariate CDFs that doesn't generalise to multivariate CDFs; namely that we can recover the probability that a random variable $$X$$ will be strictly greater than some particular value $$x$$ by subtracting the value of the CDF for it from one
$\Pr(X>x) = 1 - P(x)$
Mathematically this expresses the probability of every possible outcome of an observation of a random variable except those in which it is less than or equal to the given value. In the multivariate case it equals the probability that at least one element of the random variable is greater than the corresponding element of the argument of the CDF, not that all of them are.
The probability of the latter is given by the complementary CDF, which for a multivariate uniform random variable $$\mathbf{v}$$ is defined as
$\bar{P}(\mathbf{w}) = \prod_i \Pr\left(v_i > w_i\right) = \prod_i \left(1 - P_i\left(w_i\right)\right)$
as implemented by ak.multiUniformCompCDF in listing 13.

Listing 13: ak.multiUniformCompCDF
function compCDF(a, b, x) {
 var n = a.length;
 var c = 1;
 var i, ci;

 if(ak.type(x)!==ak.VECTOR_T) {
  throw new Error('invalid argument in ak.multiUniformCompCDF');
 }

 if(x.dims()!==n) {
  throw new Error('dimension mismatch in ak.multiUniformCompCDF');
 }

 x = x.toArray();

 for(i=0;i<n;++i) {
  ci = (b[i]-x[i])/(b[i]-a[i]);
  if(ci<0)      ci = 0;
  else if(ci>1) ci = 1;
  c *= ci;
 }
 return c;
}

ak.multiUniformCompCDF = function() {
 var state = {a: [0], b: [1]};
 var arg0  = arguments[0];
 var f;

 constructors[ak.type(arg0)](state, arg0, arguments);

 f = function(x){return compCDF(state.a, state.b, x);};
 f.a = function(){return ak.vector(state.a);};
 f.b = function(){return ak.vector(state.b);};

 return Object.freeze(f);
};


Note that here we are exploiting the fact that
$1 - \frac{x_i-a_i}{b_i-a_i} = \frac{b_i-a_i}{b_i-a_i} - \frac{x_i-a_i}{b_i-a_i} = \frac{\left(b_i-a_i\right)-\left(x_i-a_i\right)}{b_i-a_i} = \frac{b_i-x_i}{b_i-a_i}$
to compute the complementary CDFs of the elements of our multivariate uniform random variable.

Program 6 compares the result of ak.multiUniformCompCDF to the proportion of a random sample having elements that are all greater than those of its argument.

Program 6: ak.multiUniformCompCDF
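The same comparison can be sketched in a few lines of standalone JavaScript; the box and the point are arbitrary choices:

```javascript
// Compare the closed form complementary CDF with the proportion of a
// random sample whose elements are all greater than w, for independent
// elements uniform on [0,1] and [0,2] and w = (0.25, 1). The exact
// value is (1-0.25)/1 * (2-1)/2 = 0.375.
function compCDF(a, b, w) {
 var c = 1, i;
 for(i=0;i<a.length;++i) {
  c *= Math.max(0, Math.min(1, (b[i]-w[i])/(b[i]-a[i])));
 }
 return c;
}

var n = 100000, hits = 0, i;
for(i=0;i<n;++i) {
 if(Math.random() > 0.25 && 2*Math.random() > 1) ++hits;
}
var sampled = hits/n;
var exact = compCDF([0,0], [1,2], [0.25,1]);  // 0.75 * 0.5 = 0.375
```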

And with that we are done with the multivariate uniform distribution, except to point out that all of these functions can be found in MultiUniformDistribution.js.

### References

[1] What Are The Chances Of That?, www.thusspakeak.com, 2014.

[2] Characteristically Tricksy, www.thusspakeak.com, 2014.

[3] Halton, Here's A Pseud, www.thusspakeak.com, 2016.
