Archimedean Review


In the last couple of posts[1][2] we've been taking a look at Archimedean copulas, which define the dependency between the elements of vector values of a multivariate random variable by applying a generator function \(\phi\) to the values of the cumulative distribution functions, or CDFs, of their distributions when considered independently, known as their marginal distributions, and then applying the inverse of the generator to the sum of the results to yield the value of the multivariate CDF.
We have seen that the densities of Archimedean copulas are rather trickier to calculate and that making random observations of them is trickier still. Last time we found an algorithm for the latter, albeit with an implementation that had troubling performance and numerical stability issues, and in this post we shall add an improved version to the ak library that addresses those issues.

A Brief Review

Recall that for a continuous function \(\phi\), the conditions
\[ \phi : [0,1] \mapsto [0,\infty]\\ \phi(1) = 0\\ x > y \rightarrow \phi(x) < \phi(y)\\ (-1)^k \frac{\mathrm{d}^k}{\mathrm{d}t^k} \phi^{-1}(t) \geqslant 0 \]
where \(\mapsto\) means maps to, \(\rightarrow\) means implies and \(\frac{\mathrm{d}^k}{\mathrm{d}t^k}\phi^{-1}(t)\) is the \(k^\mathrm{th}\) derivative of the inverse of \(\phi\) evaluated at \(t\), for \(k\) less than or equal to \(n\), are sufficient to ensure that the function
\[ C\left(\mathbf{u}\right) = \phi^{-1}\left(\min\left(\sum_{i=0}^{n-1} \phi(u_i), \phi(0)\right)\right) \]
where \(\sum\) is the summation sign, is an \(n\) dimensional Archimedean copula which, like any copula, is the CDF of a multivariate distribution with standard uniform marginals.
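For example, the generator \(\phi(u) = -\ln u\) satisfies all of these conditions, having \(\phi(0)\) equal to infinity and \(\phi^{-1}(t) = e^{-t}\), and yields
\[ C\left(\mathbf{u}\right) = \phi^{-1}\left(\sum_{i=0}^{n-1} -\ln u_i\right) = e^{\sum_{i=0}^{n-1} \ln u_i} = \prod_{i=0}^{n-1} u_i \]
which is the independence copula, being the joint CDF of \(n\) independent standard uniforms.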
To make observations of the random variable \(\mathbf{u}\) governed by that distribution we first defined a vector \(\mathbf{v}\) with the elements
\[ v_i = \phi\left(u_i\right) \]
so that
\[ C\left(\mathbf{u}\right) = \phi^{-1}\left(\min\left(\sum_{i=0}^{n-1} v_i, \phi(0)\right)\right) \]
We then defined another one \(\mathbf{v}^\prime\) with the elements
\[ v^\prime_i = \begin{cases} \sum_{j=0}^{n-1} v_j & i = 0\\ v_i & \mathrm{otherwise} \end{cases} \]
to yield
\[ C\left(\mathbf{u}\right) = \phi^{-1}\left(\min\left(v^\prime_0, \phi(0)\right)\right) \]
From this we were able to work out that the probability density function, or PDF, and CDF of \(v^\prime_0\) were
\[ f^\ast\left(v^\prime_0\right) = \begin{cases} (-1)^n \times \frac{{v^\prime_0}^{n-1}}{(n-1)!} \times \frac{\mathrm{d}^n}{\mathrm{d}t^n}\phi^{-1}\left(v^\prime_0\right) & 0 < v^\prime_0 < \phi(0)\\ 0 & \mathrm{otherwise} \end{cases} \]
where the exclamation mark represents the factorial of the term preceding it, and
\[ F^\ast\left(v^\prime_0\right) = \begin{cases} 0 & v^\prime_0 \leqslant 0\\ \sum_{i=1}^n \frac{(-1)^{n+i-1}}{(n-i)!} \times \left[t^{n-i} \frac{\mathrm{d}^{n-i}}{\mathrm{d}t^{n-i}}\phi^{-1}(t)\right]_0^{{v^\prime_0}} & 0 < v^\prime_0 < \phi(0)\\ 1 & \phi(0) \leqslant v^\prime_0 \end{cases} \]
respectively. We numerically inverted the latter using Newton's method[3] so that we could make a random observation of \(v^\prime_0\) with
\[ \begin{align*} u &\sim U(0,1)\\ v^\prime_0 &= {F^\ast}^{-1}(u) \end{align*} \]
where \(\sim\) means drawn from and \(U(0,1)\) is the standard uniform distribution.
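As a quick sanity check, for the independence generator \(\phi(u) = -\ln u\) we have
\[ \frac{\mathrm{d}^k}{\mathrm{d}t^k}\phi^{-1}(t) = (-1)^k e^{-t} \]
and so these formulae simplify to
\[ f^\ast\left(v^\prime_0\right) = \frac{{v^\prime_0}^{n-1} e^{-v^\prime_0}}{(n-1)!} \quad \text{and} \quad F^\ast\left(v^\prime_0\right) = 1 - \sum_{i=0}^{n-1} \frac{{v^\prime_0}^i e^{-v^\prime_0}}{i!} \]
for \(v^\prime_0\) greater than zero, which are the PDF and CDF of the sum of \(n\) standard exponentials, exactly as we should expect given that each \(v_i = -\ln u_i\) is itself standard exponentially distributed.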
Next, we noted that \(v_1\) to \(v_{n-1}\) must be uniformly distributed over the \(n-1\) dimensional analogue of a triangle, known as a simplex, with vertices at the origin and at \(v^\prime_0\) on each of their axes. We could consequently divide up the observed value of \(v^\prime_0\) by generating \(n-1\) uniform random values between zero and it, labelling them in increasing order as \(x_i\), counting from \(i\) equal to zero, and defining
\[ \begin{align*} v_i &= \begin{cases} x_i - x_{i-1} & 1 \leqslant i < n-1\\ v^\prime_0 - x_{i-1} & i = n-1 \end{cases}\\ v_0 &= v^\prime_0 - \sum_{i=1}^{n-1} v_i \end{align*} \]
which is, in fact, equivalent to
\[ v_i = \begin{cases} x_i & i = 0\\ x_i - x_{i-1} & 1 \leqslant i < n-1\\ v^\prime_0 - x_{i-1} & i = n-1 \end{cases} \]
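with the terms telescoping to give
\[ \sum_{i=0}^{n-1} v_i = x_0 + \sum_{i=1}^{n-2} \left(x_i - x_{i-1}\right) + \left(v^\prime_0 - x_{n-2}\right) = v^\prime_0 \]
as required.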
The final step was to transform this observation of \(\mathbf{v}\) into an observation of \(\mathbf{u}\) with
\[ u_i = \phi^{-1}\left(v_i\right) \]
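Putting these steps together, a minimal sketch of last time's scheme for two or more dimensions, assuming a hypothetical function invF that numerically inverts \(F^\ast\) and with q standing for \(\phi^{-1}\) and rnd for a standard uniform generator, might look something like

function naiveDraw(n, q, invF, rnd) {
 var x = new Array(n-1);
 var v = new Array(n);
 var u = new Array(n);
 var v0 = invF(rnd());
 var i;

 //split [0, v0] at n-1 uniform points sorted into increasing order
 for(i=0;i<n-1;++i) x[i] = v0*rnd();
 x.sort(function(a,b){return a-b;});

 //take the widths of the resulting sub-intervals as the terms of the sum
 v[0] = x[0];
 for(i=1;i<n-1;++i) v[i] = x[i]-x[i-1];
 v[n-1] = v0-x[n-2];

 //map the terms back through the inverse generator
 for(i=0;i<n;++i) u[i] = q(v[i]);
 return u;
}

in which the sort is one of the performance problems that we shall be addressing below.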

A Robust Implementation

Recall that we rearranged the formulae for \(f^\ast\) and \(F^\ast\) into
\[ \begin{align*} f^\ast\left(v^\prime_0\right) &= \begin{cases} 0 & v^\prime_0 \leqslant 0\\ (-1)^n \times {v^\prime_0}^{n-1} \times n \times \frac{1}{n!} \times \frac{\mathrm{d}^n}{\mathrm{d}t^n}\phi^{-1}\left(v^\prime_0\right) & 0 < v^\prime_0 < \phi(0)\\ 0 & \phi(0) \leqslant v^\prime_0 \end{cases}\\ F^\ast\left(v^\prime_0\right) &= \begin{cases} 0 & v^\prime_0 \leqslant 0\\ \left(\sum_{i=1}^n (-1)^{n+i-1} \times {v^\prime_0}^{n-i} \times \frac{1}{(n-i)!} \times \frac{\mathrm{d}^{n-i}}{\mathrm{d}t^{n-i}}\phi^{-1}\left(v^\prime_0\right)\right) + 1 & 0 < v^\prime_0 < \phi(0)\\ 1 & \phi(0) \leqslant v^\prime_0 \end{cases} \end{align*} \]
allowing us to use our ak.surreal[4] type to calculate the
\[ \frac{1}{n!} \times \frac{\mathrm{d}^n}{\mathrm{d}t^n}\phi^{-1}\left(v^\prime_0\right) \]
terms in a single shot with its coeffs method. The coeffs function given in listing 1 returns those values directly if the dnq function, which calculates the \(n^\mathrm{th}\) derivative of \(\phi^{-1}\), returns an ak.surreal and otherwise fills an array with the derivatives that it returns divided by the factorials of their orders, to the same effect.

Listing 1: Calculating The Coefficients
function coeffs(v, n, dnq) {
 var dq = dnq(v, n);
 var a, f, i;

 if(ak.type(dq)===ak.SURREAL_T) return dq.coeffs();
 a = new Array(n+1);
 f = 1;
 i = 0;
 while(i<n) {
  a[i] = dnq(v, i)/f;
  f *= ++i;
 }
 a[n] = dq/f;
 return a;
}
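
By way of illustration, for the independence generator's inverse \(\phi^{-1}(t) = e^{-t}\) we could pass a dnq that returns plain numbers, such as

var dnq = function(t, k) {return k%2===0 ? Math.exp(-t) : -Math.exp(-t);};

coeffs(0, 3, dnq); //yields [1, -1, 0.5, -0.16666...]

recovering the first four Taylor series coefficients of \(e^{-t}\) about zero, just as an ak.surreal implementation of dnq would via its coeffs method.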

These are used in the dist function, given in listing 2, to calculate the results of \(F^\ast\) and \(f^\ast\) that it returns in the first and second elements of an array respectively.

Listing 2: Calculating The CDF And PDF Of The Sum
function dist(v, n, p0, dnq) {
 var c, df, f, i;

 if(v<=0)  return [0,0];
 if(v>=p0) return [1,0];

 //c[k] holds the kth derivative of the inverse generator divided by k factorial
 c = coeffs(v, n, dnq);
 df = Math.pow(-1, n) * Math.pow(v, n-1) * n * c[n];

 f = 1 - c[0];
 for(i=1;i<n;++i) f += Math.pow(-1, n+i-1) * Math.pow(v, n-i) * c[n-i];

 //clamp the results to legitimate values for a CDF and a PDF
 return [Math.max(Math.min(f,1),0), Math.max(df,0)];
}
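
Again taking the independence generator with the illustrative dnq from above, we can check dist against the known results for the sum of two standard exponentials

var dnq = function(t, k) {return k%2===0 ? Math.exp(-t) : -Math.exp(-t);};

dist(1, 2, Infinity, dnq); //yields approximately [0.26424, 0.36788]

which match \(F^\ast(1) = 1 - 2e^{-1}\) and \(f^\ast(1) = e^{-1}\).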

A significant weakness in the last post's implementation of Archimedean copula random variables was the direct use of Newton's method, which can lead to numerical instability. This time we shall use our ak.newtonInverse function instead, which creates a function that safeguards the search by switching to the bisection method whenever Newton's method misbehaves.
This will still be a relatively expensive calculation, however, and so instead of using it to search for \({F^\ast}^{-1}(u)\) directly we shall use it to find the values at a set of interpolation nodes which we shall use to create a linear interpolation with our ak.linearInterpolate function[5] to efficiently approximate the inverse.
This approximation will be most accurate if the nodes \(u_i\) are equally spaced in probability so that for \(m\) nodes
\[ u_i = \frac{i}{m} \]
which also means that the interpolation will be as efficient as possible since we will be able to locate the node immediately before a probability \(u\) with
\[ i = \lfloor m \times u \rfloor \]
where the oddly shaped brackets represent the largest integer less than or equal to the value between them.
In order to find the inverse quickly we need to pass the inverter as small an interval \([v_l,v_u]\) containing it as we can manage. A reasonable first guess at the bounds for the inverse of \(u_{i+1}\) is simply
\[ \begin{align*} v_l &= v_i\\ v_u &= v_i + 2\times\left(v_i-v_{i-1}\right) \end{align*} \]
since \(v_{i+1}\) cannot be smaller than \(v_i\) if the distribution is valid and, if the probability isn't changing too quickly, won't be very much greater than it, relatively speaking.
If this first guess at the upper bound is beneath \(v_{i+1}\) then we can make a second guess with
\[ \begin{align*} v^\prime_l &= v_u\\ v^\prime_u &= v_u + 2\times\left(v_u-v_l\right) \end{align*} \]
then replace \(v_l\) and \(v_u\) with \(v^\prime_l\) and \(v^\prime_u\) respectively, repeating the process until we have bounds that satisfy
\[ F^\ast\left(v_l\right) \leqslant u_{i+1} \leqslant F^\ast\left(v_u\right) \]
The interpolate function given in listing 3 receives \(\phi\) as p and \(\phi^{-1}\) as q and uses this scheme to update the bounds at each step, starting from those of \(v_1\), which it searches for by repeatedly halving an initial guess of
\[ \begin{align*} v_l &= \tfrac12\phi\left(\tfrac12\right)\\ v_u &= \phi\left(\tfrac12\right) \end{align*} \]
until \(F^\ast\left(v_l\right)\) falls beneath the probability \(u_1\) of the first node.

Listing 3: Approximating The Inverse
function interpolate(n, p, q, dnq, m, threshold) {
 var p0 = p(0);
 var nodes = isFinite(p0) ? new Array(m+1) : new Array(m);
 var du = 1/m;
 var fdf = function(v){return dist(v, n, p0, dnq);};
 var inv = ak.newtonInverse(fdf, threshold);
 var vu = Math.max(p(0.5), ak.EPSILON);
 var vl = vu*0.5;
 var i, u, v, interp, extrap;

 if(p0<=0 || isNaN(p0) || !isFinite(vu)) {
  throw new Error('invalid generator result in ak.archimedeanCopulaRnd');
 }

 //halve the lower bound until it falls beneath the inverse of the first node
 while(vl>0 && fdf(vl)[0]>du) {vu = vl; vl *= 0.5;}

 nodes[0] = {x:0,y:0};
 for(i=1;i<m;++i) {
  u = i*du;

  //grow the bounds until they bracket the inverse of u
  while(isFinite(vu) && fdf(vu)[0]<u) {v = vu; vu += 2*(vu-vl); vl = v;}
  if(!isFinite(vu)) {
   throw new Error('invalid CDF of sum in ak.archimedeanCopulaRnd');
  }
  v = inv(u, [vl, vu]);
  if(!(v>nodes[i-1].y)) {
   throw new Error('invalid CDF of sum in ak.archimedeanCopulaRnd');
  }
  nodes[i] = {x:u, y:v};

  //guess the bounds of the inverse of the next node
  vu = v + 2*(v-nodes[i-1].y);
  vl = v;
 }
 if(!(p0>nodes[m-1].y)) {
  throw new Error('invalid CDF of sum in ak.archimedeanCopulaRnd');
 }

 //if phi(0) is finite then it's the inverse at a probability of one
 if(isFinite(p0)) {
  nodes[m] = {x:1, y:p0};
  return ak.linearInterpolate(nodes);
 }

 //otherwise extrapolate beyond the last node, whose abscissa u still holds
 interp = ak.linearInterpolate(nodes);
 extrap = extrapolate(nodes[m-2], nodes[m-1]);
 return function(x) {return x<=u ? interp(x) : extrap(x);};
}

Note that if \(\phi(0)\) is finite then we simply need to add a final node and value of \(\left(1, \phi(0)\right)\) and we're done.
If it isn't, however, this won't work since linear interpolation can't cope with infinite values. Instead we shall, somewhat arbitrarily I must admit, assume that the CDF takes the form
\[ F(v) = 1 - e^{a \times v + b} \]
beyond \(v_{m-1}\) so that we can extrapolate the inverse with
\[ v = \frac{\ln(1-u)-b}{a} \]
for
\[ \begin{align*} a &= \frac{u_0 - u_1}{\left(1-u_1\right) \times \left(v_1-v_0\right)}\\ b &= \ln\left(1-u_1\right) - a \times v_1 \end{align*} \]
where \(\left(u_0,v_0\right)\) and \(\left(u_1,v_1\right)\) represent the penultimate and last nodes and values respectively, as justified by derivation 1.

Derivation 1: The Tail Extrapolation
Given the assumed tail CDF
\[ F(v) = 1 - e^{a \times v+b} \]
we can fix the parameters \(a\) and \(b\) by requiring that it passes through the point \(\left(u_1,v_1\right)\) and has a derivative at \(v_1\) that's equal to the gradient from \(\left(u_0,v_0\right)\) to \(\left(u_1,v_1\right)\)
\[ \begin{align*} 1 - e^{a \times v_1+b} &= u_1\\ -a \times e^{a \times v_1+b} &= \frac{u_1-u_0}{v_1-v_0} \end{align*} \]
so that the transition from interpolation to extrapolation is both continuous and smooth.

First we solve these equations for \(a\)
\[ \begin{align*} e^{a \times v_1+b} &= 1 - u_1\\ -a \left(1-u_1\right) &= \frac{u_1-u_0}{v_1-v_0}\\ a &= \frac{u_0 - u_1}{\left(1-u_1\right) \times \left(v_1-v_0\right)} \end{align*} \]
and then for \(b\)
\[ \begin{align*} \ln\left(e^{a \times v_1+b}\right) &= \ln\left(1 - u_1\right)\\ a \times v_1+b &= \ln\left(1 - u_1\right)\\ b &= \ln\left(1 - u_1\right) - a \times v_1 \end{align*} \]
Note that \(a\) must be negative since \(u_0\) is less than \(u_1\), \(u_1\) is less than one and \(v_0\) is less than \(v_1\), and consequently as \(v\) tends to infinity the exponential term must tend to zero and the CDF to one.

Finally, inverting the CDF yields the extrapolation
\[ \begin{align*} 1 - e^{a \times v+b} &= u\\ e^{a \times v+b} &= 1-u\\ a \times v+b &= \ln(1-u)\\ v & = \frac{\ln(1-u) - b}{a} \end{align*} \]
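Note that substituting \(u_1\) for \(u\) in the last of these recovers
\[ v = \frac{\ln\left(1-u_1\right)-b}{a} = \frac{a \times v_1}{a} = v_1 \]
confirming that the extrapolation does indeed join the interpolation at the last node.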

The extrapolate function, which was used by the interpolate function to create the extrapolation, implements this scheme and is defined in listing 4.

Listing 4: Creating The Extrapolation
function extrapolate(node0, node1) {
 var u0 = node0.x; var v0 = node0.y;
 var u1 = node1.x; var v1 = node1.y;
 var a = (u0-u1)/((1-u1)*(v1-v0));
 var b = Math.log(1-u1) - a*v1;

 a = 1/a;
 return function(u) {return (Math.log(1-u)-b)*a;};
}
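
To convince ourselves that these hang together we can once more exploit the independence generator, for which the inverse of \(F^\ast\) is that of the CDF of the sum of \(n\) standard exponentials, with a rough check being

var p = function(u) {return -Math.log(u);};
var q = function(t) {return Math.exp(-t);};
var dnq = function(t, k) {return k%2===0 ? Math.exp(-t) : -Math.exp(-t);};

var inv = interpolate(2, p, q, dnq, 1024, undefined);
inv(1-2*Math.exp(-1)); //approximately one, since F*(1) = 1-2/e

noting that the undefined threshold falls back upon ak.newtonInverse's default, just as it does in listing 6 when none is supplied, and that, since \(\phi(0)\) is infinite here, probabilities beyond the last node will be handled by the tail extrapolation.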

Another source of inefficiency in our original implementation is the sort operation that we used when splitting up an observation of \(v^\prime_0\) into the terms \(v_i\) of its sum. Fortunately we can do without it by instead defining
\[ \begin{align*} e_i &\sim Exp(1)\\ v_i &= e_i \times \frac{v^\prime_0}{\sum_{j=0}^{n-1} e_j} \end{align*} \]
where \(Exp(1)\) is the exponential distribution with a rate of one.
This works because of two properties of the exponential distribution: running sums of exponential observations that fall within an interval are uniformly distributed on that interval, and a constant multiple of an exponential random variable is also an exponential random variable, albeit with a different rate.
The first of these properties is proven in derivation 2 and the second follows trivially from the definition of the exponential CDF. Specifically, an exponential random variable \(E_\lambda\), having a rate of \(\lambda\), has a CDF of
\[ P_\lambda\left(x\right) = 1 - e^{-\lambda \times x} = \Pr\left(E_\lambda \leqslant x\right) \]
so that
\[ \Pr\left(a \times E_\lambda \leqslant x\right) = \Pr\left(E_\lambda \leqslant \frac{x}{a}\right) = 1 - e^{-\lambda \times \tfrac{x}{a}} = 1 - e^{-\tfrac{\lambda}{a} \times x} = P_{\lambda/a}\left(x\right) \]
Together, these properties imply that this new definition of the \(v_i\) yields results that are identically distributed to those produced by the old one but, crucially, without the sort operation.
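If you'd like some empirical reassurance of the first of these properties, a quick and admittedly unrigorous test is to compare the average of the first scaled running sum against that of the smallest of \(n-1\) standard uniforms scaled to \([0,T]\), both of which should tend to \(\frac{T}{n}\). A throwaway sketch, with check being a purely hypothetical helper, is

function check(n, t, draws) {
 var sumExp = 0, sumUni = 0;
 var i, j, e, s, min;

 for(i=0;i<draws;++i) {
  //the first of n exponential spacings scaled to sum to t
  s = 0; e = -Math.log(1-Math.random()); s += e;
  for(j=1;j<n;++j) s += -Math.log(1-Math.random());
  sumExp += t*e/s;

  //the smallest of n-1 uniforms on [0,t]
  min = t;
  for(j=0;j<n-1;++j) min = Math.min(min, t*Math.random());
  sumUni += min;
 }
 return [sumExp/draws, sumUni/draws]; //both should approach t/n
}

check(4, 2, 100000); //both elements should be close to 0.5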

Derivation 2: Sums Of Exponentials Are Uniforms
Given a uniform random variable \(U(0,T)\) and \(n\) independent observations \(U_i\) of it we have
\[ \Pr\left(\bigwedge_{i=0}^{n-1} U_i \leqslant u_i\right) = \prod_{i=0}^{n-1} \Pr\left(U_i \leqslant u_i\right) = \prod_{i=0}^{n-1} F_U\left(u_i\right) \]
where \(\wedge\) means and for the condition following it for all of the indices from that defined beneath it to that above, \(\prod\) is the product sign and \(F_U\left(x\right)\) is the uniform CDF
\[ F_U\left(x\right) = \min\left(\max\left(\frac{x}{T},0\right),1\right) \]
The joint PDF of those observations is therefore
\[ f_U\left(u_0, \dots, u_{n-1}\right) = \prod_{i=0}^{n-1} \frac{\mathrm{d}F_U}{\mathrm{d}x} \left(u_i\right) = \begin{cases} 0 & \left(\exists \, u_i < 0\right) \vee \left(\exists \, u_i > T\right)\\ \frac{1}{T^n} & \mathrm{otherwise} \end{cases} \]
where \(\exists\) means there exists and \(\vee\) means or.

If we define \(u^\prime_j\) as the \(j^\mathrm{th}\) smallest of the \(u_i\) then they must have a joint density of
\[ f_{U^\prime}\left(u^\prime_0, \dots, u^\prime_{n-1}\right) = \begin{cases} \frac{n!}{T^n} & 0 \leqslant u^\prime_0 \leqslant u^\prime_1 \leqslant \dots \leqslant u^\prime_{n-2} \leqslant u^\prime_{n-1} \leqslant T\\ 0 & \mathrm{otherwise} \end{cases} \]
since the densities of all but one of the \(n!\) orderings of the \(u_i\) must be transferred to the one non-decreasing set of \(u^\prime_j\) that has a non-zero density.

Given \(n+1\) independent observations \(E_i\) of an exponential random variable \(Exp(\lambda)\) that satisfy
\[ \left(\sum_{i=0}^{n-1} E_i \leqslant T\right) \wedge \left(\sum_{i=0}^n E_i > T\right) \]
we have
\[ \Pr\left(\bigwedge_{i=0}^{n-1} E_i \leqslant e_i\right) = \dfrac{\left(\prod_{i=0}^{n-1} \Pr\left(E_i \leqslant e_i\right)\right) \times \Pr\left(E_n > T - \sum_{i=0}^{n-1} E_i\right)}{\Pr\left(N_T = n\right)} \]
where the denominator is the probability that a Poisson process with an arrival rate of \(\lambda\) will have \(n\) arrivals in time \(T\). The joint density of \(E_0\) to \(E_{n-1}\) is consequently
\[ f_E\left(e_0, \dots, e_{n-1}\right) = \begin{cases} 0 & \left(\sum_{i=0}^{n-1}e_i > T\right) \vee \left(\exists e_i < 0\right) \\ \dfrac{\left(\prod_{i=0}^{n-1} \frac{\mathrm{d}F_E}{\mathrm{d}x}\left(e_i\right)\right) \times \left(1-F_E\left(T - \sum_{i=0}^{n-1} e_i\right)\right)}{p_{\lambda T}(n)} & \mathrm{otherwise} \end{cases} \]
where \(F_E\) is the exponential CDF and \(p_{\lambda T}\) is the Poisson probability mass function
\[ \begin{align*} F_E(x) &= 1 - e^{-\lambda x}\\ p_{\lambda T}(n) &= \frac{(\lambda T)^n e^{-\lambda T}}{n!} \end{align*} \]
so that
\[ \begin{align*} \dfrac{\left(\prod_{i=0}^{n-1} \frac{\mathrm{d}F_E}{\mathrm{d}x}\left(e_i\right)\right) \times \left(1-F_E\left(T - \sum_{i=0}^{n-1} e_i\right)\right)}{p_{\lambda T}(n)} &= \dfrac{\left(\prod_{i=0}^{n-1} \lambda e^{-\lambda e_i}\right) \times e^{-\lambda\left(T - \sum_{i=0}^{n-1} e_i\right)}}{\frac{1}{n!} (\lambda T)^n e^{-\lambda T}}\\ &= \dfrac{\lambda^n e^{-\lambda \sum_{i=0}^{n-1} e_i} e^{-\lambda T} e^{\lambda \sum_{i=0}^{n-1} e_i}}{\frac{1}{n!} \lambda^n T^n e^{-\lambda T}} = \frac{n!}{T^n} \end{align*} \]
Finally, defining
\[ s_i = \sum_{j=0}^i e_j \]
which we can write as \(\mathbf{s} = \mathbf{M}\times\mathbf{e}\) for vectors \(\mathbf{s}\) and \(\mathbf{e}\) whose elements are \(s_i\) and \(e_i\) respectively, and a matrix \(\mathbf{M}\) with elements
\[ M_{i,j} = \begin{cases} 1 & j \leqslant i\\ 0 & \mathrm{otherwise} \end{cases} \]
we note that \(\mathbf{M}\) is the Jacobian of the transformation from \(\mathbf{e}\) to \(\mathbf{s}\) and that, since it is lower triangular with a leading diagonal of ones, both it and its inverse must have a determinant of one. This means that, for some subset \(S\) of the possible values of \(\mathbf{s}\), the subset \(E\) of the possible values of \(\mathbf{e}\) that \(\mathbf{M}\) maps to them and the joint PDF \(f_S\) of \(\mathbf{s}\), we have
\[ \int_S f_S(\mathbf{x_s}) \, \mathrm{d}\mathbf{x_s} = \int_E f_E(\mathbf{x_e}) \, \mathrm{d}\mathbf{x_e} = \int_S f_E(\mathbf{\mathbf{M}^{-1} \times x_s}) \, \mathrm{d}\mathbf{x_s} = \int_S \frac{n!}{T^n} \, \mathrm{d}\mathbf{x_s} \]
by the rule of integration by substitution. It is therefore the case that
\[ f_S\left(\mathbf{s}\right) = f_{U^\prime}\left(u^\prime_0, \dots, u^\prime_{n-1}\right) \]
and so the running sums of observations of an exponentially distributed random variable that fall within an interval \([0,T]\) must be uniformly distributed within that interval, as required.

The draw function defined in listing 5 uses this scheme to generate values of \(v_i\) and then applies \(\phi^{-1}\) to them to yield a random observation of an Archimedean copula.

Listing 5: Making A Random Observation Of An Archimedean Copula
function draw(q, inv, rnd, a) {
 var n = a.length;
 var v = inv(rnd());
 var s = 0;
 var i;

 //q(0) equals one and so every element must equal one
 if(v===0) return ak.vector(n, 1);

 //generate n standard exponentials and their sum
 for(i=0;i<n;++i) {
  a[i] = -Math.log(1-rnd());
  s += a[i];
 }

 //share v equally between the terms if the exponentials underflow
 if(s===0) return ak.vector(n, q(v/n));

 //scale the exponentials so that they sum to v and apply q, clamping
 //the results to [0,1] to guard against rounding errors
 v /= s;
 for(i=0;i<n;++i) a[i] = Math.max(Math.min(q(a[i]*v), 1), 0);
 return ak.vector(a);
}

The last step is to write a function to create Archimedean random variables using this implementation, as done with ak.archimedeanCopulaRnd in listing 6.

Listing 6: ak.archimedeanCopulaRnd
ak.archimedeanCopulaRnd = function(n, p, q, dnq, arg4, arg5, arg6) {
 var m, threshold, rnd, v, f, inv;

 if(ak.nativeType(arg4)!==ak.NUMBER_T) rnd = arg4;
 else if(ak.nativeType(arg5)!==ak.NUMBER_T) {m = arg4; rnd = arg5;}
 else {m = arg4; threshold = arg5; rnd = arg6;}

 if(ak.nativeType(n)!==ak.NUMBER_T || !isFinite(n) || n!==ak.floor(n) || n<0) {
  throw new Error('invalid dimension in ak.archimedeanCopulaRnd');
 }

 if(ak.nativeType(p)!==ak.FUNCTION_T) {
  throw new Error('invalid generator in ak.archimedeanCopulaRnd');
 }

 if(ak.nativeType(q)!==ak.FUNCTION_T) {
  throw new Error('invalid generator inverse in ak.archimedeanCopulaRnd');
 }

 if(ak.nativeType(dnq)!==ak.FUNCTION_T) {
  throw new Error('invalid generator inverse derivative in ak.archimedeanCopulaRnd');
 }

 if(ak.nativeType(m)===ak.UNDEFINED_T) {
  m = 1024;
 }
 else if(!isFinite(m) || m!==ak.floor(m) || m<2) {
  throw new Error('invalid number of nodes in ak.archimedeanCopulaRnd');
 }

 if(ak.nativeType(rnd)===ak.UNDEFINED_T) {
  rnd = Math.random;
 }
 else if(ak.nativeType(rnd)!==ak.FUNCTION_T) {
  throw new Error('invalid random number generator in ak.archimedeanCopulaRnd');
 }

 switch(n) {
  case 0:  f = function() {return ak.vector(0);};
           break;

  case 1:  f = function() {return ak.vector(1, rnd());};
           break;

  default: v = new Array(n);
           inv = interpolate(n, p, q, dnq, m, threshold);
           f = function() {return draw(q, inv, rnd, v);};
           break;
 }

 f.dims = function() {return n;};
 f.p = function() {return p;};
 f.q = function() {return q;};
 f.dnq = function() {return dnq;};
 f.rnd = function() {return rnd;};

 return Object.freeze(f);
};

Here we're allowing optional arguments specifying the number of interpolation nodes m, the convergence threshold of the inversion and the standard uniform random number generator rnd to be passed in as arg4, arg5 and arg6 so that we can call this function with any of
ak.archimedeanCopulaRnd(n, p, q, dnq)
ak.archimedeanCopulaRnd(n, p, q, dnq, rnd)
ak.archimedeanCopulaRnd(n, p, q, dnq, m, rnd)
ak.archimedeanCopulaRnd(n, p, q, dnq, m, threshold, rnd)

Apart from this we're simply checking that the arguments are valid and providing defaults for the optional arguments before defining the function to make the random observations and adding property access methods to it. Note that we're treating zero and one dimensions as special cases that result in an empty vector and a vector with a single standard uniform element respectively.

An Effective Implementation

To demonstrate this implementation of Archimedean random variables we shall again use the generator and inverse
\[ \begin{align*} \phi_\theta(u) &= (-\ln u)^\theta\\ \phi_\theta^{-1}(t) &= e^{-t^{\frac{1}{\theta}}} \end{align*} \]
which are valid for \(\theta\) greater than or equal to one.
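
Note that when \(\theta\) equals one these reduce to the independence generator whose inverse's derivatives have the simple closed form that we have been using for illustration, so that a minimal, if rather uninteresting, use of ak.archimedeanCopulaRnd is

var p = function(u) {return -Math.log(u);};
var q = function(t) {return Math.exp(-t);};
var dnq = function(t, k) {return k%2===0 ? Math.exp(-t) : -Math.exp(-t);};

var rnd = ak.archimedeanCopulaRnd(3, p, q, dnq);
var u = rnd(); //an ak.vector of three independent standard uniforms

For \(\theta\) strictly greater than one the derivatives of \(\phi_\theta^{-1}\) are a good deal messier and so we might prefer to implement dnq with our ak.surreal type, which the coeffs function of listing 1 explicitly caters for.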

Program 1 plots the density of the Archimedean copula defined by this generator with \(\theta\) equal to two

Program 1: ak.archimedeanCopulaDensity

which you can compare to the output of program 2, which begins by plotting pairs of independent standard uniforms, with a tenth of them also plotted along the left hand side and the bottom of the graph, and ends by doing the same for a sample of vectors generated by ak.archimedeanCopulaRnd.

Program 2: ak.archimedeanCopulaRnd

The concentration of that sample clearly reflects the density of the copula and the plots of the marginals to the left and below show that they are uniformly distributed, illustrating that our algorithm is correctly generating random observations of it.

Finally, you can find the implementation of all of the Archimedean copula functions in ArchimedeanCopula.js.

References

[1] Archimedean Skew, www.thusspakeak.com, 2018.

[2] Archimedean View, www.thusspakeak.com, 2018.

[3] On The Shoulders Of Gradients, www.thusspakeak.com, 2014.

[4] You're Going To Have To Think! Why Automatic Differentiation Won't Cure Your Calculus Blues, www.thusspakeak.com, 2014.

[5] Chalk The Lines, www.thusspakeak.com, 2018.
