In the previous post[1] we explored the Cauchy distribution which, having an undefined mean and standard deviation, is an example of a pathological distribution. We saw that this is because it has a relatively high probability of generating extremely large values, which we concluded was a consequence of its standard random variable being equal to the ratio of two independent standard normally distributed random variables, so that the magnitudes of its observations can be significantly increased by the not particularly unlikely event that observations of the denominator are close to zero.
Whilst we didn't originally derive the Cauchy distribution in this way, there are others, known as ratio distributions, that are explicitly constructed in this manner and in this post we shall take a look at one of them.
Ratio Distributions
If we have a pair of random variables \(X\) and \(Y\) then
\[
\Pr\left(\frac{X}{Y} \leqslant c\right)
= \Pr\left(X \leqslant c \times Y \wedge Y \geqslant 0 \right)
+ \Pr\left(X \geqslant c \times Y \wedge Y < 0\right)
\]
where \(\wedge\) means and. If they have a joint probability density function \(p_{X,Y}(x,y)\) then we can calculate this with the integral
\[
\Pr\left(\frac{X}{Y} \leqslant c\right)
= \int_0^\infty \int_{-\infty}^{c \times y} p_{X, Y}(x, y) \, \mathrm{d} x \, \mathrm{d} y
+ \int_{-\infty}^0 \int_{c \times y}^\infty p_{X, Y}(x, y) \, \mathrm{d} x \, \mathrm{d} y
\]
To recover the density of a ratio distribution we differentiate with respect to \(c\), yielding
\[
\begin{align*}
p_{\frac{X}{Y}}(c) = \frac{\mathrm{d}}{\mathrm{d}c} \Pr\left(\frac{X}{Y} \leqslant c\right)
&= \int_0^\infty y \times p_{X, Y}(c \times y, y) \, \mathrm{d} y
- \int_{-\infty}^0 y \times p_{X, Y}(c \times y, y) \, \mathrm{d} y\\
&= \int_{-\infty}^\infty |y| \times p_{X, Y}(c \times y, y) \, \mathrm{d} y
\end{align*}
\]
where the vertical bars stand for the absolute value of the term between them. If \(X\) and \(Y\) are independent with PDFs \(p_X\) and \(p_Y\) then this simplifies to
\[
p_{\frac{X}{Y}}(c) = \int_{-\infty}^\infty |y| \times p_X(c \times y) \times p_Y(y) \, \mathrm{d} y
\]
which is the form that we used to prove that standard Cauchy random variables are the ratios of standard normal random variables.
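As a quick sanity check of this formula, the following standalone snippet (a sketch, not part of the ak library) numerically integrates it for a standard normal numerator and a standard uniform denominator and compares the result with the closed form \(\frac{\phi(0)-\phi(c)}{c^2}\) that we shall derive below.

```javascript
// Standalone sanity check of the ratio distribution density formula: for
// X standard normal and Y standard uniform the integrand
// |y| * pX(c*y) * pY(y) vanishes outside y in [0,1], so a simple
// trapezoidal rule over that interval suffices.
var RT2PI = Math.sqrt(2*Math.PI);

function phi(x) { return Math.exp(-0.5*x*x)/RT2PI; }

// trapezoidal approximation of the integral of y*phi(c*y) over [0,1]
function ratioPDF(c, n) {
 var h = 1/n;
 var sum = 0.5*phi(c); // endpoint terms: the y=0 term contributes zero
 var i;
 for(i=1;i<n;++i) sum += (i*h)*phi(c*(i*h));
 return sum*h;
}

var c = 1.5;
var numeric = ratioPDF(c, 100000);
var closed = (phi(0)-phi(c))/(c*c); // the slash density derived below
```

The two values should agree to within the quadrature error of the trapezium rule.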
The Standard Slash Distribution
Derivation 1: The Standard Density
The densities of the standard normal and uniform distributions are
\[
\begin{align*}
p_X(x) &= \phi(x) = \frac{1}{\sqrt{2\pi}} e^{-\frac12 x^2}\\
p_Y(y) &=
\begin{cases}
1 & 0 \leqslant y \leqslant 1\\
0 & \text{otherwise}
\end{cases}
\end{align*}
\]
and so
\[
\begin{align*}
p_{\frac{X}{Y}}(c) &= \int_{-\infty}^\infty |y| \times p_X(c \times y) \times p_Y(y) \, \mathrm{d}y\\
&= \int_0^1 y \times \phi(c \times y) \, \mathrm{d}y
\end{align*}
\]
Given
\[
f(y) = -\frac{\phi(c \times y)}{c^2}
= -\frac{1}{\sqrt{2\pi}c^2} e^{-\frac12 (c \times y)^2}
\]
then, by the chain rule, its derivative with respect to \(y\) is
\[
\begin{align*}
f^\prime(y) &= -\frac{1}{\sqrt{2\pi}c^2} e^{-\frac12 (c \times y)^2} \times - c^2 \times y\\
&= y \times \frac{1}{\sqrt{2\pi}} e^{-\frac12 (c \times y)^2}
= y \times \phi(c \times y)
\end{align*}
\]
yielding
\[
p_{\frac{X}{Y}}(c) =\left[-\frac{\phi(c \times y)}{c^2}\right]_0^1
=\frac{\phi(0)-\phi(c)}{c^2}
\]
If we instead take the ratio between independent standard normal and standard uniform distributions
\[
\begin{align*}
X &\sim N(0,1)\\
Y &\sim U(0,1)
\end{align*}
\]
then we have the standard slash distribution[2], whose PDF is given by
\[
p(x) = \frac{\phi(0)-\phi(x)}{x^2}
\]
where \(\phi\) is the standard normal PDF, as proven by derivation 1. Since both the numerator and the denominator equal zero when \(x\) equals zero we need to use l'Hopital's rule and calculate the ratio of their derivatives
\[
\begin{align*}
\frac{\mathrm{d}}{\mathrm{d}x} \left(\phi(0)-\phi(x)\right) &= -\frac{\mathrm{d}}{\mathrm{d}x} \frac{1}{\sqrt{2\pi}} e^{-\frac12 x^2}\\
&= \frac{x}{\sqrt{2\pi}} e^{-\frac12 x^2}
= x \times \phi(x)\\
\frac{\mathrm{d}}{\mathrm{d}x} x^2 &= 2 \times x
\end{align*}
\]
at zero instead, giving
\[
p(0) = \frac{x \times \phi(x)}{2 \times x}\Bigg|_0
= \frac{\phi(0)}{2}
= \frac{1}{2\sqrt{2\pi}}
\]
where the vertical bar means evaluated at. Its cumulative distribution function is
\[
P(x) = \Phi(x) - \frac{\phi(0)-\phi(x)}{x}
\]
where \(\Phi\) is the standard normal CDF, which is easily proven by differentiating it
\[
\begin{align*}
\frac{\mathrm{d}}{\mathrm{d}x} P(x) &= \frac{\mathrm{d}}{\mathrm{d}x} \Phi(x) - \frac{\mathrm{d}}{\mathrm{d}x} \left((\phi(0)-\phi(x)) \times x^{-1}\right)\\
&= \phi(x) + (\phi(0)-\phi(x)) \times x^{-2} + x^{-1} \times \frac{\mathrm{d}}{\mathrm{d}x} \frac{1}{\sqrt{2\pi}} e^{-\frac12 x^2}\\
&= \phi(x) + (\phi(0)-\phi(x)) \times x^{-2} + x^{-1} \times \frac{1}{\sqrt{2\pi}} e^{-\frac12 x^2} \times -x\\
&= \phi(x) + \frac{\phi(0)-\phi(x)}{x^2} - \phi(x)
= \frac{\phi(0)-\phi(x)}{x^2}
\end{align*}
\]
and checking that its limits are zero and one
\[
\begin{align*}
P(-\infty) &= \Phi(-\infty) - \frac{\phi(0)-\phi(-\infty)}{-\infty} = 0\\
P(+\infty) &= \Phi(+\infty) - \frac{\phi(0)-\phi(+\infty)}{+\infty} = 1
\end{align*}
\]
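We can also confirm the relationship between the CDF and the PDF numerically. The standalone sketch below (again, not ak library code) approximates \(\Phi\) with the Abramowitz and Stegun erf formula 7.1.26, which is accurate to roughly \(1.5 \times 10^{-7}\), and checks that a central difference of \(P\) recovers \(p\).

```javascript
// Standalone check that differentiating the slash CDF
// P(x) = Phi(x) - (phi(0)-phi(x))/x recovers the PDF
// p(x) = (phi(0)-phi(x))/x^2, with Phi approximated.
var RT2PI = Math.sqrt(2*Math.PI);

function phi(x) { return Math.exp(-0.5*x*x)/RT2PI; }

// Abramowitz & Stegun formula 7.1.26, accurate to about 1.5e-7
function erf(x) {
 var s = x<0 ? -1 : 1;
 var t = 1/(1+0.3275911*Math.abs(x));
 var y = 1 - ((((1.061405429*t-1.453152027)*t+1.421413741)*t
          -0.284496736)*t+0.254829592)*t*Math.exp(-x*x);
 return s*y;
}

function Phi(x) { return 0.5*(1+erf(x/Math.SQRT2)); }
function P(x)   { return Phi(x) - (phi(0)-phi(x))/x; }
function p(x)   { return (phi(0)-phi(x))/(x*x); }

// central difference approximation of dP/dx at x = 0.7
var h = 1e-4;
var dPdx = (P(0.7+h)-P(0.7-h))/(2*h);
```

The difference quotient should match the density to within the error of the erf approximation divided by the step length.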
Derivation 2: The Standard Mean
Firstly
\[
\begin{align*}
\int_0^\infty \frac{\phi(x)}{x} \, \mathrm{d}x
&= \int_0^1 \frac{\phi(x)}{x} \, \mathrm{d}x + \int_1^\infty \frac{\phi(x)}{x} \, \mathrm{d}x\\
&\geqslant \int_0^1 \frac{\phi(1)}{x} \, \mathrm{d}x + \int_1^\infty \frac{\phi(x)}{x} \, \mathrm{d}x\\
&= \left[\phi(1) \ln |x|\right]_{\,\,0}^{\,\,1} + c = \infty
\end{align*}
\]
since \(\phi(x)\) is decreasing on the interval \([0, \infty)\), \(\frac{1}{x}\) is finite and decreasing on \([1, \infty)\) and \(c\) stands for the finite value of the second integral. Secondly
\[
\begin{align*}
\bar{X} &= \int_{-\infty}^\infty x \times p(x) \, \mathrm{d}x
= \int_{-\infty}^\infty \frac{\phi(0)-\phi(x)}{x} \, \mathrm{d}x\\
&= \int_{-\infty}^\infty \frac{\phi(0)}{x} \, \mathrm{d}x - \int_{-\infty}^\infty \frac{\phi(x)}{x} \, \mathrm{d}x\\
&= \left[\phi(0) \ln |x|\right]_{-\infty}^{\phantom{-}\infty} - \int_{-\infty}^\infty \frac{\phi(x)}{x} \, \mathrm{d}x\\
&= (\infty - \infty) - (\infty - \infty)
\end{align*}
\]
which is undefined.
Given that the PDF is symmetric about zero it must be the case that the CDF equals one half at zero, which we may confirm by applying l'Hopital's rule to
\[
f(x) = \frac{\phi(0)-\phi(x)}{x}
\]
yielding
\[
f(0) = -\frac{\phi^\prime(x)}{1} \Bigg|_0 = x \phi(x) \Big|_0 = 0
\]
so that
\[
P(0) = \Phi(0) - 0 = \frac12
\]
Note that, just like that of the Cauchy distribution, the slash distribution's mean \(\bar{X}\) is undefined, as proven in derivation 2, and consequently so are its standard deviation and higher central moments
\[
\mathrm{E}\left[(X-\bar{X})^n\right]
\]
where \(\mathrm{E}\) is the expectation. It is not the case, however, that all of the slash distribution's moments are undefined. For example, the second moment about the median is given by
\[
\begin{align*}
\mathrm{E}\left[(X-0)^2\right]
&= \mathrm{E}\left[X^2\right]
= \int_{-\infty}^{\infty} x^2 \times p(x) \, \mathrm{d}x
= \int_{-\infty}^{\infty} \left(\phi(0) - \phi(x)\right) \, \mathrm{d}x\\
&= \left[x \times \phi(0) - \Phi(x)\right]_{-\infty}^{\phantom{-}\infty}
= \infty - (-\infty) = \infty
\end{align*}
\]
which, although infinite, is well defined.
Location, Location, Location ... And Scale
A simple generalisation of the standard slash distribution is to give it a location \(\mu\) and positive scale \(\sigma\) by transforming its random variable
\[
Y = \sigma \times X + \mu
\]
so that
\[
Y \leqslant y \rightarrow X \leqslant \frac{y-\mu}{\sigma}
\]
where \(\rightarrow\) means implies. The CDF of \(Y\) is consequently
\[
P_{\mu,\sigma}(y) = P\left(\frac{y-\mu}{\sigma}\right)
= \Phi\left(\frac{y-\mu}{\sigma}\right) - \frac{\phi(0)-\phi\left(\frac{y-\mu}{\sigma}\right)}{\frac{y-\mu}{\sigma}}
\]
and equals \(\frac12\) at \(\mu\). To calculate its PDF we simply differentiate, applying the chain rule to yield
\[
p_{\mu,\sigma}(y) = \frac{\mathrm{d}}{\mathrm{d}y} P_{\mu,\sigma}(y)
= \frac{\mathrm{d}}{\mathrm{d}y} P\left(\frac{y-\mu}{\sigma}\right)
= p\left(\frac{y-\mu}{\sigma}\right) \times \frac{1}{\sigma}
= \frac{\phi(0) - \phi\left(\frac{y-\mu}{\sigma}\right)}{\sigma \times \left(\frac{y-\mu}{\sigma}\right)^2}
\]
having the special case
\[
p_{\mu,\sigma}(\mu) = \frac{1}{2\sqrt{2\pi}\sigma}
\]
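As a quick check of this special case, the sketch below (with hypothetical helper names, not the ak implementation) evaluates the location-scale density just beside \(\mu\) and compares it to \(\frac{1}{2\sqrt{2\pi}\sigma}\).

```javascript
// Standalone check that the location-scale slash density approaches
// 1/(2*sqrt(2*pi)*sigma) at its location parameter mu.
var RT2PI = Math.sqrt(2*Math.PI);

function phi(x) { return Math.exp(-0.5*x*x)/RT2PI; }

// the density p_{mu,sigma}(y), evaluated away from y=mu to avoid 0/0
function pdf(mu, sigma, y) {
 var x = (y-mu)/sigma;
 return (phi(0)-phi(x))/(sigma*x*x);
}

var mu = 2, sigma = 3;
var nearPeak = pdf(mu, sigma, mu + 1e-3*sigma);
var peak = 1/(2*RT2PI*sigma);
```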
The characteristic function of the slash distribution is related to that of the standard slash distribution by
\[
\hat{p}_{\mu,\sigma}(t)
= \mathrm{E}\left[e^{itY}\right]
= \mathrm{E}\left[e^{it(\sigma X + \mu)}\right]
= e^{it\mu} \times \mathrm{E}\left[e^{it\sigma X}\right]
= e^{it\mu} \times \hat{p}(\sigma t)
\]
where \(i\) is the square root of minus one. The latter is equal to
\[
\hat{p}(t) = \sqrt{2\pi} \times \left(\phi(t) + t\Phi(t) - \max(t, 0)\right)
\]
as proven by derivation 3, and so the former is
\[
\hat{p}_{\mu,\sigma}(t) = \sqrt{2\pi} \times \left(\phi(\sigma t) + \sigma t\Phi(\sigma t) - \max(\sigma t, 0)\right) \times e^{it\mu}
\]
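As a spot check of this formula (a standalone sketch, with \(\Phi\) approximated by the Abramowitz and Stegun erf formula), the magnitude factor \(\sqrt{2\pi} \times \left(\phi(t) + t\Phi(t) - \max(t, 0)\right)\) should equal one at \(t=0\), as any characteristic function must, and should be symmetric in \(t\), which the \(\max\) term ensures.

```javascript
// Standalone evaluation of the standard slash CF magnitude
// c(t) = sqrt(2*pi) * (phi(t) + t*Phi(t) - max(t,0)).
var RT2PI = Math.sqrt(2*Math.PI);

function phi(x) { return Math.exp(-0.5*x*x)/RT2PI; }

// Abramowitz & Stegun formula 7.1.26, accurate to about 1.5e-7
function erf(x) {
 var s = x<0 ? -1 : 1;
 var t = 1/(1+0.3275911*Math.abs(x));
 var y = 1 - ((((1.061405429*t-1.453152027)*t+1.421413741)*t
          -0.284496736)*t+0.254829592)*t*Math.exp(-x*x);
 return s*y;
}

function Phi(x) { return 0.5*(1+erf(x/Math.SQRT2)); }

function cfMag(t) { return RT2PI*(phi(t) + t*Phi(t) - Math.max(t,0)); }

var atZero = cfMag(0);          // should equal one
var sym = cfMag(1) - cfMag(-1); // should vanish by symmetry
```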
Derivation 3: The Standard Characteristic Function
Using the inverse Fourier transform to recover the PDF from the CF yields
\[
\begin{align*}
p(x)
&= \frac{1}{2\pi} \int_{-\infty}^\infty e^{-itx} \hat{p}(t) \, \mathrm{d}t
= \frac{1}{\sqrt{2\pi}} \int_{-\infty}^\infty e^{-itx} \times \left(\phi(t) + t\Phi(t) - \max(t, 0)\right) \, \mathrm{d}t\\
&= \frac{1}{\sqrt{2\pi}} \times \left( \int_{-\infty}^\infty e^{-itx} \times \phi(t) \, \mathrm{d}t
+ \int_{-\infty}^0 e^{-itx} \times t\Phi(t) \, \mathrm{d}t
+ \int_0^\infty e^{-itx} \times t\left(\Phi(t) - 1\right) \, \mathrm{d}t\right)\\
&= \frac{1}{\sqrt{2\pi}} \times \left(\hat{\phi}(-x) + I_- + I_+\right)
\end{align*}
\]
where \(\hat{\phi}\) is the standard normal characteristic function. Using integration by parts, we have
\[
\begin{align*}
I_- &= \int_{-\infty}^0 e^{-itx} \times t\Phi(t) \, \mathrm{d}t
= \left[\frac{e^{-itx}}{-ix} \times t\Phi(t)\right]_{-\infty}^0
- \int_{-\infty}^0 \frac{e^{-itx}}{-ix} \times \left(\Phi(t) + t\phi(t)\right) \, \mathrm{d}t\\
&= \frac{1}{ix} \int_{-\infty}^0 e^{-itx} \times \Phi(t) \, \mathrm{d}t
+ \frac{1}{ix} \int_{-\infty}^0 e^{-itx} \times -\frac{\mathrm{d}}{\mathrm{d}t} \phi(t) \, \mathrm{d}t\\
&= \frac{1}{ix}\left[\frac{e^{-itx}}{-ix} \times \Phi(t)\right]_{-\infty}^0
- \frac{1}{ix}\int_{-\infty}^0 \frac{e^{-itx}}{-ix} \times \phi(t) \, \mathrm{d}t\\
&\quad\quad\quad\quad + \frac{1}{ix}\left[e^{-itx} \times -\phi(t)\right]_{-\infty}^0
- \frac{1}{ix}\int_{-\infty}^0 -ix \times e^{-itx} \times -\phi(t) \, \mathrm{d}t\\
&= \frac{1}{2x^2} - \frac{\phi(0)}{ix} - \left(\frac{1}{x^2}+1\right) \times \int_{-\infty}^0 e^{-itx} \times \phi(t) \, \mathrm{d}t
\end{align*}
\]
and similarly
\[
I_+ = \frac{1}{2x^2} + \frac{\phi(0)}{ix} - \left(\frac{1}{x^2}+1\right) \times \int_0^\infty e^{-itx} \times \phi(t) \, \mathrm{d}t
\]
so that
\[
\begin{align*}
p(x) &=
\frac{1}{\sqrt{2\pi}} \times \left(\hat{\phi}(-x) + I_- +I_+\right)\\
&= \frac{1}{\sqrt{2\pi}} \times \left(\hat{\phi}(-x) + \frac{1}{x^2} - \left(\frac{1}{x^2}+1\right) \times \int_{-\infty}^\infty e^{-itx} \times \phi(t) \, \mathrm{d}t\right)\\
&= \frac{1}{\sqrt{2\pi}} \times \left(\frac{1}{x^2} - \frac{1}{x^2} \times \hat{\phi}(-x)\right)
= \frac{1 - \hat{\phi}(-x)}{\sqrt{2\pi} \times x^2}
= \frac{1 - e^{-\frac12 x^2}}{\sqrt{2\pi} \times x^2}
= \frac{\phi(0) - \phi(x)}{x^2}
\end{align*}
\]
as required.
Finally, the inverse of the slash distribution's CDF doesn't have a closed form and so we shall have to resort to numerical approximation when we come to implement it.
The ak Slash Distribution Functions
One problem that we shall have when implementing the slash CDF and PDF is that
\[
\phi(0) - \phi(x)
\]
will be prone to floating point cancellation error when \(x\) is very small. Whilst l'Hopital's rule works for \(x\) equal to zero, to deal with those errors we shall need Taylor's theorem, which states that for a sufficiently well behaved function in the neighbourhood of \(x\) and a sufficiently small \(\delta\)
\[
f(x+\delta) = f(x) + \delta f^{(1)}(x) + \tfrac12 \delta^2 f^{(2)}(x) + \tfrac16 \delta^3 f^{(3)}(x) + \dots + \frac{1}{n!} \delta^n f^{(n)}(x) + O\left(\delta^{n+1}\right)
\]
where the exclamation mark is the factorial and \(f^{(n)}\) is the \(n^\mathrm{th}\) derivative of \(f\). The normal PDF \(\phi\) is just such a function in the neighbourhood of zero and so we can use it to make the approximations
\[
\begin{align*}
\frac{\phi(0)-\phi(\delta)}{\delta} &\approx \left(\tfrac12 \delta - \tfrac1{8} \delta^3\right) \phi(0)\\
\\
\frac{\phi(0)-\phi(\delta)}{\delta^2} &\approx \left(\tfrac12 - \tfrac1{8} \delta^2\right) \phi(0)
\end{align*}
\]
which will be accurate when
\[
\delta^4 < 24\epsilon
\]
where \(\epsilon\) is the difference between one and the smallest floating point number greater than one, representing its precision, as proven in derivation 4.
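The effect is easy to demonstrate. In the standalone snippet below the direct formula loses every significant digit for a sufficiently small \(\delta\), whilst the Taylor approximation remains accurate.

```javascript
// Demonstrating the cancellation problem: for very small delta the direct
// formula (phi(0)-phi(delta))/delta^2 loses all precision, since phi(delta)
// rounds to phi(0), while the Taylor approximation does not.
var RT2PI = Math.sqrt(2*Math.PI);

function phi(x) { return Math.exp(-0.5*x*x)/RT2PI; }

var delta = 1e-9;
var direct = (phi(0)-phi(delta))/(delta*delta); // catastrophic cancellation
var taylor = (0.5-0.125*delta*delta)/RT2PI;     // stable
var exact  = 0.5/RT2PI;                         // the limiting value phi(0)/2
```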
Derivation 4: The Taylor Series Approximations
Given the derivatives of \(\phi\)
\[
\begin{align*}
\phi^{(1)}(x) &= -x \phi(x)\\
\phi^{(2)}(x) &= -\phi(x) + x^2\phi(x) = \left(-1+x^2\right)\phi(x)\\
\phi^{(3)}(x) &= 2x\phi(x) - \left(-1+x^2\right)x\phi(x) = \left(3x-x^3\right)\phi(x)\\
\phi^{(4)}(x) &= \left(3-3x^2\right)\phi(x) - \left(3x-x^3\right)x\phi(x) = \left(3-6x^2+x^4\right)\phi(x)\\
\phi^{(5)}(x) &= \left(-12x+4x^3\right)\phi(x) - \left(3-6x^2+x^4\right)x\phi(x)
= \left(-15x+10x^3-x^5\right)\phi(x)\\
\phi^{(6)}(x) &= \left(-15+30x^2-5x^4\right)\phi(x) - \left(-15x+10x^3-x^5\right)x\phi(x)\\
&= \left(-15+45x^2-15x^4+x^6\right)\phi(x)
\end{align*}
\]
we have
\[
\begin{align*}
\phi(\delta) &= \phi(0) + \delta \phi^{(1)}(0) + \tfrac12 \delta^2 \phi^{(2)}(0) + \tfrac16 \delta^3 \phi^{(3)}(0) + \tfrac1{24} \delta^4 \phi^{(4)}(0)\\
&\phantom{= \phi(0)}\, + \tfrac1{120} \delta^5 \phi^{(5)}(0) + \tfrac1{720} \delta^6 \phi^{(6)}(0) + O\left(\delta^8\right)\\
&= \phi(0) - \tfrac12 \delta^2 \phi(0) + \tfrac3{24} \delta^4 \phi(0) - \tfrac{15}{720} \delta^6 \phi(0) + O\left(\delta^8\right)\phi(0)
\end{align*}
\]
since they all have a factor of \(\phi(x)\) and odd powers of \(\delta\) have coefficients of zero because \(\phi\) is symmetric about zero. Rearranging yields
\[
\phi(0)-\phi(\delta) = \left(\tfrac12 \delta^2 - \tfrac1{8} \delta^4 + \tfrac1{48} \delta^6 + O\left(\delta^8\right)\right) \phi(0)
\]
and therefore
\[
\begin{align*}
\frac{\phi(0)-\phi(\delta)}{\delta} &= \left(\tfrac12 \delta - \tfrac1{8} \delta^3 + \tfrac1{48} \delta^5 + O\left(\delta^7\right)\right) \phi(0)\\
\\
\frac{\phi(0)-\phi(\delta)}{\delta^2} &= \left(\tfrac12 - \tfrac1{8} \delta^2 + \tfrac1{48} \delta^4 + O\left(\delta^6\right)\right) \phi(0)
\end{align*}
\]
so that the highest order terms will be lost to rounding error when
\[
\begin{align*}
\tfrac1{48}\delta^4 < \tfrac12\epsilon
\end{align*}
\]
The slash distribution's random variable is the simplest of its functions and is implemented in listing 1.
Listing 1: ak.slashRnd
ak.slashRnd = function() {
 var state = {mu: 0, sigma: 1, rnd: Math.random};
 var arg0 = arguments[0];
 var n, f;

 constructors[ak.nativeType(arg0)](state, arg0, arguments);

 n = ak.normalRnd(state.sigma, state.rnd);
 f = function(){return state.mu + n()/(1-state.rnd());};

 f.mu = function(){return state.mu;};
 f.sigma = function(){return state.sigma;};
 f.rnd = function(){return state.rnd;};
 return Object.freeze(f);
};

var constructors = {};
Note that we're exploiting the fact that multiplying a standard normal random variable by \(\sigma\) yields a normally distributed random variable with a standard deviation of \(\sigma\), using our ak.normalRnd function, and subtracting the observed values of rnd, which are expected to be uniform in the interval \([0,1)\), from one to avoid the possibility of dividing by zero.
Before we define the function and add the mu, sigma and rnd property accessors to it, we're initialising the state by dispatching to the constructors object's methods, which allow us to specify a scale \(\sigma\), or both a location \(\mu\) and a scale \(\sigma\), with an optional random number generator given after the one or the both of them, as shown by listing 2.
Listing 2: The Constructors
constructors[ak.UNDEFINED_T] = function() {
};

constructors[ak.FUNCTION_T] = function(state, rnd) {
 state.rnd = rnd;
};

constructors[ak.NUMBER_T] = function(state, x, args) {
 var arg1 = args[1];
 constructors[ak.NUMBER_T][ak.nativeType(arg1)](state, x, arg1, args);

 if(!isFinite(state.mu)) throw new Error('invalid mu in ak.slash distribution');
 if(state.sigma<=0 || !isFinite(state.sigma)) {
  throw new Error('invalid sigma in ak.slash distribution');
 }
};

constructors[ak.NUMBER_T][ak.UNDEFINED_T] = function(state, x) {
 state.sigma = Number(x);
};

constructors[ak.NUMBER_T][ak.FUNCTION_T] = function(state, x, rnd) {
 state.sigma = Number(x);
 state.rnd = rnd;
};

constructors[ak.NUMBER_T][ak.NUMBER_T] = function(state, x0, x1, args) {
 var arg2 = args[2];
 state.mu = Number(x0);
 state.sigma = Number(x1);
 constructors[ak.NUMBER_T][ak.NUMBER_T][ak.nativeType(arg2)](state, arg2);
};

constructors[ak.NUMBER_T][ak.NUMBER_T][ak.UNDEFINED_T] = function() {
};

constructors[ak.NUMBER_T][ak.NUMBER_T][ak.FUNCTION_T] = function(state, rnd) {
 state.rnd = rnd;
};
Program 1 demonstrates the use of ak.slashRnd by plotting a histogram of its results.
Program 1: Using ak.slashRnd
Listing 3 gives the implementation of the PDF with ak.slashPDF, which uses the pdf helper function to perform the calculation, switching between the mathematical formula and its Taylor series approximation according to the magnitude of its argument x.
Listing 3: ak.slashPDF
var RT2PI = Math.sqrt(2*ak.PI);
var EPS = Math.pow(24*ak.EPSILON, 0.25);

function pdf(m, s, x) {
 x = (x-m)/s;
 return Math.abs(x)<EPS ? (0.5-0.125*x*x)/(RT2PI*s)
                        : (1-Math.exp(-0.5*x*x))/(RT2PI*s*x*x);
}

ak.slashPDF = function() {
 var state = {mu: 0, sigma: 1};
 var arg0 = arguments[0];
 var f;

 constructors[ak.nativeType(arg0)](state, arg0, arguments);

 f = function(x){return pdf(state.mu, state.sigma, x);};
 f.mu = function(){return state.mu;};
 f.sigma = function(){return state.sigma;};
 return Object.freeze(f);
};
Note that we're exploiting the fact that
\[
\begin{align*}
\phi(x) &= \frac{1}{\sqrt{2\pi}} e^{-\frac12 x^2}\\
\phi(0) &= \frac{1}{\sqrt{2\pi}}
\end{align*}
\]
to compute the values of the normal PDF.
Program 2 plots the slash PDF for various values of \(\sigma\).
Program 2: The Slash PDF
The implementation of the CDF proceeds in much the same way, with the cdf helper function also switching between the explicit and approximate formulae, using ak.normalCDF and exploiting the fact that
\[
\Phi\left(\frac{x-\mu}{\sigma}\right) = \Phi_{\mu,\sigma}(x)
\]
as shown by listing 4.
Listing 4: ak.slashCDF
function cdf(m, s, x, cx) {
 x = (x-m)/s;
 return Math.abs(x)<EPS ? cx - 0.5*x*(1-0.25*x*x)/RT2PI
                        : cx - (1-Math.exp(-0.5*x*x))/(RT2PI*x);
}

ak.slashCDF = function() {
 var state = {mu: 0, sigma: 1};
 var arg0 = arguments[0];
 var c, f;

 constructors[ak.nativeType(arg0)](state, arg0, arguments);

 c = ak.normalCDF(state.mu, state.sigma);
 f = function(x){return cdf(state.mu, state.sigma, x, c(x));};
 f.mu = function(){return state.mu;};
 f.sigma = function(){return state.sigma;};
 return Object.freeze(f);
};
Next we have the characteristic function, implemented by ak.slashCF in listing 5.
Listing 5: ak.slashCF
function cf(m, s, phi, t) {
 var tm = t*m;
 var ts = t*s;
 var c0 = Math.cos(tm);
 var s0 = Math.sin(tm);
 var c = Math.exp(-0.5*ts*ts) + RT2PI*(ts*phi(ts) - Math.max(ts,0));
 var r = c*c0;
 var i = c*s0;
 return ak.complex(r, i);
}

ak.slashCF = function() {
 var state = {mu: 0, sigma: 1};
 var arg0 = arguments[0];
 var phi, f;

 constructors[ak.nativeType(arg0)](state, arg0, arguments);

 phi = ak.normalCDF();
 f = function(t){return cf(state.mu, state.sigma, phi, t);};
 f.mu = function(){return state.mu;};
 f.sigma = function(){return state.sigma;};
 return Object.freeze(f);
};
Noting that
\[
e^{i\theta} = \cos \theta + i \sin \theta
\]
we're explicitly constructing an ak.complex object with its real and imaginary parts, which are plotted in black and red respectively by program 3.
Program 3: The Slash CF
To numerically invert the CDF we can exploit the fact that the PDF is its derivative and use Newton's method[3], implemented by ak.newtonInverse, which requires a function that returns a two element array containing the value of the function to be inverted and its derivative, using the inverse of the normal CDF as an initial guess at the inverse of the slash CDF, as shown by listing 6.
Listing 6: ak.slashInvCDF
ak.slashInvCDF = function() {
 var state = {mu: 0, sigma: 1};
 var arg0 = arguments[0];
 var c, i, fx, dfx, fdf, inv, f;

 constructors[ak.nativeType(arg0)](state, arg0, arguments);

 c = ak.normalCDF(state.mu, state.sigma);
 i = ak.normalInvCDF(state.mu, state.sigma);
 fx = function(x){return cdf(state.mu, state.sigma, x, c(x));};
 dfx = function(x){return pdf(state.mu, state.sigma, x);};
 fdf = function(c){return [fx(c), dfx(c)];};
 inv = ak.newtonInverse(fdf, state.eps);

 f = function(c){return inv(c, i(c));};
 f.mu = function(){return state.mu;};
 f.sigma = function(){return state.sigma;};
 return Object.freeze(f);
};
To allow the convergence threshold for the inverter to be specified we add another method to the constructors that takes it after the location and scale, as shown by listing 7.
Listing 7: Setting The Convergence Threshold
constructors[ak.NUMBER_T][ak.NUMBER_T][ak.NUMBER_T] = function(state, eps) {
 state.eps = eps;
};
Program 4 compares the values of the CDF of the results of ak.slashInvCDF to its argument to demonstrate its accuracy.
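For readers without the ak library to hand, a self-contained sketch of the same inversion scheme is given below. It is an illustration rather than the ak implementation: \(\Phi\) is approximated with the Abramowitz and Stegun erf formula and, for simplicity, a fixed starting point of zero replaces the normal inverse CDF guess.

```javascript
// Standalone Newton's method inversion of the standard slash CDF, using
// the PDF as the derivative of the CDF.
var RT2PI = Math.sqrt(2*Math.PI);
var EPS = Math.pow(24*Math.pow(2,-52), 0.25);

function phi(x) { return Math.exp(-0.5*x*x)/RT2PI; }

// Abramowitz & Stegun formula 7.1.26, accurate to about 1.5e-7
function erf(x) {
 var s = x<0 ? -1 : 1;
 var t = 1/(1+0.3275911*Math.abs(x));
 var y = 1 - ((((1.061405429*t-1.453152027)*t+1.421413741)*t
          -0.284496736)*t+0.254829592)*t*Math.exp(-x*x);
 return s*y;
}

function Phi(x) { return 0.5*(1+erf(x/Math.SQRT2)); }

// the slash PDF and CDF with their Taylor switches, as in listings 3 and 4
function pdf(x) {
 return Math.abs(x)<EPS ? (0.5-0.125*x*x)/RT2PI
                        : (1-Math.exp(-0.5*x*x))/(RT2PI*x*x);
}

function cdf(x) {
 return Math.abs(x)<EPS ? Phi(x) - 0.5*x*(1-0.25*x*x)/RT2PI
                        : Phi(x) - (1-Math.exp(-0.5*x*x))/(RT2PI*x);
}

// Newton's method: repeatedly step by the residual over the derivative
function invCDF(c) {
 var x = 0;
 var i;
 for(i=0;i<50;++i) x -= (cdf(x)-c)/pdf(x);
 return x;
}

var x75 = invCDF(0.75);
```

Applying the CDF to the result recovers the target probability, to within the accuracy of the erf approximation.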
Program 4: The Slash Inverse CDF
Finally, program 5 compares the probabilities that slash and Cauchy distributed random variables with the same scales will exceed given values, plotting the former in black and the latter in red,
Program 5: More Probabilities Of Exceedances
showing that the slash distribution is even more likely to produce extremely large observations than the Cauchy distribution.
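We can see why with a back of the envelope tail comparison: for large \(c\) the standard slash upper tail behaves like \(\phi(0)/c \approx 0.399/c\), whilst the standard Cauchy upper tail behaves like \(1/(\pi c) \approx 0.318/c\). The standalone snippet below (with \(\Phi\) approximated as before) evaluates the exact tail probabilities.

```javascript
// Standalone comparison of upper tail probabilities Pr(X > c) for the
// standard slash and standard Cauchy distributions.
var RT2PI = Math.sqrt(2*Math.PI);

function phi(x) { return Math.exp(-0.5*x*x)/RT2PI; }

// Abramowitz & Stegun formula 7.1.26, accurate to about 1.5e-7
function erf(x) {
 var s = x<0 ? -1 : 1;
 var t = 1/(1+0.3275911*Math.abs(x));
 var y = 1 - ((((1.061405429*t-1.453152027)*t+1.421413741)*t
          -0.284496736)*t+0.254829592)*t*Math.exp(-x*x);
 return s*y;
}

function Phi(x) { return 0.5*(1+erf(x/Math.SQRT2)); }

// one minus the slash and Cauchy CDFs
function slashTail(c)  { return 1 - (Phi(c) - (phi(0)-phi(c))/c); }
function cauchyTail(c) { return 0.5 - Math.atan(c)/Math.PI; }

var slash10 = slashTail(10);   // roughly 0.0399
var cauchy10 = cauchyTail(10); // roughly 0.0317
```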
In conclusion, you can find the implementations of the ak slash distribution functions in SlashDistribution.js.
References
[1] Moments Of Pathological Behaviour, www.thusspakeak.com, 2020.
[2] Rogers, W. & Tukey, J., Understanding some long-tailed symmetrical distributions, Statistica Neerlandica, Vol. 26, No. 3, 1972.
[3] On The Shoulders Of Gradients, www.thusspakeak.com, 2014.