Characteristically Pathological


In the last post[1] we took a look at the log normal distribution, the statistical distribution that describes the outcomes of products of random variables.
Whilst the PDF, CDF, inverse CDF and random number generator were trivially implemented by combining logarithms and exponentials with the normal distribution's equivalent functions, it proved to be rather more difficult to implement the characteristic function
\[ \hat p_{\mu,\sigma}(t) = \mathrm{E}\left[e^{itX}\right] = \int_0^\infty \frac{1}{\sqrt{2\pi}\,\sigma x} e^{itx-\tfrac{(\ln x - \mu)^2}{2\sigma^2}} \, \mathrm{d}x \]
Figure 1: The Real Part Of The Exponential
where \(\mathrm{E}\) stands for the expected value of the expression in the square brackets that follow it and \(i\) is the square root of -1.

There is no known formula for the exact result of this integral and, when we tried to find an approximate formula using Taylor's theorem, we found that the resulting polynomial's coefficients grew in magnitude without limit and so it wasn't possible to truncate it. Finally, numerically approximating the integral would be extremely computationally expensive since, for large \(\sigma\), the exponential in the integrand is highly oscillatory, as illustrated by figure 1. We have already seen that such functions are pathological cases for numerical integration since they require a huge number of evaluations to accurately cancel out the positive and negative areas of the curve[2].
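To get a feel for just how pathological the integrand is, a back-of-the-envelope count of its sign changes is revealing: the cosine term changes sign at every \(x = (k+\tfrac{1}{2})\pi/t\), and the envelope only dies away once \(\ln x\) exceeds \(\sigma\sqrt{2\ln(1/\epsilon)}\) for a threshold \(\epsilon\). A quick sketch (the function name is mine, not the ak library's):

```javascript
// A rough count of the naive integrand's sign changes: cos(t*x) changes sign
// at x = (k + 1/2)*pi/t, and the log normal envelope exp(-(ln x)^2/(2*sigma^2))
// only falls below a threshold epsilon once ln x exceeds sigma*sqrt(2*ln(1/epsilon)).
function signChanges(t, sigma, epsilon) {
  var xMax = Math.exp(sigma * Math.sqrt(2 * Math.log(1/epsilon)));
  return Math.floor(t * xMax / Math.PI);
}
```

For \(t=1\) and \(\sigma=3\) this counts over two hundred million sign changes before the envelope falls below \(10^{-10}\), every one of which the integrator must resolve.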

We're Going To Have To Think! Again!

The key to calculating the characteristic function is to find a different way to represent it that is more amenable to numerical integration, similar to how we came at the problem of numerical differentiation from different directions to improve its accuracy[3]-[7].

We shall use Gubner's approach[8] of first expressing the characteristic function of a log normal distribution with \(\mu\) equal to zero as the expectation of a function of a normally distributed random variable \(Z\) with zero mean
\[ X = e^Z \Rightarrow \mathrm{E}\left[e^{itX}\right] = \mathrm{E}\left[\exp\left(ite^Z\right)\right] = \int_{-\infty}^{+\infty} \exp\left(ite^z\right) \; \phi(z) \; \mathrm{d}z \]
where \(\phi\) is \(Z\)'s PDF, and then treating the bounds of the integral as the endpoints of a line \(C\) through the complex plane
\[ C = \{z(u) = u : -\infty<u<\infty\}\\ \int_{-\infty}^{+\infty} \exp\left(ite^z\right) \; \phi(z) \; \mathrm{d}z = \int_C \exp\left(ite^{z}\right) \; \phi(z) \; \mathrm{d}z \]
Finally, we shift \(C\) up along the imaginary axis by a distance of \(\tfrac{1}{2}\pi\)
\[ C = \{z(u) = u + \tfrac{1}{2}\pi \; i: -\infty<u<\infty\}\\ \int_{-\infty}^{+\infty} \exp\left(ite^z\right) \; \phi(z) \; \mathrm{d}z = \int_C \exp\left(ite^{z}\right) \; \phi(z) \; \mathrm{d}z \]
to get a new integral that has exactly the same value as the original for positive \(t\).

That last step might seem a bit far-fetched, but one of the properties of complex integration is that for a sufficiently well-behaved function, which our integrand happens to be, and a sufficiently simple path, such as a connected series of straight lines, the integral depends only upon that path's endpoints. Proof of this is beyond the scope of this post, so I'm afraid you'll just have to take my word for it this time.

Derivation 1: The Vanishing Integrals
Firstly, note that
\[ \begin{align*} ite^{z+\frac{1}{2}\pi\;i} &= it e^z \left(\cos \frac{\pi}{2} + i \; \sin \frac{\pi}{2}\right)\\ &= it e^z (0 + i)\\ &= -t e^z \end{align*} \]
and so
\[ \exp\left(ite^{z+\frac{1}{2}\pi\;i}\right) = \exp\left(-t e^z\right) \]
Next, we have
\[ \begin{align*} &\phi\left(z+\tfrac{1}{2}\pi\;i\right)\\ &\quad= \frac{1}{\sqrt{2\pi}\sigma}\exp\left(-\frac{(z+\frac{1}{2}\pi\;i)^2}{2\sigma^2}\right)\\ &\quad= \frac{1}{\sqrt{2\pi}\sigma}\exp\left(-\frac{z^2+z\;\pi\;i-\frac{1}{4}\pi^2}{2\sigma^2}\right) \end{align*} \]
The first term equals one when \(z=-\infty\) and, since \(t\) is positive, zero when \(z=+\infty\) and the second term equals zero in both cases. Hence the integrand is zero when \(z\) is infinite.
To see why this implies that the new integral is equal to the original, consider the four points on the complex plane
\[ \begin{align*} z_1 &= -\infty + 0 \; i\\ z_2 &= -\infty + \tfrac{1}{2} \pi \; i\\ z_3 &= +\infty + \tfrac{1}{2} \pi \; i\\ z_4 &= +\infty + 0 \; i \end{align*} \]
By that property, it must be the case that
\[ \begin{align*} \int_{z_1}^{z_4} \exp\left(ite^z\right) \; \phi(z) \; \mathrm{d}z &= \int_{z_1}^{z_2} \exp\left(ite^z\right) \; \phi(z) \; \mathrm{d}z\\ &+ \int_{z_2}^{z_3} \exp\left(ite^z\right) \; \phi(z) \; \mathrm{d}z\\ &+ \int_{z_3}^{z_4} \exp\left(ite^z\right) \; \phi(z) \; \mathrm{d}z \end{align*} \]
Now, the integrand equals zero when the real part of \(z\) is infinite and \(t\) is positive, as shown by derivation 1, and so the first and last integrals on the right hand side must equal zero, yielding
\[ \int_{z_1}^{z_4} \exp\left(ite^z\right) \; \phi(z) \; \mathrm{d}z = \int_{z_2}^{z_3} \exp\left(ite^z\right) \; \phi(z) \; \mathrm{d}z \]
To see why this result is so useful we need to express both as integrals with real, rather than complex, bounds
\[ \begin{align*} \int_{z_1}^{z_4} \exp\left(ite^z\right) \; \phi(z) \; \mathrm{d}z &= \int_{-\infty}^{+\infty} \exp\left(ite^z\right) \; \phi(z) \; \mathrm{d}z\\ \int_{z_2}^{z_3} \exp\left(ite^z\right) \; \phi(z) \; \mathrm{d}z &= \int_{-\infty}^{+\infty} \exp\left(ite^{z+\frac{1}{2}\pi\;i}\right) \; \phi\left(z+\tfrac{1}{2}\pi\;i\right) \; \mathrm{d}z \end{align*} \]
Expanding the first out in full yields
\[ \begin{align*} \hat p_{0,\sigma}(t) &= \int_{-\infty}^{+\infty} \exp\left(ite^z\right) \; \phi(z) \; \mathrm{d}z\\ &= \frac{1}{\sqrt{2\pi}\sigma} \int_{-\infty}^{+\infty} \left(\cos\left(te^z\right) + i \sin\left(te^z\right)\right) \; \exp\left(-\frac{z^2}{2\sigma^2}\right) \; \mathrm{d}z \end{align*} \]
which, for large \(\sigma\), has an integrand with exactly the same highly oscillatory, slowly decaying properties as the one we derived from the log normal PDF last time.
Contrast this with the second
\[ \begin{align*} \hat p_{0,\sigma}(t) &= \int_{-\infty}^{+\infty} \exp\left(ite^{z+\frac{1}{2}\pi\;i}\right) \; \phi\left(z+\tfrac{1}{2}\pi\;i\right) \; \mathrm{d}z\\ &= \frac{1}{\sqrt{2\pi}\sigma} \int_{-\infty}^{+\infty} \exp\left(-te^z\right)\;\exp\left(-\frac{z^2 + z\;\pi\;i - \frac{1}{4}\pi^2}{2\sigma^2}\right) \; \mathrm{d}z\\ &= \frac{1}{\sqrt{2\pi}\sigma} \exp\left(\frac{\pi^2}{8\sigma^2}\right) \int_{-\infty}^{+\infty} \left(\cos\frac{z\;\pi}{2\sigma^2} - i\sin\frac{z\;\pi}{2\sigma^2}\right)\;\exp\left(-\frac{z^2}{2\sigma^2} - te^z\right) \; \mathrm{d}z \end{align*} \]
which has an integrand that decays much more quickly than it oscillates, unless \(\sigma\) is small.

So now we have two integrals for the log normal characteristic function, the first of which is numerically tractable for small \(\sigma\) and the second of which is numerically tractable for large \(\sigma\).
Which is definitely a step in the right direction!

Crossing The T

Of course, we have two slight problems in that the second integral only works for positive \(t\) and both assume that \(\mu\) equals zero. Of these, the second is trivial to work around; if \(X\) is distributed as \(LN(0, \sigma)\) then \(e^\mu X\) is distributed as \(LN(\mu, \sigma)\), since it adds \(\mu\) to the normally distributed logarithm of \(X\), and consequently
\[ \hat p_{0,\sigma}(e^\mu t) = \mathrm{E}\left[\exp\left(i \; e^\mu t \; X\right)\right] = \mathrm{E}\left[\exp\left(it \; e^\mu X\right)\right] = \hat p_{\mu,\sigma}(t) \]
When \(t\) equals zero, every characteristic function equals one since they simplify to
\[ \hat f(0) = \mathrm{E}\left[e^0\right] = \mathrm{E}\left[1\right] = 1 \]
For negative \(t\) we simply rewrite the expectation in terms of sinusoids
\[ \hat f(+t) = \mathrm{E}\left[e^{+itX}\right] = \mathrm{E}\left[\cos(+tX) + i\sin(+tX)\right] = \mathrm{E}\left[\cos(tX)\right] + i\;\mathrm{E}\left[\sin(tX)\right]\\ \hat f(-t) = \mathrm{E}\left[e^{-itX}\right] = \mathrm{E}\left[\cos(-tX) + i\sin(-tX)\right] = \mathrm{E}\left[\cos(tX)\right] - i\;\mathrm{E}\left[\sin(tX)\right] \]
showing that the value of any characteristic function at \(t\) is the complex conjugate of its value at \(-t\), so we can recover the value at a negative \(t\) by conjugating the value at the positive \(-t\).
Which is definitely a step closer!

Sealing The Envelope

The next problem we have is working out how to numerically integrate from minus to plus infinity, or more realistically working out how to avoid doing so whilst preserving numerical accuracy.
Both integrals are made up of a complex sinusoidal term multiplied by a decaying real exponential term, or envelope. The magnitudes of the sinusoids are trivially equal to one, so to choose the bounds of integration we need only consider the envelopes and, given that we'll be numerically approximating the integrals, it makes sense to choose bounds beyond which the approximation is likely to fall within rounding error. A reasonable choice is therefore the last points at which the value of the integrand is greater than the product of its maximum value and some user specified error threshold \(\epsilon\).

Figure 2: The Second Envelope
Derivation 2: Bounding The Maximum
Since the maximum of the exponential of an expression occurs at the same point as the maximum of that expression, the maximum of the envelope must occur at the same point as the maximum of
\[ f(z) = -\frac{z^2}{2\sigma^2} - te^z \]
As noted, there is a single maximum and so it must occur at the point at which the derivative
\[ f^\prime(z) = -\frac{z}{\sigma^2} - te^z \]
equals zero.
Since \(t\) is positive, \(f^\prime(0)\) is trivially negative and, since the exponential of a negative number is less than one
\[ f^\prime(-t\sigma^2) = t - te^{-t\sigma^2} \]
is trivially positive, the maximum must lie between \(-t\sigma^2\) and zero.
Next, we can approximate the exponential function with a first order truncated Taylor series
\[ e^z \approx 1+z \]
which we can use to approximate the point at which the envelope's derivative equals zero
\[ \begin{align*} -\frac{z}{\sigma^2} - t(1+z) &\approx 0\\ -\left(\frac{1}{\sigma^2}+t\right)z &\approx t\\ \end{align*} \] \[ z \approx - \frac{t}{\frac{1}{\sigma^2}+t} = -\frac{t\sigma^2}{1+t\sigma^2} \]
Finally, since both \(t\) and \(\sigma^2\) are positive we have
\[ -t\sigma^2 < -\frac{t\sigma^2}{1+t\sigma^2} < 0 \]
and so the maximum must fall within one of the intervals
\[ \left[-t\sigma^2, -\frac{t\sigma^2}{t\sigma^2 + 1}\right],\;\left[-\frac{t\sigma^2}{t\sigma^2 + 1}, 0\right] \]
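Derivation 2's claims are easy to spot check numerically; since \(f^\prime\) is strictly decreasing we can pin down its zero by bisection and confirm that it falls between \(-t\sigma^2\) and zero, close to the first order guess (plain bisection here, not the ak library):

```javascript
// Locate the zero of f'(z) = -z/s2 - t*e^z by bisection on [-t*s2, 0].
// By derivation 2, f' is positive at the lower bound and negative at zero.
function envDerivZero(t, s2) {
  var f = function(z){return -z/s2 - t*Math.exp(z);};
  var l = -t*s2, u = 0;
  for (var i = 0; i < 100; ++i) {
    var m = 0.5*(l + u);
    if (f(m) > 0) l = m; else u = m;   // f is decreasing, so the zero lies right of m
  }
  return 0.5*(l + u);
}
```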
The envelope of our first integrand has a maximum value of one when \(z\) equals zero and so this scheme yields bounds that satisfy
\[ \begin{align*} \exp\left(-\frac{z^2}{2\sigma^2}\right) &= \epsilon\\ -\frac{z^2}{2\sigma^2} &= \ln \epsilon\\ z^2 &= -2\sigma^2 \ln \epsilon\\ z &= \pm\sqrt{-2\sigma^2 \ln \epsilon} \end{align*} \]
Unfortunately, the envelope of the second integrand
\[ \exp\left(-\frac{z^2}{2\sigma^2} - te^z\right) \]
is a little more difficult to work with since there are no equivalent formulae for its maximum value or for the resultant bounds of the integral. Fortunately, it has some convenient properties that make both questions amenable to numerical approximation.
Firstly, it takes the shape of a skewed bell curve, as illustrated by figure 2, so we can easily search for its maximum by applying our implementation of Newton's method[9] to find the point at which its derivative equals zero.
Secondly, as shown by derivation 2, it is trivial to prove that its maximum must lie within one of the pair of intervals
\[ \left[-t\sigma^2, -\frac{t\sigma^2}{t\sigma^2 + 1}\right],\;\left[-\frac{t\sigma^2}{t\sigma^2 + 1}, 0\right] \]
whose shared bound should be close to the point at which the envelope takes its maximum value, helping Newton's method to converge quickly.
Finally, we know that the lower and upper bounds of integration must lie to the left and the right of the maximum respectively so we can easily search for intervals that contain them which we can then supply to Newton's method to find them too.
Which is a step closer still!

Switching On Sigma

The final problem we have is to work out whether \(\sigma\) is small or large and consequently whether we should use the first or the second integral. To do so, we need simply keep in mind that the reason we want to switch between the integrals is to keep the oscillation of the integrands to a minimum. A reasonable measure of this oscillation is the width of the range of arguments that the sinusoidal terms take during the integration.
Specifically, if the first integral has bounds \(z_{1,1}\) and \(z_{1,2}\) then its sinusoids will have arguments that range over an interval of width
\[ \left|te^{z_{1,2}} - te^{z_{1,1}}\right| \]
where the vertical bars stand for the absolute value of the expression that they enclose. Similarly, if the second has bounds \(z_{2,1}\) and \(z_{2,2}\) then its sinusoids will have arguments that range over an interval of width
\[ \left|\frac{\pi z_{2,2}}{2\sigma^2} - \frac{\pi z_{2,1}}{2\sigma^2}\right| \]
If the first of these widths is the smaller then \(\sigma\) should be considered small and we should use the first integral to compute the characteristic function, otherwise it should be considered large and we should use the second.
And with this step we have arrived at an effective algorithm!


Our implementation of the log normal characteristic function follows our usual scheme of initialising a state object with its parameters via dynamic dispatch to a constructors object and returning a function, decorated with parameter access methods, that simply forwards to another function to do the actual work, as shown in listing 1.

Listing 1: ak.logNormalCF
ak.logNormalCF = function() {
 var state = {mu: 0, sigma: 1, eps: Math.pow(ak.EPSILON, 0.75), n: 18};
 var arg0  = arguments[0];
 var f, em;

 constructors[ak.nativeType(arg0)](state, arg0, arguments);
 em = Math.exp(;

 f = function(t){return cf(state.sigma, t*em, state.eps, state.n);}; = function(){return;};
 f.sigma = function(){return state.sigma;};
 return Object.freeze(f);

var constructors = {};

Note that, in addition to the distribution's parameters, the state contains an error threshold, eps, that we'll use for calculating the bounds of integration and for terminating the numerical inversion and integration algorithms that we'll need to use when calculating the CF, together with the maximum number of steps, n, for the integration algorithm.
By default we leave \(\mu\) and \(\sigma\) at zero and one respectively, with a single argument redefining \(\sigma\) and two arguments redefining \(\mu\) and \(\sigma\), as shown by listing 2.

Listing 2: ak.logNormalCF Constructors
constructors[ak.UNDEFINED_T] = function() {

constructors[ak.NUMBER_T] = function(state, x, args) {
 var arg1 = args[1];
 constructors[ak.NUMBER_T][ak.nativeType(arg1)](state, x, arg1, args);

 if(!isFinite( throw new Error('invalid mu in ak.logNormalCF');
 if(state.sigma<=0 || !isFinite(state.sigma)) {
  throw new Error('invalid sigma in ak.logNormalCF');

constructors[ak.NUMBER_T][ak.UNDEFINED_T] = function(state, x) {
 state.sigma = Number(x);

constructors[ak.NUMBER_T][ak.NUMBER_T] = function(state, x0, x1, args) {
 var arg2 = args[2];
 constructors[ak.NUMBER_T][ak.NUMBER_T][ak.nativeType(arg2)](state, arg2, args); = Number(x0);
 state.sigma = Number(x1);
Note that if the user specifies both \(\mu\) and \(\sigma\) we continue with further arguments to allow the user to set the numerical approximation parameters, as shown in listing 3.

Listing 3: ak.logNormalCF Approximation Parameter Constructors
constructors[ak.NUMBER_T][ak.NUMBER_T][ak.UNDEFINED_T] = function() {

constructors[ak.NUMBER_T][ak.NUMBER_T][ak.NUMBER_T] = function(state, eps, args) {
 var arg3 = args[3];

 state.eps = Math.abs(eps);
 if(isNaN(state.eps)) {
  throw new Error('invalid error threshold in ak.logNormalCF');

 constructors[ak.NUMBER_T][ak.NUMBER_T][ak.NUMBER_T][ak.nativeType(arg3)](state, arg3);

constructors[ak.NUMBER_T][ak.NUMBER_T][ak.NUMBER_T][ak.UNDEFINED_T] = function() {

constructors[ak.NUMBER_T][ak.NUMBER_T][ak.NUMBER_T][ak.NUMBER_T] = function(state, n) {
 state.n = ak.floor(Math.abs(n));
 if(isNaN(state.n)) {
  throw new Error('invalid integration steps passed to ak.logNormalCF');
Now all that's left to do is to implement our algorithm with the helper function cf.

A Simple Matter Of Programming

Whilst we have taken great care to avoid approximation errors in the numerical integrals, we have failed to consider the impact of simple rounding errors upon them.

The first of these occurs when \(\sigma\) is extremely small, causing \(\sigma^2\) to, at least partially, underflow. In such cases it makes sense to treat the normally distributed logarithm as being entirely concentrated at zero, so that
\[ \mathrm{E}\left[\exp\left(ite^Z\right)\right] = \mathrm{E}\left[\exp\left(ite^0\right)\right] = \exp\left(it\right) = \cos t + i\sin t \]
Note that this also covers the corner case of \(t\) equal to zero, at which all CFs are trivially equal to one.

The second occurs when \(\sigma^2\) is small enough that the envelope of the second integrand decays to zero so quickly that its interval of integration is empty to numerical precision and consequently the range of arguments of the sinusoids appears empty. Unfortunately, in such cases the constant term by which we multiply the integral
\[ \frac{1}{\sqrt{2\pi}\sigma} \exp\left(\frac{\pi^2}{8\sigma^2}\right) \]
will overflow to infinity, yielding meaningless results. Fortunately, we can easily avoid this by always using the first integral if the range of its integrand's sinusoids is reasonably small, say \(2\pi\) or less.
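To see how little headroom there is, note that Math.exp overflows once its argument passes roughly 709, which for the constant term happens when \(\sigma^2\) falls below about \(\pi^2/(8 \times 709) \approx 0.0017\). A quick demonstration:

```javascript
// The constant multiplying the second integral overflows to infinity once
// pi^2/(8*sigma^2) exceeds the logarithm of the largest finite double (~709.78).
function multiplier(sigma) {
  return Math.exp(Math.PI*Math.PI/(8*sigma*sigma)) / (Math.sqrt(2*Math.PI)*sigma);
}
```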

Listing 4 shows how the cf helper function exploits these observations before applying our algorithm as described above, with bounds1 and bounds2 calculating the bounds of integration for the first and second integrals respectively and cf1 and cf2 numerically approximating those integrals.

Listing 4: The cf Helper Function
function cf(s, t, eps, n) {
 var s2 = s*s;
 var abs_t = Math.abs(t);
 var b1, b2, w1, w2;

 if(!isFinite(t)) return ak.complex(1/t, 1/t);
 if(t===0 || s2<ak.MIN_NORMAL) return ak.complex(Math.cos(t), Math.sin(t));

 b1 = bounds1(s2, eps);
 w1 = abs_t*(Math.exp(b1[1])-Math.exp(b1[0]));
 if(w1<2*ak.PI) return cf1(s, t, b1, eps, n);

 b2 = bounds2(s2, abs_t, eps);
 w2 = 0.5*ak.PI*(b2[1]-b2[0])/s2;
 return w1<w2 ? cf1(s, t, b1, eps, n) : cf2(s, t, b2, eps, n);
The formula for the bounds of the first integral was straightforward and consequently so is bounds1, as shown by listing 5.

Listing 5: The First Integral's Bounds
function bounds1(s2, eps) {
 var z = Math.sqrt(-2*s2*Math.log(eps));
 return [-z, z];
To find the bounds of the second integral we first need the location of the maximum of its integrand's envelope, found by envMax in listing 6.

Listing 6: The Maximum Of The Second Integrand's Envelope
function envMax(s2, t, eps) {
 var ts2 = t*s2;

 var f = function(z) {
  var e = ts2*Math.exp(z);
  return [-z-e, -1-e];

 var m = -ts2/(1+ts2);
 var lo = f(m)[0]<0;
 var l = lo ? -ts2 : m;
 var u = lo ? m : 0;

 return (l!==u) ? ak.newtonInverse(f, eps)(0, [l, u]) : 0;
Recall that the plan for finding the maximum of the envelope is to find the maximum of its logarithm, since they occur at the same place. We proved that it must lie in one of the intervals
\[ \left[-t\sigma^2, -\frac{t\sigma^2}{t\sigma^2 + 1}\right],\;\left[-\frac{t\sigma^2}{t\sigma^2 + 1}, 0\right] \]
and in envMax we decide which one to search by noting that if the derivative at the shared bound, m, is negative then the maximum must lie to its left, otherwise it must fall to its right.
We use ak.newtonInverse to find the maximum by searching for the point at which the derivative equals zero since it allows us to supply the second derivative, significantly improving performance when the maximum is close to the shared bound which is, after all, a first order approximation of it. Note that we have rearranged the derivative by multiplying its terms by \(\sigma^2\), which we are free to do since we are searching for its zero and multiplying by a constant won't move it. The function f returns both this rearranged derivative and its derivative in an array, as required by ak.newtonInverse.
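For readers without the ak library to hand, the search that envMax performs can be sketched self-contained; a plain Newton iteration on the rearranged derivative, started from the shared bound, might look like (names are mine):

```javascript
// Newton's method on the rearranged derivative g(z) = -z - t*sigma^2*e^z,
// whose root is the location of the envelope's maximum, starting from the
// first-order guess -t*sigma^2/(1 + t*sigma^2) of derivation 2.
function envelopeMax(t, sigma2, eps) {
  var ts2 = t * sigma2;
  var z = -ts2 / (1 + ts2);
  for (var i = 0; i < 50; ++i) {
    var e = ts2 * Math.exp(z);
    var g = -z - e;               // rearranged derivative
    var dg = -1 - e;              // its derivative
    var step = g / dg;
    z -= step;
    if (Math.abs(step) <= eps * Math.abs(z) + eps) break;
  }
  return z;
}
```

For \(t=2\) and \(\sigma^2=1\) this converges in a handful of iterations to roughly \(-0.85\).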

Now that we have a means of finding the maximum of the envelope it is a relatively simple task to find the upper and lower bounds of integration. We can continue to work with the logarithm of the envelope by noting that
\[ \ln \left(f(x) \times \epsilon\right) = \ln f(x) + \ln \epsilon \]
If \(x\) is the maximum of the envelope then we seek bounds \(x_i\) such that
\[ \ln f\left(x_i\right) = \ln f(x) + \ln \epsilon \]
or, in other words, such that
\[ \begin{align*} -\frac{x_i^2}{2\sigma^2}-te^{x_i} &= -\frac{x^2}{2\sigma^2}-te^x + \ln\epsilon\\ -\tfrac{1}{2}x_i^2-t \sigma^2 e^{x_i} &= -\tfrac{1}{2}x^2 - t \sigma^2 e^x + \sigma^2 \ln\epsilon \end{align*} \]
Given that the bounds of integration must lie to the left and right of the maximum we can easily search for initial brackets for each of the form
\[ \left[x-l_1, x-l_2\right]\\ \left[x+u_1, x+u_2\right] \]
as is done in bounds2 in listing 7.

Listing 7: The Bounds Of The Second Integral
function bounds2(s2, t, eps) {
 var ts2 = t*s2;
 var f = function(z) {
  var e = ts2*Math.exp(z);
  return [-0.5*z*z-e, -z-e];
 var inv = ak.newtonInverse(f, eps);

 var x = envMax(s2, t, eps);
 var e = f(x)[0] + s2*Math.log(eps);
 var l1 = -1;
 var l2 =  0;
 var u1 =  0;
 var u2 =  1;

 while(f(x+l1)[0]>e) {l2 = l1; l1 *= 2;}
 while(f(x+u2)[0]>e) {u1 = u2; u2 *= 2;}

 return [inv(e, [x+l1, x+l2]), inv(e, [x+u1, x+u2])];
Finally, we use ak.newtonInverse to root out the lower and upper bounds of integration within those brackets.

A slightly tricky aspect of the actual integration is that ak.rombergIntegral cannot directly handle complex valued functions, so we must split the integrands into real and imaginary parts and integrate them separately, as shown in listing 8.

Listing 8: The CF Integrals
function cf1(s, t, b, eps, n) {
 var s2 = s*s;
 var c = 1/(Math.sqrt(2*ak.PI)*s);

 var fr = function(z){return Math.cos(t*Math.exp(z)) * Math.exp(-0.5*z*z/s2);};
 var fi = function(z){return Math.sin(t*Math.exp(z)) * Math.exp(-0.5*z*z/s2);};

 var r = c*ak.rombergIntegral(fr, eps, n)(b[0], b[1]);
 var i = c*ak.rombergIntegral(fi, eps, n)(b[0], b[1]);

 return ak.complex(r, i);

function cf2(s, t, b, eps, n) {
 var s2  = s*s;
 var hps2 = 0.5*ak.PI/s2;
 var abs_t = Math.abs(t);
 var c = Math.exp(0.25*ak.PI*hps2)/(Math.sqrt(2*ak.PI)*s);

 var fr = function(z) {
  return  Math.cos(z*hps2) * Math.exp(-0.5*z*z/s2 - abs_t*Math.exp(z));
 var fi = function(z) {
  return -Math.sin(z*hps2) * Math.exp(-0.5*z*z/s2 - abs_t*Math.exp(z));

 var r = c*ak.rombergIntegral(fr, eps, n)(b[0], b[1]);
 var i = c*ak.rombergIntegral(fi, eps, n)(b[0], b[1]);

 return ak.complex(r, t>0 ? i : -i);
Note that in cf2 we work with the absolute value of t throughout since our approximation only works for positive \(t\). As we demonstrated earlier, however, the CF for negative \(t\) is simply the complex conjugate of the CF for its absolute value, so we finally use the sign of t to decide whether or not to negate the imaginary part of the result before we return it.

This implementation of the log normal CF can be found with the rest of the ak log normal functions in LogNormalDistribution.js.

Oh, The Mundanity

Given how much effort we've had to expend to come up with an algorithm for the log normal characteristic function it turns out to be remarkably unremarkable, as illustrated by program 1 in which the real part is plotted in black and the imaginary part is plotted in red.

Program 1: The Log Normal CF

The easiest way to confirm that this is indeed the log normal CF is to approximate the expected value of \(e^{itX}\) with a random sample of the log normally distributed \(X\)
\[ \begin{align*} \mathrm{E}\left[e^{itX}\right] &\approx \frac{1}{n}\sum_{j=1}^n e^{itX_j}\\ X_j &\sim LN(\mu, \sigma) \end{align*} \]
as is done in program 2.

Program 2: Testing ak.logNormalCF

Well I don't know about you, but I'm convinced. And mightily impressed by Gubner's change of path trick!


[1] The Product Distribution Business,, 2015.

[2] The Tip Of The Romberg,, 2015.

[3] You're Going To Have To Think! Why [Insert Algorithm Here] Won't Cure Your Calculus Blues,, 2013.

[4] You're Going To Have To Think! Why Finite Differences Won't Cure Your Calculus Blues,, 2014.

[5] You're Going To Have To Think! Why Polynomial Approximation Won't Cure Your Calculus Blues,, 2014.

[6] You're Going To Have To Think! Why Computer Algebra Won't Cure Your Calculus Blues,, 2014.

[7] You're Going To Have To Think! Why Automatic Differentiation Won't Cure Your Calculus Blues,, 2014.

[8] Gubner, J., A New Formula for Lognormal Characteristic Functions, IEEE Transactions on Vehicular Technology, Vol. 55, No. 5, IEEE, 2006.

[9] On The Shoulders Of Gradients,, 2014.
