A Jump To The Left And Some Steps To The Right


Figure 1: Drawing Trapeziums
We have previously seen how we can approximate the integrals of functions by noting that they are essentially the areas under their graphs[1]; by drawing simple shapes upon those graphs and adding up their areas we can numerically calculate the integrals of functions that we might struggle to solve mathematically.
Specifically, if we join two points \(x_1\) and \(x_2\) on the graph of a function \(f\) with a straight line and another two vertically from them to the \(x\) axis then we've drawn a trapezium, as illustrated by the solid shape in figure 1.
This has an area of
\[ \tfrac{1}{2} \times \left(x_2 - x_1\right) \times \left(f\left(x_1\right) + f\left(x_2\right)\right) \]
and if we draw \(n\) of them equally spaced over an interval from \(a\) to \(b\), which we could do on figure 1 by drawing straight lines down from the ends of the green lines, we have
\[ \int_a^b f(x) \; \mathrm{d}x \approx \sum_{i=1}^n \tfrac{1}{2} h \times \left(f\left(x_i\right) + f\left(x_{i-1}\right)\right) \]
where \(\approx\) means approximately equals, \(\sum\) is the summation sign and
\[ \begin{align*} h &= \frac{b-a}{n}\\ x_i &= a + i \times h \end{align*} \]
We went on to improve the accuracy of the approximation by treating this as a function of \(h\) and approximating it with a polynomial[2]. By evaluating that polynomial at zero we sought to predict the sum of the areas of trapeziums of infinitesimal width, which is in some sense the definition of the integral, albeit one lacking in mathematical rigour.
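To make the scheme concrete, here's a minimal standalone sketch of the equally spaced trapezium rule; note that trapeziumSum is simply a name chosen for this example rather than one of our ak library's functions.

```javascript
// A sketch of the equally spaced trapezium rule defined above; trapeziumSum
// is a name chosen for this example, not a function of the ak library.
function trapeziumSum(f, a, b, n) {
 var h = (b-a)/n;
 var s = 0;
 var i;
 for(i=1;i<=n;++i) s += 0.5*h*(f(a+(i-1)*h)+f(a+i*h));
 return s;
}
```

For example, trapeziumSum(Math.exp, 0, 1, 1000) agrees with the exact result of \(e-1\) to around six decimal places, and the rule is exact for linear functions no matter how few trapeziums we use.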

In regions where a function is rapidly changing we need a lot of trapeziums to accurately approximate its integral. If we restrict ourselves to trapeziums of equal width, as we have done with these schemes, this means that we might spend far too much effort putting trapeziums in regions where a function changes slowly if it also has regions where it changes quickly.

The obvious solution to this is, of course, to use trapeziums of different widths.

Adaptive Approximation

Working out in advance how wide the trapeziums should be in any given region of the function to achieve some particular level of accuracy is a rather tricky proposition. However, if we find that for increasing values \(x_0\), \(x_1\) and \(x_2\) we have
\[ \begin{align*} \tfrac{1}{2} \times \left(x_2 - x_0\right) \times \left(f\left(x_0\right) + f\left(x_2\right)\right) &\approx \tfrac{1}{2} \times \left(x_1 - x_0\right) \times \left(f\left(x_0\right) + f\left(x_1\right)\right)\\ &+ \tfrac{1}{2} \times \left(x_2 - x_1\right) \times \left(f\left(x_1\right) + f\left(x_2\right)\right) \end{align*} \]
then we can probably assume that those trapeziums are narrow enough.

Now this is crying out to be turned into a recursive algorithm: start with a single trapezium, then draw a pair of them spanning its end and mid points on the \(x\) axis and, if the sum of their areas is close to that of the original, stop and return that sum; otherwise do the same for both of the pair and return the sum of the resulting areas, as done by the integral function given in listing 1.

Listing 1: A Recursive Algorithm
function integral(f, x0, x2, eps) {
 var x1 = (x0+x2)/2;
 var h2 = (x2-x0)/2;
 var h4 = h2/2;
 var s02 = h2*(f(x0)+f(x2));
 var s01 = h4*(f(x0)+f(x1));
 var s12 = h4*(f(x1)+f(x2));

 if(ak.diff(s02, s01+s12) >= eps) {
  s01 = integral(f, x0, x1, eps);
  s12 = integral(f, x1, x2, eps);
 }
 return s01+s12;
}

Here we're using the ak.diff function to compute the normalised difference between the sums, which you will recall tends to the absolute difference for small arguments and to the relative difference for large ones.
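For the purposes of a standalone example, a normalised difference with these limiting behaviours can be defined as follows, although you should note that this is merely one function having the stated properties rather than necessarily the exact definition of ak.diff.

```javascript
// One possible normalised difference: dividing the absolute difference by one
// plus the smaller magnitude yields approximately the absolute difference for
// small arguments and approximately the relative difference for large ones.
function diff(x, y) {
 return Math.abs(x-y)/(1+Math.min(Math.abs(x), Math.abs(y)));
}
```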

Program 1 demonstrates this algorithm by using it to calculate the integral of the exponential function from \(a\) to \(b\), which trivially equals
\[ \int_a^b e^x \; \mathrm{d}x = \Big[e^x\Big]_a^b = e^b - e^a \]
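In the spirit of program 1, here's a self-contained version of listing 1's algorithm that you can run outside of the ak library by substituting a plain absolute difference for ak.diff.

```javascript
// Listing 1's recursion made standalone by using the absolute difference in
// place of ak.diff; halving the interval reduces the trapezium rule's error
// by roughly a factor of eight, so the recursion terminates.
function integral(f, x0, x2, eps) {
 var x1 = (x0+x2)/2;
 var h2 = (x2-x0)/2;
 var h4 = h2/2;
 var s02 = h2*(f(x0)+f(x2));
 var s01 = h4*(f(x0)+f(x1));
 var s12 = h4*(f(x1)+f(x2));

 if(Math.abs(s02-(s01+s12))>=eps) {
  s01 = integral(f, x0, x1, eps);
  s12 = integral(f, x1, x2, eps);
 }
 return s01+s12;
}
```

For example, integral(Math.exp, 0, 1, 1e-8) lands within a few millionths of \(e-1\).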

Program 1: Integrating The Exponential Function

Whilst this certainly confirms that the algorithm approximates the integral, it can hardly be considered to do so efficiently since it evaluates the integrand at the ends of all but the smallest trapeziums many times over.
Fortunately this is easily fixed by passing the values of the function at the end points down through the recursion, as illustrated by listing 2.

Listing 2: A More Efficient Implementation
function refine(f, x0, x2, f0, f2, s02, eps) {
 var x1 = (x0+x2)/2;
 var h2 = (x1-x0)/2;
 var f1 = f(x1);
 var s01 = h2*(f0+f1);
 var s12 = h2*(f1+f2);

 if(ak.diff(s02, s01+s12) >= eps) {
  s01 = refine(f, x0, x1, f0, f1, s01, eps);
  s12 = refine(f, x1, x2, f1, f2, s12, eps);
 }
 return s01+s12;
}

function integral(f, x0, x2, eps) {
 var f0 = f(x0);
 var f2 = f(x2);
 var s02 = 0.5*(x2-x0)*(f0+f2);
 return refine(f, x0, x2, f0, f2, s02, eps);
}

Note that we are also passing down the area of the trapezium that we are about to replace with two smaller ones so that we don't waste any effort recalculating it either.

Program 2 demonstrates that this is no less accurate than the original implementation.

Program 2: Integrating The Exponential Again

A further weakness of this algorithm, at least as implemented in programs 1 and 2, is that every level of recursion uses the same convergence threshold. This means that when the normalised difference between the area of a trapezium and the sum of the areas of those into which it has been split is tending towards their absolute difference we risk accumulating absolute errors in the approximation of the integral. We can fix this by dividing the convergence threshold by two as we continue the recursion for trapeziums having absolute areas less than one, as is done in program 3.
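The halving rule can be sketched in isolation; childEps is a name invented for this example rather than part of the implementation.

```javascript
// The threshold passed to each half of a split trapezium: halved when the
// trapezium's absolute area is less than one, left alone otherwise, so that
// absolute errors don't accumulate over many small trapeziums. childEps is
// a name invented for this example.
function childEps(area, eps) {
 return Math.abs(area)<1 ? eps/2 : eps;
}
```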

Program 3: Integrating The Exponential Yet Again

This significantly improves the accuracy of the approximation, albeit at a noticeable increase in the time that it takes to calculate. Fortunately we can speed up the algorithm by using something a little more accurate than trapeziums to approximate the integral.

I'll Get Me Newton-Cotes

We can interpret using a trapezium to approximate the area under a function over an interval as approximating the function with a linear function over that interval and integrating it instead. Specifically, if we define
\[ g(x) = f\left(x_1\right) + \frac{x - x_1}{x_2 - x_1} \times \left(f\left(x_2\right) - f\left(x_1\right)\right) \]
then, as proven in derivation 1, we have
\[ \int_{x_1}^{x_2} g(x) \; \mathrm{d}x = \tfrac12 \times \left(x_2 - x_1\right) \times \left(f\left(x_2\right) + f\left(x_1\right)\right) \]
which exactly equals the result that we got from drawing a trapezium upon the graph of \(f\).

Derivation 1: The Integral Of The Linear Approximation
For convenience, let us define
\[ \begin{align*} f_1 &= f\left(x_1\right)\\ f_2 &= f\left(x_2\right) \end{align*} \]
Now, we can rearrange the terms in the function \(g\) to yield
\[ g(x) = \frac{f_2 - f_1}{x_2 - x_1} \times x + \left(f_1 - \frac{f_2 - f_1}{x_2 - x_1} \times x_1\right) \]
Consequently, we have
\[ \begin{align*} \int_{x_1}^{x_2} g(x) \; \mathrm{d}x &= \frac{f_2 - f_1}{x_2 - x_1} \times \int_{x_1}^{x_2} x \; \mathrm{d}x + \left(f_1 - \frac{f_2 - f_1}{x_2 - x_1} \times x_1\right) \times \int_{x_1}^{x_2} 1 \; \mathrm{d}x\\ &= \frac{f_2 - f_1}{x_2 - x_1} \times \left[\tfrac12 x^2\right]_{x_1}^{x_2} + \left(f_1 - \frac{f_2 - f_1}{x_2 - x_1} \times x_1\right) \times \big[x\big]_{x_1}^{x_2}\\ &= \tfrac12 \times \frac{f_2 - f_1}{x_2 - x_1} \times \left(x_2^2-x_1^2\right) + \left(f_1 - \frac{f_2 - f_1}{x_2 - x_1} \times x_1\right) \times \left(x_2-x_1\right)\\ &= \tfrac12 \times \left(f_2 - f_1\right) \times \left(x_2+x_1\right) + f_1 \times \left(x_2-x_1\right) - \left(f_2 - f_1\right) \times x_1\\ &= \tfrac12 f_2 x_2 + \tfrac12 f_2 x_1 - \tfrac12 f_1 x_2 - \tfrac12 f_1 x_1 + f_1 x_2 - f_1 x_1 - f_2 x_1 + f_1 x_1\\ &= \tfrac12 f_2 x_2 - \tfrac12 f_2 x_1 + \tfrac12 f_1 x_2 - \tfrac12 f_1 x_1\\ &= \tfrac12 \times \left(x_2 - x_1\right) \times \left(f_2 + f_1\right) \end{align*} \]

Of course there's no reason why we should limit ourselves to a linear approximation of the function. For example, we can fit a quadratic approximation to three points on the function with
\[ \begin{align*} g(x) &= \frac{\left(x-x_1\right) \times \left(x-x_2\right)}{\left(x_3-x_1\right) \times \left(x_3-x_2\right)} \times f\left(x_3\right)\\ &+ \frac{\left(x-x_1\right) \times \left(x-x_3\right)}{\left(x_2-x_1\right) \times \left(x_2-x_3\right)} \times f\left(x_2\right)\\ &+ \frac{\left(x-x_2\right) \times \left(x-x_3\right)}{\left(x_1-x_2\right) \times \left(x_1-x_3\right)} \times f\left(x_1\right) \end{align*} \]
which, as you can easily confirm, satisfies
\[ \begin{align*} g\left(x_1\right) &= f\left(x_1\right)\\ g\left(x_2\right) &= f\left(x_2\right)\\ g\left(x_3\right) &= f\left(x_3\right) \end{align*} \]
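To see this in action we can construct the quadratic as a higher order function and check it against \(f\) at and between the three points; quadFit is, once again, just a name chosen for this example.

```javascript
// The quadratic interpolant of f at x1, x2 and x3, built term by term from
// the formula above; quadFit is a name chosen for this example.
function quadFit(f, x1, x2, x3) {
 return function(x) {
  return (x-x1)*(x-x2)/((x3-x1)*(x3-x2)) * f(x3)
       + (x-x1)*(x-x3)/((x2-x1)*(x2-x3)) * f(x2)
       + (x-x2)*(x-x3)/((x1-x2)*(x1-x3)) * f(x1);
 };
}
```

Fitting it to the exponential function at zero, one half and one reproduces the function exactly at those points and to within a couple of hundredths in between.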
If we choose \(x_1\), \(x_2\) and \(x_3\) so that
\[ x_3 - x_2 = x_2 - x_1 = h \]
then, by integrating the quadratic, we get
\[ \int_{x_1}^{x_3} f(x) \; \mathrm{d}x \approx \tfrac13 \times h \times \left(f\left(x_1\right) + 4 \times f\left(x_2\right) + f\left(x_3\right)\right) \]
as shown by derivation 2.

Derivation 2: The Integral Of The Quadratic Approximation
Firstly, defining
\[ \begin{align*} f_1 &= f\left(x_1\right)\\ f_2 &= f\left(x_2\right)\\ f_3 &= f\left(x_3\right) \end{align*} \]
we can express \(g\) as
\[ \begin{align*} g(x) &= \frac{\left(x-x_1\right) \left(x-x_2\right)}{2h \times h} f_3 + \frac{\left(x-x_1\right) \left(x-x_3\right)}{h \times -h} f_2 + \frac{\left(x-x_2\right) \left(x-x_3\right)}{-h \times -2h} f_1\\ &= \frac{x^2 - \left(x_1 + x_2\right) x + x_1 x_2}{2h^2} f_3 - \frac{x^2 - \left(x_1 + x_3\right) x + x_1 x_3}{h^2} f_2 + \frac{x^2 - \left(x_2 + x_3\right) x + x_2 x_3}{2h^2} f_1\\ &= g_3(x)f_3 - g_2(x)f_2 + g_1(x)f_1 \end{align*} \]
where
\[ \begin{align*} g_3(x) &= \frac{x^2 - \left(x_1 + x_2\right) x + x_1 x_2}{2h^2}\\ g_2(x) &= \frac{x^2 - \left(x_1 + x_3\right) x + x_1 x_3}{h^2}\\ g_1(x) &= \frac{x^2 - \left(x_2 + x_3\right) x + x_2 x_3}{2h^2} \end{align*} \]
Next, we have
\[ \begin{align*} \int_{x_1}^{x_3} 1 \; \mathrm{d}x &= \big[x\big]_{x_1}^{x_3} = \left(x_3-x_1\right) = 2h\\ \int_{x_1}^{x_3} x \; \mathrm{d}x &= \left[\tfrac12x^2\right]_{x_1}^{x_3} = \tfrac12\left(x_3^2-x_1^2\right) = \tfrac12\left(x_3-x_1\right)\left(x_3+x_1\right) = h\left(x_3+x_1\right)\\ \int_{x_1}^{x_3} x^2 \; \mathrm{d}x &= \left[\tfrac13x^3\right]_{x_1}^{x_3} = \tfrac13\left(x_3^3-x_1^3\right) = \tfrac13\left(x_3-x_1\right)\left(x_3^2+x_3x_1+x_1^2\right) = \tfrac23h\left(x_3^2+x_3x_1+x_1^2\right) \end{align*} \]
and so
\[ \begin{align*} \int_{x_1}^{x_3} g_3(x) \; \mathrm{d}x &= \frac{\tfrac23h\left(x_3^2+x_3x_1+x_1^2\right) - h\left(x_3+x_1\right)\left(x_1 + x_2\right) + 2h x_1 x_2}{2h^2}\\ &= \frac{2\left(x_3^2+x_3x_1+x_1^2\right) - 3\left(x_3+x_1\right)\left(x_1 + x_2\right) + 6 x_1 x_2}{6h}\\ &= \frac{2 x_3^2 + 2 x_3 x_1 + 2 x_1^2 - 3 x_3 x_1 - 3 x_3 x_2 - 3 x_1^2 - 3 x_1 x_2 + 6 x_1 x_2}{6h}\\ &= \frac{2 x_3^2 - x_3 x_1 - x_1^2 - 3 x_3 x_2 + 3 x_1 x_2}{6h}\\ &= \frac{x_3 \left(x_3-x_1\right) + \left(x_3^2 - x_1^2\right) - 3 x_2 \left(x_3-x_1\right)}{6h} = \frac{2hx_3 + 2h\left(x_3 + x_1\right) - 6h x_2}{6h}\\ &= \frac{2 x_3 + x_1 - 3 x_2}{3} = \frac{2 \left(x_3 - x_2\right) - \left(x_2 - x_1\right)}{3} = \frac{2h - h}{3} = \frac{h}{3} \end{align*} \]
Similarly, as you may show for yourself, we have
\[ \begin{align*} \int_{x_1}^{x_3} g_2(x) \; \mathrm{d}x &= -\frac{4h}3\\ \int_{x_1}^{x_3} g_1(x) \; \mathrm{d}x &= \frac{h}3 \end{align*} \]
and so
\[ \int_{x_1}^{x_3} g(x) \; \mathrm{d}x = \frac{h}3f_3 + \frac{4h}3f_2 + \frac{h}3f_1 = \tfrac13h\left(f_1 + 4f_2 + f_3\right) \]

The results of approximating integrals by fitting polynomials to equally spaced evaluations of a function and integrating them instead are known as Newton-Cotes formulae, examples of which include
\[ \begin{align*} \int_{x_1}^{x_2} f(x) \; \mathrm{d}x &\approx \tfrac12 h \left(f\left(x_1\right) + f\left(x_2\right)\right) &&\text{Trapezium rule}\\ \int_{x_1}^{x_3} f(x) \; \mathrm{d}x &\approx \tfrac13 h \left(f\left(x_1\right) + 4 f\left(x_2\right) + f\left(x_3\right)\right) &&\text{Simpson's rule}\\ \int_{x_1}^{x_4} f(x) \; \mathrm{d}x &\approx \tfrac38 h \left(f\left(x_1\right) + 3 f\left(x_2\right) + 3 f\left(x_3\right) + f\left(x_4\right)\right) &&\text{Simpson's 3/8 rule}\\ \int_{x_1}^{x_5} f(x) \; \mathrm{d}x &\approx \tfrac2{45} h \left(7 f\left(x_1\right) + 32 f\left(x_2\right) + 12 f\left(x_3\right) + 32 f\left(x_4\right) + 7 f\left(x_5\right)\right) &&\text{Boole's rule} \end{align*} \]
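One property of Simpson's 3/8 rule worth noting is that, being derived from a cubic fit, it integrates polynomials of degree up to three exactly, which a quick sketch confirms.

```javascript
// Simpson's 3/8 rule over [a,b], using three equal steps of h=(b-a)/3;
// exact for polynomials of degree up to three.
function simpson38(f, a, b) {
 var h = (b-a)/3;
 return 3*h/8 * (f(a) + 3*f(a+h) + 3*f(a+2*h) + f(b));
}
```

For instance, it recovers the integral of \(x^3\) from zero to two exactly, and gets the integral of the exponential function over the unit interval to within a few ten-thousandths in a single application.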
As it happens, an early adaptive numerical approximation algorithm for integrals used Simpson's rule rather than the trapezium rule that we have been using so far[3]. We shall go one step further, however, and use Simpson's 3/8 rule instead.


Adapting the algorithm defined in program 3 from the trapezium rule to Simpson's 3/8 rule is a relatively easy task, as illustrated by listing 3.

Listing 3: simpsonIntegral
  function refine(f, x0, x2, x4, x6, f0, f2, f4, f6, s06, eps, stepper) {
   var x1, x3, x5, h38, f1, f3, f5, s03, s36, e03, e36;

   if(stepper()===0) {
    throw new Error('failure to converge in ak.simpsonIntegral');
   }

   x1  = (x0+x2)/2;
   x3  = (x2+x4)/2;
   x5  = (x4+x6)/2;
   h38 = 3*(x1-x0)/8;

   f1  = f(x1);
   f3  = f(x3);
   f5  = f(x5);
   s03 = h38*(f0+3*f1+3*f2+f3);
   s36 = h38*(f3+3*f4+3*f5+f6);

   if(ak.diff(s03+s36, s06)>=eps) {
    e03 = Math.abs(s03)<1 ? eps/2 : eps;
    e36 = Math.abs(s36)<1 ? eps/2 : eps;

    s03 = refine(f, x0, x1, x2, x3, f0, f1, f2, f3, s03, e03, stepper);
    s36 = refine(f, x3, x4, x5, x6, f3, f4, f5, f6, s36, e36, stepper);
   }
   return s03+s36;
  }

  function simpsonIntegral(f, x0, x6, eps, steps) {
   var x1, x2, x3, x4, x5, h38, f0, f1, f2, f3, f4, f5, f6, s03, s36, stepper;

   if(ak.nativeType(x6)===ak.UNDEFINED_T) {
    x6 = x0;
    x0 = 0;
   }

   if(!isFinite(x0) || !isFinite(x6)) {
    throw new Error('invalid range of integration in ak.simpsonIntegral');
   }

   x1 = (5*x0+x6)/6;
   x2 = (2*x0+x6)/3;
   x3 = (x0+x6)/2;
   x4 = (x0+2*x6)/3;
   x5 = (x0+5*x6)/6;
   h38 = 3*(x1-x0)/8;

   f0 = f(x0);
   f1 = f(x1);
   f2 = f(x2);
   f3 = f(x3);
   f4 = f(x4);
   f5 = f(x5);
   f6 = f(x6);

   s03 = h38*(f0+3*f1+3*f2+f3);
   s36 = h38*(f3+3*f4+3*f5+f6);
   stepper = function(){return steps--;};
   return refine(f, x0, x1, x2, x3, f0, f1, f2, f3, s03, eps, stepper)
        + refine(f, x3, x4, x5, x6, f3, f4, f5, f6, s36, eps, stepper);
  }

Note that, given that this is intended to be the core of a general purpose integrator, we've added a check that the range of integration is finite and the facility to specify the maximum number of recursions with the steps argument in addition to the convergence threshold given by eps, keeping track of the former with the stepper function. Furthermore, if x6 is undefined then we compute the integral from zero to x0 by first setting x6 to x0 and then setting x0 to zero.
Apart from these changes we have simply replaced the division of one application of the trapezium rule into two with the division of one application of Simpson's 3/8 rule into two at each step, starting with an initial division to improve the accuracy of the first steps of the approximation.

All that's left to do is wrap this up in a function that transforms a function into its integral, as done by ak.simpsonIntegral in listing 4.

Listing 4: ak.simpsonIntegral
ak.simpsonIntegral = function(f, threshold, steps) {
 if(ak.nativeType(f)!==ak.FUNCTION_T) {
  throw new Error('invalid function in ak.simpsonIntegral');
 }

 threshold = ak.nativeType(threshold)===ak.UNDEFINED_T ? Math.pow(ak.EPSILON, 0.75)
                                                       : Math.abs(threshold);
 if(isNaN(threshold)) {
  throw new Error('non-numeric threshold passed to ak.simpsonIntegral');
 }

 steps = ak.nativeType(steps)===ak.UNDEFINED_T ? 18 : ak.floor(Math.abs(steps));
 if(isNaN(steps)) throw new Error('non-numeric steps passed to ak.simpsonIntegral');
 steps = Math.pow(2, steps);

 return function(x0, x1) {
  return simpsonIntegral(f, x0, x1, threshold, steps);
 };
};
Here we require that the first argument of the integral is a function and that the convergence threshold, if provided, is a number, giving it a default value that corresponds to three quarters of the digits of a double precision floating point number otherwise. We likewise give the maximum number of steps a default value of eighteen; in keeping with our other integrators this represents the number of times that we can split every subinterval in two and so we raise two to the power of it to yield the maximum total number of divisions.
Once these actions are out of the way we're just left to return a function that uses simpsonIntegral to approximate the integral from x0 to x1 or, if x1 isn't passed to it and is consequently undefined, from zero to x0 as noted above.
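That argument shuffling is worth seeing in isolation; bounds is a name invented for this sketch of the handling rather than part of the implementation.

```javascript
// If the upper bound of integration is undefined then we integrate from zero
// to the first argument; bounds is a name invented for this sketch.
function bounds(x0, x1) {
 if(typeof x1==='undefined') {
  x1 = x0;
  x0 = 0;
 }
 return [x0, x1];
}
```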

Program 4 shows how to use ak.simpsonIntegral, which is defined in SimpsonIntegral.js, by approximating the integral of the exponential function for a fifth time.

Program 4: Using ak.simpsonIntegral

A reasonably convincing demonstration that it works, I'd say!

Adapt Or Die!

Now you may be forgiven for thinking that ak.simpsonIntegral is rather redundant given that we already have an accurate and efficient integrator in ak.rombergIntegral, as demonstrated by program 5 which repeats the numerical integrals from program 4 using it instead.

Program 5: What About ak.rombergIntegral?

That program 5 is both faster and more accurate than program 4 might even move you to consider that its development has been a complete waste of effort. However, ak.simpsonIntegral does have an advantage over ak.rombergIntegral; it is significantly more robust.
To demonstrate this we shall try integrating the normal PDF for ever smaller standard deviations. Program 6 shows the effect of reducing the standard deviation on the normal PDF using our ak.normalPDF function[4].
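For readers without the ak library to hand, the normal PDF with mean \(\mu\) and standard deviation \(\sigma\) follows the standard formula, sketched here as a plain function; normalPDF below is a local stand-in for this example rather than ak.normalPDF itself, whose interface may differ.

```javascript
// The normal PDF with mean mu and standard deviation sigma; a stand-in for
// ak.normalPDF used only for illustration. The peak at mu grows like one
// over sigma, concentrating the area as sigma shrinks.
function normalPDF(mu, sigma) {
 return function(x) {
  var z = (x-mu)/sigma;
  return Math.exp(-0.5*z*z)/(sigma*Math.sqrt(2*Math.PI));
 };
}
```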

Program 6: The Normal PDF

So, as the standard deviation is reduced, the vastly greater part of the area under the curve is squeezed into a smaller and smaller interval. Since our ak.rombergIntegral integrator uses trapeziums of equal width to approximate this area at each step, it spends an inordinate amount of time constructing trapeziums that have almost zero area when the standard deviation is very small. So much so, in fact, that it will soon fail to converge, as demonstrated by program 7.

Program 7: Romberg Integral Of The PDF

Our ak.simpsonIntegral integrator, on the other hand, will quickly stop evaluating the PDF in regions where it is very close to zero and so has a much better chance of converging, as demonstrated by program 8.

Program 8: Simpson Integral Of The PDF

Of course, both algorithms can entirely fail to find a region of the PDF that isn't close to zero, which you can confirm by setting mu to 0.1 in both programs 7 and 8. Nevertheless, the fact that ak.simpsonIntegral can efficiently handle integrals that are problematic for ak.rombergIntegral makes it a valuable addition to our library.


[1] Trapezium Artistry, www.thusspakeak.com, 2015.

[2] The Tip Of The Romberg, www.thusspakeak.com, 2015.

[3] Kuncir, G., Algorithm 103: Simpson's rule integrator, Communications of the ACM, Vol. 5, No. 6, 1962.

[4] Define Normal, www.thusspakeak.com, 2016.
