# Cubic Line Division

Last time[1] we took a look at how we can use linear interpolation to approximate a function from a set of points on its graph by connecting them with straight lines. As a consequence the result isn't smooth, meaning that its derivative isn't continuous and is undefined at the $$x$$ values of the points, known as the nodes of the interpolation.
In this post we shall see how we can define a smooth interpolation by connecting the points with curves rather than straight lines.

### The Shape Of The Curves

To create a smooth interpolation we must ensure that the curves to the left and to the right of each node have equal values and derivatives at that node. This means that in addition to the set of points $$\left(x_i, y_i\right)$$ we need a set of gradients $$g_i$$ and the functions $$f_i$$ defining each curve must satisfy the four identities
\begin{align*} f_i\left(x_i\right) &= y_i\\ f_i\left(x_{i+1}\right) &= y_{i+1}\\ f^\prime_i\left(x_i\right) &= g_i\\ f^\prime_i\left(x_{i+1}\right) &= g_{i+1} \end{align*}
where $$f^\prime_i$$ is the derivative of $$f_i$$.

The simplest functions that will do so are cubic polynomials
$f_i(x) = a_i \times x^3 + b_i \times x^2 + c_i \times x + d_i$
where the four parameters $$a_i$$, $$b_i$$, $$c_i$$ and $$d_i$$ are uniquely defined by the four identities. Unfortunately, in this form they are just as susceptible to rounding errors as were the linear formulae that we first used to represent the straight lines connecting two points. To fix the problem we can use the same trick that we did for them and calculate each $$f_i$$ with
$t_{0,i}(x) \times y_{i+1} + t_{1,i}(x) \times y_i - t_{0,i}(x) \times t_{1,i}(x) \times \left[x_{0,i}(x) \times g_{i+1} - x_{1,i}(x) \times g_i + \left(t_{1,i}(x)-t_{0,i}(x)\right) \times \Delta y_i\right]$
where
\begin{align*} t_{0,i}(x) &= \frac{x_{0,i}(x)}{\Delta x_i}\\\\ t_{1,i}(x) &= \frac{x_{1,i}(x)}{\Delta x_i} \end{align*}
and
\begin{align*} x_{0,i}(x) &= x-x_i\\ x_{1,i}(x) &= x_{i+1}-x\\ \Delta x_i &= x_{i+1}-x_i\\ \Delta y_i &= y_{i+1}-y_i \end{align*}
Note that since we never multiply together more than three results of $$x_{0,i}$$, $$x_{1,i}$$, $$t_{0,i}$$ and $$t_{1,i}$$, which are all linear functions of $$x$$, this formula is cubic in $$x$$.
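To make the arithmetic concrete, here is a sketch of the formula for a single pair of nodes; the function and argument names are illustrative rather than those that we shall use in the implementation.

```javascript
// A sketch of the interval formula for one pair of nodes (x0, y0) and
// (x1, y1) with gradients g0 and g1; the names here are illustrative.
function hermite(x, x0, y0, g0, x1, y1, g1) {
 var dx = x1-x0;  // delta x
 var dy = y1-y0;  // delta y
 var u0 = x-x0;   // x0(x)
 var u1 = x1-x;   // x1(x)
 var t0 = u0/dx;  // t0(x)
 var t1 = u1/dx;  // t1(x)
 return t0*y1 + t1*y0 - t0*t1*(u0*g1 - u1*g0 + (t1-t0)*dy);
}
```

Since $$t_{0,i}$$ and $$t_{1,i}$$ are exactly zero and one at the nodes, the node values are recovered exactly.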

Just as we found last time, the rules of IEEE 754 floating point arithmetic[2] guarantee that the values
\begin{align*} t_{0,i}(x_i) &= 0\\ t_{0,i}(x_{i+1}) &= 1\\ t_{1,i}(x_i) &= 1\\ t_{1,i}(x_{i+1}) &= 0 \end{align*}
will be exactly recovered and therefore so will
\begin{align*} f_i(x_i) &= 0 \times y_{i+1} + 1 \times y_i - 0 \times 1 \times \left[0 \times g_{i+1} - \left(x_{i+1}-x_i\right) \times g_i + \left(1-0\right) \times \Delta y_i\right] = y_i\\ f_i(x_{i+1}) &= 1 \times y_{i+1} + 0 \times y_i - 1 \times 0 \times \left[\left(x_{i+1}-x_i\right) \times g_{i+1} - 0 \times g_i + \left(0-1\right) \times \Delta y_i\right] = y_{i+1} \end{align*}
Derivation 1 shows that the gradients at the nodes are also correct, although since they're not an output of the interpolation we don't care about any potential rounding errors in the formula for the derivative.

Derivation 1: The Gradients At The Nodes
Firstly, using a dash to represent the derivative with respect to $$x$$, we have
\begin{align*} x^\prime_{0,i}(x) &= \phantom{-}1 & t^\prime_{0,i}(x) &= \phantom{-}\tfrac{1}{\Delta x_i}\\ x^\prime_{1,i}(x) &= -1 & t^\prime_{1,i}(x) &= -\tfrac{1}{\Delta x_i} \end{align*}
so that
$\frac{\mathrm{d}}{\mathrm{d}x} \left(t_{0,i}(x) \times y_{i+1} + t_{1,i}(x) \times y_i\right) = \frac{y_{i+1}-y_i}{\Delta x_i} = \frac{\Delta y_i}{\Delta x_i}$
Next, by the product rule
$\frac{\mathrm{d}}{\mathrm{d}x} \left(t_{0,i}(x) \times t_{1,i}(x)\right) = t^\prime_{0,i}(x) \times t_{1,i}(x) + t_{0,i}(x) \times t^\prime_{1,i}(x) = \frac{t_{1,i}(x) - t_{0,i}(x)}{\Delta x_i}$
and
$\frac{\mathrm{d}}{\mathrm{d}x} \left(t_{0,i}(x) \times t_{1,i}(x) \times \left[x_{0,i}(x) \times g_{i+1} - x_{1,i}(x) \times g_i + \left(t_{1,i}(x)-t_{0,i}(x)\right) \times \Delta y_i\right]\right)\\ \quad = \frac{t_{1,i}(x) - t_{0,i}(x)}{\Delta x_i} \times \left[x_{0,i}(x) \times g_{i+1} - x_{1,i}(x) \times g_i + \left(t_{1,i}(x)-t_{0,i}(x)\right) \times \Delta y_i\right]\\ \quad\quad + t_{0,i}(x) \times t_{1,i}(x) \times \left[x^\prime_{0,i}(x) \times g_{i+1} - x^\prime_{1,i}(x) \times g_i + \left(t^\prime_{1,i}(x)-t^\prime_{0,i}(x)\right) \times \Delta y_i\right]\\ \quad = \frac{t_{1,i}(x) - t_{0,i}(x)}{\Delta x_i} \times \left[x_{0,i}(x) \times g_{i+1} - x_{1,i}(x) \times g_i + \left(t_{1,i}(x)-t_{0,i}(x)\right) \times \Delta y_i\right]\\ \quad\quad + t_{0,i}(x) \times t_{1,i}(x) \times \left[g_i + g_{i+1} - \frac{2\Delta y_i}{\Delta x_i}\right]$
Putting these together yields
\begin{align*} f^\prime(x) &= \frac{\Delta y_i}{\Delta x_i} - \frac{t_{1,i}(x) - t_{0,i}(x)}{\Delta x_i} \times \left[x_{0,i}(x) \times g_{i+1} - x_{1,i}(x) \times g_i + \left(t_{1,i}(x)-t_{0,i}(x)\right) \times \Delta y_i\right]\\ &\quad- t_{0,i}(x) \times t_{1,i}(x) \times \left[g_i + g_{i+1} - \frac{2\Delta y_i}{\Delta x_i}\right] \end{align*}
Evaluating at the nodes gives
\begin{align*} f^\prime\left(x_i\right) &= \frac{\Delta y_i}{\Delta x_i} - \frac{1-0}{\Delta x_i} \times \left[0 \times g_{i+1} - \Delta x_i \times g_i + \left(1-0\right) \times \Delta y_i\right] - 0 \times 1 \times \left[g_i + g_{i+1} - \frac{2\Delta y_i}{\Delta x_i}\right]\\ &= \frac{\Delta y_i}{\Delta x_i} - \frac{1}{\Delta x_i} \times \left[-\Delta x_i \times g_i + \Delta y_i\right] = \frac{\Delta y_i}{\Delta x_i} + g_i - \frac{\Delta y_i}{\Delta x_i} = g_i \end{align*}
and
\begin{align*} f^\prime\left(x_{i+1}\right) &= \frac{\Delta y_i}{\Delta x_i} - \frac{0 - 1}{\Delta x_i} \times \left[\Delta x_i \times g_{i+1} - 0 \times g_i + \left(0-1\right) \times \Delta y_i\right] - 1 \times 0 \times \left[g_i + g_{i+1} - \frac{2\Delta y_i}{\Delta x_i}\right]\\ &= \frac{\Delta y_i}{\Delta x_i} + \frac{1}{\Delta x_i} \times \left[\Delta x_i \times g_{i+1} - \Delta y_i\right] = \frac{\Delta y_i}{\Delta x_i} + g_{i+1} - \frac{\Delta y_i}{\Delta x_i} = g_{i+1} \end{align*}
as required.

Note that we won't typically know the values of the gradients and so we'll have to estimate them from the data. A simple, and not too unreasonable, scheme is to base them on the finite differences with
$g_i = \begin{cases} \dfrac{\Delta y_i}{\Delta x_i} & i=0\\ \\ \dfrac12 \times \left(\dfrac{\Delta y_i}{\Delta x_i} + \dfrac{\Delta y_{i-1}}{\Delta x_{i-1}}\right) & 0 < i < n-1\\ \\ \dfrac{\Delta y_{i-1}}{\Delta x_{i-1}} & i=n-1 \end{cases}$
for $$n$$ nodes.
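This scheme translates directly into code; estimateGradients is an illustrative name and the sketch assumes sorted arrays of node $$x$$ and $$y$$ values.

```javascript
// A sketch of the finite difference gradient scheme; one sided
// differences at the ends and averaged differences in between.
function estimateGradients(x, y) {
 var n = x.length;
 var g = new Array(n);
 var i, d0, d1;

 g[0] = (y[1]-y[0])/(x[1]-x[0]);
 for(i=1;i<n-1;++i) {
  d0 = (y[i]-y[i-1])/(x[i]-x[i-1]);
  d1 = (y[i+1]-y[i])/(x[i+1]-x[i]);
  g[i] = 0.5*d0 + 0.5*d1;
 }
 g[n-1] = (y[n-1]-y[n-2])/(x[n-1]-x[n-2]);
 return g;
}
```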

### Finding The Interpolation Nodes

We shall use the same tricks to efficiently find the pair of nodes to interpolate between that we did for our implementation of linear interpolation. Specifically, if the nodes are almost uniformly spaced we can be certain that a value $$x$$ will satisfy one of the relations
\begin{align*} x_{i-1} \leqslant &x \leqslant x_i\\ x_i \leqslant &x \leqslant x_{i+1}\\ x_{i+1} \leqslant &x \leqslant x_{i+2} \end{align*}
for
$i = \bigg\lfloor\frac{x-x_0}{x_{n-1}-x_0} \times n\bigg\rfloor$
where the brackets represent the greatest integer that is less than or equal to the term between them. The canJump function in listing 1 tests whether the nodes are sufficiently evenly spaced for these relations to hold.
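As a rough sketch, the index calculation, clamped so that it cannot point before the first node or past the penultimate one, looks like this; jumpIndex is an illustrative name rather than one used by the ak library.

```javascript
// A sketch of the jump index for near uniformly spaced nodes in xs,
// clamped to the valid range of lower bound indices; the name is
// illustrative rather than one used by the ak library.
function jumpIndex(x, xs) {
 var n = xs.length;
 var i = Math.floor((x-xs[0])/(xs[n-1]-xs[0])*n);
 return Math.max(0, Math.min(i, n-2));
}
```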

Listing 1: Testing Whether We Can Jump
function canJump(state) {
 var n = state.nodes.length;
 var x0 = state.nodes[0].x;
 var dx = state.nodes[n-1].x-x0;
 var i, j;

 for(i=1;i<n-1;++i) {
  j = ak.floor(n*((state.nodes[i].x-x0)/dx));
  if(j<i-1 || j>i) return false;
 }
 state.x0 = x0;
 state.dx = dx;
 return true;
}


The jumpLB function given in listing 2, together with the jumpLBFwd and jumpLBRev helper functions, calculates this index whilst ensuring that it cannot end up pointing before the first node or after the last.

Listing 2: Jumping To The Correct Lower Bound
function jumpLBFwd(x, state) {
 var n = state.nodes.length;
 var i = Math.min(ak.floor(n*((x-state.x0)/state.dx)), n-2);

 if(x<state.nodes[i].x) {if(i>0) --i;}
 else if(x>state.nodes[i+1].x) {if(i<n-2) ++i;}
 state.lb = i;
}

function jumpLBRev(x, state) {
 var n = state.nodes.length;
 var i = Math.max(ak.floor(n*((x-state.x0)/state.dx)), 0);

 if(x<state.nodes[i].x) {if(i>0) --i;}
 else if(x>state.nodes[i+1].x) {if(i<n-2) ++i;}
 state.lb = i;
}

function jumpLB(x, state) {
 x = Number(x);
 if(x>state.nodes[state.lb+1].x) {
  if(state.lb<state.nodes.length-2) jumpLBFwd(x, state);
 }
 else if(x<state.nodes[state.lb].x) {
  if(state.lb>0) jumpLBRev(x, state);
 }
 return x;
}


If the nodes aren't sufficiently evenly distributed then we shall first check whether the nodes immediately to the left or to the right of the current pair enclose $$x$$ and, if not, perform a binary search using our ak._unsafeLowerBound function[3], again taking care not to move past the first or last nodes, as shown by listing 3.

Listing 3: Searching For The Correct Lower Bound
function compare(l, r) {
 return l.x-r;
}

function findLBFwd(x, state) {
 if(++state.lb<state.nodes.length-2 && x>state.nodes[state.lb+1].x) {
  state.lb = ak._unsafeLowerBound(state.nodes, x, compare,
                                  state.lb+2, state.nodes.length-1)-1;
 }
}

function findLBRev(x, state) {
 if(--state.lb>0 && x<state.nodes[state.lb].x) {
  state.lb = ak._unsafeLowerBound(state.nodes, x, compare, 1, state.lb)-1;
 }
}

function findLB(x, state) {
 x = Number(x);
 if(x>state.nodes[state.lb+1].x) {
  if(state.lb<state.nodes.length-2) findLBFwd(x, state);
 }
 else if(x<state.nodes[state.lb].x) {
  if(state.lb>0) findLBRev(x, state);
 }
 return x;
}


Note that we know that ak._unsafeLowerBound always passes values from the array as the first argument to the comparison function compare and so it can safely read their x properties.

### Constructing The Cubic Spline Interpolation

The rest of the implementation is also more or less the same as that of the linear interpolation, beginning with the ak.cubicSplineInterpolate function given in listing 4.

Listing 4: ak.cubicSplineInterpolate
ak.cubicSplineInterpolate = function() {
 var state = {nodes:[], lb:0};
 var arg0  = arguments[0];
 var f;

 constructors[ak.nativeType(arg0)](state, arg0, arguments);
 prepareNodes(state.nodes);
 calcDerivs(state);

 if(state.nodes.length===2) {
  f = function(x) {
   return interpolatePair(Number(x), state.nodes, 0);
  };
 }
 else if(state.nodes.length===3) {
  f = function(x) {
   x = Number(x);
   return interpolatePair(x, state.nodes, x<state.nodes[1].x ? 0 : 1);
  };
 }
 else if(canJump(state)) {
  f = function(x) {
   return interpolatePair(jumpLB(x, state), state.nodes, state.lb);
  };
 }
 else {
  f = function(x) {
   return interpolatePair(findLB(x, state), state.nodes, state.lb);
  };
 }
 f.nodes = function() {return copyNodes(state.nodes);};
 return Object.freeze(f);
};

var constructors = {};


Once again we've optimised the search for the lower bound node for the cases of two and three nodes by noting that for two nodes it's unnecessary and for three we need only check whether the argument of the interpolation is or is not less than the middle node.

Next, the constructors object has been rewritten to allow the user to specify the gradients at the nodes. Specifically, they may be given as a function to calculate them if the values at the nodes are also calculated with a function, as an array containing them if the latter are also contained in an array, or as an additional property of the nodes if they are passed as objects, as shown by listing 5.
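The shape of that dispatch can be sketched with a toy version that handles just an array of x values paired with an array or a function of y values; typeOf, ctors and makeNodes are illustrative stand-ins for ak.nativeType and the real constructors object.

```javascript
// A toy sketch of the nested dispatch on argument types used by the
// constructors object; the names here are illustrative, not the ak
// library's own.
function typeOf(x) {
 if(x===undefined) return 'undefined';
 if(Array.isArray(x)) return 'array';
 return typeof x;
}

var ctors = {};
ctors['array'] = {};

// x values with y values in a second array
ctors['array']['array'] = function(nodes, x, y) {
 for(var i=0;i<x.length;++i) nodes.push({x:x[i], y:y[i]});
};

// x values with y values calculated by a function
ctors['array']['function'] = function(nodes, x, f) {
 for(var i=0;i<x.length;++i) nodes.push({x:x[i], y:f(x[i])});
};

function makeNodes(arg0, arg1) {
 var nodes = [];
 ctors[typeOf(arg0)][typeOf(arg1)](nodes, arg0, arg1);
 return nodes;
}
```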

Listing 5: The Constructors
constructors[ak.ARRAY_T] = function(state, nodes, args) {
 var arg1 = args[1];
 constructors[ak.ARRAY_T][ak.nativeType(arg1)](state, nodes, arg1, args);
};

constructors[ak.ARRAY_T][ak.ARRAY_T] = function(state, x, y, args) {
 var arg2 = args[2];
 var n = x.length;
 var i;

 if(y.length!==n) {
  throw new Error('node size mismatch in ak.cubicSplineInterpolate');
 }
 state.nodes.length = n;
 for(i=0;i<n;++i) state.nodes[i] = {x:Number(x[i]), y:Number(y[i])};
 constructors[ak.ARRAY_T][ak.ARRAY_T][ak.nativeType(arg2)](state, n, arg2);
};

constructors[ak.ARRAY_T][ak.ARRAY_T][ak.ARRAY_T] = function(state, n, g) {
 var i;

 if(g.length!==n) {
  throw new Error('node size mismatch in ak.cubicSplineInterpolate');
 }
 for(i=0;i<n;++i) state.nodes[i].g = g[i];
};

constructors[ak.ARRAY_T][ak.ARRAY_T][ak.UNDEFINED_T] = function() {
};

constructors[ak.ARRAY_T][ak.FUNCTION_T] = function(state, x, f, args) {
 var arg2 = args[2];
 var n = x.length;
 var i, xi;

 state.nodes.length = n;
 for(i=0;i<n;++i) {
  xi = Number(x[i]);
  state.nodes[i] = {x:xi, y:Number(f(xi))};
 }
 constructors[ak.ARRAY_T][ak.FUNCTION_T][ak.nativeType(arg2)](state, n, arg2);
};

constructors[ak.ARRAY_T][ak.FUNCTION_T][ak.FUNCTION_T] = function(state, n, df) {
 var i;
 for(i=0;i<n;++i) state.nodes[i].g = df(state.nodes[i].x);
};

constructors[ak.ARRAY_T][ak.FUNCTION_T][ak.UNDEFINED_T] = function() {
};

constructors[ak.ARRAY_T][ak.UNDEFINED_T] = function(state, nodes) {
 var n = nodes.length;
 var i, x, y, g;

 state.nodes.length = n;
 for(i=0;i<n;++i) {
  x = nodes[i].x; if(ak.nativeType(x)===ak.FUNCTION_T) x = x();
  y = nodes[i].y; if(ak.nativeType(y)===ak.FUNCTION_T) y = y();
  g = nodes[i].g; if(ak.nativeType(g)===ak.FUNCTION_T) g = g();
  state.nodes[i] = {x:Number(x), y:Number(y), g:g};
 }
};


Note that we don't explicitly convert the gradients to numbers since we shall use undefined to mean unknown.

Once again we must sort the nodes so that we can search for those that enclose the arguments of the interpolation efficiently and then check that no two of them are equal and that they and the values at them are numbers, as done by prepareNodes in listing 6.

Listing 6: Initialising The Nodes
function prepareNodes(nodes) {
 var n = nodes.length;
 var i;

 if(n<2) throw new Error('too few nodes in ak.cubicSplineInterpolate');

 for(i=0;i<n;++i) {
  if(!isFinite(nodes[i].x) || !isFinite(nodes[i].y)) {
   throw new Error('invalid node in ak.cubicSplineInterpolate');
  }
 }

 nodes.sort(function(l, r){return l.x-r.x;});

 for(i=1;i<n;++i) {
  if(nodes[i].x===nodes[i-1].x) {
   throw new Error('duplicate node in ak.cubicSplineInterpolate');
  }
  nodes[i-1].dx = nodes[i].x-nodes[i-1].x;
  nodes[i-1].dy = nodes[i].y-nodes[i-1].y;
 }
}


The calcDerivs function given in listing 7 replaces any unknown gradients using the scheme described above and checks that they are all finite numbers afterwards.

Listing 7: Calculating The Gradients
function calcDerivs(state) {
 var n = state.nodes.length;
 var i, d0, d1;

 if(ak.nativeType(state.nodes[0].g)===ak.UNDEFINED_T) {
  state.nodes[0].g = (state.nodes[1].y-state.nodes[0].y)
                   / (state.nodes[1].x-state.nodes[0].x);
 }

 for(i=1;i<n-1;++i) {
  if(ak.nativeType(state.nodes[i].g)===ak.UNDEFINED_T) {
   d0 = (state.nodes[i].y-state.nodes[i-1].y)
      / (state.nodes[i].x-state.nodes[i-1].x);
   d1 = (state.nodes[i+1].y-state.nodes[i].y)
      / (state.nodes[i+1].x-state.nodes[i].x);
   state.nodes[i].g = 0.5*d0+0.5*d1;
  }
 }

 if(ak.nativeType(state.nodes[n-1].g)===ak.UNDEFINED_T) {
  state.nodes[n-1].g = (state.nodes[n-1].y-state.nodes[n-2].y)
                     / (state.nodes[n-1].x-state.nodes[n-2].x);
 }

 for(i=0;i<n;++i) {
  state.nodes[i].g = Number(state.nodes[i].g);
  if(!isFinite(state.nodes[i].g)) {
   throw new Error('invalid node in ak.cubicSplineInterpolate');
  }
 }
}


The principal difference between cubic spline and linear interpolation is, of course, the formula that we use to interpolate between a pair of nodes, as implemented by interpolatePair in listing 8.

Listing 8: Interpolating Between Nodes
function interpolatePair(x, nodes, lb) {
 var n0 = nodes[lb];
 var n1 = nodes[lb+1];
 var x0 = x-n0.x;
 var x1 = n1.x-x;
 var t0 = x0/n0.dx;
 var t1 = x1/n0.dx;
 return n1.y*t0 + n0.y*t1 - t0*t1*(x0*n1.g - x1*n0.g + n0.dy*(t1-t0));
}


Note that this time I'm only allowing interpolation of numeric values; the decision to omit other types for which the ak library has defined arithmetic overloads was not an arbitrary one, but I must ask you to wait until the next post for a full justification of it.

Finally, the copyNodes function given in listing 9 makes a deep copy of the nodes, their values and their gradients to be returned by the nodes property access method that we added to the interpolation function in listing 4.

Listing 9: Deep Copying The Nodes
function copyNodes(nodes) {
 var n = nodes.length;
 var copy = new Array(n);
 var i;

 for(i=0;i<n;++i) copy[i] = {x:nodes[i].x, y:nodes[i].y, g:nodes[i].g};
 return copy;
}


This implementation can be found in CubicSplineInterpolate.js.

### Using ak.cubicSplineInterpolate

Program 1 illustrates how much more natural cubic spline interpolation, in green, looks than linear interpolation, in red, by using them both to approximate the sine function, giving the former the exact gradients with the cosine function.

Whilst this example even extrapolates accurately beyond the first and last nodes, at least within the range plotted by the graph, that will not generally be the case, as demonstrated by program 2, which compares linear interpolation to cubic spline interpolation with approximated gradients.

Fortunately we can always guarantee that a cubic spline interpolation will extrapolate identically to linear interpolation by explicitly setting the first and last nodes' gradients to
\begin{align*} g_0 &= \frac{\Delta y_0}{\Delta x_0}\\ g_{n-1} &= \frac{\Delta y_{n-2}}{\Delta x_{n-2}} \end{align*}
then inserting a new node before the first with
\begin{align*} x_{-1} &= x_0 - \Delta x_0\\ y_{-1} &= y_0 - \Delta y_0\\ g_{-1} &= g_0 \end{align*}
and another after the last with
\begin{align*} x_n &= x_{n-1} + \Delta x_{n-2}\\ y_n &= y_{n-1} + \Delta y_{n-2}\\ g_n &= g_{n-1} \end{align*}
so that the first and last curves are straight lines, as demonstrated by program 3.
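A sketch of that padding, assuming sorted arrays of at least two node x and y values together with an array of gradients whose ends will be overwritten; padForLinearExtrapolation is an illustrative name.

```javascript
// A sketch of the padding scheme for linear extrapolation; the name
// is illustrative rather than one used by the ak library.
function padForLinearExtrapolation(x, y, g) {
 var n = x.length;

 // force the end gradients to the one sided finite differences
 g[0]   = (y[1]-y[0])/(x[1]-x[0]);
 g[n-1] = (y[n-1]-y[n-2])/(x[n-1]-x[n-2]);

 // mirror the first and last steps so the end curves are straight lines
 return {
  x: [x[0]-(x[1]-x[0])].concat(x, [x[n-1]+(x[n-1]-x[n-2])]),
  y: [y[0]-(y[1]-y[0])].concat(y, [y[n-1]+(y[n-1]-y[n-2])]),
  g: [g[0]].concat(g, [g[n-1]])
 };
}
```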

Program 3: Extrapolating Linearly

Unfortunately cubic spline interpolation has a much more significant problem, and in the next post we shall take a look both at what it is and at one way that we can deal with it.

### References

[1] Chalk The Lines, www.thusspakeak.com, 2018.

[2] IEEE standard for binary floating-point arithmetic. ANSI/IEEE std 754-1985, 1985.

[3] I Still Haven't Found What I'm Looking For, www.thusspakeak.com, 2017.