`ak.minHeap` implementation of the min-heap structure to cache the distances between clusters, saving us the expense of recalculating them for clusters that don't change from one step in the hierarchy to the next. Recall that we used three different schemes for calculating the distance between a pair of clusters: the average distance between their members, known as average linkage; the distance between their closest members, known as single linkage; and the distance between their farthest members, known as complete linkage. I concluded by noting that our algorithm was about as efficient as possible in general, but that there is a much more efficient scheme for single linkage clusterings; efficient enough that sorting the clusters in each clustering by size would be the most costly operation, and so in this post we shall implement objects to represent clusterings that don't do that.
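To make the caching idea concrete, here is a minimal sketch, with entirely hypothetical names rather than the `ak` library's actual implementation, of how a priority queue of cluster pair distances might be lazily purged of stale entries as clusters are merged; a sorted array stands in for a proper min-heap purely to keep the example short.

```javascript
// Hypothetical sketch: cache the distances between pairs of clusters in a
// priority queue and, after a merge, simply drop entries that refer to
// clusters that no longer exist rather than recalculating everything.
// A sorted array stands in for a proper min-heap for brevity.

var queue = [];   // entries of the form {d: distance, lhs: id, rhs: id}
var alive = {};   // maps the ids of clusters not yet merged away to true

function enqueue(entry) {
  queue.push(entry);
  queue.sort(function(a, b){return a.d-b.d;});
}

// pop entries until one is found whose clusters both still exist; stale
// entries are discarded as they surface rather than updated in place
function closestPair() {
  while(queue.length>0) {
    var entry = queue.shift();
    if(alive[entry.lhs] && alive[entry.rhs]) return entry;
  }
  return undefined;
}
```

When a pair of clusters is merged we simply delete their ids from `alive`, enqueue the new cluster's distances and let the stale entries fall out of the queue as they reach the top.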

We did this by selecting the closest pair of clusters in one clustering and merging them to create the next, using one of three different measures of the distance between a pair of clusters: the average distance between their members, the distance between their nearest members and the distance between their farthest members, known as average linkage, single linkage and complete linkage respectively.
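Expressed directly in code, and purely as an illustration rather than the library's own definitions, those three measures might look something like this, given clusters represented as arrays of members and a function `d` yielding the distance between a pair of them:

```javascript
// Illustrative definitions of the three linkage measures for clusters lhs
// and rhs, given a function d measuring the distance between two members.

function averageLinkage(lhs, rhs, d) {
  var sum = 0;
  for(var i=0;i<lhs.length;++i) for(var j=0;j<rhs.length;++j) sum += d(lhs[i], rhs[j]);
  return sum/(lhs.length*rhs.length);
}

function singleLinkage(lhs, rhs, d) {
  var min = Infinity;
  for(var i=0;i<lhs.length;++i) for(var j=0;j<rhs.length;++j) min = Math.min(min, d(lhs[i], rhs[j]));
  return min;
}

function completeLinkage(lhs, rhs, d) {
  var max = -Infinity;
  for(var i=0;i<lhs.length;++i) for(var j=0;j<rhs.length;++j) max = Math.max(max, d(lhs[i], rhs[j]));
  return max;
}
```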

Unfortunately our implementation came in at a rather costly

We then went on to define the `ak.clade` type to represent hierarchical clusterings as trees, so named because that's what they're called in biology when they are used to show the relationships between species and their common ancestors. Now that we have those structures in place we're ready to see how to create hierarchical clusterings and so in this post we shall start with a simple, general purpose, but admittedly rather inefficient, way to do so.
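By way of a sketch only, since it glosses over the `ak.clade` type's actual interface, such a tree need be little more than leaves holding the indices of data and branches holding the pairs of clades whose clusters were merged:

```javascript
// Hypothetical sketch of a clade-like tree; a leaf holds the index of a
// datum and a branch holds the two clades that were merged to create it.

function leaf(index)      {return {index: index};}
function branch(lhs, rhs) {return {lhs: lhs, rhs: rhs};}

// recover the members of the cluster that a clade represents
function members(clade) {
  return 'index' in clade ? [clade.index]
                          : members(clade.lhs).concat(members(clade.rhs));
}
```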

Note that both of these algorithms use a heuristic, or rule of thumb, to assign data to clusters, but there's another way to construct clusterings: define a heuristic to measure how close to each other a pair of clusters are and then, starting with each datum in a cluster of its own, progressively merge the closest pairs until we end up with a single cluster containing all of the data. This means that we'll end up with a sequence of clusterings and so before we can look at such algorithms we'll need a structure to represent them.
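A minimal sketch of that merging process, using hypothetical names and a naive search for the closest pair of clusters, might take the form:

```javascript
// Naive agglomerative clustering sketch: start with singleton clusters and
// repeatedly merge the closest pair, recording each clustering on the way.
// dist measures the distance between two clusters (arrays of data).

function agglomerate(data, dist) {
  var clusters = data.map(function(x){return [x];});
  var hierarchy = [clusters.slice()];

  while(clusters.length>1) {
    var best = {d: Infinity, i: 0, j: 1};
    for(var i=0;i<clusters.length;++i) {
      for(var j=i+1;j<clusters.length;++j) {
        var d = dist(clusters[i], clusters[j]);
        if(d<best.d) best = {d: d, i: i, j: j};
      }
    }
    var merged = clusters[best.i].concat(clusters[best.j]);
    clusters.splice(best.j, 1);   // remove the later cluster first so that
    clusters.splice(best.i, 1);   // the earlier one's index remains valid
    clusters.push(merged);
    hierarchy.push(clusters.slice());
  }
  return hierarchy;
}
```

Note that the hierarchy records a copy of the set of clusters at every step, yielding exactly the sequence of clusterings that we shall need a structure to represent.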

This time we shall take a look at a clustering algorithm that uses nearest neighbours to identify clusters, contrasting it with the


Now I'd like to introduce some more clustering algorithms but there are a few things that we'll need first.

`ak` library, taking inspiration from the C++ standard library. Now, JavaScript isn't just short of algorithms in its standard library, it's also lacking all but a few data structures. In fact, from the programmer's perspective, it only has hash maps, which use a function to map strings to non-negative integers that are then used as indices into an array of sets of key-value pairs. Technically speaking, even arrays are hash maps of the string representation of their indices, although in practice the interpreter will use more efficient data structures whenever it can. As clever as its implementation might be, it's unlikely that it will always figure out exactly what we're trying to do and pick the most appropriate data structure, and so it will occasionally be worth explicitly implementing a data structure so that we can be sure that we're using one suited to the task at hand.
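We can observe that string-keyed behaviour directly; for example (the comments show what a typical interpreter prints):

```javascript
// Arrays are, semantically, hash maps keyed by the string form of the index.
var a = [];
a[2] = 'x';
console.log(Object.keys(a));   // ['2'] - the key is a string
console.log(a['2']);           // 'x'  - string and numeric indices coincide
console.log(a.length);         // 3    - one more than the largest index
```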

The first such data structure that we're going to need is a min-heap which will allow us to efficiently add elements in any order and remove them in ascending order, according to some comparison function.
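By way of illustration, and emphatically not `ak.minHeap` itself, the standard array-backed binary heap gives us a bare-bones version of such a structure:

```javascript
// Minimal array-backed binary min-heap; compare(a,b) should be negative
// when a belongs before b, as with function(a,b){return a-b;} for numbers.

function MinHeap(compare) {
  this.a = [];
  this.compare = compare;
}

MinHeap.prototype.add = function(x) {
  var a = this.a, c = this.compare, i = a.length, p;
  a.push(x);
  while(i>0 && c(a[i], a[p=(i-1)>>1])<0) {  // bubble up past larger parents
    var t=a[i]; a[i]=a[p]; a[p]=t; i=p;
  }
};

MinHeap.prototype.shift = function() {      // remove and return the smallest
  var a = this.a, c = this.compare, top = a[0], last = a.pop();
  if(a.length>0) {
    a[0] = last;
    var i = 0;
    while(true) {                           // sift down past smaller children
      var l=2*i+1, r=l+1, m=i;
      if(l<a.length && c(a[l],a[m])<0) m=l;
      if(r<a.length && c(a[r],a[m])<0) m=r;
      if(m===i) break;
      var t=a[i]; a[i]=a[m]; a[m]=t; i=m;
    }
  }
  return top;
};
```

Both adding an element and removing the smallest cost a number of operations that is logarithmic in the number of elements, which is precisely what makes the structure worth the trouble of implementing explicitly.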

Like all copulas they are effectively the CDFs of vector-valued random variables whose elements are uniformly distributed when considered independently. Whilst those Archimedean CDFs were relatively trivial to implement, we found that their probability density functions, or PDFs, were somewhat more difficult and that the random variables themselves required some not at all obvious mathematical manipulation to get right.
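For reference, in the standard notation, quoted from the textbooks rather than from the posts themselves, an Archimedean copula is defined by a decreasing generator φ satisfying φ(1)=0 and its inverse ψ=φ⁻¹ by

```latex
C(u_1, \dots, u_n) = \psi\bigl(\phi(u_1) + \dots + \phi(u_n)\bigr)
```

and, assuming sufficient differentiability, its density follows from repeated application of the chain rule as

```latex
c(u_1, \dots, u_n) = \frac{\partial^n C}{\partial u_1 \cdots \partial u_n}
                   = \psi^{(n)}\!\Bigl(\sum_{i=1}^n \phi(u_i)\Bigr) \prod_{i=1}^n \phi'(u_i)
```

the presence of the nth derivative of ψ being the reason that the densities prove rather more difficult to implement than the CDFs.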

Having done all the hard work implementing the `ak.archimedeanCopula`, `ak.archimedeanCopulaDensity` and `ak.archimedeanCopulaRnd` functions, we shall now use them to implement some specific families of Archimedean copulas.
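Amongst the families most commonly given in the literature, and which may well be the ones intended here, are the Clayton, Gumbel and Frank copulas, whose standard generators are

```latex
\begin{aligned}
\text{Clayton:} \quad & \phi_\theta(u) = \tfrac{1}{\theta}\bigl(u^{-\theta}-1\bigr), & \theta > 0\\
\text{Gumbel:}  \quad & \phi_\theta(u) = (-\ln u)^\theta,                            & \theta \geq 1\\
\text{Frank:}   \quad & \phi_\theta(u) = -\ln\frac{e^{-\theta u}-1}{e^{-\theta}-1},  & \theta \neq 0
\end{aligned}
```

each of which trivially satisfies φ(1)=0, as required of a generator.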
We have seen that the densities of Archimedean copulas are rather trickier to calculate and that making random observations of them is trickier still. Last time we found an algorithm for the latter, albeit with an implementation that had troubling performance and numerical stability issues, and in this post we shall add an improved version to the `ak` library that addresses those issues.
These are known as Archimedean copulas and are valid whenever

Whilst such copulas are relatively easy to implement, we saw that their densities are rather trickier, in contrast to Gaussian copulas for which the reverse is true. In this post we shall see how to draw random vectors from Archimedean copulas, which is also much more difficult than doing so from Gaussian copulas.
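One standard approach, the Marshall-Olkin algorithm, which I should stress is quoted from the literature rather than from the post itself, applies whenever ψ=φ⁻¹ is the Laplace transform of a positive random variable V, in which case

```latex
V \sim F_V, \qquad X_1, \dots, X_n \sim U(0,1) \text{ independently}, \qquad
U_i = \psi\!\left(-\frac{\ln X_i}{V}\right)
```

yields a vector (U₁, …, Uₙ) whose CDF is the Archimedean copula with generator φ.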

Last time, in preparation for interpolating between multidimensional vector nodes, we implemented the `ak.grid` type to store ticks on a set of axes and map their intersections to `ak.vector` objects to represent such nodes arranged at the corners of hyperdimensional rectangular cuboids. With this in place we're ready to take a look at one of the simplest multidimensional interpolation schemes: multilinear interpolation.
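In two dimensions multilinear interpolation reduces to the familiar bilinear scheme, sketched here with hypothetical names rather than the interface that the library actually adopts:

```javascript
// Bilinear interpolation sketch: nodes f00, f10, f01 and f11 sit at the
// corners of the rectangle [x0,x1]*[y0,y1] and are blended linearly in
// each axis in turn.

function bilinear(x0, x1, y0, y1, f00, f10, f01, f11, x, y) {
  var tx = (x-x0)/(x1-x0);        // normalised position along the x axis
  var ty = (y-y0)/(y1-y0);        // normalised position along the y axis
  var f0 = (1-tx)*f00 + tx*f10;   // interpolate along x at y0
  var f1 = (1-tx)*f01 + tx*f11;   // interpolate along x at y1
  return (1-ty)*f0 + ty*f1;       // then interpolate between them along y
}
```

Higher dimensions simply repeat the same one-dimensional blend once per axis, halving the number of intermediate values at each step.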

I concluded by noting that, even with this improvement, the shape of a cubic spline interpolation is governed by choices that are not uniquely determined by the points themselves and that linear interpolation is consequently a more mathematically appropriate scheme, which is why I chose to generalise it to other arithmetic types for

The obvious next question is whether or not we can also generalise the

We have also seen how extrapolating such polynomials beyond the first and last nodes can yield less than satisfactory results, which we fixed by specifying the first and last gradients and then adding new first and last nodes to ensure that the first and last polynomials would represent straight lines.
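The node-adding half of that fix can be sketched, again with hypothetical names, as appending points that are collinear with the chosen end gradients:

```javascript
// Given nodes sorted by x and chosen end gradients g0 and gn, add a node a
// distance h before the first and after the last, each lying on the straight
// line through its neighbour; the spline's derivatives must also be clamped
// to g0 and gn at the new ends for the end segments to be truly linear.

function addLinearEnds(nodes, g0, gn, h) {
  var first = nodes[0], last = nodes[nodes.length-1];
  return [{x: first.x-h, y: first.y-g0*h}]
         .concat(nodes)
         .concat([{x: last.x+h, y: last.y+gn*h}]);
}
```

With the gradients clamped, the only cubic that passes through a pair of collinear nodes with that same gradient at both is the straight line itself, which is what guarantees linear extrapolation beyond the original ends.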

Now we shall see how cubic spline interpolation can break down rather more dramatically and how we might fix it.

In this post we shall see how we can define a smooth interpolation by connecting the points with curves rather than straight lines.