We have also seen how extrapolating such polynomials beyond the first and last nodes can yield less than satisfactory results, which we fixed by specifying the first and last gradients and then adding new first and last nodes to ensure that the first and last polynomials would represent straight lines.

Now we shall see how cubic spline interpolation can break down rather more dramatically and how we might fix it.

In this post we shall see how we can define a smooth interpolation by connecting the points with curves rather than straight lines.

On the face of it, implementing this would seem to be a pretty trivial business, but doing so both accurately and efficiently is a surprisingly tricky affair, as we shall see in this post.

We have previously implemented the `ak.borelInterval` type to represent an interval as a pair of `ak.borelBound` objects holding its lower and upper bounds. With these in place we're ready to implement a type to represent Borel sets, and we shall do exactly that in this post.
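To make the idea concrete, here is a minimal sketch of an interval represented as a pair of bounds, each carrying a value and a flag recording whether it is closed, i.e. inclusive. The names `makeBound`, `makeInterval` and `contains` are illustrative assumptions, not the `ak` library's actual API.

```javascript
// Illustrative sketch only: these names are assumptions, not the ak API.
function makeBound(value, closed) {
  return { value: value, closed: closed };
}

function makeInterval(lower, upper) {
  return { lower: lower, upper: upper };
}

// x is a member if it lies strictly between the bounds, or equals a
// bound that is closed (i.e. inclusive).
function contains(interval, x) {
  var lo = interval.lower, hi = interval.upper;
  var aboveLo = lo.closed ? x >= lo.value : x > lo.value;
  var belowHi = hi.closed ? x <= hi.value : x < hi.value;
  return aboveLo && belowHi;
}

var halfOpen = makeInterval(makeBound(0, true), makeBound(1, false)); // [0, 1)
```

Keeping the open/closed flag on each bound separately is what lets a single representation cover open, closed and half-open intervals alike.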

We have previously represented sets with arrays of distinct elements, implementing their unions and intersections with `ak.setUnion` and `ak.setIntersection` respectively. Such arrays are necessarily both finite and discrete and so cannot represent continuous subsets of the real numbers, such as intervals, which contain every real number within a given range. Of particular interest are unions of countable sets of intervals, and in this post we shall begin adding types to the `ak` library to represent them.
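A simple way to see how a union of intervals can be normalised is to sort the intervals by their lower bounds and merge any that overlap. The sketch below does this for closed intervals represented as `[lo, hi]` pairs; it is an illustration of the idea under those simplifying assumptions, not the `ak` library's implementation.

```javascript
// Union a list of closed intervals, each a [lo, hi] pair, by sorting
// on the lower bound and merging overlapping neighbours.
// Illustrative only; not the ak library's actual implementation.
function unionIntervals(intervals) {
  var sorted = intervals.slice().sort(function(a, b) { return a[0] - b[0]; });
  var result = [];
  for (var i = 0; i < sorted.length; ++i) {
    var last = result[result.length - 1];
    if (last && sorted[i][0] <= last[1]) {
      last[1] = Math.max(last[1], sorted[i][1]); // overlap: extend the last interval
    } else {
      result.push(sorted[i].slice()); // disjoint: start a new interval
    }
  }
  return result;
}

unionIntervals([[0, 2], [5, 6], [1, 3]]); // → [[0, 3], [5, 6]]
```

After merging, the result is a sorted array of pairwise disjoint intervals, which is exactly the kind of canonical form that makes further set operations straightforward.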
`finite` determines whether its argument is neither infinite nor NaN and `isnan` determines whether its argument is NaN; behaviours that shouldn't be particularly surprising since they're more or less equivalent to JavaScript's `isFinite` and `isNaN` functions respectively. One recommended function that JavaScript does not provide, and which we have consequently added to the `ak` library, is `nextafter`, which returns the first representable floating point number after its first argument in the direction towards its second.
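One way to implement such a function, sketched below under the assumed name `nextAfter`, exploits the fact that for finite IEEE doubles of the same sign, stepping the 64 bit integer representation up or down by one yields the adjacent representable number. This is an illustration of the technique, not the `ak` library's actual code.

```javascript
// A sketch of nextafter via the bit pattern of an IEEE 754 double:
// viewing the same buffer as a 64 bit unsigned integer lets us step
// to the adjacent representable value. Illustrative only.
var f64 = new Float64Array(1);
var u64 = new BigUint64Array(f64.buffer);

function nextAfter(x, y) {
  if (isNaN(x) || isNaN(y) || x === y) return y;
  if (x === 0) return y > 0 ? Number.MIN_VALUE : -Number.MIN_VALUE;
  f64[0] = x;
  // moving towards zero decreases the magnitude bits, away increases them
  var towardsZero = (x > 0) === (y < x);
  u64[0] += towardsZero ? -1n : 1n;
  return f64[0];
}

nextAfter(1, 2) - 1; // → 2^-52, i.e. Number.EPSILON
```

The special cases matter: zero has two bit patterns, and stepping from it must jump to the smallest denormalised number of the appropriate sign rather than simply incrementing the integer representation.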
Rather than copying out parts of arrays as `Array.slice` does, we first implemented `ak.partition`, which divides elements into two ranges: those elements that satisfy some given condition followed by those elements that don't. We saw how this could be used to implement the quicksort algorithm, but instead defined `ak.sort` to sort a range of elements using `Array.sort`, slicing them out beforehand and splicing them back in again afterwards if they didn't represent whole arrays. We did use it, however, to implement `ak.nthElement`, which puts the correctly sorted element in a given position within a range, putting before it elements that are no greater and after it elements that are no smaller. Finally, we implemented `ak.partialSort`, which puts every element in a range up to, but not including, a given position into its correctly sorted place, with all of the elements from that position onwards comparing no less than the last correctly sorted element. This time we shall take a look at some of the ways that we can query data after we have manipulated it with these functions.
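To illustrate the first of those operations, here is a minimal in-place partition: elements satisfying the predicate are swapped to the front and the index of the first non-satisfying element is returned. The code is a sketch of the general technique, not the `ak.partition` implementation itself.

```javascript
// Partition an array in place: elements for which pred is true are
// moved to the front, the rest to the back, preserving no particular
// order within each range. Returns the index of the first element of
// the second range. Illustrative sketch, not ak.partition itself.
function partition(a, pred) {
  var first = 0;
  for (var i = 0; i < a.length; ++i) {
    if (pred(a[i])) {
      var t = a[first]; a[first] = a[i]; a[i] = t; // swap into the front range
      ++first;
    }
  }
  return first;
}

var a = [3, 1, 4, 1, 5, 9, 2, 6];
var split = partition(a, function(x) { return x < 4; }); // split === 4
```

After the call the first `split` elements all satisfy the condition and the remainder do not, which is precisely the property that quickselect-style algorithms such as `nthElement` rely upon.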

We have so far implemented only one of them: `ak.shuffle`, which randomly rearranges the elements of an array. We shall be needing another one of them in the not too distant future and so I have decided to take a short break from numerical computing to add those of them that I use the most frequently to the `ak` library, starting with a selection of sorting operations.
These models typically represent the arguments of the function as genes within the binary chromosomes of individuals whose fitnesses are the values of the function for those arguments, exchange genetic information between them with a crossover operator, make small random changes to them with a mutation operator and, most importantly, favour the fitter individuals in the population for reproduction into the next generation with a selection operator.
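The crossover and mutation operators described above can be sketched very directly for chromosomes represented as arrays of bits. The function names below are illustrative assumptions rather than the `ak` library's API.

```javascript
// Single-point crossover: swap the bits of two chromosomes from a
// given position to their ends. Illustrative names, not the ak API.
function crossover(a, b, pos) {
  for (var i = pos; i < a.length; ++i) {
    var t = a[i]; a[i] = b[i]; b[i] = t;
  }
}

// Mutation: flip a single bit of a chromosome.
function mutate(chromosome, pos) {
  chromosome[pos] ^= 1;
}

var a = [0, 0, 0, 0], b = [1, 1, 1, 1];
crossover(a, b, 2); // a → [0, 0, 1, 1], b → [1, 1, 0, 0]
mutate(a, 0);       // a → [1, 0, 1, 1]
```

In a full algorithm the crossover point, the mutated bit and the pair of parents would all be chosen at random; the operators themselves are this simple.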

We used a theoretical analysis of a simple genetic algorithm to suggest improved versions of the crossover operator, as well as proposing more robust schemes for selection and the genetic encoding of the parameters.

In this post we shall use some of them to implement a genetic algorithm for the `ak` library.
This model treated the function being optimised as a non-negative measure of the fitness of individuals to survive and reproduce, replacing negative results with zero, and represented their chromosomes with arrays of bits which were mapped onto its arguments by treating subsets of them as integers that were linearly mapped to floating point numbers with given lower and upper bounds. It simulated sexual reproduction by splitting pairs of the chromosomes of randomly chosen individuals at a randomly chosen position and swapping their bits from it to their ends, and mutations by flipping randomly chosen bits from the chromosomes of randomly chosen individuals. Finally, and most crucially, it set the probability that an individual would be copied into the next generation to its fitness as a proportion of the total fitness of the population, ensuring that that total fitness would tend to increase from generation to generation.
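The two remaining mechanisms of that model, decoding a gene of bits into a floating point argument and choosing individuals in proportion to their fitnesses, can be sketched as follows. The names `decode` and `select` are illustrative assumptions, not the blog's or the library's actual identifiers, and `select` takes its uniform random number as an argument so that the sketch is deterministic.

```javascript
// Decode an array of bits into a number by treating it as an integer
// and linearly mapping its range onto [lower, upper]. Illustrative.
function decode(bits, lower, upper) {
  var n = 0;
  for (var i = 0; i < bits.length; ++i) n = 2 * n + bits[i];
  return lower + (upper - lower) * n / (Math.pow(2, bits.length) - 1);
}

// Fitness proportional ("roulette wheel") selection: given non-negative
// fitnesses and a uniform random u in [0,1), return the index of the
// chosen individual. Illustrative sketch only.
function select(fitnesses, u) {
  var total = fitnesses.reduce(function(s, f) { return s + f; }, 0);
  var target = u * total, sum = 0;
  for (var i = 0; i < fitnesses.length; ++i) {
    sum += fitnesses[i];
    if (target < sum) return i;
  }
  return fitnesses.length - 1;
}

decode([1, 1, 1, 1], 0, 1); // → 1
select([1, 3, 6], 0.5);     // → 2, since 0.5 × 10 = 5 falls in [4, 10)
```

Because each individual's slice of the wheel is its fitness as a proportion of the total, fitter individuals are copied into the next generation more often, which is what drives the total fitness upwards over the generations.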

I concluded by noting that, whilst the resulting algorithm was reasonably effective, it had some problems that a theoretical analysis would reveal and that is what we shall look into in this post.

Now as it happens, physics isn't the only branch of science from which we can draw inspiration for global optimisation algorithms. For example, in biology we have the process of evolution through which the myriad species of life on Earth have become extraordinarily well adapted to their environments. Put very simply this happens because offspring differ slightly from their parents and differences that reduce the chances that they will survive to have offspring of their own are less likely to be passed down through the generations than those that increase those chances.


As it turns out, we can generalise this dependence to arbitrary sets of random variables with a fairly simple observation.