# Recently in Optimisation Category

## Do The Evolution

In the last few posts we have taken a look at genetic algorithms, which use simple models of biological evolution to search for global maxima of functions, being those points at which they return their greatest possible values.
These models typically represent the arguments of the function as genes within the binary chromosomes of individuals whose fitnesses are the values of the function for those arguments. They exchange genetic information between individuals with a crossover operator, make small random changes to them with a mutation operator and, most importantly, favour the fitter members of the population for reproduction into the next generation with a selection operator.
We used a theoretical analysis of a simple genetic algorithm to suggest improved versions of the crossover operator, as well as proposing more robust schemes for selection and the genetic encoding of the parameters.
In this post we shall use some of them to implement a genetic algorithm for the ak library.
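By way of illustration, the shape of a single generation of such an algorithm might be sketched as follows, with fitness-proportional selection, single-point crossover and bit-flipping mutation; the names and details here are purely illustrative and are not those of the ak library's implementation.

```javascript
// Illustrative sketch of one generation of a genetic algorithm.
// fitness maps a chromosome (an array of bits) to a non-negative number
// and rand is a function returning uniform random numbers in [0, 1).

function rouletteSelect(population, fitnesses, rand) {
  // fitness-proportional selection: pick an individual with
  // probability equal to its share of the total fitness
  var total = fitnesses.reduce(function(s, f) { return s + f; }, 0);
  var r = rand() * total;
  for (var i = 0; i < population.length; ++i) {
    r -= fitnesses[i];
    if (r <= 0) return population[i];
  }
  return population[population.length - 1];
}

function crossover(a, b, rand) {
  // single-point crossover: swap the tails of a pair of chromosomes
  var cut = 1 + Math.floor(rand() * (a.length - 1));
  return [a.slice(0, cut).concat(b.slice(cut)),
          b.slice(0, cut).concat(a.slice(cut))];
}

function mutate(c, pMutate, rand) {
  // flip each bit independently with a small probability
  return c.map(function(bit) { return rand() < pMutate ? 1 - bit : bit; });
}

function generation(population, fitness, pCross, pMutate, rand) {
  var fitnesses = population.map(fitness);
  var next = [];
  while (next.length < population.length) {
    var a = rouletteSelect(population, fitnesses, rand);
    var b = rouletteSelect(population, fitnesses, rand);
    var pair = rand() < pCross ? crossover(a, b, rand) : [a, b];
    next.push(mutate(pair[0], pMutate, rand));
    if (next.length < population.length) next.push(mutate(pair[1], pMutate, rand));
  }
  return next;
}
```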

Full text...

## The Best Laid Schemata

We have seen how we can exploit a simple model of biological evolution, known as a genetic algorithm, to search for global maxima of functions, being those points at which they return their greatest values.
This model treated the function being optimised as a non-negative measure of the fitness of individuals to survive and reproduce, replacing negative results with zero, and represented their chromosomes with arrays of bits which were mapped onto its arguments by treating subsets of them as integers that were linearly mapped to floating point numbers with given lower and upper bounds.

It simulated sexual reproduction by splitting the chromosomes of randomly chosen pairs of individuals at a randomly chosen position and swapping their bits from it to their ends, and simulated mutations by flipping randomly chosen bits of the chromosomes of randomly chosen individuals.

Finally, and most crucially, it set the probability that an individual would be copied into the next generation to its fitness as a proportion of the total fitness of the population, ensuring that that total fitness would tend to increase from generation to generation.
I concluded by noting that, whilst the resulting algorithm was reasonably effective, it had some problems that a theoretical analysis would reveal and that is what we shall look into in this post.
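The decoding step described above, in which runs of a chromosome's bits are read as integers and linearly mapped onto bounded floating point arguments, might be sketched as follows; the function names are mine rather than the ak library's.

```javascript
// Sketch of decoding a binary chromosome into floating point arguments.
// Each argument is represented by `bits` consecutive bits, read as an
// unsigned integer n and linearly mapped onto [lower, upper].

function decodeArg(chromosome, offset, bits, lower, upper) {
  var n = 0;
  for (var i = 0; i < bits; ++i) n = 2 * n + chromosome[offset + i];
  var max = Math.pow(2, bits) - 1;            // largest representable integer
  return lower + (upper - lower) * n / max;   // linear map onto [lower, upper]
}

function decode(chromosome, bits, lower, upper) {
  // split the chromosome into equal runs of bits, one per argument
  var args = [];
  for (var off = 0; off + bits <= chromosome.length; off += bits) {
    args.push(decodeArg(chromosome, off, bits, lower, upper));
  }
  return args;
}
```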

Full text...

## It's All In The Genes

Last time we took a look at simulated annealing, a global minimisation algorithm that searches for points at which functions return their least possible values. It draws its inspiration from the metallurgical process of annealing, which minimises the energy state of the crystalline structure of metals by first heating and then slowly cooling them.
Now as it happens, physics isn't the only branch of science from which we can draw inspiration for global optimisation algorithms. For example, in biology we have the process of evolution through which the myriad species of life on Earth have become extraordinarily well adapted to their environments. Put very simply this happens because offspring differ slightly from their parents and differences that reduce the chances that they will survive to have offspring of their own are less likely to be passed down through the generations than those that increase those chances.
Noting that extraordinarily well adapted is more or less synonymous with near maximally adapted, it's not unreasonable to suppose that we might exploit a mathematical model of evolution to search for global maxima of functions, being those points at which they return their greatest possible values.

Full text...

## Annealing Down

A few years ago we saw how we could search for a local minimum of a function, being a point for which it returns a lesser value than any in its immediate vicinity, by taking random steps and rejecting those that lead uphill; an algorithm that we dubbed the blindfolded hill climber. Whilst we have since seen that we could progress towards a minimum much more rapidly by choosing the directions in which to step deterministically, there is a way that we can use random steps to yield better results.
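That way is to occasionally accept uphill steps, with a probability that shrinks as a temperature-like parameter is lowered; a minimal sketch of the idea, with names and parameters of my own choosing rather than the ak library's, might look like

```javascript
// Sketch of simulated annealing for a univariate function f.
// Downhill steps are always taken; uphill steps are taken with
// probability exp(-delta/t), with the temperature t gradually lowered
// so that uphill steps become ever less likely.

function anneal(f, x, t0, cooling, steps, stepSize, rand) {
  var fx = f(x);
  var t = t0;
  for (var i = 0; i < steps; ++i) {
    var y = x + stepSize * (2 * rand() - 1);   // random trial step
    var fy = f(y);
    var delta = fy - fx;
    if (delta <= 0 || rand() < Math.exp(-delta / t)) { // acceptance rule
      x = y; fx = fy;
    }
    t *= cooling;                              // cool the temperature
  }
  return x;
}
```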

Full text...

## The Tripods Are Here!

Last time we discussed the polytope method, a multivariate function minimisation algorithm that seeks out a local minimum by stepping away from the worst of a set of points, most commonly a simplex; the multivariate generalisation of a triangle.

We got as far as implementing an algorithm for generating regular simplices of any size and location with ak.simplex and in this post we shall finish the job.
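The basic move of stepping away from the worst point, reflecting it through the centre of the remaining vertices, might be sketched as below; this is purely illustrative and is not the ak library's implementation.

```javascript
// Sketch of a single reflection step of the polytope method: replace
// the worst vertex of the simplex with its reflection through the
// centroid of the remaining vertices.

function reflectWorst(vertices, f) {
  var n = vertices.length;
  var worst = 0;
  for (var i = 1; i < n; ++i) {
    if (f(vertices[i]) > f(vertices[worst])) worst = i;
  }

  var dim = vertices[0].length;
  var centroid = new Array(dim).fill(0);
  for (var i = 0; i < n; ++i) {
    if (i === worst) continue;
    for (var j = 0; j < dim; ++j) centroid[j] += vertices[i][j] / (n - 1);
  }

  // reflect the worst vertex through the centroid: x' = 2c - x
  var reflected = vertices[worst].map(function(x, j) { return 2 * centroid[j] - x; });
  var next = vertices.slice();
  next[worst] = reflected;
  return next;
}
```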

Full text...

## The Tripods Are Coming!

Some time ago we took a first look at multivariate function minimisation, in which we try to find a point at which a function of two or more arguments has a local minimum, or in other words a point at which it returns a value no greater than it does at any nearby point.
The approach that we took was that of a blindfolded hill climber; take a tentative step in some direction and if it leads to a better place commit to it, otherwise stay where you are and try again. Our ak.blindfoldMinimum implemented the simplest of such hill climbing algorithms, choosing each trial step at random. We concluded that we should do much better if we chose those steps more intelligently and in this post we shall do just that.
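The idea can be sketched in a few lines; note that this illustrates the scheme rather than reproducing ak.blindfoldMinimum itself, and the names here are mine.

```javascript
// Sketch of the blindfolded hill climber: take a random trial step and
// commit to it only if it improves upon the current point.

function blindfoldStep(f, x, stepSize, rand) {
  // trial point: a uniform random step of up to stepSize in each coordinate
  var y = x.map(function(xi) { return xi + stepSize * (2 * rand() - 1); });
  return f(y) < f(x) ? y : x;   // commit only to downhill steps
}

function blindfoldMinimise(f, x, stepSize, steps, rand) {
  for (var i = 0; i < steps; ++i) x = blindfoldStep(f, x, stepSize, rand);
  return x;
}
```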

Full text...

## It's All Downhill From Here

Last time we took a look at function optimisation, implementing the bisection and golden section search algorithms to find local minima; the points at which a function returns a smaller value than at any nearby point.
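A golden section search shrinks a bracket around the minimum by the golden ratio at each step, reusing one of the interior function evaluations each time; a minimal sketch, not the ak library's implementation, is

```javascript
// Sketch of a golden section search for a local minimum of f on [a, b],
// assuming f has a single minimum on the interval. The bracket shrinks
// by a factor of 1/phi at each step, reusing one interior evaluation.

function goldenMinimise(f, a, b, eps) {
  var r = (Math.sqrt(5) - 1) / 2;   // 1/phi, roughly 0.618
  var c = b - r * (b - a);
  var d = a + r * (b - a);
  var fc = f(c), fd = f(d);
  while (b - a > eps) {
    if (fc < fd) {                  // minimum lies in [a, d]
      b = d; d = c; fd = fc;
      c = b - r * (b - a); fc = f(c);
    } else {                        // minimum lies in [c, b]
      a = c; c = d; fc = fd;
      d = a + r * (b - a); fd = f(d);
    }
  }
  return (a + b) / 2;
}
```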

Now this is all fine and dandy, but what if we want to minimise a function that takes two arguments? Or three arguments? Or four, four arguments; mwah, ha, ha, ha, ha!

Er, I seem to be channelling Count von Count. Let me get some garlic...

Full text...

## That Shrinking Feeling

In the last few posts I described the bisection method, Newton's method and the secant method for numerically approximating the inverses of functions. Specifically, given a function f and some target value y these algorithms seek some argument x such that f(x)=y, at least to within some given error threshold.
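The simplest of these, the bisection method, can be sketched as follows, with a signature of my own choosing rather than the ak library's.

```javascript
// Sketch of the bisection method for solving f(x) = y: given a bracket
// [a, b] with f(a) - y and f(b) - y of opposite sign, repeatedly halve
// it, keeping the half on which the sign change occurs.

function bisectInvert(f, y, a, b, eps) {
  var fa = f(a) - y;
  while (b - a > eps) {
    var m = (a + b) / 2;
    var fm = f(m) - y;
    if (fa * fm <= 0) b = m;       // sign change in [a, m]
    else { a = m; fa = fm; }       // sign change in [m, b]
  }
  return (a + b) / 2;
}
```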

Related to the problem of inverting functions is the problem of minimising them, in which we seek arguments at which functions return their least values.

Full text...
