Recently in Optimisation Category

Annealing Down

A few years ago we saw how we could search for a local minimum of a function, being a point at which it returns a smaller value than at any other point in its immediate vicinity, by taking random steps and rejecting those that lead uphill; an algorithm that we dubbed the blindfolded hill climber. Whilst we have since seen that we can progress towards a minimum much more rapidly by choosing the directions in which to step deterministically, there is a way that we can use random steps to yield better results.
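
The title hints at simulated annealing, in which we occasionally accept uphill steps with a probability governed by a gradually falling temperature so that the climber can escape shallow dips. The following is a minimal sketch of the idea for a single argument function, assuming a simple exponential cooling schedule; the names and details are purely illustrative and not those of the ak library.

  // a sketch of simulated annealing for a univariate function f, starting
  // from x and assuming an exponential cooling schedule; illustrative only
  function anneal(f, x, steps, scale, t0, cooling) {
    var fx = f(x);
    for(var i=0;i<steps;++i) {
      var t = t0 * Math.pow(cooling, i);         // the current temperature
      var y = x + scale*(2*Math.random()-1);     // a random trial step
      var fy = f(y);
      // always accept downhill steps and accept uphill ones with a
      // probability that falls as the temperature does
      if(fy<=fx || Math.random()<Math.exp((fx-fy)/t)) { x = y; fx = fy; }
    }
    return x;
  }

At high temperatures almost every step is accepted and the search roams freely, whilst at low temperatures it behaves much like the original climber.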

Full text...

The Tripods Are Here!

Last time we discussed the polytope method, a multivariate function minimisation algorithm that seeks out a local minimum by stepping away from the worst of a set of points, most commonly a simplex; the multivariate generalisation of a triangle.

We got as far as implementing an algorithm for generating regular simplices of any size and location with ak.simplex, and in this post we shall finish the job.
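
One well known construction takes the scaled standard basis vectors together with a single further vertex that is equidistant from all of them and then translates the lot. The sketch below illustrates that construction for a given edge length and offset; it is not intended to reflect how ak.simplex itself is implemented.

  // a sketch of one standard construction of the n+1 vertices of a regular
  // n dimensional simplex with a given edge length, translated by a given
  // offset; illustrative only, not the ak.simplex implementation
  function regularSimplex(n, edge, offset) {
    var a = (1-Math.sqrt(n+1))/n;    // puts the extra vertex equidistant
    var scale = edge/Math.sqrt(2);   // the basis vertices are sqrt(2) apart
    var vertices = [];
    for(var i=0;i<=n;++i) {
      var v = [];
      for(var j=0;j<n;++j) {
        var c = i===0 ? a : (i-1===j ? 1 : 0);
        v.push(offset[j] + scale*c);
      }
      vertices.push(v);
    }
    return vertices;
  }

For example, regularSimplex(2, 1, [0, 0]) yields the three vertices of an equilateral triangle with sides of unit length.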

Full text...

The Tripods Are Coming!

Some time ago we took a first look at multivariate function minimisation, in which we try to find a point at which a function of two or more arguments has a local minimum or, in other words, a point at which it returns a value no greater than it does at any nearby point.
The approach that we took was that of a blindfolded hill climber; take a tentative step in some direction and, if it leads to a better place, commit to it, otherwise stay where you are and try again. Our ak.blindfoldMinimum implemented the simplest of such hill climbing algorithms, choosing each trial step at random. We concluded that we should do much better if we chose those steps more intelligently and in this post we shall do just that.
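
In outline, such a climber might look something like the sketch below, which minimises a function of a vector of arguments by trialling uniform random steps from the current position; it is an illustration of the scheme rather than the ak.blindfoldMinimum implementation itself.

  // a sketch of a blindfolded hill climber descending a function f of a
  // vector argument x; illustrative only, not ak.blindfoldMinimum
  function blindfoldDescent(f, x, steps, scale) {
    var fx = f(x);
    for(var i=0;i<steps;++i) {
      // take a tentative random step
      var y = x.map(function(xi){return xi + scale*(2*Math.random()-1);});
      var fy = f(y);
      // commit to it only if it leads to a better place
      if(fy<fx) { x = y; fx = fy; }
    }
    return x;
  }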

Full text...

It's All Downhill From Here

Last time we took a look at function optimisation, implementing the bisection and golden section search algorithms to find local minima; the points at which a function returns a smaller value than at any nearby point.
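
By way of a reminder, golden section search narrows a bracket around a minimum by comparing the function at two interior points placed at golden ratio fractions of the bracket's width, discarding the end beyond the worse of them. The sketch below illustrates the idea under the assumption that the function has a single minimum within the bracket; the details are simplified and it is not the implementation from that post.

  // a sketch of golden section search for a local minimum of f on [a, b],
  // assuming that f has a single minimum within the interval
  function goldenMinimum(f, a, b, eps) {
    var r = (Math.sqrt(5)-1)/2;              // the reciprocal golden ratio
    var x1 = b-r*(b-a), x2 = a+r*(b-a);
    var f1 = f(x1), f2 = f(x2);
    while(b-a>eps) {
      if(f1<f2) {                            // the minimum lies in [a, x2]
        b = x2; x2 = x1; f2 = f1;
        x1 = b-r*(b-a); f1 = f(x1);
      }
      else {                                 // the minimum lies in [x1, b]
        a = x1; x1 = x2; f1 = f2;
        x2 = a+r*(b-a); f2 = f(x2);
      }
    }
    return 0.5*(a+b);
  }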

Now this is all fine and dandy, but what if we want to minimise a function that takes two arguments? Or three arguments? Or four, four arguments; mwah, ha, ha, ha, ha!

Er, I seem to be channelling Count von Count. Let me get some garlic...

Full text...

That Shrinking Feeling

In the last few posts I described the bisection method, Newton's method and the secant method for numerically approximating the inverses of functions. Specifically, given a function f and some target value y, these algorithms seek some argument x such that f(x)=y, at least to within some given error threshold.
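
The simplest of them, the bisection method, repeatedly halves an interval known to contain the solution, keeping whichever half still brackets it. The sketch below illustrates the idea, assuming that we already have in hand an interval [a, b] for which f(a)-y and f(b)-y have opposite signs; it is illustrative rather than the implementation from those posts.

  // a sketch of the bisection method for inverting f: find x in [a, b] with
  // f(x) close to y, assuming f(a)-y and f(b)-y have opposite signs
  function bisectInverse(f, y, a, b, eps) {
    var fa = f(a)-y;
    while(b-a>eps) {
      var m = 0.5*(a+b);
      var fm = f(m)-y;
      if(fa*fm<=0) b = m;       // the solution lies in the left half
      else { a = m; fa = fm; }  // the solution lies in the right half
    }
    return 0.5*(a+b);
  }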

Related to the problem of inverting functions is the problem of minimising them, in which we seek arguments at which functions return their least values.

Full text...
