Recently in Calculus Category

Adapt Or Try

Over the last few months we have been looking at how we might approximate the solutions to ordinary differential equations, or ODEs, which define the derivative of one variable with respect to another as a function of them both. We did so firstly with the first order Euler method, then with the second order midpoint method and finally with the generalised Runge-Kutta method.
Unfortunately all of these approaches require the step length to be fixed and specified in advance, ignoring any information that we might use to adjust it during the iteration in order to better trade off the efficiency and accuracy of the approximation. In this post we shall try to automatically modify the step lengths to yield an optimal, or at least reasonable, balance.
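To illustrate the idea, here's a minimal Python sketch of one adaptive scheme, not necessarily the one used in the post: it compares a full Euler step against two half steps to estimate the error, accepting and lengthening steps when the estimate is within tolerance and shortening them otherwise. The function name and tolerance are my own choices for illustration.

```python
def adaptive_euler(f, x0, y0, x1, tol=1e-4, h=0.1):
    """Approximate y(x1) for dy/dx = f(x, y) with y(x0) = y0, adapting the
    step length by comparing one full Euler step with two half steps."""
    x, y = x0, y0
    while x < x1:
        h = min(h, x1 - x)                           # don't overshoot the endpoint
        full = y + h * f(x, y)                       # one step of length h
        half = y + 0.5 * h * f(x, y)                 # two steps of length h/2
        half = half + 0.5 * h * f(x + 0.5 * h, half)
        err = abs(half - full)                       # crude local error estimate
        if err <= tol:
            x, y = x + h, half                       # accept the step...
            h *= 2.0                                 # ...and try a longer one next
        else:
            h *= 0.5                                 # reject and shorten the step
    return y

# dy/dx = y with y(0) = 1 has the solution e^x
print(adaptive_euler(lambda x, y: y, 0.0, 1.0, 1.0))
```

Note that doubling on success and halving on failure is the simplest possible step-length control; more sophisticated schemes scale the step by how far the error estimate falls from the tolerance.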

Full text...  

A Kutta Above The Rest

We have recently been looking at ordinary differential equations, or ODEs, which define the derivative of one variable with respect to another as a function of them both, so that we cannot solve them by direct integration. Whilst there are a number of tricks for solving such equations if they have very specific forms, we typically have to resort to approximation algorithms such as the Euler method, with first order errors, and the midpoint method, with second order errors.
In this post we shall see that these are both examples of a general class of algorithms that can be accurate to still higher orders.
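The best known member of that class is the classic fourth order Runge-Kutta method, sketched below in Python; the function name is mine, but the weighted combination of four slope samples is the standard scheme.

```python
def rk4_step(f, x, y, h):
    """One step of the classic fourth order Runge-Kutta method for dy/dx = f(x, y),
    combining four slope samples taken at the start, middle and end of the step."""
    k1 = f(x, y)
    k2 = f(x + 0.5 * h, y + 0.5 * h * k1)
    k3 = f(x + 0.5 * h, y + 0.5 * h * k2)
    k4 = f(x + h, y + h * k3)
    return y + h * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0

# dy/dx = y with y(0) = 1 has the solution e^x; ten steps of length 0.1
# already agree with e to several decimal places
y, h = 1.0, 0.1
for i in range(10):
    y = rk4_step(lambda x, y: y, i * h, y, h)
print(y)
```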

Full text...  

Finding The Middle Ground

Last time we saw how we can use Euler's method to approximate the solutions of ordinary differential equations, or ODEs, which define the derivative of one variable with respect to another as a function of them both, so that they cannot be solved by direct integration. Specifically, it uses Taylor's theorem to estimate the change in the first variable that results from a small step in the second, iteratively accumulating the results for steps of a constant length to approximate the value of the former at some particular value of the latter.
Unfortunately it isn't very accurate, yielding an accumulated error proportional to the step length, and so this time we shall take a look at a way to improve it.
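As a concrete Python sketch of Euler's method, with a name and arguments of my own choosing, the error's proportionality to the step length shows up directly: doubling the number of steps roughly halves the error.

```python
def euler(f, x0, y0, x1, n):
    """Approximate y(x1) for dy/dx = f(x, y) with y(x0) = y0,
    accumulating n Euler steps of constant length."""
    h = (x1 - x0) / n
    x, y = x0, y0
    for _ in range(n):
        y += h * f(x, y)    # Taylor's theorem: y(x+h) is roughly y(x) + h * y'(x)
        x += h
    return y

# dy/dx = y has the solution e^x, so compare against e at x = 1;
# halving the step length roughly halves the accumulated error
print(abs(euler(lambda x, y: y, 0.0, 1.0, 1.0, 100) - 2.718281828459045))
print(abs(euler(lambda x, y: y, 0.0, 1.0, 1.0, 200) - 2.718281828459045))
```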

Full text...  

Out Of The Ordinary

Several years ago we saw how to use the trapezium rule to approximate integrals. This works by dividing the interval of integration into a set of equally spaced values, evaluating the function being integrated, or integrand, at each of them and calculating the area under the curve formed by connecting adjacent points with straight lines to form trapeziums.
This was an improvement over an even more rudimentary scheme which instead placed rectangles spanning adjacent values with heights equal to the values of the function at their midpoints to approximate the area. Whilst there really wasn't much point in implementing this since it offers no advantage over the trapezium rule, it is a reasonable first approach to approximating the solutions to another type of problem involving calculus: ordinary differential equations, or ODEs.
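For reference, here are minimal Python sketches of both quadrature rules as I understand them; the names and arguments are illustrative rather than taken from the post.

```python
import math

def trapezium(f, a, b, n):
    """Trapezium rule: join adjacent samples of f with straight lines
    and sum the areas of the resulting trapeziums."""
    h = (b - a) / n
    area = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        area += f(a + i * h)
    return area * h

def midpoint(f, a, b, n):
    """Rectangle rule: span adjacent values with rectangles whose heights
    are the values of f at the midpoints of the strips."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# both approximate the integral of sin over [0, pi], which equals 2
print(trapezium(math.sin, 0.0, math.pi, 100))
print(midpoint(math.sin, 0.0, math.pi, 100))
```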

Full text...  

Integration, Quasi Mode

Last time we saw how it was possible to use uniformly distributed random variables to approximate the integrals of univariate and multivariate functions, being those that take numbers and vectors as arguments respectively. Specifically, since the integral of a univariate function is equal to the net area under its graph within the interval of integration, it must equal its average height multiplied by the length of that interval. By definition, the expected value of that function for a uniformly distributed random variable on that interval is that average height, and it can be approximated by the average of a large number of samples of it. This is trivially generalised to multivariate functions with multidimensional volumes instead of areas and lengths.
We have also seen how quasi random sequences fill areas more evenly than pseudo random sequences and so you might be asking yourself whether we could do better by using the former rather than the latter to approximate integrals.

Clever you!
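The comparison can be sketched in a few lines of Python. The pseudo random estimate uses uniform samples, whilst the quasi random one substitutes the additive golden ratio sequence, one simple example of a sequence that fills the interval more evenly; both names here are my own.

```python
import random

def monte_carlo(f, a, b, n):
    """Estimate the integral of f over [a, b] as the interval length times
    the average of f at n uniformly distributed pseudo random points."""
    return (b - a) * sum(f(random.uniform(a, b)) for _ in range(n)) / n

def quasi_monte_carlo(f, a, b, n):
    """The same estimate using the additive golden ratio sequence,
    a simple quasi random sequence that fills the interval more evenly."""
    phi = 0.6180339887498949    # fractional part of the golden ratio
    return (b - a) * sum(f(a + (b - a) * ((i * phi) % 1.0)) for i in range(n)) / n

# the integral of x^2 over [0, 1] equals 1/3
print(monte_carlo(lambda x: x * x, 0.0, 1.0, 100000))
print(quasi_monte_carlo(lambda x: x * x, 0.0, 1.0, 100000))
```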

Full text...  

Monte Carlo Or Bust

We have now taken a look at a few different ways to numerically approximate the integrals of functions, all of which have involved exactly integrating simple approximations of those functions over numerous small intervals. Whilst this is an appealing strategy, as is so often the case with numerical computing it is not the only one.

Full text...  

The Nesting Instinct

We have spent some time now looking at how we might numerically approximate the integrals of functions but have thus far completely ignored the problem of integrating functions with more than one argument, known as multivariate functions.
When solving integrals of multivariate functions mathematically we typically integrate over each argument in turn, treating the others as constants as we do so. At each step we remove one argument from the problem and so must eventually run out of them, at which point we will have solved it.
It is consequently extremely tempting to approximate multivariate integrals by recursively applying univariate numerical integration algorithms to each argument in turn.
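That temptation can be sketched in Python by wrapping a univariate trapezium rule around itself; the names and the fixed strip count are my own simplifications.

```python
def trapezium(f, a, b, n=100):
    """Univariate trapezium rule with n equally spaced strips."""
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

def double_integral(f, ax, bx, ay, by):
    """Integrate f(x, y) over a rectangle by integrating over y for each
    fixed x, then integrating the resulting univariate function over x."""
    inner = lambda x: trapezium(lambda y: f(x, y), ay, by)
    return trapezium(inner, ax, bx)

# the integral of x*y over the unit square equals 1/4
print(double_integral(lambda x, y: x * y, 0.0, 1.0, 0.0, 1.0))
```

Note the cost of this nesting: every sample of the outer integrand triggers a full inner integration, so the work grows multiplicatively with each additional argument.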

Full text...  

A Jump To The Left And Some Steps To The Right

We have previously seen how we can approximate the integrals of functions by noting that they are essentially the areas under their graphs and so by drawing simple shapes upon those graphs and adding up their areas we can numerically calculate the integrals of functions that we might struggle to solve mathematically.
Specifically, if we join the points on the graph of a function f at x1 and x2 with a straight line and drop another two lines vertically from them to the x axis then we've drawn a trapezium.

In regions where a function is rapidly changing we need a lot of trapeziums to accurately approximate its integral. If we restrict ourselves to trapeziums of equal width, as we have done so far, this means that we might spend far too much effort putting trapeziums in regions where a function changes slowly if it also has regions where it changes quickly.

The obvious solution to this is, of course, to use trapeziums of different widths.
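One way to choose those widths, sketched below in Python with names of my own, is to split an interval in two whenever a single trapezium disagrees with the pair of narrower ones that would replace it, so that narrow trapeziums accumulate only where the function is changing quickly.

```python
import math

def adaptive_trapezium(f, a, b, tol=1e-6):
    """Approximate the integral of f over [a, b], splitting intervals in two
    wherever a single trapezium disagrees with the pair that replaces it."""
    m = 0.5 * (a + b)
    whole = 0.5 * (b - a) * (f(a) + f(b))                  # one wide trapezium
    halves = 0.25 * (b - a) * (f(a) + 2.0 * f(m) + f(b))   # two narrower ones
    if abs(whole - halves) <= tol:
        return halves
    return (adaptive_trapezium(f, a, m, 0.5 * tol)
            + adaptive_trapezium(f, m, b, 0.5 * tol))

# the integral of sin over [0, pi] equals 2
print(adaptive_trapezium(math.sin, 0.0, math.pi))
```

Halving the tolerance along with the interval keeps the accumulated error over all of the subintervals within the original tolerance.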

Full text...  

The Tip Of The Romberg

Last time we took a first look at the numerical approximation of integrals with the trapezium rule, a rather simplistic algorithm with the unfortunate property that its accuracy was implicitly, rather than explicitly, specified.
As a user of numerical libraries, including my own, I would much rather have an algorithm that works to a given accuracy, or at the very least tries to, than have to figure it out for myself, and I suggested that we might apply the lessons learned from our attempts to numerically approximate derivatives to do just that.
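The algorithm in question is Romberg's method, sketched here in Python under my own naming: it repeatedly halves the trapezium widths, reusing previous samples, and extrapolates successive estimates against one another to cancel the leading error terms, stopping once two extrapolated values agree to the requested tolerance.

```python
import math

def romberg(f, a, b, tol=1e-10, max_levels=20):
    """Romberg's method: halve the trapezium widths and apply Richardson
    extrapolation, stopping when successive estimates agree to tol."""
    h = b - a
    rows = [[0.5 * h * (f(a) + f(b))]]
    for i in range(1, max_levels):
        h *= 0.5
        # trapezium estimate with twice as many strips, reusing old samples
        t = 0.5 * rows[-1][0] + h * sum(f(a + (2 * k + 1) * h)
                                        for k in range(2 ** (i - 1)))
        row = [t]
        for j in range(1, i + 1):
            # each extrapolation cancels the next even power of h in the error
            row.append(row[j - 1] + (row[j - 1] - rows[-1][j - 1]) / (4 ** j - 1))
        if abs(row[-1] - rows[-1][-1]) <= tol:
            return row[-1]
        rows.append(row)
    return rows[-1][-1]

# the integral of e^x over [0, 1] equals e - 1
print(romberg(math.exp, 0.0, 1.0))
```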

Full text...  

Trapezium Artistry

We have covered in some detail the numerical approximation of differentiation but have yet to consider the inverse operation of integration and I think that it's high time that we got around to it...

Full text...