`Array` objects following the conventions of its methods, such as `slice` and `sort`. In this post we shall take a look at an algorithm for finding the centrally ranked element, or median, of an array, which is strongly related to the `ak.nthElement` function, and then at a particular use for it.
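To give a flavour of how such an algorithm might work, here is a minimal quickselect sketch that rearranges an array so that its centrally ranked element lands in place; the function names are my own for illustration and this is not the ak library's implementation.

```javascript
// Quickselect: rearrange a so that a[k] holds the element that would
// be at index k if a were fully sorted, partitioning around a pivot
// and recursing only into the side that contains index k.
function nthElement(a, k) {
  let lo = 0, hi = a.length - 1;
  while (lo < hi) {
    const pivot = a[k];
    let i = lo, j = hi;
    while (i <= j) {
      while (a[i] < pivot) ++i;
      while (a[j] > pivot) --j;
      if (i <= j) {
        const t = a[i]; a[i] = a[j]; a[j] = t;
        ++i; --j;
      }
    }
    if (j < k) lo = i;  // the k-th element lies in the upper partition
    if (k < i) hi = j;  // the k-th element lies in the lower partition
  }
  return a[k];
}

// The median of an odd-length array is its centrally ranked element.
function median(a) {
  const b = a.slice();
  return nthElement(b, (b.length - 1) >> 1);
}
```

Unlike a full sort, this leaves the elements on either side of the middle only loosely ordered, which is why it can run in linear expected time.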
Whilst I'm very much of the opinion that statistical distributions are worth describing in their own right, the chi-squared distribution plays a pivotal role in testing whether or not the categories into which a set of observations of some variable quantity fall are consistent with assumptions about the expected numbers in each category, which we shall take a look at in this post.
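The statistic at the heart of such tests is Pearson's chi-squared statistic, which sums the squared differences between the observed and expected category counts, each scaled by the expected count; a minimal sketch, with a function name of my own choosing:

```javascript
// Pearson's chi-squared statistic for observed category counts o and
// expected counts e: the sum over the categories of (o[i]-e[i])^2/e[i].
// Under the null hypothesis it approximately follows a chi-squared
// distribution whose degrees of freedom depend on the test.
function chiSquaredStat(o, e) {
  let s = 0;
  for (let i = 0; i < o.length; ++i) {
    const d = o[i] - e[i];
    s += d * d / e[i];
  }
  return s;
}
```

For example, observing 8 and 12 outcomes where 10 of each were expected yields a statistic of 0.8, to be compared against the chi-squared distribution's quantiles.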

Tangentially related is Student's t-distribution which governs the deviation of means of sets of independent

In this post we shall see how it is related to the gamma distribution and implement its various functions in terms of those of the latter.

Unfortunately all of these approaches require the step length to be fixed and specified in advance, ignoring any information that we might use to adjust it during the iteration in order to better trade off the efficiency and accuracy of the approximation. In this post we shall try to automatically modify the step lengths to yield an optimal, or at least reasonable, balance.
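One common general technique for adapting the step length, sketched here for an explicit Euler scheme for y' = f(x, y), is step doubling: compare one step of length h with two of length h/2 and shrink or grow h so that their discrepancy stays near a tolerance. The names and tolerances here are my own, and this may well differ from the approach the post develops.

```javascript
// Adaptive Euler integration of y' = f(x, y) from x0 to x1 by step
// doubling: a full step of length h is compared with two half steps
// and h is adjusted so the discrepancy stays near the tolerance tol.
function adaptiveEuler(f, x0, y0, x1, h, tol) {
  let x = x0, y = y0;
  while (x < x1) {
    h = Math.min(h, x1 - x);
    const full = y + h * f(x, y);
    const half = y + 0.5 * h * f(x, y);
    const twoHalves = half + 0.5 * h * f(x + 0.5 * h, half);
    const err = Math.abs(twoHalves - full);
    if (err <= tol) {                 // accept the step...
      x += h;
      y = twoHalves;
      if (err < 0.5 * tol) h *= 2;    // ...and grow h if it was very accurate
    } else {
      h *= 0.5;                       // reject the step and retry with half h
    }
  }
  return y;
}
```

The difference between the two estimates serves as a stand-in for the local truncation error, so the scheme takes short steps where the solution changes quickly and long ones where it doesn't.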

In this post we shall see that these are both examples of a general class of algorithms that can be accurate to still greater orders of magnitude.

Unfortunately it isn't very accurate, yielding an accumulated error proportional to the step length, and so this time we shall take a look at a way to improve it.
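One standard improvement, given here purely for illustration rather than as the scheme the post develops, is the midpoint method: an Euler half step estimates the slope at the middle of the interval, and taking the full step with that slope reduces the accumulated error from being proportional to the step length to being proportional to its square.

```javascript
// Midpoint (second order Runge-Kutta) integration of y' = f(x, y)
// over [x0, x1] in n steps: an Euler half step estimates the slope
// at each interval's midpoint, cutting the accumulated error from
// O(h) to O(h^2).
function midpointODE(f, x0, y0, x1, n) {
  const h = (x1 - x0) / n;
  let x = x0, y = y0;
  for (let i = 0; i < n; ++i) {
    const yMid = y + 0.5 * h * f(x, y);
    y += h * f(x + 0.5 * h, yMid);
    x += h;
  }
  return y;
}
```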

This was an improvement over an even more rudimentary scheme which instead approximated the area by placing rectangles spanning adjacent values, with heights equal to the values of the function at their midpoints. Whilst there really wasn't much point in implementing this, since it offers no advantage over the trapezium rule, it is a reasonable first approach to approximating the solutions to another type of problem involving calculus: ordinary differential equations, or ODEs.
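That rudimentary scheme, the midpoint rule, can be sketched in a few lines; the function name is mine for illustration.

```javascript
// Midpoint rule: approximate the integral of f over [a, b] with n
// rectangles of width h whose heights are the values of f at the
// midpoints of the intervals they span.
function midpointRule(f, a, b, n) {
  const h = (b - a) / n;
  let s = 0;
  for (let i = 0; i < n; ++i) s += f(a + (i + 0.5) * h);
  return s * h;
}
```

Like the trapezium rule, its error shrinks with the square of the rectangles' width, which is why neither dominates the other.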

Now there is nothing about such distributions, known as mixture distributions, that requires that the components are univariate. Given that copulas are simply multivariate distributions with standard uniformly distributed marginals, being the distributions of each element considered independently of the others, we can use the same technique to create new copulas too.
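As a small concrete illustration, which I should stress is my own example rather than the post's construction, any convex combination of copulas is itself a copula; here we mix the independence copula uv with the comonotonicity copula min(u, v).

```javascript
// A convex combination of copulas is itself a copula. This mixes the
// independence copula u*v, weighted by w, with the comonotonicity
// copula min(u, v), weighted by 1-w, for 0 <= w <= 1.
function mixtureCopula(w) {
  return function(u, v) {
    return w * u * v + (1 - w) * Math.min(u, v);
  };
}
```

Setting either argument to one recovers the other, confirming that the mixture still has standard uniform marginals.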

It is quite tempting, therefore, to use weighted sums of PDFs to construct new PDFs, and in this post we shall see how we can use a simple probabilistic argument to do so.
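The construction itself is straightforward: provided the weights are non-negative and sum to one, a weighted sum of PDFs integrates to one and is non-negative, and so is itself a PDF. A minimal sketch, with names of my own choosing, mixing two normal densities:

```javascript
// The PDF of a normal distribution with mean mu and standard
// deviation sigma.
function normalPDF(x, mu, sigma) {
  const z = (x - mu) / sigma;
  return Math.exp(-0.5 * z * z) / (sigma * Math.sqrt(2 * Math.PI));
}

// A weighted sum of PDFs with non-negative weights summing to one is
// itself a PDF: that of a mixture distribution.
function mixturePDF(pdfs, weights) {
  return function(x) {
    let s = 0;
    for (let i = 0; i < pdfs.length; ++i) s += weights[i] * pdfs[i](x);
    return s;
  };
}
```

An equal-weight mixture of normals centred at minus one and plus one, for instance, has a symmetric, bimodal density.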

Unfortunately for large numbers of dimensions the calculation of the approximation will still be relatively expensive, and it will require a significant amount of memory to store, and so in this post we shall take a look at an algorithm that only uses the vector of first partial derivatives.
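The simplest such gradient-only scheme, given here purely as a point of reference and not as the algorithm the post describes, is steepest descent, which repeatedly steps downhill along the negative gradient.

```javascript
// Steepest descent: the simplest minimisation scheme that needs only
// the vector of first partial derivatives, stepping downhill along
// the negative of the gradient grad(x) with a fixed rate.
function gradientDescent(grad, x, rate, steps) {
  x = x.slice();
  for (let s = 0; s < steps; ++s) {
    const g = grad(x);
    for (let i = 0; i < x.length; ++i) x[i] -= rate * g[i];
  }
  return x;
}
```

Its cost per step is linear in the number of dimensions, in contrast to the quadratic cost of building and storing a matrix of second derivatives.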

Now that we've got the theoretical details out of the way it's time to get on with the implementation.

Before we take a look at them, however, we'll need a way to step toward minima in such directions, known as a line search, and in this post we shall see how we might reasonably do so.
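One reasonable approach, sketched here as an illustration rather than as the post's method, is a backtracking line search: start with a unit step along the search direction and repeatedly halve it until the Armijo condition certifies a sufficient decrease in the function.

```javascript
// Backtracking line search: starting from a unit step along the
// direction d, repeatedly halve the step length t until the Armijo
// condition
//   f(x + t*d) <= f(x) + c*t*(grad f(x) . d)
// certifies a sufficient decrease; c in (0, 1) controls how much
// decrease is demanded. The halvings are capped to guard against a
// non-descent direction.
function lineSearch(f, grad, x, d, c) {
  const fx = f(x);
  const g = grad(x);
  let slope = 0;
  for (let i = 0; i < x.length; ++i) slope += g[i] * d[i];
  let t = 1;
  for (let k = 0; k < 50; ++k) {
    const step = x.map((xi, i) => xi + t * d[i]);
    if (f(step) <= fx + c * t * slope) break;
    t *= 0.5;
  }
  return t;
}
```

For example, minimising f(x) = x² from x = 2 along the direction −8, the unit step overshoots badly and is halved twice before the condition is satisfied.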