Recently in Linear Algebra Category

A Well Managed Household

Over the last few months we have seen how we can use a sequence of Householder transformations followed by a sequence of shifted Givens rotations to efficiently find the spectral decomposition of a symmetric real matrix M, formed from a matrix V and a diagonal matrix Λ satisfying

    M × V = V × Λ

implying that the columns of V are the unit eigenvectors of M and that their associated elements on the diagonal of Λ are their eigenvalues, so that

    V × VT = I

where I is the identity matrix, and therefore

    M = V × Λ × VT

From a mathematical perspective the combination of Householder transformations and shifted Givens rotations is particularly appealing, converging on the spectral decomposition after relatively few matrix multiplications. From an implementation perspective, however, using ak.matrix multiplication operations is less than satisfactory since it wastefully creates new ak.matrix objects at each step, and so in this post we shall start to see how we can do better.
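
As a concrete check of these relations, the following sketch verifies them for a two by two symmetric matrix whose eigensystem is known exactly, using plain nested arrays and hand rolled helpers rather than the ak.matrix class.

```javascript
// Verify M × V = V × Λ and M = V × Λ × V^T for M = [[2,1],[1,2]], whose
// unit eigenvectors are (1,1)/√2 and (1,-1)/√2 with eigenvalues 3 and 1.
function mul(a, b) {
  return a.map(row =>
    b[0].map((_, j) => row.reduce((s, x, k) => s + x * b[k][j], 0)));
}
function transpose(a) {
  return a[0].map((_, j) => a.map(row => row[j]));
}

const r = Math.SQRT1_2;
const M = [[2, 1], [1, 2]];
const V = [[r, r], [r, -r]];              // columns are the unit eigenvectors
const L = [[3, 0], [0, 1]];               // Λ: the eigenvalues on the diagonal

const MV = mul(M, V);                     // M × V
const VL = mul(V, L);                     // V × Λ, equal to M × V
const R  = mul(mul(V, L), transpose(V));  // V × Λ × V^T, recovering M
```

Note that these helpers deliberately create a new array at every step, which is precisely the sort of waste that the post sets out to eliminate.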

Full text...  

Spryer Francis

Last time we saw how we could use a sequence of Householder transformations to reduce a symmetric real matrix M to a symmetric tridiagonal matrix, having zeros everywhere other than upon the leading, upper and lower diagonals. We could then further reduce this to a diagonal matrix Λ using a sequence of Givens rotations to iteratively transform the elements upon the upper and lower diagonals to zero, so that the columns of the accumulated transformations V were the unit eigenvectors of M and the elements on the leading diagonal of the result were their associated eigenvalues, satisfying

    M × V = V × Λ

and, since the transpose of V is its own inverse

    M = V × Λ × VT

which is known as the spectral decomposition of M.
Unfortunately, the way that we used Givens rotations to diagonalise tridiagonal symmetric matrices wasn't particularly efficient and I concluded by stating that it could be significantly improved with a relatively minor change. In this post we shall see what it is and why it works.
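
The basic Givens step can be illustrated in isolation: a rotation is chosen so as to annihilate the second element of a pair of values, which is the building block used to drive the upper and lower diagonal elements of the tridiagonal matrix towards zero. A minimal sketch in plain JavaScript, not the blog's implementation:

```javascript
// Choose c and s so that the rotation [[c, s], [-s, c]] maps (a, b) onto
// (r, 0) where r = √(a² + b²), zeroing the second element.
function givens(a, b) {
  const r = Math.hypot(a, b);
  return r === 0 ? { c: 1, s: 0, r: 0 } : { c: a / r, s: b / r, r: r };
}

const { c, s, r } = givens(3, 4);
const top    =  c * 3 + s * 4;   // rotated first element, equal to r
const bottom = -s * 3 + c * 4;   // rotated second element, driven to zero
```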

Full text...  

FAO The Householder

Some years ago we saw how we could use the Jacobi algorithm to find the eigensystem of a real valued symmetric matrix M, which is defined as the set of pairs of non-zero vectors vi and scalars λi that satisfy

    M × vi = λi × vi

known as the eigenvectors and the eigenvalues respectively, with the vectors typically restricted to those of unit length, in which case we can define its spectral decomposition as the product

    M = V × Λ × VT

where the columns of V are the unit eigenvectors, Λ is a diagonal matrix whose ith diagonal element is the eigenvalue associated with the ith column of V and the T superscript denotes the transpose, in which the rows and columns of the matrix are swapped.
You may recall that this is a particularly convenient representation of the matrix since we can use it to generalise any scalar function to the matrix with

    f(M) = V × f(Λ) × VT

where f(Λ) is the diagonal matrix whose ith diagonal element is the result of applying f to the ith diagonal element of Λ.
You may also recall that I suggested that there's a more efficient way to find eigensystems and I think that it's high time that we took a look at it.
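
The following sketch puts this to use, computing the square root of a two by two symmetric matrix from its known eigensystem and checking that the result squares back to the original; the helpers use plain nested arrays rather than the ak.matrix class.

```javascript
// f(M) = V × f(Λ) × V^T applied with f = Math.sqrt to M = [[2,1],[1,2]],
// whose unit eigenvectors are (1,1)/√2 and (1,-1)/√2 with eigenvalues 3 and 1.
function mul(a, b) {
  return a.map(row =>
    b[0].map((_, j) => row.reduce((s, x, k) => s + x * b[k][j], 0)));
}
function transpose(a) {
  return a[0].map((_, j) => a.map(row => row[j]));
}
function fM(V, eigenvalues, f) {
  const fL = eigenvalues.map((l, i) =>
    eigenvalues.map((_, j) => (i === j ? f(l) : 0)));  // the diagonal f(Λ)
  return mul(mul(V, fL), transpose(V));
}

const r = Math.SQRT1_2;
const V = [[r, r], [r, -r]];          // unit eigenvectors as columns
const S = fM(V, [3, 1], Math.sqrt);   // the square root of [[2,1],[1,2]]
const S2 = mul(S, S);                 // should recover [[2,1],[1,2]]
```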

Full text...  

You're A Complex Fellow, Mr Cholesky

For a complex matrix M, its transpose MT is the matrix formed by swapping its rows with its columns and its conjugate M* is the matrix formed by negating the imaginary parts of its elements.
A few posts ago I suggested that the conjugate of the transpose, known as the adjoint MH, was sufficiently important to be given the overloaded arithmetic operator ak.adjoint. The reason for this is that many of the useful properties of the transpose of a real matrix are also true of the adjoint of a complex matrix, not least of which is that complex matrices that are equal to their adjoints, known as Hermitian matrices, behave in many ways like symmetric real matrices.
In this post we shall take a look at some of them.
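
For illustration, here is a sketch of the adjoint using simple objects with re and im properties standing in for ak.complex values, together with a check that a Hermitian matrix equals its own adjoint.

```javascript
// The adjoint M^H: transpose the matrix and conjugate each element.
function adjoint(m) {
  return m[0].map((_, j) =>
    m.map(row => ({ re: row[j].re, im: -row[j].im })));
}

// A Hermitian matrix has real diagonal elements and off-diagonal elements
// that are the conjugates of their transposed counterparts.
const H = [
  [{ re: 1, im:  0 }, { re: 2, im: 3 }],
  [{ re: 2, im: -3 }, { re: 4, im: 0 }]
];
const HH = adjoint(H);   // equal to H, element by element
```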

Full text...  

You've Got Class, Mr Cholesky!

Last time we took a first look at the Cholesky decomposition, being the unique way to represent a positive definite symmetric square matrix M as the product of a lower triangular matrix L having strictly positive elements on its leading diagonal, and by definition zeroes everywhere above it, with its upper triangular transpose LT, formed by switching the row and column indices of each element so that LTij = Lji

    M = L × LT

Finally we noted that, like the Jacobi decomposition, the Cholesky decomposition is useful for solving simultaneous equations, inverting matrices and so on, but left the implementation details to this post.
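
As a preview of what such an implementation involves, the following is a minimal sketch of the Cholesky-Banachiewicz recurrence in plain JavaScript, and should not be taken for the class described in the post.

```javascript
// Compute the lower triangular L with M = L × L^T for a positive definite
// symmetric M: each element depends only on those above and to its left.
function cholesky(m) {
  const n = m.length;
  const l = Array.from({ length: n }, () => new Array(n).fill(0));
  for (let i = 0; i < n; ++i) {
    for (let j = 0; j <= i; ++j) {
      let s = m[i][j];
      for (let k = 0; k < j; ++k) s -= l[i][k] * l[j][k];
      l[i][j] = i === j ? Math.sqrt(s) : s / l[j][j];
    }
  }
  return l;
}

const M = [[4, 2], [2, 5]];
const L = cholesky(M);   // [[2, 0], [1, 2]], and L × L^T recovers M
```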

Full text...  

Are You Definitely Positive, Mr Cholesky?

Some time ago, we took a look at the Jacobi decomposition, which is used to find the eigensystems of real symmetric matrices; that is, for a matrix M, those vectors vi and numbers λi that satisfy

    M × vi = λi × vi

This representation of a matrix turned out to have some tremendously useful consequences, such as dramatically simplifying both the solution of simultaneous linear equations and the calculation of functions of matrices like the inverse and, crucially in the context of this post, the square root.
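
For a two by two symmetric matrix a single Jacobi rotation diagonalises it exactly, which makes for a compact illustration of the eigensystem equation; this is a sketch of the idea rather than the implementation covered in the post.

```javascript
// Diagonalise [[a, b], [b, d]] with one rotation by θ = ½ atan2(2b, a-d);
// the rotation's columns are the unit eigenvectors and the transformed
// diagonal elements are the eigenvalues.
function jacobi2x2(a, b, d) {
  const theta = 0.5 * Math.atan2(2 * b, a - d);
  const c = Math.cos(theta), s = Math.sin(theta);
  return {
    v1: [c, s],  l1: c * c * a + 2 * c * s * b + s * s * d,
    v2: [-s, c], l2: s * s * a - 2 * c * s * b + c * c * d
  };
}

const { v1, l1, v2, l2 } = jacobi2x2(2, 1, 2);  // M = [[2, 1], [1, 2]]
// residuals of M × v - λ × v, which should vanish for each pair
const res1 = [2 * v1[0] + v1[1] - l1 * v1[0], v1[0] + 2 * v1[1] - l1 * v1[1]];
const res2 = [2 * v2[0] + v2[1] - l2 * v2[0], v2[0] + 2 * v2[1] - l2 * v2[1]];
```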

Full text...  

The Matrix Isn't Real

Last time we saw that, because complex numbers support all of the familiar arithmetic operations, it was possible to define arithmetic operations for vectors having complex rather than real elements, although for the sake of one's sanity it's probably best not to think too much about the directions in which they point.
Given that our implementation of them with the ak.complexVector class simply required us to use our own overloaded arithmetic operators rather than JavaScript's native ones, it's not unreasonable to expect that doing the same thing for matrices should be no more difficult.
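
The idea can be sketched with plain objects having re and im properties standing in for ak.complex values: once scalar addition and multiplication are swapped for their complex counterparts, the usual matrix product falls out unchanged.

```javascript
const cadd = (x, y) => ({ re: x.re + y.re, im: x.im + y.im });
const cmul = (x, y) => ({ re: x.re * y.re - x.im * y.im,
                          im: x.re * y.im + x.im * y.re });

// The ordinary row by column product with complex element operations.
function matmul(a, b) {
  return a.map(row =>
    b[0].map((_, j) =>
      row.reduce((s, x, k) => cadd(s, cmul(x, b[k][j])), { re: 0, im: 0 })));
}

// (i × I) × (i × I) = -I, the matrix analogue of i² = -1
const iI = [[{ re: 0, im: 1 }, { re: 0, im: 0 }],
            [{ re: 0, im: 0 }, { re: 0, im: 1 }]];
const P = matmul(iI, iI);   // diagonal elements equal -1
```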

Full text...  

What Are You Pointing At?

One of the nice properties of complex numbers is that all of the arithmetic operations that we are familiar with more or less behave as expected when applied to them, as we demonstrated with our ak.complex class.
We have also seen that whilst vectors are simply quantities having both direction and length, or magnitude, the easiest way to manipulate them is to define them as arrays of coordinates, or elements, and their arithmetic operations in terms of arithmetic operations upon those elements.
So what's to stop us using complex numbers for their elements?

Nothing but a complete lack of imagination!
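
A minimal sketch of the idea, using plain objects with re and im properties in place of ak.complex values rather than the blog's ak.complexVector class:

```javascript
// Elementwise addition of complex vectors and a real valued magnitude,
// the square root of the sum of the elements' squared moduli.
const cadd = (x, y) => ({ re: x.re + y.re, im: x.im + y.im });
function vadd(u, v) { return u.map((x, i) => cadd(x, v[i])); }
function vabs(u) {
  return Math.sqrt(u.reduce((s, x) => s + x.re * x.re + x.im * x.im, 0));
}

const u = [{ re: 1, im: 2 }, { re: 3, im: -1 }];
const v = [{ re: 2, im: -2 }, { re: 0, im: 1 }];
const w = vadd(u, v);   // [{re: 3, im: 0}, {re: 3, im: 0}]
const m = vabs(u);      // √(1 + 4 + 9 + 1) = √15
```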

Full text...  

If The Photo Fits...

In the last post I described how we might apply principal component analysis to facial images to create the rather spooky looking eigenfaces that capture the most significant differences between them and their somewhat surprisingly attractive averages.

We got as far as implementing helper functions to load bitmaps into ImageData objects and convert them to intensity matrices, deferring the implementation of the algorithm to actually generate eigenfaces, and a supposed application using them, to this post.
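
The intensity conversion can be sketched as follows, averaging the red, green and blue channels of ImageData's flat RGBA array into a row-major matrix of values between zero and one; the post's helper may well weight the channels differently.

```javascript
// ImageData stores pixels as consecutive RGBA bytes, row by row.
function toIntensity(data, width, height) {
  const m = [];
  for (let i = 0; i < height; ++i) {
    const row = [];
    for (let j = 0; j < width; ++j) {
      const k = 4 * (i * width + j);
      row.push((data[k] + data[k + 1] + data[k + 2]) / (3 * 255));
    }
    m.push(row);
  }
  return m;
}

// a one by two image: a black pixel and a white pixel, both fully opaque
const pixels = new Uint8ClampedArray([0, 0, 0, 255, 255, 255, 255, 255]);
const I = toIntensity(pixels, 2, 1);   // [[0, 1]]
```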

Full text...  

Who Are You Calling Average?

The Victorians, for all of their advances in science and engineering, had some pretty peculiar beliefs, one of which was phrenology: the notion that you could accurately determine a person's psychological makeup from the shape of their skull. That this is utterly misguided may seem obvious these days, but keep in mind that we've had over a century in which to figure that out.
That said, this obsession with correlating physiological features with psychological ones did result in a rather interesting observation...

Full text...