The Spectral Apparition


Over the last few months we have seen how we can efficiently implement the Householder transformations[1] and shifted Givens rotations[2] used by Francis's algorithm[3][4] to diagonalise a real symmetric matrix \(\mathbf{M}\), yielding its eigensystem in a matrix \(\mathbf{V}\) whose columns are its eigenvectors and a diagonal matrix \(\boldsymbol{\Lambda}\) whose diagonal elements are their associated eigenvalues, which satisfy
\[ \mathbf{M} = \mathbf{V} \times \boldsymbol{\Lambda} \times \mathbf{V}^\mathrm{T} \]
and together are known as the spectral decomposition of \(\mathbf{M}\).
In this post, we shall add it to the ak library using the householder[5] and givens[6] functions that we have put so much effort into optimising.
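To make the identity concrete, here is a minimal standalone sketch, using plain arrays rather than the ak library's types, that recomposes the two by two symmetric matrix \(\begin{pmatrix}2&1\\1&2\end{pmatrix}\) from its hand-worked eigenvectors and eigenvalues.

```javascript
// Hand-worked spectral decomposition of M = [[2,1],[1,2]]: eigenvalues 1 and 3
// with normalised eigenvectors [1,-1]/sqrt(2) and [1,1]/sqrt(2).
var r = Math.sqrt(0.5);
var v = [[r, r], [-r, r]]; // columns are the eigenvectors
var l = [1, 3];            // their associated eigenvalues

// recompose m[i][j] as the sum over k of v[i][k]*l[k]*v[j][k], i.e. V*L*V^T
var m = [[0, 0], [0, 0]], i, j, k;
for(i=0;i<2;++i) {
 for(j=0;j<2;++j) {
  for(k=0;k<2;++k) m[i][j] += v[i][k]*l[k]*v[j][k];
 }
}
console.log(m); // numerically close to [[2,1],[1,2]]
```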

Catching Up With Old Friends

Listing 1 shows the implementation of the householder function, with the minor modification from the original of taking the arrays q, d0 and d1, together with their length n, as arguments rather than initialising them in the body of the function and returning them to the caller at the end.

Listing 1: The householder Function
function householder(q, d0, d1, n) {
 var c, s, r, d, p, rr, cc, z, qc, qrr, q0;

 for(c=0;c<n-2;++c) {
  z = 0;
  qc = q[c];
  for(r=c+1;r<n;++r) z += Math.abs(qc[r]);

  if(z>0) {
   s = 0;
   for(r=c+1;r<n;++r) {
    qc[r] = q[r][c] /= z;
    s += qc[r]*qc[r];
   }
   s = Math.sqrt(s);
   if(qc[c+1]>0) s = -s;
   d = Math.sqrt(2*s*(s-qc[c+1]));
   qc[c+1] -= s;
   for(cc=c+1;cc<n;++cc) qc[cc] /= d;

   p = 0;
   for(cc=c+1;cc<n;++cc) p += q[cc][c]*qc[cc];
   q[c+1][c] -= 2*qc[c+1]*p;

   for(cc=c+1;cc<n;++cc) {
    p = 0;
    for(rr=c+1;rr<n;++rr) p += qc[rr]*q[rr][cc];
    for(rr=c+1;rr<n;++rr) q[rr][cc] -= 2*qc[rr]*p;
   }

   for(rr=c+1;rr<n;++rr) {
    qrr = q[rr];
    p = 0;
    for(cc=c+1;cc<n;++cc) p += qrr[cc]*qc[cc];
    for(cc=c+1;cc<n;++cc) qrr[cc] -= 2*qc[cc]*p;
   }
  }
  else {
   z += 1;
  }

  d0[c] = qc[c];
  d1[c] = q[c+1][c]*z;
 }

 d0[n-2] = q[n-2][n-2];
 d0[n-1] = q[n-1][n-1];
 d1[n-2] = q[n-1][n-2];

 for(cc=0;cc<n-1;++cc) q[n-1][cc] = 0;
 q[n-1][n-1] = 1;

 for(c=n-3;c>=0;--c) {
  for(cc=0;cc<n;++cc) q[c+1][cc] = 0;
  q[c+1][c+1] = 1;

  for(cc=c+1;cc<n;++cc) q[cc][c+1] -= 2*q[c][cc]*q[c][c+1];

  for(rr=c+2;rr<n;++rr) {
   p = 0;
   for(cc=c+1;cc<n;++cc) p += q[cc][rr]*q[c][cc];
   for(cc=c+1;cc<n;++cc) q[cc][rr] -= 2*q[c][cc]*p;
  }
 }

 q0 = q[0];
 for(cc=1;cc<n;++cc) q0[cc] = 0;
 q0[0] = 1;
}

Similarly, the givens function has been modified to take the length of the arrays as an argument and the convergence error message has been changed to refer to the ak.spectralDecomposition type that we shall be using it to help implement, as shown by listing 2.

Listing 2: The givens Function
function givens(q, d0, d1, o, n, e, steps) {
 var step = 0;
 var nn, c, r, r0, r1, qr, qr0, qr1, d01, d02, d12, d, x, z, h;
 var s0, c0, s0s0, c0c0, c0s0;
 var m00, m10, m11, m21, m22, m23, mcs;

 while(o<n-2 && Math.abs(d1[o])<=e*(1+Math.abs(d0[o])+Math.abs(d0[o+1]))) ++o;
 c = o;
 for(nn=o+1;nn<n-1 && Math.abs(d1[nn])>e*(1+Math.abs(d0[nn])+Math.abs(d0[nn+1]));++nn);
 while(c!==o || Math.abs(d1[nn-2])>e*(1+Math.abs(d0[nn-2])+Math.abs(d0[nn-1]))) {
  if(step++===steps) throw new Error('failure to converge in ak.spectralDecomposition');

  if(c===o) {
   d01 = d0[nn-1]; d02 = d0[nn-2]; d12 = d1[nn-2];
   d = (d02-d01)/2;
   x = d>0 ? d0[c] - d01 + d12*d12/(d + ak.hypot(d, d12))
           : d0[c] - d01 + d12*d12/(d - ak.hypot(d, d12));
   z = d1[c];
  }
  else {
   x = d1[c-1];
  }

  h = ak.hypot(x, z);
  s0 = -z/h; s0s0 = s0*s0;
  c0 =  x/h; c0c0 = c0*c0;
  c0s0 = c0*s0;

  if(c===o) {
   m00 = d0[c]; m10 = d1[c]; m11 = d0[c+1];
   mcs = 2*m10*c0s0;
   d0[c]   = m00*c0c0 + m11*s0s0 - mcs;
   d0[c+1] = m00*s0s0 + m11*c0c0 + mcs;
   d1[c]   = (m00-m11)*c0s0 + m10*(c0c0-s0s0);
   if(c<nn-2) {
    m21 = d1[c+1];
    d1[c+1] = m21*c0;
    z = -m21*s0;
   }
  }
  else {
   m10 = d1[c-1]; m11 = d0[c];
   m21 = d1[c];   m22 = d0[c+1];
   mcs = 2*m21*c0s0;
   d0[c]   = m11*c0c0 + m22*s0s0 - mcs;
   d0[c+1] = m11*s0s0 + m22*c0c0 + mcs;
   d1[c-1] = m10*c0 - z*s0;
   d1[c]   = (m11-m22)*c0s0 + m21*(c0c0-s0s0);
   if(c<nn-2) {
    m23 = d1[c+1];
    d1[c+1] = m23*c0;
    z = -m23*s0;
   }
  }

  for(r=0;r<n;++r) {
   qr = q[r]; qr0 = qr[c]; qr1 = qr[c+1];
   qr[c]   = qr0*c0 - qr1*s0;
   qr[c+1] = qr0*s0 + qr1*c0;
  }
  if(++c===nn-1) c = o;
 }

 if(isNaN(d1[nn-2])) d0[nn-1] = d0[nn-2] = ak.NaN;
 if(isNaN(d0[nn-1]) || isNaN(d0[nn-2])) return ak.NaN;
 return o;
}

Unfortunately, the givens function doesn't work for two by two matrices and so we shall have to fall back on the Jacobi algorithm that we originally used to find eigensystems with our ak.jacobiDecomposition type[7]. Since we only have to worry about setting a single off diagonal element to zero we can dispense with a lot of the bookkeeping and simply use the single Givens rotation from the Jacobi algorithm[8] given by
\[ \begin{align*} \tan \theta &= \begin{cases} \dfrac{2m_{01}}{\left(m_{11} - m_{00}\right) + \sqrt[+]{\left(m_{11} - m_{00}\right)^2+4m_{01}^2}} & \quad m_{11} \geqslant m_{00}\\ \dfrac{2m_{01}}{\left(m_{11} - m_{00}\right) + \sqrt[-]{\left(m_{11} - m_{00}\right)^2+4m_{01}^2}} & \quad m_{11} < m_{00} \end{cases}\\ \cos \theta &= \frac{1}{\sqrt{\tan^2\theta+1}}\\ \sin \theta &= \cos \theta \times \tan \theta\\ \mathbf{G} &= \begin{pmatrix} \cos\theta & \sin\theta\\ -\sin\theta & \cos\theta \end{pmatrix} \end{align*} \]
to yield a transformed matrix
\[ \mathbf{M}^\prime = \mathbf{G}^\mathrm{T} \times \mathbf{M} \times \mathbf{G} \]
with the elements
\[ \begin{align*} m^\prime_{00} &= m_{00} \times \cos^2 \theta + m_{11} \times \sin^2\theta - 2 \times m_{01} \times \cos\theta \times \sin\theta\\ m^\prime_{11} &= m_{00} \times \sin^2 \theta + m_{11} \times \cos^2\theta + 2 \times m_{01} \times \cos\theta \times \sin\theta\\ m^\prime_{01} &= m^\prime_{10} = 0 \end{align*} \]
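As a sanity check on these formulae, the following standalone snippet, which uses Math.hypot in place of the library's ak.hypot, applies the rotation to an arbitrarily chosen two by two symmetric matrix and confirms that the transformed off diagonal element vanishes.

```javascript
var m00 = 1, m01 = 2, m11 = 4;

// the tangent of the rotation angle, taking the positive root since m11 >= m00
var t0 = 2*m01/((m11-m00) + Math.hypot(m11-m00, 2*m01));
var c0 = 1/Math.sqrt(t0*t0+1);
var s0 = c0*t0;

// the off diagonal element of G^T * M * G
var m01p = (m00-m11)*c0*s0 + m01*(c0*c0-s0*s0);
console.log(m01p); // numerically close to zero
```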
The jacobi function defined in listing 3 applies this transformation if the off diagonal element is not too small, using the same comparison to the diagonal elements as the givens function, otherwise simply copying its diagonal elements into d0 and d1 and setting q to the identity matrix.

Listing 3: The jacobi Function
function jacobi(q, e) {
 var q00 = q[0][0];
 var q01 = q[0][1]+q[1][0];
 var q11 = q[1][1];
 var t0, c0, s0, c0c0, s0s0, c0s0, qcs, d0, d1;

 if(Math.abs(q01)>2*e*(1+Math.abs(q00)+Math.abs(q11))) {
  t0 = (q11>=q00) ? q01 / ((q11-q00) + ak.hypot(q11-q00, q01))
                  : q01 / ((q11-q00) - ak.hypot(q11-q00, q01));
  c0 = 1/ak.hypot(t0, 1);
  s0 = c0*t0;
  c0c0 = c0*c0;
  s0s0 = s0*s0;
  c0s0 = c0*s0;
  qcs = q01*c0s0;

  d0 = q00*c0c0 + q11*s0s0 - qcs;
  d1 = q00*s0s0 + q11*c0c0 + qcs;
 }
 else {
  c0 = 1;   s0 = 0;
  d0 = q00; d1 = q11;
 }

 q[0][0] =  c0; q[0][1] = s0;
 q[1][0] = -s0; q[1][1] = c0;

 return {v:ak.matrix(q), l:ak.vector([d0, d1])};
}

Note that q01 is the sum of the upper and lower off diagonal elements and so, by dropping the factors of two that we would have multiplied it by, we are effectively working with their average, or in other words the off diagonal element of the symmetric matrix that is the average of q and its transpose. Furthermore, we're representing \(\mathbf{V}\) with an ak.matrix object and \(\boldsymbol{\Lambda}\) with a vector containing its diagonal elements in an ak.vector object.

The fromMatrix function given in listing 4 switches between Jacobi's and Francis's algorithms depending on the size of the matrix, doing nothing but copy its diagonal elements if it is zero or one dimensional since in those cases it is already trivially diagonal.

Listing 4: The fromMatrix Function
function fromMatrix(m, e, steps) {
 var n = m.rows();
 var s, o;

 if(m.cols()!==n) throw new Error('non-square matrix in ak.spectralDecomposition');

 if(n<2) return {v:ak.matrix('identity', n), l:ak.vector(n, m.at(0, 0))};
 if(n<3) return jacobi(m.toArray(), e);

 s = initialState(m, n);
 o = 0;
 householder(s.q, s.d0, s.d1, n);
 while(o<n-2) o = givens(s.q, s.d0, s.d1, o, n, e, steps);
 return {v: ak.matrix(s.q), l: ak.vector(s.d0)};
}

The initialState function is defined in listing 5 and stores the average of the matrix \(\mathbf{M}\) and its transpose as an array in the q member of its result, together with arrays d0 and d1 for the householder and givens functions to work with.

Listing 5: The initialState Function
function initialState(m, n) {
 var r, c, mr;

 m = m.toArray();
 for(r=0;r<n;++r) {
  mr = m[r];
  for(c=0;c<r;++c) mr[c] = m[c][r] = 0.5*(mr[c]+m[c][r]);
 }
 return {q: m, d0: new Array(n), d1: new Array(n-1)};
}

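The symmetrisation performed by the inner loop can be seen in isolation in this small standalone sketch, which averages the off diagonal pairs of a plain array representation of a matrix in place.

```javascript
// average each pair of off diagonal elements in place
var m = [[1, 2, 3],
         [4, 5, 6],
         [7, 8, 9]];
var n = 3, r, c, mr;

for(r=0;r<n;++r) {
 mr = m[r];
 for(c=0;c<r;++c) mr[c] = m[c][r] = 0.5*(mr[c]+m[c][r]);
}
console.log(m); // [[1,3,5],[3,5,7],[5,7,9]]
```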

We shall represent spectral decompositions with the ak.spectralDecomposition type defined in listing 6.

Listing 6: The ak.spectralDecomposition Type
ak.SPECTRAL_DECOMPOSITION_T = 'ak.spectralDecomposition';

function SpectralDecomposition(){}
SpectralDecomposition.prototype = {
 TYPE: ak.SPECTRAL_DECOMPOSITION_T,
 valueOf: function(){return ak.NaN;}
};

ak.spectralDecomposition = function() {
 var sd    = new SpectralDecomposition();
 var state = {v:undefined, l:undefined};
 var arg0  = arguments[0];

 constructors[ak.type(arg0)](state, arg0, arguments);

 sd.v = function() {return state.v;};
 sd.lambda = function() {return state.l;};

 sd.toMatrix = function() {
  return toMatrix(state.v.toArray(), state.l.toArray());
 };

 sd.toString = function() {
  return '{v:'+state.v.toString()+',lambda:'+state.l.toString()+'}';
 };

 sd.toExponential = function(d) {
  return '{v:'+state.v.toExponential(d)+',lambda:'+state.l.toExponential(d)+'}';
 };

 sd.toFixed = function(d) {
  return '{v:'+state.v.toFixed(d)+',lambda:'+state.l.toFixed(d)+'}';
 };

 sd.toPrecision = function(d) {
  return '{v:'+state.v.toPrecision(d)+',lambda:'+state.l.toPrecision(d)+'}';
 };

 return Object.freeze(sd);
};

var constructors = {};

This follows our usual convention of first defining a type id ak.SPECTRAL_DECOMPOSITION_T and a class prototype SpectralDecomposition that uses it for its TYPE property, and then defining the ak.spectralDecomposition function to create the decomposition. After initialising the state object by dispatching to the constructors object, this adds the property access methods v and lambda, the toMatrix method to recompose the matrix and the standard string conversion methods.
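The dispatch itself is easy to miss if you haven't seen the convention before, so here is a stripped down sketch of it using typeof as a stand-in for ak.type; the type ids, state members and default value are purely illustrative.

```javascript
var NUMBER_T = 'number', UNDEFINED_T = 'undefined';

var constructors = {};

// dispatch to an overload chosen by the type of the second argument
constructors[NUMBER_T] = function(state, x, args) {
 constructors[NUMBER_T][typeof args[1]](state, x, args[1]);
};
constructors[NUMBER_T][NUMBER_T] = function(state, x, e) {
 state.x = x; state.e = e;
};
constructors[NUMBER_T][UNDEFINED_T] = function(state, x) {
 state.x = x; state.e = 1e-6; // an illustrative default
};

var state = {};
constructors[typeof 2](state, 2, [2]); // no second argument, so the default applies
console.log(state); // {x: 2, e: 1e-6}
```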

Listing 7 shows the constructors that decompose an ak.matrix object with optional second and third arguments for the convergence threshold and the maximum number of steps, defaulting to the double precision epsilon raised to the power of three quarters and to thirty respectively.

Listing 7: The Decomposition Constructors
constructors[ak.MATRIX_T] = function(state, m, args) {
 var arg1 = args[1];
 if(m.cols()!==m.rows()) {
  throw new Error('non-square matrix in ak.spectralDecomposition');
 }
 constructors[ak.MATRIX_T][ak.type(arg1)](state, m, arg1, args);
};

constructors[ak.MATRIX_T][ak.NUMBER_T] = function(state, m, e, args) {
 var arg2 = args[2];
 e = Math.abs(e);
 if(isNaN(e)) {
  throw new Error('non-numeric threshold in ak.spectralDecomposition');
 }
 constructors[ak.MATRIX_T][ak.NUMBER_T][ak.type(arg2)](state, m, e, arg2);
};

constructors[ak.MATRIX_T][ak.NUMBER_T][ak.NUMBER_T] = function(state, m, e, steps) {
 var s;

 steps = ak.floor(Math.abs(steps));
 if(isNaN(steps)) {
  throw new Error('non-numeric steps in ak.spectralDecomposition');
 }

 s = fromMatrix(m, e, steps);
 state.v = s.v;
 state.l = s.l;
};

constructors[ak.MATRIX_T][ak.NUMBER_T][ak.UNDEFINED_T] = function(state, m, e) {
 var s = fromMatrix(m, e, 30);
 state.v = s.v;
 state.l = s.l;
};

constructors[ak.MATRIX_T][ak.UNDEFINED_T] = function(state, m) {
 var s = fromMatrix(m, Math.pow(ak.EPSILON, 0.75), 30);
 state.v = s.v;
 state.l = s.l;
};

We can also directly initialise the decomposition with a matrix of eigenvectors and a vector of eigenvalues or with an object that has properties or methods v and lambda that contain or return them, as shown by listing 8.

Listing 8: The Direct Constructors
constructors[ak.MATRIX_T][ak.VECTOR_T] = function(state, v, l) {
 if(v.rows()!==v.cols()) {
  throw new Error('invalid v in ak.spectralDecomposition');
 }

 if(l.dims()!==v.rows()) {
  throw new Error('invalid lambda in ak.spectralDecomposition');
 }

 state.v = ak.matrix(v);
 state.l = ak.vector(l);
};

constructors[ak.OBJECT_T] = function(state, obj) {
 var v = obj.v;
 var l = obj.lambda;

 if(ak.nativeType(v)===ak.FUNCTION_T) v = v();
 if(ak.nativeType(l)===ak.FUNCTION_T) l = l();

 if(ak.type(v)!==ak.MATRIX_T || v.rows()!==v.cols()) {
  throw new Error('invalid v in ak.spectralDecomposition');
 }

 if(ak.type(l)!==ak.VECTOR_T || l.dims()!==v.rows()) {
  throw new Error('invalid lambda in ak.spectralDecomposition');
 }

 state.v = ak.matrix(v);
 state.l = ak.vector(l);
};

constructors[ak.SPECTRAL_DECOMPOSITION_T] = constructors[ak.OBJECT_T];

Since the ak.spectralDecomposition type is an example of the latter we simply direct it to the same constructor.

To demonstrate the use of ak.spectralDecomposition, program 1 applies it to the matrix
\[ \begin{pmatrix} 1 & 2 & 3 & 4\\ 2 & 3 & 1 & 4\\ 3 & 1 & 1 & -2\\ 4 & 4 & -2 & 3 \end{pmatrix} \]
first printing the decomposition and then the results of both pre-multiplying the eigenvectors by the matrix and multiplying them by their associated eigenvalues, which should be equal.

Program 1: Constructing The Eigensystem

One Last Refactoring

In the implementation of ak.jacobiDecomposition we recomposed the matrix with the helper function toMatrix given in listing 9.

Listing 9: The Original toMatrix
function toMatrix(v, l) {
 var n = v.length;
 var m = new Array(n);
 var i, j, k, vi, vj, s;

 for(i=0;i<n;++i) m[i] = new Array(n);

 for(i=0;i<n;++i) {
  vi = v[i];

  for(j=i;j<n;++j) {
   vj = v[j];

   s = 0;
   for(k=0;k<n;++k) s += vi[k]*l[k]*vj[k];
   m[i][j] = m[j][i] = s;
  }
 }
 return ak.matrix(m);
}

Given that we've been much more cautious about memory use in the implementation of ak.spectralDecomposition it's worth our while refactoring this function too. Since the elements of the \(i^\mathrm{th}\) row of \(\mathbf{M}\) don't depend upon any earlier rows of \(\mathbf{V}\) we can put them into a buffer mi which we can then swap with the \(i^\mathrm{th}\) element of v before the next iteration, as done in listing 10.

Listing 10: The Refactored toMatrix
function toMatrix(v, l) {
 var n = v.length;
 var mi = new Array(n);
 var i, j, k, vi, vj, s;

 for(i=0;i<n;++i) {
  vi = v[i];

  for(j=0;j<i;++j) mi[j] = v[j][i];
  for(j=i;j<n;++j) {
   vj = v[j];

   s = 0;
   for(k=0;k<n;++k) s += vi[k]*l[k]*vj[k];
   mi[j] = s;
  }

  vi = mi;
  mi = v[i];
  v[i] = vi;
 }
 return ak.matrix(v);
}

Note that at each step we're exploiting the fact that we know that \(\mathbf{M}\) must be symmetric by copying the already calculated upper triangular elements of the result into the buffer with
for(j=0;j<i;++j) mi[j] = v[j][i];

Program 2 demonstrates the recomposition of a randomly generated symmetric matrix, printing the original above the recomposition and the distance between them.

Program 2: Recomposing A Matrix
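To see the buffer swap at work outside the library, the sketch below reproduces the refactored function's logic with a plain array result in place of an ak.matrix and uses it to recompose the two by two matrix [[2,1],[1,2]] from its eigensystem.

```javascript
// the refactored toMatrix, returning the plain array rather than an ak.matrix
function toMatrix(v, l) {
 var n = v.length;
 var mi = new Array(n);
 var i, j, k, vi, vj, s;

 for(i=0;i<n;++i) {
  vi = v[i];
  for(j=0;j<i;++j) mi[j] = v[j][i]; // symmetric elements already computed
  for(j=i;j<n;++j) {
   vj = v[j];
   s = 0;
   for(k=0;k<n;++k) s += vi[k]*l[k]*vj[k];
   mi[j] = s;
  }
  vi = mi; mi = v[i]; v[i] = vi;    // swap the buffer into the result
 }
 return v;
}

// eigenvectors (as columns) and eigenvalues of [[2,1],[1,2]]
var r = Math.sqrt(0.5);
var m = toMatrix([[r, r], [-r, r]], [1, 3]);
console.log(m); // numerically close to [[2,1],[1,2]]
```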

Overloaded Operators

An important feature of the spectral decomposition is that we can use it to generalise functions of scalars to matrices with
\[ f\left(\mathbf{M}\right) = \mathbf{V} \times f\left(\boldsymbol{\Lambda}\right) \times \mathbf{V}^\mathrm{T} \]
where \(f\left(\boldsymbol{\Lambda}\right)\) is the result of applying \(f\) to the diagonal elements of \(\boldsymbol{\Lambda}\), which follows from a generalisation of Taylor's theorem to matrices[9].
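For example, applying the square root in this fashion yields a matrix square root; the following standalone sketch, using plain arrays rather than the library's types, does so for the matrix [[2,1],[1,2]] and squares the result to recover the original.

```javascript
var r = Math.sqrt(0.5);
var v = [[r, r], [-r, r]];            // columns are the eigenvectors of M
var l = [Math.sqrt(1), Math.sqrt(3)]; // sqrt applied to the eigenvalues

// s = V * sqrt(L) * V^T
var s = [[0, 0], [0, 0]], i, j, k;
for(i=0;i<2;++i)
 for(j=0;j<2;++j)
  for(k=0;k<2;++k) s[i][j] += v[i][k]*l[k]*v[j][k];

// squaring s should recover M = [[2,1],[1,2]]
var m = [[0, 0], [0, 0]];
for(i=0;i<2;++i)
 for(j=0;j<2;++j)
  for(k=0;k<2;++k) m[i][j] += s[i][k]*s[k][j];
console.log(m); // numerically close to [[2,1],[1,2]]
```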

The ak library overloads the single argument operators
ak.abs ak.acos ak.asin ak.atan ak.cos ak.cosh ak.det ak.exp ak.inv ak.log ak.neg ak.sin ak.sinh ak.sqrt ak.stableInv ak.tan ak.tanh

for ak.spectralDecomposition objects with implementations equivalent to those that we used for ak.jacobiDecomposition objects, and you can try them out in program 3, which defaults to exponentiation with ak.exp.

Program 3: Applying A Univariate Overload

Recall that the ak.stableInv operator allows us to find something close to an inverse for singular matrices, which have determinants of zero and consequently no true inverses, by transforming the eigenvalues to
\[ \lambda_i^{-1} = \begin{cases} \frac{1}{\lambda_i} & |\lambda_i| > \left|\epsilon \times \lambda_\mathrm{max}\right|\\ 0 & \text{otherwise} \end{cases} \]
where the vertical bars represent the magnitude of the value between them and \(\lambda_\mathrm{max}\) is the eigenvalue with the greatest magnitude. This means that when we apply the inverse we'll transform the zero vector to itself, which is the smallest of the infinite set of vectors that the matrix maps to it. This is particularly useful when the matrix is nearly singular, since sacrificing accuracy in the directions of eigenvectors with relatively small eigenvalues significantly reduces numerical errors in all other directions.
Note that the value of \(\epsilon\) is specified with an optional second argument to ak.stableInv, which defaults to zero.
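A standalone sketch of this scheme, with plain arrays standing in for the library's types and an illustrative value of \(\epsilon\), applies it to the singular matrix [[1,1],[1,1]], whose eigenvalues are zero and two.

```javascript
var r = Math.sqrt(0.5);
var v = [[r, r], [-r, r]]; // columns are the eigenvectors of [[1,1],[1,1]]
var l = [0, 2];            // its eigenvalues
var e = 1e-10;             // an illustrative threshold
var lmax = Math.max(Math.abs(l[0]), Math.abs(l[1]));

// reciprocate the eigenvalues, zeroing those that are too small
var li = l.map(function(x) { return Math.abs(x)>Math.abs(e*lmax) ? 1/x : 0; });

// minv = V * inv(L) * V^T
var minv = [[0, 0], [0, 0]], i, j, k;
for(i=0;i<2;++i)
 for(j=0;j<2;++j)
  for(k=0;k<2;++k) minv[i][j] += v[i][k]*li[k]*v[j][k];
console.log(minv); // numerically close to [[0.25,0.25],[0.25,0.25]]
```

The result is the pseudo inverse of the matrix, mapping each vector to the smallest solution of the associated linear system.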

We also have the same overloaded two argument operators as ak.jacobiDecomposition, where the ak.stableDiv operator takes the same approach as ak.stableInv of transforming small eigenvalues to zero and larger ones to their reciprocals when the decomposition appears on the right hand side, having an optional third argument representing \(\epsilon\).

That's all there is to say about our implementation of the spectral decomposition, which you can find in SpectralDecomposition.js.


[1] FAO The Householder, 2019.

[2] Spryer Francis, 2019.

[3] Francis, J., The QR Transformation - Part 1, The Computer Journal, Vol. 4, No. 3, Oxford University Press, 1961.

[4] Francis, J., The QR Transformation - Part 2, The Computer Journal, Vol. 4, No. 4, Oxford University Press, 1961.

[5] A Well Managed Household, 2020.

[6] Funky Givens, 2020.

[7] Conquering The Eigen, 2014.

[8] Ascending The Eigen, 2014.

[9] At The Foot Of The Eigen, 2014.
