Conquering The Eigen


Now that we have a thorough grasp of the mathematics of eigensystems[1][2], we're ready to implement the Jacobi method for finding those of real symmetric matrices. Specifically, we shall seek to construct the spectral decomposition of a real symmetric matrix \(\mathbf{M}\)
\[ \mathbf{M} = \mathbf{V} \times \mathbf{\Lambda} \times \mathbf{V}^\mathrm{T} \]
where the columns of \(\mathbf{V}\) are the eigenvectors of \(\mathbf{M}\) and \(\mathbf{\Lambda}\) is the diagonal matrix of the eigenvalues associated with the eigenvectors in the same columns of \(\mathbf{V}\).
Now, the Jacobi method is by no means the most efficient algorithm for finding the spectral decomposition, so we shall name our implementation ak.jacobiDecomposition and save ak.spectralDecomposition for another day.


Since one of the principal uses of matrix decompositions is to simplify certain operators for matrices, it makes sense to implement them as ak style classes, as opposed to functions returning simple objects containing their components as members, so that we can apply our overload dispatch mechanism to them, as illustrated for ak.jacobiDecomposition in listing 1.

Listing 1: ak.jacobiDecomposition
ak.JACOBI_DECOMPOSITION_T = 'ak.jacobiDecomposition';

function JacobiDecomposition(){}
JacobiDecomposition.prototype = {
 TYPE: ak.JACOBI_DECOMPOSITION_T, valueOf: function(){return ak.NaN;}
};

ak.jacobiDecomposition = function() {
 var jd    = new JacobiDecomposition();
 var state = {v:undefined, l:undefined};
 var arg0  = arguments[0];

 constructors[ak.type(arg0)](state, arg0, arguments);

 jd.v = function() {return state.v;};
 jd.lambda = function() {return state.l;};

 jd.toMatrix = function() {return toMatrix(state.v.toArray(), state.l.toArray());};

 jd.toString = function() {
  return '{v:'+state.v.toString()+',lambda:'+state.l.toString()+'}';
 };

 jd.toExponential = function(d) {
  return '{v:'+state.v.toExponential(d)+',lambda:'+state.l.toExponential(d)+'}';
 };

 jd.toFixed = function(d) {
  return '{v:'+state.v.toFixed(d)+',lambda:'+state.l.toFixed(d)+'}';
 };

 jd.toPrecision = function(d) {
  return '{v:'+state.v.toPrecision(d)+',lambda:'+state.l.toPrecision(d)+'}';
 };

 return Object.freeze(jd);
};

var constructors = {};

As usual, we initialise the state via dispatch to a constructors object which will populate the member v with an ak.matrix representing \(\mathbf{V}\) and l with an ak.vector representing the leading diagonal of \(\mathbf{\Lambda}\), although this time we are using the ak type rather than the native type for dispatch. Access to these members is provided by the methods v and lambda respectively.
Finally we have the standard string conversion methods and a toMatrix method to recover the matrix defined by the decomposition
\[ \mathbf{M} = \mathbf{V} \times \mathbf{\Lambda} \times \mathbf{V}^\mathrm{T} \]
Of these, only the constructors object and the toMatrix method are non-trivial and bear further discussion.

ak.jacobiDecomposition toMatrix

The toMatrix method is the simpler of the two, so we shall cover it first. Now we could naively implement it by constructing a diagonal matrix representing \(\mathbf{\Lambda}\) and multiplying out the terms, as is done in listing 2.

Listing 2: A Naive Implementation of toMatrix
function toMatrix(v, l) {
 v = ak.matrix(v);
 l = ak.matrix('diagonal', l);
 return ak.mul(v, ak.mul(l, ak.transpose(v)));
}

The reason why this is naive stems from the fact that \(\mathbf{\Lambda}\) is diagonal and so we can more efficiently reconstruct the matrix by ignoring its off-diagonal zeros

\(\displaystyle{ \begin{align*} \mathbf{M}_{ij} &= \sum_k \mathbf{V}_{ik} \times \mathbf{\Lambda}_{kk} \times \mathbf{V}^\mathrm{T}_{kj}\\ &= \sum_k \mathbf{V}_{ik} \times \mathbf{\Lambda}_{kk} \times \mathbf{V}_{jk} \end{align*} }\)

Our implementation exploits this observation, and the fact that the result is necessarily symmetric, as shown in listing 3.

Listing 3: toMatrix Done Right
function toMatrix(v, l) {
 var n = v.length;
 var m = new Array(n);
 var i, j, k, vi, vj, s;

 for(i=0;i<n;++i) m[i] = new Array(n);

 for(i=0;i<n;++i) {
  vi = v[i];

  for(j=i;j<n;++j) {
   vj = v[j];

   s = 0;
   for(k=0;k<n;++k) s += vi[k]*l[k]*vj[k];
   m[i][j] = m[j][i] = s;
  }
 }
 return ak.matrix(m);
}
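
To convince ourselves that the symmetry-exploiting reconstruction is correct, we can strip it down to plain arrays of arrays; the reconstruct function and the example matrix here are purely illustrative, not part of the ak library.

```javascript
// A self-contained sketch of the same reconstruction using plain arrays of
// arrays in place of ak objects; v[i][k] plays the role of V_ik and l[k]
// of the k-th eigenvalue.
function reconstruct(v, l) {
 var n = v.length;
 var m = new Array(n);
 var i, j, k, s;

 for(i=0;i<n;++i) m[i] = new Array(n);

 for(i=0;i<n;++i) {
  for(j=i;j<n;++j) {
   s = 0;
   for(k=0;k<n;++k) s += v[i][k]*l[k]*v[j][k];
   m[i][j] = m[j][i] = s; // the result is necessarily symmetric
  }
 }
 return m;
}

// With orthonormal eigenvectors (1/√2, 1/√2) and (1/√2, -1/√2) as the
// columns of v and eigenvalues 3 and 1, we recover the matrix [[2,1],[1,2]].
var r = Math.sqrt(0.5);
var m = reconstruct([[r, r], [r, -r]], [3, 1]);
```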

ak.jacobiDecomposition Constructors

We shall most certainly want to be able to initialise ak.jacobiDecomposition objects with eigenvectors and eigenvalues that we already have to hand, so our first constructor does just that, as shown in listing 4.

Listing 4: ak.jacobiDecomposition Matrix-Vector Constructor
constructors[ak.MATRIX_T] = function(state, arg0, args) {
 var arg1 = args[1];
 constructors[ak.MATRIX_T][ak.type(arg1)](state, arg0, arg1);
};

constructors[ak.MATRIX_T][ak.VECTOR_T] = function(state, v, l) {
 if(v.rows()!==v.cols()) throw new Error('invalid v in ak.jacobiDecomposition');
 if(l.dims()!==v.rows()) throw new Error('invalid lambda in ak.jacobiDecomposition');

 state.v = ak.matrix(v);
 state.l = ak.vector(l);
};

Note that we do not check that the columns of the eigenvectors matrix v are orthogonal and have unit length since to do so we would need to make precision assumptions on behalf of the user. Instead we shall trust that they know what they are doing and define operations upon an incorrectly initialised ak.jacobiDecomposition as having undefined behaviour. You can blame my background in C++ for my willingness to do this!

We should also be able to initialise them with other ak.jacobiDecomposition objects, or objects with members with the same name, whose constructors are provided in listing 5.

Listing 5: ak.jacobiDecomposition Object Constructors
constructors[ak.OBJECT_T] = function(state, obj) {
 var v = obj.v;
 var l = obj.lambda;

 if(ak.nativeType(v)===ak.FUNCTION_T) v = v();
 if(ak.nativeType(l)===ak.FUNCTION_T) l = l();

 if(ak.type(v)!==ak.MATRIX_T || v.rows()!==v.cols()) {
  throw new Error('invalid v in ak.jacobiDecomposition');
 }

 if(ak.type(l)!==ak.VECTOR_T || l.dims()!==v.rows()) {
  throw new Error('invalid lambda in ak.jacobiDecomposition');
 }

 state.v = ak.matrix(v);
 state.l = ak.vector(l);
};

constructors[ak.JACOBI_DECOMPOSITION_T] = constructors[ak.OBJECT_T];

Note that we must specify the dispatch target for the type ak.JACOBI_DECOMPOSITION_T since we are not dispatching on the native type this time.

Finally, we shall want to initialise the ak.jacobiDecomposition object with a matrix whose decomposition we wish to compute, since this is rather the point in implementing it in the first place. We shall allow the user to choose a level of precision to which the decomposition is calculated, should they wish to, by passing an error threshold as the second argument, as illustrated in listing 6.

Listing 6: ak.jacobiDecomposition Matrix Constructors
constructors[ak.MATRIX_T][ak.NUMBER_T] = function(state, m, e) {
 var n = m.rows();
 var s;

 if(m.cols()!==n) throw new Error('non-square matrix in ak.jacobiDecomposition');

 s = (n>1) ? fromMatrix(m, e)
           : {v:ak.matrix('identity', n), l:ak.vector(n,, 0))};

 state.v = s.v;
 state.l = s.l;
};

constructors[ak.MATRIX_T][ak.UNDEFINED_T] = function(state, m) {
 var n = m.rows();
 var s;

 if(m.cols()!==n) throw new Error('non-square matrix in ak.jacobiDecomposition');

 s = (n>1) ? fromMatrix(m, ak.EPSILON)
           : {v:ak.matrix('identity', n), l:ak.vector(n,, 0))};

 state.v = s.v;
 state.l = s.l;
};

These simply check that the matrix is square before forwarding it, and an error threshold, on to the fromMatrix function to actually perform the calculation, except for the trivial corner cases of zero and one dimensional matrices, that is. In the former case,, 0) will return undefined, but it doesn't actually matter since the ak.vector constructor won't refer to the second argument in the zero dimensional case anyway.
Clearly, the real meat of our implementation is to be found in the fromMatrix function, which is given in listing 7.

Listing 7: fromMatrix
function fromMatrix(m, e) {
 var n = m.rows();
 var s = initialState(m, n);
 var l = new Array(n);
 var e2 = e*e;
 var i = 0;

 while( > e2*s.all) {
  rotateState(s, n);

  if(++i===n) {
   i = 0;
   updateOff(s, n);
  }
 }
 for(i=0;i<n;++i) l[i] = s.m[i][i];

 return {v:ak.matrix(s.v), l:ak.vector(l)};
}

This function is responsible for iteratively applying Jacobi rotations to the matrix until the convergence criterion is met; namely that the sum of the squared values of the off diagonal elements of the current state of the matrix is no larger than the sum of the squared values of all of the elements of the original matrix multiplied by the square of the error threshold, stored in and s.all respectively.
These are initialised by the initialState function and updated at each step by the rotateState function but, in order to ensure that is not too much affected by numerical errors, we recalculate it every n steps with a call to updateOff.
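
To make these convergence measures concrete, here's a plain-array sketch of how the two sums are accumulated for a symmetric matrix; sumSquares is a hypothetical stand-in for the relevant parts of initialState, not an ak function.

```javascript
// A plain-array sketch of the convergence measures: the sum of the squares
// of the off diagonal elements and of all of the elements of a symmetric
// matrix, counting each strictly lower triangular element twice to cover
// its mirror image in the upper triangle.
function sumSquares(m) {
 var n = m.length;
 var all = 0;
 var off = 0;
 var r, c, x;

 for(r=0;r<n;++r) {
  for(c=0;c<r;++c) {
   x = m[r][c];
   all += 2*x*x;
   off += 2*x*x;
  }
  x = m[r][r];
  all += x*x;
 }
 return {all: all, off: off};
}

var s = sumSquares([[1, 2], [2, 3]]);
// s.off is 2*2*2 = 8 and s.all is 8 + 1 + 9 = 18
```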

In addition to these convergence measures, the initial state of the algorithm comprises an eigenvector matrix \(\mathbf{V}\) equal to the identity matrix \(\mathbf{I}\), the lower triangle of the original matrix \(\mathbf{M}\), which will converge to that of the diagonal matrix of eigenvalues \(\mathbf{\Lambda}\), and the column indices of the largest magnitude elements of each row of the lower triangle, which we shall use to select the pivot element for each rotation.
Listing 8 gives the implementation of initialState.

Listing 8: initialState
function initialState(m, n) {
 var ml = new Array(n);
 var v  = new Array(n);
 var p  = new Array(n);
 var all = 0;
 var off = 0;
 var r, c, vr, mlr, max, pr, x, abs;

 for(r=0;r<n;++r) {
  vr  = new Array(n);
  mlr = new Array(r+1);
  max = 0;
  pr  = 0;

  for(c=0;c<r;++c) {
   x = 0.5*(, c), r));
   abs = Math.abs(x);

   vr[c]  = 0;
   mlr[c] = x;

   all += 2*x*x;
   off += 2*x*x;

   if(!(abs<max)) {
    pr = c;
    max = abs;
   }
  }

  x =, r);

  vr[r]  = 1;
  mlr[r] = x;

  all += x*x;

  for(c=r+1;c<n;++c) vr[c] = 0;

  v[r]  = vr;
  ml[r] = mlr;
  p[r]  = pr;
 }
 return {v:v, m:ml, p:p, all:all, off:off};
}

For the sake of efficiency, we're using native arrays of arrays to represent the matrices rather than immutable ak.matrix objects so that we can update them without making copies. You should also note that we don't actually use the lower triangle of the original matrix, but rather the average of its upper and lower triangles, and that consequently ak.jacobiDecomposition actually computes the eigensystem of \(\tfrac{1}{2}\left(\mathbf{M} + \mathbf{M}^\mathrm{T}\right)\) so as to ensure symmetry.
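
The symmetrisation amounts to the following, with symmetrise being an illustrative plain-array helper rather than an ak function:

```javascript
// A sketch of the symmetrisation performed by initialState: replacing m
// with the average of itself and its transpose, which leaves an already
// symmetric matrix unchanged.
function symmetrise(m) {
 var n = m.length;
 var s = new Array(n);
 var r, c;

 for(r=0;r<n;++r) {
  s[r] = new Array(n);
  for(c=0;c<n;++c) s[r][c] = 0.5*(m[r][c] + m[c][r]);
 }
 return s;
}

var s = symmetrise([[1, 4], [2, 3]]);
// the mismatched off diagonal elements 4 and 2 are both averaged to 3
```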

Updating the sum of squares of the off diagonal elements is pretty straightforward, as shown in listing 9. We must only remember to double the sum for the lower off diagonal so as to account for the upper off diagonal elements.

Listing 9: updateOff
function updateOff(s, n) {
 var off = 0;
 var m = s.m;
 var r, c, mr, x;

 for(r=1;r<n;++r) {
  mr = m[r];

  for(c=0;c<r;++c) {
   x = mr[c];
   off += x*x;
  }
 } = 2*off;
}

All that remains then is to implement the function rotateState to actually apply the Jacobi rotations to the matrix \(\mathbf{M}\), as is done in listing 10.

Listing 10: rotateState
function rotateState(s, n) {
 var v = s.v;
 var m = s.m;
 var p = s.p;
 var l = pivotRow(m, p, n);
 var k = p[l];
 var mk = m[k];
 var ml = m[l];
 var mkk = mk[k];
 var mll = ml[l];
 var mkl = ml[k];
 var tan = (mll>=mkk) ? 2*mkl / ((mll-mkk) + Math.sqrt(Math.pow(mll-mkk,2)+4*mkl*mkl))
                      : 2*mkl / ((mll-mkk) - Math.sqrt(Math.pow(mll-mkk,2)+4*mkl*mkl));
 var cos = 1/Math.sqrt(tan*tan+1);
 var sin = cos*tan;
 var i, mi, mik, mil, vi, vik, vil;

 for(i=0;i<k;++i) {
  mik = mk[i];
  mil = ml[i];
  mk[i] = mik*cos - mil*sin;
  ml[i] = mik*sin + mil*cos;
 }

 mk[k] = mkk*cos*cos + mll*sin*sin - 2*mkl*cos*sin;
 ml[k] = 0;

 for(i=k+1;i<l;++i) {
  mi = m[i];
  mik = mi[k];
  mil = ml[i];
  mi[k] = mik*cos - mil*sin;
  ml[i] = mik*sin + mil*cos;
 }

 ml[l] = mkk*sin*sin + mll*cos*cos + 2*mkl*cos*sin;

 for(i=l+1;i<n;++i) {
  mi = m[i];
  mik = mi[k];
  mil = mi[l];
  mi[k] = mik*cos - mil*sin;
  mi[l] = mik*sin + mil*cos;
 }

 for(i=0;i<n;++i) {
  vi = v[i];
  vik = vi[k];
  vil = vi[l];
  vi[k] = vik*cos - vil*sin;
  vi[l] = vik*sin + vil*cos;
 }

 p[k] = pivotCol(m, k);
 p[l] = pivotCol(m, l); -= 2*mkl*mkl;
}

Whilst this is a fairly lengthy function, it's actually a reasonably straightforward implementation of the formulae for the Jacobi rotation that we covered last time. Since we're only manipulating the lower triangle of the symmetric matrix \(\mathbf{M}\) we have to be a little careful as to which of the equal valued elements \(\mathbf{M}_{ij}\) and \(\mathbf{M}_{ji}\) we have to hand, which is why the steps are split up into multiple loops.
At the end we subtract twice the square of the zeroed element's original value from the sum of those of the off diagonal elements which, as we proved last time, is the amount by which a Jacobi rotation decreases it.
The pivotRow function selects the row with the largest pivot element and the pivotCol updates the indices of the pivots on the rows affected by the rotation. These are given in listing 11.

Listing 11: pivotRow And pivotCol
function pivotRow(m, p, n) {
 var pr  = 1;
 var max = Math.abs(m[pr][p[pr]]);
 var r, abs;

 for(r=2;r<n;++r) {
  abs = Math.abs(m[r][p[r]]);

  if(abs>=max) {
   max = abs;
   pr = r;
  }
 }
 return pr;
}

function pivotCol(m, r) {
 var mr = m[r];
 var pc = 0;
 var max = Math.abs(mr[pc]);
 var c, abs;

 for(c=1;c<r;++c) {
  abs = Math.abs(mr[c]);
  if(abs>=max) {
   max = abs;
   pc = c;
  }
 }
 return pc;
}

The only point of note about these functions is that pivotRow doesn't check row zero because it doesn't have any lower triangular elements!

Noting that the columns of the matrix member v of a decomposition d are its eigenvectors, program 1 demonstrates that ak.jacobiDecomposition recovers the eigenvalues and eigenvectors of

\(\displaystyle{ \begin{pmatrix} 1 & 2 & 3\\ 2 & 3 & 1\\ 3 & 1 & 1 \end{pmatrix} }\)

Program 1: Constructing The Eigensystem

You might want to try it out on some more symmetric matrices to convince yourself that this example isn't a fluke.

ak.jacobiDecomposition Overloads

Now that we have an implementation of the spectral decomposition we can use it to perform a variety of linear algebra operations and, as noted above, we shall do so by overloading our arithmetic operators and functions for ak.jacobiDecomposition objects.
As noted in the last post, we can apply any function to a matrix by applying its scalar equivalent to each of its eigenvalues, so it'll prove useful to implement some helper functions to do just that, as given in listing 12.

Listing 12: Applying Functions To The Eigenvalues
function fD(f, d) {
 return toMatrix(d.v().toArray(), d.lambda().toArray().map(f));
}

function fDR(f, d, r) {
 return toMatrix(d.v().toArray(), d.lambda().toArray().map(function(x){return f(x, r);}));
}

function fRD(f, r, d) {
 return toMatrix(d.v().toArray(), d.lambda().toArray().map(function(x){return f(r, x);}));
}

We can trivially implement the inverse and powers of a spectral decomposition using these functions, as shown in listing 13.

Listing 13: Inverse And Powers
function inv(d){return fD(function(x){return 1/x;}, d);}
ak.overload(ak.inv, ak.JACOBI_DECOMPOSITION_T, inv);

function powDR(d, r){return fDR(Math.pow, d, r);}
ak.overload(ak.pow, [ak.JACOBI_DECOMPOSITION_T, ak.NUMBER_T], powDR);

function powRD(r, d){return fRD(Math.pow, r, d);}
ak.overload(ak.pow, [ak.NUMBER_T, ak.JACOBI_DECOMPOSITION_T], powRD);

Keeping in mind that ak.pow can only raise ak.matrix objects to integer powers, program 2 compares the results of these functions for a symmetric ak.matrix and its ak.jacobiDecomposition.

Program 2: inv And pow Operator Comparison

As noted in the last post we can use the spectral decomposition to invert an asymmetric square matrix \(\mathbf{M}\) with the left inverse, defined by
\[ \left(\mathbf{M}^\mathrm{T} \times \mathbf{M}\right)^{-1} \times \mathbf{M}^\mathrm{T} = \mathbf{M}^{-1} \times \mathbf{M}^{\mathrm{T}^{-1}} \times \mathbf{M}^\mathrm{T} = \mathbf{M}^{-1} \]
as illustrated by program 3.

Program 3: Inverting An Asymmetric Matrix

Now since applying a function to the matrix represented by a spectral decomposition simply requires us to apply its scalar equivalent to the eigenvalues, we can calculate the results of all kinds of weird and wonderful functions for symmetric matrices. Program 4 shows the result of the cosine of a matrix, although I'm not entirely sure why we would want it, and of its square root, which actually turns out to be rather useful (as we shall find in a later post).

Program 4: cos And sqrt Operators

Note that the square root is only defined if all of the eigenvalues are non-negative, which is why this example uses the decomposition of ak.mul(m,m) rather than that of m. Such matrices are known as positive semi-definite if one or more eigenvalues are zero and positive definite otherwise. Likewise, negative semi-definite and negative definite matrices have all of their eigenvalues less than or equal to zero and strictly less than zero respectively.
Now it might seem a little surprising that the square root isn't equal to m but, in fact, matrices have many square roots, much as positive real numbers have two; one positive and one negative.
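
We can demonstrate this with plain arrays; fromEigen and mul are illustrative stand-ins for the ak operations, and flipping the sign of an eigenvalue's square root still yields a matrix that squares to the original.

```javascript
// Reconstruct a symmetric matrix from eigenvectors (the columns of v) and
// eigenvalues l.
function fromEigen(v, l) {
 var n = v.length, m = [], i, j, k, s;
 for(i=0;i<n;++i) {
  m.push([]);
  for(j=0;j<n;++j) {
   s = 0;
   for(k=0;k<n;++k) s += v[i][k]*l[k]*v[j][k];
   m[i].push(s);
  }
 }
 return m;
}

// Plain matrix multiplication for square arrays of arrays.
function mul(a, b) {
 var n = a.length, m = [], i, j, k, s;
 for(i=0;i<n;++i) {
  m.push([]);
  for(j=0;j<n;++j) {
   s = 0;
   for(k=0;k<n;++k) s += a[i][k]*b[k][j];
   m[i].push(s);
  }
 }
 return m;
}

// [[2,1],[1,2]] has orthonormal eigenvectors (1/√2, 1/√2) and (1/√2, -1/√2)
// with eigenvalues 3 and 1; both choices of sign for the square root of 3
// give a square root of the matrix.
var r  = Math.sqrt(0.5);
var v  = [[r, r], [r, -r]];
var s1 = mul(fromEigen(v, [ Math.sqrt(3), 1]), fromEigen(v, [ Math.sqrt(3), 1]));
var s2 = mul(fromEigen(v, [-Math.sqrt(3), 1]), fromEigen(v, [-Math.sqrt(3), 1]));
// both s1 and s2 equal [[2,1],[1,2]] to within rounding
```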

A Stable Inverse

In the last post we noted that we can improve the numerical stability of inverting singular, or near singular, symmetric matrices by replacing the reciprocals of their small eigenvalues with zeros.
\[ \lambda_i^{-1} = \begin{cases} \frac{1}{\lambda_i} & \left|\lambda_i\right| > \left|\epsilon \times \lambda_\mathrm{max}\right|\\ 0 & \mathrm{otherwise} \end{cases} \]
Here \(\lambda_\mathrm{max}\) denotes the eigenvalue with the greatest magnitude and \(\epsilon\) a user supplied tolerance.
We justified this with the argument that vectors would be squashed to almost zero in the directions of the associated eigenvectors and so it made some sense to choose the smallest component of the inverse in those directions rather than pollute the result with numerical errors.
The implementation of this stable inverse is pretty trivial, as illustrated by listing 14.

Listing 14: A Stable Inverse
function cutoff(a, e) {
  var n = a.length;
  var x = 0;
  var i;

  for(i=0;i<n;++i) x = Math.max(Math.abs(a[i]), x);
  return x*e;
}

function stableInv(d, e) {
 var l = d.lambda().toArray();
 var n = l.length;
 var c, k;

 if(ak.nativeType(e)===ak.UNDEFINED_T) {
  e = 0;
 }
 else if(ak.nativeType(e)!==ak.NUMBER_T) {
  throw new Error('invalid threshold in ak.jacobiDecomposition stableInv');
 }

 c = cutoff(l, Math.abs(e));
 for(k=0;k<n;++k) l[k] = (Math.abs(l[k])<=c) ? 0 : 1/l[k];
 return toMatrix(d.v().toArray(), l);
}

if(!ak.stableInv) ak.stableInv = function(x,e){return ak.stableInv[ak.type(x)](x,e)};
ak.overload(ak.stableInv, ak.JACOBI_DECOMPOSITION_T, stableInv);

Division of numbers by decompositions is almost the same except that 1/l[k] is replaced with r/l[k].

Solving Simultaneous Equations

With our original matrix operators[3] we implemented the division of a vector by a matrix as the multiplication of the former by the inverse of the latter. With the spectral decomposition we don't have to actually construct the inverse of the matrix since it is given by the decomposition with the reciprocals of the eigenvalues
\[ \mathbf{M}^{-1} \times \mathbf{v} = \mathbf{V} \times \mathbf{\Lambda}^{-1} \times \mathbf{V}^\mathrm{T} \times \mathbf{v} \]
and, given that \(\Lambda\) is diagonal, the product \(\mathbf{\Lambda}^{-1} \times \left(\mathbf{V}^\mathrm{T} \times \mathbf{v}\right)\) is easily calculated by dividing the elements of \(\mathbf{V}^\mathrm{T} \times \mathbf{v}\) by the leading elements of \(\Lambda\)
\[ \left(\mathbf{\Lambda}^{-1} \times \left(\mathbf{V}^\mathrm{T} \times \mathbf{v}\right)\right)_j = \sum_k \Lambda^{-1}_{jj} \times \mathbf{V}^\mathrm{T}_{jk} \times \mathbf{v}_k = \Lambda^{-1}_{jj} \times \sum_k \mathbf{V}_{kj} \times \mathbf{v}_k \]
We can thus apply the inverse of \(\mathbf{M}\) to the vector \(\mathbf{v}\) by pre-multiplying this by \(\mathbf{V}\), as is done by divVD in listing 15

Listing 15: Back Substitution
function divVD(x, d, e) {
 var v = d.v();
 var l = d.lambda().toArray();
 var n = l.length;
 var a = new Array(n);
 var j, k, s;

 if(x.dims()!==n) {
  throw new Error('dimensions mismatch in ak.jacobiDecomposition div');
 }

 if(ak.nativeType(e)===ak.UNDEFINED_T) {
  for(k=0;k<n;++k) l[k] = 1/l[k];
 }
 else {
  if(ak.nativeType(e)!==ak.NUMBER_T) {
   throw new Error('invalid threshold in ak.jacobiDecomposition stableDiv');
  }

  s = cutoff(l, Math.abs(e));
  for(k=0;k<n;++k) l[k] = (Math.abs(l[k])<=s) ? 0 : 1/l[k];
 }

 for(j=0;j<n;++j) {
  s = 0;
  for(k=0;k<n;++k) s +=, j) *;
  a[j] = s*l[j];
 }
 return ak.mul(v, ak.vector(a));
}

if(!ak.stableDiv) {
 ak.stableDiv = function(x0, x1, e) {
  return ak.stableDiv[ak.type(x0)][ak.type(x1)](x0, x1, e);
 };
}

ak.overload(ak.div,       [ak.VECTOR_T, ak.JACOBI_DECOMPOSITION_T], divVD);
ak.overload(ak.stableDiv, [ak.VECTOR_T, ak.JACOBI_DECOMPOSITION_T], divVD);

Note that this function has an optional tolerance, e, allowing us to use it to stably solve near singular simultaneous equations with stableDiv.
Program 5 compares the division of a vector by a matrix and its spectral decomposition.

Program 5: div Comparison

Division of matrices by decompositions takes a similar form.
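
To see why no inverse matrix need ever be formed, here's a plain-array sketch of the whole scheme; solve is an illustrative function, with the eigenvectors supplied as the columns of v.

```javascript
// Solve M*x = b given the eigenvectors (as the columns of v) and the
// eigenvalues l of M: apply the transpose of v to b, divide through by the
// eigenvalues and apply v, never constructing the inverse matrix.
function solve(v, l, b) {
 var n = l.length;
 var a = new Array(n);
 var x = new Array(n);
 var i, j, s;

 for(j=0;j<n;++j) {
  s = 0;
  for(i=0;i<n;++i) s += v[i][j]*b[i]; // (V^T b)_j
  a[j] = s/l[j];                      // (Lambda^-1 V^T b)_j
 }

 for(i=0;i<n;++i) {
  s = 0;
  for(j=0;j<n;++j) s += v[i][j]*a[j]; // (V Lambda^-1 V^T b)_i
  x[i] = s;
 }
 return x;
}

// [[2,1],[1,2]] has eigenvectors (1/√2, 1/√2) and (1/√2, -1/√2) with
// eigenvalues 3 and 1, and [[2,1],[1,2]] * [2,1] = [5,4], so solving for
// [5,4] should recover [2,1].
var r = Math.sqrt(0.5);
var x = solve([[r, r], [r, -r]], [3, 1], [5, 4]);
```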

And The Rest

The full set of overloads for ak.jacobiDecomposition is given in listing 16 and their implementations can be found in JacobiDecomposition.js.

Listing 16: ak.jacobiDecomposition Overloads
ak.overload(ak.abs,       ak.JACOBI_DECOMPOSITION_T, function(d){return ak.abs(d.lambda());});
ak.overload(ak.acos,      ak.JACOBI_DECOMPOSITION_T, function(d){return fD(Math.acos, d);});
ak.overload(ak.asin,      ak.JACOBI_DECOMPOSITION_T, function(d){return fD(Math.asin, d);});
ak.overload(ak.atan,      ak.JACOBI_DECOMPOSITION_T, function(d){return fD(Math.atan, d);});
ak.overload(ak.cos,       ak.JACOBI_DECOMPOSITION_T, function(d){return fD(Math.cos, d);});
ak.overload(ak.cosh,      ak.JACOBI_DECOMPOSITION_T, function(d){return fD(ak.cosh, d);});
ak.overload(ak.det,       ak.JACOBI_DECOMPOSITION_T, function(d){return d.lambda().toArray().reduce(function(x0,x1){return x0*x1;}, 1);});
ak.overload(ak.exp,       ak.JACOBI_DECOMPOSITION_T, function(d){return fD(Math.exp, d);});
ak.overload(ak.inv,       ak.JACOBI_DECOMPOSITION_T, function(d){return fD(function(x){return 1/x;}, d);});
ak.overload(ak.log,       ak.JACOBI_DECOMPOSITION_T, function(d){return fD(Math.log, d);});
ak.overload(ak.neg,       ak.JACOBI_DECOMPOSITION_T, function(d){return fD(function(x){return -x;}, d);});
ak.overload(ak.stableInv, ak.JACOBI_DECOMPOSITION_T, stableInv);
ak.overload(ak.sin,       ak.JACOBI_DECOMPOSITION_T, function(d){return fD(Math.sin, d);});
ak.overload(ak.sinh,      ak.JACOBI_DECOMPOSITION_T, function(d){return fD(ak.sinh, d);});
ak.overload(ak.sqrt,      ak.JACOBI_DECOMPOSITION_T, function(d){return fD(Math.sqrt, d);});
ak.overload(ak.tan,       ak.JACOBI_DECOMPOSITION_T, function(d){return fD(Math.tan, d);});
ak.overload(ak.tanh,      ak.JACOBI_DECOMPOSITION_T, function(d){return fD(ak.tanh, d);});
ak.overload(,        ak.JACOBI_DECOMPOSITION_T, function(d){return d.lambda().toArray().reduce(function(x0,x1){return x0+x1;}, 0);});

ak.overload(ak.div,       [ak.MATRIX_T, ak.JACOBI_DECOMPOSITION_T], divMD);
ak.overload(ak.div,       [ak.JACOBI_DECOMPOSITION_T, ak.NUMBER_T], function(d, r){return fDR(function(x, r){return x/r;}, d, r);});
ak.overload(ak.div,       [ak.NUMBER_T, ak.JACOBI_DECOMPOSITION_T], divRD);
ak.overload(ak.div,       [ak.VECTOR_T, ak.JACOBI_DECOMPOSITION_T], divVD);
ak.overload(ak.pow,       [ak.JACOBI_DECOMPOSITION_T, ak.NUMBER_T], function(d, r){return fDR(Math.pow, d, r);});
ak.overload(ak.pow,       [ak.NUMBER_T, ak.JACOBI_DECOMPOSITION_T], function(r, d){return fRD(Math.pow, r, d);});
ak.overload(ak.stableDiv, [ak.MATRIX_T, ak.JACOBI_DECOMPOSITION_T], divMD);
ak.overload(ak.stableDiv, [ak.JACOBI_DECOMPOSITION_T, ak.NUMBER_T], function(d, r){return fDR(function(x, r){return x/r;}, d, r);});
ak.overload(ak.stableDiv, [ak.NUMBER_T, ak.JACOBI_DECOMPOSITION_T], divRD);
ak.overload(ak.stableDiv, [ak.VECTOR_T, ak.JACOBI_DECOMPOSITION_T], divVD);

Note that the magnitude of the matrix is equal to the magnitude of the vector of eigenvalues, that the determinant is given by the product of the eigenvalues and that the trace is simply the sum of the eigenvalues, with the last two being calculated here with array reductions.
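
The determinant and trace reductions are simple enough to check by hand; this stand-alone snippet mirrors the array reductions used by the ak.det and overloads above.

```javascript
// The determinant of a symmetric matrix is the product of its eigenvalues
// and its trace is their sum, both recovered here as array reductions.
function det(l) { return l.reduce(function(x0, x1){return x0*x1;}, 1); }
function tr(l)  { return l.reduce(function(x0, x1){return x0+x1;}, 0); }

// [[2,1],[1,2]] has eigenvalues 3 and 1, so its determinant is 3, agreeing
// with 2*2-1*1, and its trace is 4, agreeing with 2+2.
var d = det([3, 1]);
var t = tr([3, 1]);
```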

And with that we are done! In the next post we shall see how we might actually use eigensystems.


[1] At The Foot Of The Eigen,, 2014.

[2] Ascending The Eigen,, 2014.

[3] The Matrix Recoded,, 2014.
