Matrix

A two-dimensional array of values representing a (linear) transformation of vectors. For example
\[ \mathbf{M} = \begin{pmatrix} m_{00} & m_{01} & m_{02}\\ m_{10} & m_{11} & m_{12}\\ m_{20} & m_{21} & m_{22} \end{pmatrix} \]
Given a number \(x\), a vector \(\mathbf{v}\) and matrices \(\mathbf{M}\) and \(\mathbf{N}\), and denoting the \(i^{th}\) element of a vector \(\mathbf{v}\) by \(v_i\) and the element of a matrix \(\mathbf{M}\) in the \(i^{th}\) row and \(j^{th}\) column by \(m_{ij}\), the rules of matrix arithmetic are given by
\[ \begin{align*} (\mathbf{M} \times x)_{ij} &= m_{ij} \times x\\ (x \times \mathbf{M})_{ij} &= m_{ij} \times x\\ \\ (\mathbf{M} \div x)_{ij} &= m_{ij} \div x\\ \\ (\mathbf{M} \times \mathbf{v})_i &= \sum_j m_{ij} \times v_j\\ (\mathbf{v} \times \mathbf{M})_j &= \sum_i v_i \times m_{ij}\\ \\ (\mathbf{M} + \mathbf{N})_{ij} &= m_{ij} + n_{ij}\\ (\mathbf{M} - \mathbf{N})_{ij} &= m_{ij} - n_{ij}\\ \\ (\mathbf{M} \times \mathbf{N})_{ij} &= \sum_k m_{ik} \times n_{kj} \end{align*} \]
where \(\sum\) is the summation sign.
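As a concrete illustration, these rules can be sketched in plain Python, representing a matrix as a list of row lists and a vector as a flat list. This is a minimal sketch, not an efficient implementation, and the function names are purely illustrative:

```python
def mat_scale(M, x):
    # (M * x)[i][j] = m[i][j] * x
    return [[m * x for m in row] for row in M]

def mat_vec(M, v):
    # (M * v)[i] = sum over j of m[i][j] * v[j]
    return [sum(m * vj for m, vj in zip(row, v)) for row in M]

def vec_mat(v, M):
    # (v * M)[j] = sum over i of v[i] * m[i][j]
    return [sum(v[i] * M[i][j] for i in range(len(v)))
            for j in range(len(M[0]))]

def mat_add(M, N):
    # (M + N)[i][j] = m[i][j] + n[i][j]
    return [[m + n for m, n in zip(rm, rn)] for rm, rn in zip(M, N)]

def mat_mul(M, N):
    # (M * N)[i][j] = sum over k of m[i][k] * n[k][j]
    return [[sum(M[i][k] * N[k][j] for k in range(len(N)))
             for j in range(len(N[0]))]
            for i in range(len(M))]
```

Note that each comprehension is a direct transcription of the corresponding formula above, with the nested loops playing the role of the \(i\), \(j\) and \(k\) indices.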

A square matrix, one with the same number of rows and columns, whose elements are zero when the row number and column number differ and are one when they are the same, is known as an identity matrix. For example
\[ \mathbf{I} = \begin{pmatrix} 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end{pmatrix} \]
Such matrices have the property of leaving matrices and vectors unchanged under multiplication. For example
\[ \begin{align*} \begin{pmatrix} 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end{pmatrix} \times \begin{pmatrix} m_{00} & m_{01} & m_{02}\\ m_{10} & m_{11} & m_{12}\\ m_{20} & m_{21} & m_{22} \end{pmatrix} &= \begin{pmatrix} m_{00} & m_{01} & m_{02}\\ m_{10} & m_{11} & m_{12}\\ m_{20} & m_{21} & m_{22} \end{pmatrix} \\ \\ \begin{pmatrix} m_{00} & m_{01} & m_{02}\\ m_{10} & m_{11} & m_{12}\\ m_{20} & m_{21} & m_{22} \end{pmatrix} \times \begin{pmatrix} 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end{pmatrix} &= \begin{pmatrix} m_{00} & m_{01} & m_{02}\\ m_{10} & m_{11} & m_{12}\\ m_{20} & m_{21} & m_{22} \end{pmatrix} \end{align*} \]
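This property is easily checked numerically, here using NumPy with an arbitrary example matrix:

```python
import numpy as np

# an arbitrary three-by-three matrix
M = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 10.0]])

# the three-by-three identity matrix
I = np.eye(3)

# multiplying by the identity on either side leaves M unchanged
assert np.array_equal(I @ M, M)
assert np.array_equal(M @ I, M)
```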
Identity matrices are therefore multiplicative units, allowing us to define the inverse of a matrix as
\[ \mathbf{M} \times \mathbf{M}^{-1} = \mathbf{I} \]
or
\[ \mathbf{M}^{-1} \times \mathbf{M} = \mathbf{I} \]
For square matrices these two definitions of the inverse are equivalent, but for non-square matrices they are not and are known as the right and left pseudoinverses respectively. Matrices that do not have inverses are known as singular matrices.

For example, the inverse of a two-by-two matrix is given by
\[ \begin{pmatrix} a & b\\ c & d \end{pmatrix} ^{-1} = \frac{1}{a \times d-b \times c} \begin{pmatrix} d & -b\\ -c & a \end{pmatrix} \]
as can be confirmed by following the rules of matrix multiplication.
\[ \begin{align*} \frac{1}{a \times d-b \times c} \begin{pmatrix} d & -b\\ -c & a \end{pmatrix} \times \begin{pmatrix} a & b\\ c & d \end{pmatrix} &= \frac{1}{a \times d-b \times c} \begin{pmatrix} d \times a - b \times c & d \times b - b \times d\\ -c \times a + a \times c & -c \times b + a \times d \end{pmatrix} \\ &= \frac{1}{a \times d - b \times c} \begin{pmatrix} a \times d - b \times c & 0\\ 0 & a \times d - b \times c \end{pmatrix} \\ &= \begin{pmatrix} 1 & 0\\ 0 & 1 \end{pmatrix} \end{align*} \]
Such matrices are singular if
\[ a \times d = b \times c \]
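The two-by-two inverse formula, together with this singularity test, can be sketched in Python as follows, where the quantity \(a \times d - b \times c\) is the familiar determinant and the function name is purely illustrative:

```python
def inverse_2x2(M):
    # unpack the rows of a two-by-two matrix [[a, b], [c, d]]
    (a, b), (c, d) = M

    # the matrix is singular exactly when a*d equals b*c
    det = a * d - b * c
    if det == 0:
        raise ValueError("singular matrix: a*d == b*c")

    # swap the diagonal, negate the off-diagonal and divide by det
    return [[d / det, -b / det],
            [-c / det, a / det]]
```

For example, `inverse_2x2([[4.0, 7.0], [2.0, 6.0]])` yields `[[0.6, -0.7], [-0.2, 0.4]]`, and multiplying it back by the original matrix recovers the identity, up to floating-point rounding.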