Get A Shift On


We have seen[1] how we can approximate random variables with linear congruences of the form
\[ x_{i+1} = \left(a \times x_i + c\right)\;\%\;m \]
where \(x \% m\) is the remainder of the integer division of \(x\) by \(m\). If we are careful with our choices of the seed \(x_0\), the multiplier \(a\), the increment \(c\) and the modulus \(m\) then they can generate sequences of pseudo random numbers that, whilst wholly deterministic, pass many of the statistical tests for randomness.
These are not the only kind of recurrences that we can use to generate pseudo random numbers, however.

A Little Bit Random

An important class of pseudo random generators are linear feedback shift registers, or LFSRs, which typically generate pseudo random bits rather than integers. They do this by using an \(n\) bit unsigned integer as a sequence of bits from which the least significant is drawn as a pseudo random bit. After each bit is drawn, the integer's bits are shifted one place to the right and a subset of them, known as taps, are then replaced by their exclusive or with the drawn bit, or in other words with a one if they don't equal it and with a zero if they do.

Figure 1 illustrates a single step of an eight bit LFSR, using filled circles to represent ones, empty circles to represent zeros and the \(\oplus\) symbol to represent the exclusive or operation.

Figure 1: Drawing A Bit From An LFSR

Note that we're combining the right shift into the taps with their subsequent exclusive or with the drawn bit.
The convention is to call the drawn bit the zeroth and the least significant the first, so here the taps are identified as the fourth, fifth, sixth and eighth bits. The most significant bit, in this case the eighth, is set to zero after the shift and is always one of the taps, so that it ends up set to the drawn bit at the end of the step.
Furthermore, if the sequence doesn't contain any ones then it never will whereas if it does then it always will, so we should avoid starting from a sequence of all zeros. As a consequence, an \(n\) bit LFSR can be in at most \(2^n-1\) states and so the sequence of bits must cycle after that many are drawn.
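To make this procedure concrete, a single step of the eight bit LFSR of figure 1 might be sketched in JavaScript as follows, with the `TAPS` constant and `draw` function being names of my own choosing rather than the code behind any of the programs in this post.

```javascript
// the taps of the LFSR in figure 1; the drawn bit is the zeroth, so after the
// shift the kth bit sits at position k-1
var TAPS = [4, 5, 6, 8];

// draws a pseudo random bit from an eight bit LFSR, returning it with the new state
function draw(state) {
  var i, bit = state & 1;            // the least significant bit is drawn
  state >>>= 1;                      // the remaining bits shift one place to the right
  for (i = 0; i < TAPS.length; ++i) {
    state ^= bit << (TAPS[i] - 1);   // each tap is exclusive ored with the drawn bit
  }
  return {bit: bit, state: state};
}
```

Note that, since the eighth bit equals zero after the shift, exclusive oring it with the drawn bit leaves it equal to that bit, just as described above.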

Program 1 plots the bits drawn from the eight bit LFSR shown in figure 1 in rows of 51 bits apiece. You can see that the pattern repeats after every five rows, implying that it begins to cycle after 255 bits have been drawn; the maximum possible for an eight bit LFSR.

Program 1: Drawing From The Eight Bit LFSR

Since JavaScript converts the arguments of its bitwise operators to 32 bit signed integers for some unfathomable reason, treating the bit representing \(2^{31}\) as a sign bit, we must be careful to use the >>> operator, which shifts the sign bit to the right and fills the vacated bit with zero, rather than the >> operator, which shifts the other bits to the right but leaves a copy of the sign bit in place. In this example it doesn't make any difference, but in general the former yields the results that we'd expect if the bits represented an unsigned integer whereas the latter does not.

Now, since the exclusive or of a bit with zero leaves it unchanged, we only need to perform an LFSR's exclusive or operations when the drawn bit equals one, in which case we can simply apply a constant in which the taps are set to one and all of the other bits are set to zero. Here this constant is
\[ 10111000_\mathbf{2} = \mathrm{b}8_\mathbf{16} \]
where the bold subscripts show the bases as decimal numbers.
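Putting this into practice, a step of the LFSR reduces to a shift followed by a conditional exclusive or with that constant; a minimal sketch in JavaScript, with names of my own choosing:

```javascript
// draws a bit from the eight bit LFSR of figure 1 using the tap constant b8 in hex
function draw(state) {
  var bit = state & 1;             // the least significant bit is drawn
  state >>>= 1;                    // the remaining bits shift one place to the right
  if (bit === 1) state ^= 0xb8;    // the taps need only be flipped if the drawn bit is one
  return {bit: bit, state: state};
}
```

For example, starting from a state of one yields a drawn bit of one and a new state of `0xb8`.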

Despite looking completely different to congruential generators, LFSRs actually have quite a lot in common with them, but to see why we'll need to take a brief detour.

The Multiplicative Arithmetic Of Polynomials

We can define a method of long multiplication for polynomials that is much the same as the one that we learned for integers as children, adding together those terms that have the same power of the variable rather than those that have the same power of ten. For example
\[ \begin{align*} \left(x + 2\right) \times \left(x^2 - x + 3\right) &= x \times \left(x^2 - x + 3\right) + 2 \times \left(x^2 - x + 3\right)\\ &= x^3 - \phantom{2 \times}\,x^2 + 3 \times x\\ &\phantom{= x^3}\; + 2 \times x^2 - 2 \times x + 6\\ &= x^3 + \phantom{2 \times}\,x^2 + \phantom{2 \times}\;x + 6 \end{align*} \]
Note that there are no carry operations when multiplying polynomials since the constants by which the powers of the variable are multiplied are independent of that variable.
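If we represent a polynomial as an array of its coefficients in increasing powers of \(x\) then this long multiplication is a simple pair of nested loops; a sketch in JavaScript, with names of my own choosing:

```javascript
// multiplies polynomials represented as arrays of coefficients in increasing powers of x
function polyMultiply(a, b) {
  var i, j, p = [];
  for (i = 0; i < a.length + b.length - 1; ++i) p.push(0);
  for (i = 0; i < a.length; ++i) {
    for (j = 0; j < b.length; ++j) {
      p[i + j] += a[i] * b[j];   // add together terms with the same power of x
    }
  }
  return p;
}

// (x+2) * (x^2-x+3) with x+2 represented as [2,1] and x^2-x+3 as [3,-1,1]
polyMultiply([2, 1], [3, -1, 1]);   // yields [6, 1, 1, 1], or x^3 + x^2 + x + 6
```

Note that, there being no carries, each product of a pair of coefficients lands in exactly one element of the result.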

Whilst the mechanics of such a multiplication are probably rather unsurprising, you may be less familiar with the fact that we can also divide one polynomial by another using long division. For example, we can use it to calculate
\[ \left(x^3 + x^2 + 3 \times x + 1\right) \div \left(x+2\right) \]
with
\[ x+2 \, \overline{\big) \, x^3 + x^2 + 3 \times x + 1} \]
Rather than having a thousands column, a hundreds column, a tens column and a units column, here we have an \(x^3\) column, an \(x^2\) column, an \(x\) column and a units column and, to cancel out the \(x^3\) term, we need \(x^2\) in the \(x^2\) column
\[ \phantom{x+2 \, \overline{\big) \, x^3 + 2 \times}} x^2\\ x+2 \, \overline{\big) \, x^3 + \phantom{2 \times} x^2 + 3 \times x + 1}\\ \phantom{x+2 \, \overline{\big) \, }} x^3 + 2 \times x^2 \]
Subtracting the second line from the first yields
\[ \phantom{x+2 \, \overline{\big) \, x^3 + 2 \times}} x^2\\ x+2 \, \overline{\big) \, x^3 + \phantom{2 \times} x^2 + 3 \times x + 1}\\ \phantom{x+2 \, \overline{\big) \, }} x^3 + 2 \times x^2\\ \phantom{x+2 \, \big) \,} \overline{\phantom{x^3 + 2}-x^2 + 3 \times x + 1} \]
after which we need \(-x\) in the \(x\) column to cancel out the \(x^2\) term
\[ \phantom{x+2 \, \overline{\big) \, x^3 + 2 \times}} x^2 - \phantom{3 \times\,} x\\ x+2 \, \overline{\big) \, x^3 + \phantom{2 \times} x^2 + 3 \times x + 1}\\ \phantom{x+2 \, \overline{\big) \, }} x^3 + 2 \times x^2\\ \phantom{x+2 \, \big) \,} \overline{\phantom{x^3 + 2}-x^2 + 3 \times x + 1}\\ \phantom{x+2 \, \big) \,} \phantom{x^3 + 2}-x^2 - 2 \times x\\ \phantom{x+2 \, \big) \,} \phantom{x^3 + 2}\overline{\phantom{-x^2 +1} 5 \times x + 1} \]
Finally, to cancel out the \(x\) term we need a five in the units column
\[ \phantom{x+2 \, \overline{\big) \, x^3 + 2 \times}} x^2 - \phantom{3 \times\,} x + \phantom{0}5\\ x+2 \, \overline{\big) \, x^3 + \phantom{2 \times} x^2 + 3 \times x + \phantom{0}1}\\ \phantom{x+2 \, \overline{\big) \, }} x^3 + 2 \times x^2\\ \phantom{x+2 \, \big) \,} \overline{\phantom{x^3 + 2}-x^2 + 3 \times x + \phantom{0}1}\\ \phantom{x+2 \, \big) \,} \phantom{x^3 + 2}-x^2 - 2 \times x\\ \phantom{x+2 \, \big) \,} \phantom{x^3 + 2}\overline{\phantom{-x^2 +1} 5 \times x + \phantom{0}1}\\ \phantom{x+2 \, \big) \,} \phantom{x^3 + 2}\phantom{-x^2 +1} 5 \times x + 10\\ \phantom{x+2 \, \big) \,} \phantom{x^3 + 2}\phantom{-x^2 +1} \overline{\phantom{5 \times x} - \phantom{0}9} \]
yielding
\[ \left(x^3 + x^2 + 3 \times x + 1\right) \div \left(x+2\right) = x^2-x+5 - \frac{9}{x+2} \]
Note that rather than leaving a positive remainder, as we typically do during integer division, we choose one whose largest power of \(x\) is smaller than that of the polynomial that we're dividing by. This means that it is particularly easy to divide a polynomial by a power of \(x\). For example
\[ \left(x^3 + x^2 + 3 \times x + 1\right) \div x^2 = x+1 + \frac{3 \times x + 1}{x^2} \]
In such cases the remainder is simply the sum of the terms that have a smaller power of \(x\) than the one that we're dividing the polynomial by, so in general we have
\[ \left(\sum_{i=0}^n a_i \times x^i\right) \div x^j = \sum_{i=j}^{n} a_i \times x^{i-j} + \frac{\sum_{i=0}^{j-1} a_i \times x^i}{x^j} \]
where \(\sum\) is the summation sign.
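In particular, with a polynomial represented as an array of its coefficients in increasing powers of \(x\), division by \(x^j\) is nothing more than splitting that array at the jth element; a trivial sketch in JavaScript, with a name of my own choosing:

```javascript
// divides a polynomial, as an array of coefficients in increasing powers of x, by x^j
function divideByPowerOfX(coeffs, j) {
  return {quotient: coeffs.slice(j), remainder: coeffs.slice(0, j)};
}

// (x^3 + x^2 + 3x + 1) / x^2 yields a quotient of x+1 and a remainder of 3x+1
divideByPowerOfX([1, 3, 1, 1], 2);   // {quotient: [1, 1], remainder: [1, 3]}
```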

Now this is all well and good, you might be saying to yourself, but what does it have to do with the subject at hand?

The Surprising Relationship Between LFSRs And Polynomial Multiplication

To relate LFSRs to polynomial multiplication we simply need to change the taps so that each multiplies the number that it is fed by some constant and adds the result to the number in the register as it is shifted past. If we then treat the elements of the register as coefficients of \(x\) in increasing powers from left to right and feed in the coefficients of the polynomial that we want to multiply it by in the same order then shift registers implement polynomial multiplication exactly as described above.
Figure 2 illustrates this for
\[ \left(x + 2\right) \times \left(x^2 - x + 3\right) \]
with the taps representing the polynomial on the right hand side of the multiplication and the numbers fed into the register representing the polynomial on the left hand side.

Figure 2: A Polynomial Shift Register

Note that since the output terms are discarded, this register calculates the remainder of the product after division by \(x^3\), which is clearly equal to that of our long multiplication example
\[ \frac{\left(x + 2\right) \times \left(x^2 - x + 3\right)}{x^3} = \frac{x^3 + x^2 + x + 6}{x^3} = 1 + \frac{x^2 + x + 6}{x^3} \]
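One way to realise figure 2's register in JavaScript is to feed in the coefficients of \(x+2\) from the highest power down, so that the register accumulates the product after the fashion of Horner's scheme; a sketch, with names and conventions of my own choosing:

```javascript
// one step of a polynomial shift register whose cells hold the coefficients of
// 1, x and x^2: the contents are multiplied by x, the x^3 term is shifted out
// as output and the incoming coefficient times the taps is added in
function polyRegStep(reg, taps, d) {
  var i, out = reg[reg.length - 1];   // the coefficient shifted out of the top
  for (i = reg.length - 1; i > 0; --i) reg[i] = reg[i - 1];
  reg[0] = 0;                         // multiply the register contents by x
  for (i = 0; i < taps.length; ++i) reg[i] += d * taps[i];
  return out;
}

// multiply x+2 by x^2-x+3, feeding the coefficients of x+2 highest power first
var reg = [0, 0, 0], taps = [3, -1, 1], outputs = [];
[1, 2].forEach(function(d) { outputs.push(polyRegStep(reg, taps, d)); });
// reg now holds [6, 1, 1], the remainder x^2 + x + 6 after division by x^3,
// and the discarded output of one is the coefficient of x^3 in the product
```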
If we only allow the coefficients to equal zero or one and perform all arithmetic modulo two, or in other words only keep the remainder of any result after integer division by two, then the taps can be implemented with exclusive or operations. We can therefore consider LFSRs to be akin to linear congruential generators, but using polynomials rather than integers and in which the increment is equal to the previous output.
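With such bit coefficients packed into the bits of an integer, polynomial multiplication consequently reduces to shifts and exclusive ors; a sketch in JavaScript, with a name of my own choosing:

```javascript
// multiplies polynomials whose bit coefficients are packed into integers in
// increasing powers of x, with all arithmetic performed modulo two
function gf2Multiply(a, b) {
  var p = 0;
  while (b !== 0) {
    if ((b & 1) === 1) p ^= a;   // adding modulo two is an exclusive or
    a <<= 1;                     // multiply a by x
    b >>>= 1;                    // move on to b's next coefficient
  }
  return p;
}

// (x+1) * (x^2+x+1) = x^3 + 2x^2 + 2x + 1, which equals x^3 + 1 modulo two
gf2Multiply(3, 7);   // yields 9, the bits of x^3 + 1
```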

Unfortunately, it is no easier to choose polynomial multipliers for LFSRs than it was to choose integer multipliers for congruential generators. Fortunately, others have already done the hard work for us in this respect[2] and so all that's left to do is to implement them, which we shall do in the next post.

References

[1] Pseudo Algorithmical, www.thusspakeak.com, 2016.

[2] Ward, R. & Molteno, T., Table of Linear Feedback Shift Registers, 2007.
