Recall that in the Baron's latest wager, Sir R-----'s goal was to traverse a three by three checkerboard in steps determined by casts of a four sided die, each at a cost of two coins. Moving from left to right upon the first rank and advancing to the second upon its third file, thereafter from right to left and advancing upon the first file and finally from left to right again, he should have prevailed for a prize of twenty five coins had he landed upon the top right place. Frustrating his progress, however, were the rules that landing upon a black square dropped him back down to the first rank and that overshooting the last file upon the last rank required that he should move in reverse by as many places as he had done so.

To reckon the fairness of this wager, we must first label the places upon which Sir R----- might begin a turn

The probability that he might advance from the \(i^\mathrm{th}\) to the \(j^\mathrm{th}\) place is given by the element in the \(i^\mathrm{th}\) row and the \(j^\mathrm{th}\) column of the matrix

\[
\mathbf{P} = \frac14 \times
\begin{pmatrix}
0 & 1 & 2 & 1 & 0 & 0\\
1 & 0 & 2 & 1 & 0 & 0\\
1 & 0 & 1 & 1 & 1 & 0\\
1 & 1 & 0 & 0 & 1 & 1\\
0 & 2 & 0 & 0 & 1 & 1\\
0 & 0 & 0 & 0 & 0 & 4
\end{pmatrix}
\]

in which the probability of leaving the last square is zero since that should be the end of the game. This is an example of a *transition matrix* and the process of changing states according to the probabilities that it contains is known as a *Markov chain*; in this case an *absorbing* Markov chain, which has at least one state that, once reached, endures.

We can divide the transition matrix into those parts that represent the completion of the game and those that represent its continuation with

\[
\mathbf{P} =
\left(
\begin{array}{c|c}
\mathbf{Q} & \mathbf{R}\\
\hline
\mathbf{0} & \mathbf{I}\\
\end{array}
\right)
=
\frac14 \times
\left(
\begin{array}{ccccc|c}
0 & 1 & 2 & 1 & 0 & 0\\
1 & 0 & 2 & 1 & 0 & 0\\
1 & 0 & 1 & 1 & 1 & 0\\
1 & 1 & 0 & 0 & 1 & 1\\
0 & 2 & 0 & 0 & 1 & 1\\
\hline
0 & 0 & 0 & 0 & 0 & 4
\end{array}
\right)
\]
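For readers who prefer to check such reckonings mechanically, the matrix and its partition are easily set down in a few lines of Python with numpy; the library and the names are my own choosing, not part of the Baron's account.

```python
import numpy as np

# The transition matrix P of the wager; rows and columns are the labelled
# places, with the final state the absorbing goal square.
P = np.array([
    [0, 1, 2, 1, 0, 0],
    [1, 0, 2, 1, 0, 0],
    [1, 0, 1, 1, 1, 0],
    [1, 1, 0, 0, 1, 1],
    [0, 2, 0, 0, 1, 1],
    [0, 0, 0, 0, 0, 4],
]) / 4

# Each row must sum to one, since Sir R----- must land somewhere every turn.
assert np.allclose(P.sum(axis=1), 1)

Q = P[:5, :5]  # transitions amongst the five transient places
R = P[:5, 5:]  # transitions from the transient places to the goal
```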

The probabilities of completion and continuation after two turns are consequently given by
\[
\mathbf{P}^2 =
\left(
\begin{array}{cc}
\mathbf{Q} & \mathbf{R}\\
\hline
\mathbf{0} & \mathbf{I}
\end{array}
\right)
\times
\left(
\begin{array}{c|c}
\mathbf{Q} & \mathbf{R}\\
\mathbf{0} & \mathbf{I}
\end{array}
\right)
=
\begin{pmatrix}
\mathbf{Q}^2 & \mathbf{Q} \times \mathbf{R} + \mathbf{R}\\
\mathbf{0} & \mathbf{I}
\end{pmatrix}
=
\begin{pmatrix}
\mathbf{Q}^2 & \left(\mathbf{Q} + \mathbf{I} \right) \times \mathbf{R}\\
\mathbf{0} & \mathbf{I}
\end{pmatrix}
\]

and those after three by
\[
\mathbf{P}^3 =
\left(
\begin{array}{cc}
\mathbf{Q} & \mathbf{R}\\
\hline
\mathbf{0} & \mathbf{I}
\end{array}
\right)
\times
\left(
\begin{array}{c|c}
\mathbf{Q}^2 & \left(\mathbf{Q} + \mathbf{I} \right) \times \mathbf{R}\\
\mathbf{0} & \mathbf{I}
\end{array}
\right)
=
\begin{pmatrix}
\mathbf{Q}^3 & \left(\mathbf{Q}^2 + \mathbf{Q} + \mathbf{I} \right) \times \mathbf{R}\\
\mathbf{0} & \mathbf{I}
\end{pmatrix}
\]

In general we have
\[
\mathbf{P}^n =
\begin{pmatrix}
\mathbf{Q}^n & \left(\sum_{i=0}^{n-1} \mathbf{Q}^i \right) \times \mathbf{R}\\
\mathbf{0} & \mathbf{I}
\end{pmatrix}
\]

where \(\sum\) is the summation sign, which follows by the induction
\[
\begin{align*}
\mathbf{P}^{n+1} &=
\left(
\begin{array}{cc}
\mathbf{Q} & \mathbf{R}\\
\hline
\mathbf{0} & \mathbf{I}
\end{array}
\right)
\times
\left(
\begin{array}{c|c}
\mathbf{Q}^n & \left(\sum_{i=0}^{n-1} \mathbf{Q}^i \right) \times \mathbf{R}\\
\mathbf{0} & \mathbf{I}
\end{array}
\right)
=
\begin{pmatrix}
\mathbf{Q}^{n+1} & \mathbf{Q} \times \left(\sum_{i=0}^{n-1} \mathbf{Q}^i \right) \times \mathbf{R} + \mathbf{R}\\
\mathbf{0} & \mathbf{I}
\end{pmatrix}\\
&=
\begin{pmatrix}
\mathbf{Q}^{n+1} & \left(\sum_{i=1}^n \mathbf{Q}^i \right) \times \mathbf{R} + \mathbf{R}\\
\mathbf{0} & \mathbf{I}
\end{pmatrix}
=
\begin{pmatrix}
\mathbf{Q}^{n+1} & \left(\sum_{i=0}^n \mathbf{Q}^i \right) \times \mathbf{R}\\
\mathbf{0} & \mathbf{I}
\end{pmatrix}
\end{align*}
\]
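The induction can be spot-checked numerically; continuing the hypothetical numpy sketch, the blocks of \(\mathbf{P}^n\) agree with the formula for any particular \(n\):

```python
import numpy as np
from numpy.linalg import matrix_power

P = np.array([
    [0, 1, 2, 1, 0, 0],
    [1, 0, 2, 1, 0, 0],
    [1, 0, 1, 1, 1, 0],
    [1, 1, 0, 0, 1, 1],
    [0, 2, 0, 0, 1, 1],
    [0, 0, 0, 0, 0, 4],
]) / 4
Q, R = P[:5, :5], P[:5, 5:]

n = 7
Pn = matrix_power(P, n)
S = sum(matrix_power(Q, i) for i in range(n))    # Q^0 + Q^1 + ... + Q^(n-1)
assert np.allclose(Pn[:5, :5], matrix_power(Q, n))  # top-left block is Q^n
assert np.allclose(Pn[:5, 5:], S @ R)               # top-right is (sum Q^i) R
```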

Now, the expected number of times that Sir R----- might land upon the \(j^\mathrm{th}\) labelled square should he have started from the \(i^\mathrm{th}\), should neither be the goal, is given by the element in the \(i^\mathrm{th}\) row and \(j^\mathrm{th}\) column of the matrix
\[
\mathbf{N} = \sum_{k=0}^\infty \mathbf{Q}^k
\]

since the expected number of times that he should have concluded upon the former exactly \(k\) turns after starting from the latter is simply the probability that he should have done so.

Noting that the function

\[
f(x) = x^{-1}
\]

has the derivatives
\[
\begin{align*}
f^{(1)}(x) &= -x^{-2}\\
f^{(2)}(x) &= 2 \times x^{-3}\\
f^{(3)}(x) &= -6 \times x^{-4}\\
\dots\\
f^{(n)}(x) &= (-1)^n \times n! \times x^{-(n+1)}\\
\end{align*}
\]

where \(f^{(i)}\) is the \(i^\mathrm{th}\) derivative of \(f\) and the exclamation mark stands for the factorial of the term preceding it, by Taylor's theorem, we have
\[
\begin{align*}
f(1-x)
= \sum_{i=0}^\infty \frac{(-x)^i}{i!} \times f^{(i)}(1)
= \sum_{i=0}^\infty \frac{(-x)^i}{i!} \times (-1)^i \times i! \times 1^{-(i+1)}
= \sum_{i=0}^\infty x^i
\end{align*}
\]

provided that the magnitude of \(x\) is less than one so that its increasing powers shrink ever closer towards zero.

Since the game must *eventually* end, the repeated multiplication of \(\mathbf{Q}\) by itself must tend to a matrix that is wholly populated with zeros, meaning that we can generalise this result to yield
\[
\sum_{k=0}^\infty \mathbf{Q}^k = \left(\mathbf{I}-\mathbf{Q}\right)^{-1}
\]

and therefore
\[
\mathbf{N} = 4 \times
\begin{pmatrix}
4 & -1 & -2 & -1 & 0\\
-1 & 4 & -2 & -1 & 0\\
-1 & 0 & 3 & -1 & -1\\
-1 & -1 & 0 & 4 & -1\\
0 & -2 & 0 & 0 & 3
\end{pmatrix}^{-1}
\]
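Both the scalar series and its matrix counterpart are easily confirmed numerically; a sketch assuming numpy once more:

```python
import numpy as np
from numpy.linalg import inv, matrix_power

# The scalar case: the truncated geometric series approaches 1/(1-x).
x = 0.4
assert abs(sum(x**i for i in range(100)) - 1 / (1 - x)) < 1e-12

# The matrix case: the truncated sum of powers of Q approaches (I-Q)^(-1),
# since the powers of Q shrink towards the zero matrix.
Q = np.array([
    [0, 1, 2, 1, 0],
    [1, 0, 2, 1, 0],
    [1, 0, 1, 1, 1],
    [1, 1, 0, 0, 1],
    [0, 2, 0, 0, 1],
]) / 4

N = inv(np.eye(5) - Q)
S = sum(matrix_power(Q, k) for k in range(500))
assert np.allclose(N, S)
```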

The expected number of times that Sir R----- should land upon any but the last space once he has reached the \(i^\mathrm{th}\) is simply
\[
e_i = \sum_{j=0}^4 n_{i,j}
\]

which we can represent as
\[
\begin{pmatrix}
e_0\\
e_1\\
e_2\\
e_3\\
e_4
\end{pmatrix}
= 4 \times
\begin{pmatrix}
4 & -1 & -2 & -1 & 0\\
-1 & 4 & -2 & -1 & 0\\
-1 & 0 & 3 & -1 & -1\\
-1 & -1 & 0 & 4 & -1\\
0 & -2 & 0 & 0 & 3
\end{pmatrix}^{-1}
\times
\begin{pmatrix}
1\\
1\\
1\\
1\\
1
\end{pmatrix}
\]
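Equivalently, the \(e_i\) can be had from a linear solver rather than an explicit inverse; sketching again with numpy:

```python
import numpy as np

# The integer matrix 4 * (I - Q) from the text.
M = np.array([
    [ 4, -1, -2, -1,  0],
    [-1,  4, -2, -1,  0],
    [-1,  0,  3, -1, -1],
    [-1, -1,  0,  4, -1],
    [ 0, -2,  0,  0,  3],
])

# e = 4 M^(-1) (1,...,1)^T, i.e. the solution of M e = (4,...,4)^T.
e = np.linalg.solve(M, 4 * np.ones(5))
print(e)
```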

I tried to explain this line of reasoning to the Baron, but I fear that I did not have his full attention.

In continuation, rearranging yields

\[
\begin{pmatrix}
4 & -1 & -2 & -1 & 0\\
-1 & 4 & -2 & -1 & 0\\
-1 & 0 & 3 & -1 & -1\\
-1 & -1 & 0 & 4 & -1\\
0 & -2 & 0 & 0 & 3
\end{pmatrix}
\times
\begin{pmatrix}
e_0\\
e_1\\
e_2\\
e_3\\
e_4
\end{pmatrix}
=
\begin{pmatrix}
4\\
4\\
4\\
4\\
4
\end{pmatrix}
\]

which we can express as the simultaneous linear equations
\[
\begin{array}{rrrrrrrrrrrrr}
4 \times e_0 &-& e_1 &-& 2 \times e_2 &- &e_3 &&&=&4&&(1)\\
-e_0 &+& 4 \times e_1 &-& 2 \times e_2 &- &e_3 &&&=&4&&(2)\\
-e_0 &&&+& 3 \times e_2 &-& e_3 &- &e_4 &=&4&&(3)\\
-e_0 &-&e_1&&&+& 4 \times e_3 &- &e_4 &=&4&&(4)\\
&&-2 \times e_1 &&&&&+&3 \times e_4 &=&4&&(5)
\end{array}
\]

We can eliminate \(e_4\) from these with
\[
\begin{array}{rrrrrrrrrrrrr}
(5) + 3 \times (4) &\rightarrow & -3 \times e_0 & - & 5 \times e_1 &&&+&12 \times e_3 &=&16&&(6)\\
(5) + 3 \times (3) &\rightarrow & -3 \times e_0 & - & 2 \times e_1 &+&9 \times e_2&-&3 \times e_3 &=&16&&(7)
\end{array}
\]

and thereafter \(e_3\) with
\[
\begin{array}{rrrrrrrrrrr}
4 \times (7) + (6) &\rightarrow & -15 \times e_0 & - & 13 \times e_1 &+&36 \times e_2&=&80&&(8)\\
(7) - 3 \times (2) &\rightarrow &&& -14 \times e_1 &+&15 \times e_2&=&4&&(9)\\
(7) - 3 \times (1) &\rightarrow & -15 \times e_0 & + & e_1 &+&15 \times e_2&=&4&&(10)
\end{array}
\]

Next we can erase \(e_2\)
\[
\begin{array}{rrrrrrrrr}
(10) - (9) &\rightarrow & -15 \times e_0 &+& 15 \times e_1 &=&0&&(11)\\
12 \times (10) - 5 \times (8) &\rightarrow & -105 \times e_0 &+& 77 \times e_1 &=&-352&&(12)
\end{array}
\]

and finally \(e_1\) to yield
\[
\begin{array}{rrrrrrrrr}
15 \times (12) - 77 \times (11) &\rightarrow & -420 \times e_0 &=& -5280
\end{array}
\]

implying that
\[
e_0 = \frac{5280}{420} = \frac{88}{7}
\]

Sir R-----'s expected winnings were therefore
\[
25 - 2 \times \frac{88}{7} = \frac{25 \times 7 - 2 \times 88}{7}
= \frac{175 - 176}{7}
= -\frac17
\]

and I should consequently have advised him to decline the Baron's wager.
\(\Box\)
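For those who, unlike the Baron, care for mechanical verification: solving the five simultaneous equations exactly over the rationals with Python's fractions module recovers both \(e_0\) and the expected loss. The elimination routine below is my own sketch, not Sir R-----'s.

```python
from fractions import Fraction

# The five simultaneous equations M e = (4,4,4,4,4)^T, solved exactly by
# Gaussian elimination over the rationals.
M = [[Fraction(v) for v in row] for row in [
    [ 4, -1, -2, -1,  0],
    [-1,  4, -2, -1,  0],
    [-1,  0,  3, -1, -1],
    [-1, -1,  0,  4, -1],
    [ 0, -2,  0,  0,  3],
]]
b = [Fraction(4)] * 5

# Forward elimination (no pivoting is needed for this matrix).
for i in range(5):
    for j in range(i + 1, 5):
        f = M[j][i] / M[i][i]
        M[j] = [a - f * c for a, c in zip(M[j], M[i])]
        b[j] -= f * b[i]

# Back substitution.
e = [Fraction(0)] * 5
for i in range(4, -1, -1):
    e[i] = (b[i] - sum(M[i][k] * e[k] for k in range(i + 1, 5))) / M[i][i]

print(e[0])           # 88/7
print(25 - 2 * e[0])  # -1/7
```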
