It was fun re-reading the General Report on Tunny – a couple of years after I first encountered its strange language, and before I learned the core math it leverages, as taught today using our own language game.
Today, when decoding or error-correcting, we are used to iterative message-passing algorithms: given a parity matrix that specifies a graph (with nodes and edges), pass beliefs about code-breaking-related "propositions" along the edges, so that the graph acts as a custom computing machine. In the Tunny break, log-likelihoods were passed much as they are today, with one particular computation: the inner product between the evolving wheel bits, weighted by their evidence valuations, and each row of the "parity matrix" – which, in Colossus days, was of course the sample of depths taken at the 1271 spacing of the cipher tape.
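To make the inner-product view concrete, here is a toy sketch, not a reconstruction of the actual Colossus runs: wheel-bit hypotheses and tape evidence are mapped to ±1, and the score at each candidate alignment is simply agreements minus disagreements – an inner product. All parameters (wheel length, agreement probability, tape length) are illustrative.

```python
# Toy sketch of wheel scoring as an inner product (illustrative only).
# A wheel hypothesis and the tape evidence are both mapped to +/-1;
# the score for an alignment is the sum of agreements minus
# disagreements, i.e. a scalar product of the two +/-1 vectors.
import random

WHEEL_LEN = 31          # e.g. one chi wheel has 31 cams
TAPE_LEN = 1271 * 4     # a few passes over the 1271-period sample

random.seed(1)
true_wheel = [random.choice([0, 1]) for _ in range(WHEEL_LEN)]

# Simulated cipher-tape evidence: each position agrees with the
# underlying wheel bit with probability 0.7 (a crude stand-in for
# the statistics a real depth would provide).
tape = []
for i in range(TAPE_LEN):
    b = true_wheel[i % WHEEL_LEN]
    tape.append(b if random.random() < 0.7 else 1 - b)

def score(offset):
    """Inner product between the wheel hypothesis at this offset and
    the tape evidence, both viewed as +/-1 vectors."""
    return sum((1 if tape[i] == true_wheel[(i + offset) % WHEEL_LEN] else -1)
               for i in range(TAPE_LEN))

# The correct alignment (offset 0) stands out as the highest score.
best = max(range(WHEEL_LEN), key=score)
```

With enough tape, the true alignment's score grows linearly in the tape length while wrong alignments hover near zero – which is why long runs over the 1271 spacing paid off.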
What is interesting next is the architecture not only of the Manchester computer – which followed Colossus – but also of machines as recent as the NSA's Cray computers (with custom CPUs). These, of course, have "secret" instructions that compute scalar products (i.e. geometric angular distances between vectors), applying masks such as the Tunny-era ones for doubted bits.
So, given that "secretly specialized" but otherwise fast, general-purpose CPUs have gone out of fashion for cryptanalytical machines, we really have to look at the graphics processors of the 1980s to see how cryptanalytical hardware was proceeding back then. One has to look at how the hardware pipelines supported conformal projections and calculations over complex function vector spaces to glimpse what the cryptanalytical capability really was at the time. With that done, one can project forward to today, knowing how raw hardware capability has evolved since the first generation of GPUs.