To a degree, we were able to map Turing's *On Permutations* manuscript model onto expander graphs. But it didn't quite work, no matter how we forced the interpretations. Do we do better if we map things onto quantum random walks instead (recalling that expander graphs and quantum walks are related, in any case)?
One author lays the model out well for us:
we see that H, as an operator, is specified as a set of projection terms (one for each element of the 2×2 matrix). We can clearly see the four terms (and the sign bias in the last term).
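As a sanity check (my construction, not from the source), the Hadamard coin really can be assembled from four outer-product terms, one per matrix element, with the sign bias on the last:

```python
import numpy as np

# Computational basis states for the coin.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# H as a sum of four outer-product (projection-style) terms,
# with the sign bias on the final |1><1| term.
H = (np.outer(ket0, ket0) + np.outer(ket0, ket1)
     + np.outer(ket1, ket0) - np.outer(ket1, ket1)) / np.sqrt(2)

# Sanity checks: unitary, and equal to the usual Hadamard matrix.
assert np.allclose(H @ H.conj().T, np.eye(2))
assert np.allclose(H, np.array([[1, 1], [1, -1]]) / np.sqrt(2))
```

Applying H to a single basis state then yields the equal superposition the next paragraph describes.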
Other authors prefer to present the coin state as two values (head and tail, logically), each being one of the two values of the polarized basis. They then have us think of H as that which puts such a single value (heads, say) into a superposition of computational-basis states instead. This change of basis gives rise to the intuition that, upon applying the operator, we are "refining the combinatorics of the wave function": it evolves to its next set of combinatorics as its components interfere constructively or destructively to generate the next wave function.
We have what group theory calls an "action": a functional property built upon the "impulse of the coin flip." We can view said flip as a finger striking a piano key, which induces its hammer to strike the associated string, which produces sound pulses, whose compression waves hit neighboring strings tuned to resonant frequencies, which also start to vibrate given the impulse from the inbound wavelet, which induces harmonics, whose compression waves add to the sound mix.
The author does a better job of relating the four terms of the H operator to the two values of the resulting superposition. We see below how the two-term superposition acts on each kth component of the current wave function, which includes the directly generated terms and the indirectly generated terms (the harmonics). Below, we see a harmonic-induced wavelet. Even though a coin flip of tails (say) has induced state 1 to move to 2, and the other flip of heads (say) moves −1 a round further to −2, we also have to remember that there are "harmonic" kth components of Phi.1, too. The coin flips (in superposition) also act on the "state 0 harmonic", since a series of flips inducing +1 or −1 actions could collectively leave our token currently at harmonic 0 as the generators cancel, with (+1) + (−1) = 0.
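A minimal simulation (my own, assuming the standard discrete-time Hadamard walk rather than anything in the source) makes the recombination concrete: after two steps, the +1 path and the −1 path both land back on position 0 and their probabilities add there.

```python
import numpy as np

STEPS = 2
N = 2 * STEPS + 1                     # positions -STEPS..+STEPS
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

# psi[x, c]: amplitude at position x (offset by STEPS) with coin c.
psi = np.zeros((N, 2), dtype=complex)
psi[STEPS, 0] = 1.0                   # start at position 0, coin "heads"

for _ in range(STEPS):
    psi = psi @ H.T                   # flip the coin at every position
    shifted = np.zeros_like(psi)
    shifted[1:, 0] = psi[:-1, 0]      # coin 0 moves +1
    shifted[:-1, 1] = psi[1:, 1]      # coin 1 moves -1
    psi = shifted

prob = (np.abs(psi) ** 2).sum(axis=1)
# Position 0 carries probability 1/2: both paths recombine there.
print(dict(zip(range(-STEPS, STEPS + 1), prob.round(3))))
```

With more steps the interference becomes destructive at some positions, which is the walk-level analogue of the cancelling generators described above.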
Perhaps we can now go back to the Turing model and see his H1 "generator" as the coin, where the four terms (1, k, −1, −k) are thought of as the H projection operators (1 is …, −1 is …, etc. for k and −k). These are generated by k raised to successive powers, of course.
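Taking k to behave like the imaginary unit (an assumption of this reading, not stated in the manuscript), the four terms really are just the successive powers of k, and the cycle closes at the fourth power:

```python
# Model k as the imaginary unit; repeated multiplication stays exact
# on the values 1, i, -1, -i.
k = 1j
powers, p = [], 1
for _ in range(4):
    powers.append(p)
    p = p * k

assert powers == [1, 1j, -1, -1j]   # the four terms 1, k, -1, -k
assert p == 1                       # k**4 = 1: a cyclic group of order 4
```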
Now Turing models K(y,x) as a scalar multiple of K(x), which immediately makes one think of eigenvalues and eigenspaces as alternative coordinate systems when thinking "directionally". But Turing says no such thing (unless it's implied by the conventions attached to the symbols). What we do see is…
a series of measures of y being the (2) "probability amplitudes" of the (4) projection terms of the H1 group, once reduced to 1-d. Of course, in L2 space we need the squares of the 2 amplitudes to sum to 1, as required by Turing, meaning that sqrt(2/3) and sqrt(1/3) would be our H1 amplitudes if the following adapted picture were our 1-d projection:
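A quick check of that normalization claim, nothing more:

```python
import math

# The two proposed H1 amplitudes.
a, b = math.sqrt(2 / 3), math.sqrt(1 / 3)

# L2 requirement: squared amplitudes sum to 1.
assert math.isclose(a ** 2 + b ** 2, 1.0)
print(a, b)
```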
Now recall that our re-modeling of Turing's Phi(y) is based on his statement that y is the previous 2 characters of the plaintext (two memory states of his Turing machine tape, that is). It is their relative randomness (entropy) that acts "as the coin". So it's not a Hadamard coin, but a defined coin operator whose state is set by the 2 previous positions on the tape. Doing this type of "memory" thing is of course an entirely plausible and crypto-ish thing to do (being seen in practice in the go-backs of the 1943-era Tunny machine). The coin is a very "quantitative" source of randomness!
OK. So if the 2 plaintext-memory cells are controlling the furtherance of the walk from the current position (on the Turing tape!), we do have a very "conditional branching" concept!
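A toy, entirely hypothetical sketch of such a memory-defined coin: the `coin` function and its parity rule are my invention (not Turing's), purely to show the conditional-branching shape of a walk driven by the previous two plaintext characters.

```python
# Hypothetical memory-defined coin: the step direction is a function
# of the two previous plaintext characters, not a fair coin flip.

def coin(prev2: str) -> int:
    """Map the 2-character memory to a step of +1 or -1 (toy rule)."""
    return 1 if (ord(prev2[0]) + ord(prev2[1])) % 2 == 0 else -1

def walk(plaintext: str) -> list[int]:
    """Walk the tape, branching on the 2-character memory each step."""
    pos, path = 0, [0]
    for i in range(2, len(plaintext)):
        pos += coin(plaintext[i - 2:i])   # conditional branch on memory
        path.append(pos)
    return path

print(walk("ATTACKATDAWN"))
```

The point is only structural: the "randomness" of the walk is exactly the entropy of the plaintext memory, as the paragraph above argues.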
So, we know that Turing is using H1 to give himself a quotient/factor logic. He wants to be able to think in terms of cosets (equivalence classes), allowing several plaintexts to map onto a single generated value and so compressing the combinatoric space each round. At the same time, the H-superposition and entanglement spread and diffuse the probabilities through the harmonics, making the distribution of terms within the evolving wave function a custom density of the plaintext, even though the resulting range values tend toward a uniform distribution.
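The coset compression can be made concrete in miniature. Assuming H1 = {1, k, −1, −k} sits inside the quaternion group Q8 (a modeling choice of mine, not stated in the manuscript), all eight group elements collapse onto just two cosets, so many inputs map to one class:

```python
import numpy as np

# Q8 as 2x2 complex matrices; H1 = {1, k, -1, -k} is an index-2 subgroup.
I2 = np.eye(2, dtype=complex)
i = np.array([[1j, 0], [0, -1j]])
j = np.array([[0, 1], [-1, 0]], dtype=complex)
k = i @ j

Q8 = {"1": I2, "-1": -I2, "i": i, "-i": -i,
      "j": j, "-j": -j, "k": k, "-k": -k}
H1 = [I2, k, -I2, -k]

def in_H1(g):
    return any(np.allclose(g, h) for h in H1)

# Each element lands in H1 itself or in the single other coset j*H1:
cosets = {name: ("H1" if in_H1(g) else "j*H1") for name, g in Q8.items()}
print(cosets)
```

Eight elements, two classes: that is the quotient logic in its smallest form.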
While this interpretation has a lot going for it, we have to see how Turing says it, when stating
It could be that when "g(x) is constant throughout each coset of H1", given H1 = 1 in its projected amplitudes, this means that at this nth round the input characters have equal probabilities (1/n, that is), resulting from the fact that their mutual distances (as points in L2 space) are now all the same.
OK. The final part of the puzzle, since that interpretation of Phi(y) is rather more convincing than my last attempt, is to consider the sum of the powers of the generator being zero. We can look at that semantically as being one of three things. First, it's just the condition that ensures that the rules of normal subgroups hold, giving closure under conjugation. Second, it captures the more semantic notion that we are interested in viewing the walk from the perspective of the net effect of all generator actions being, by happenstance, the net-zero case. Third, since the two rotations around the uprights can be seen as basis changes, we can see the enforcement of zero as the final and only "measurement" performed on the final wave function.
In piano terms, having struck many keys while the sustaining pedal is held down (allowing harmonics and superpositions to build in the sound field), "measurement" is what happens when you suddenly release the sustain. What the ear hears out of the sudden change in the cacophony, as the tonal center, is the (tonal) measurement. Presumably, one wants that no tonal center is sensed by the ear and its supporting billions (in my case) of very well-trained neural networks in the brain set to do recognition of tonal centers.
So let’s summarize the thoughts about the system.
The function of H1 is to act as a 2-bit register. It's the result of the one-way function, which takes another character from the plaintext and conjugates the current register by a function of that value, resulting in the storage of the calculated value (1 of the 4 possible values) in the register.
To calculate the function (of the next plaintext character), we have said character act as input to a set of wheels, each of which is coded to add first the I generator and then the J generator. The wheels are individually rotated, but in a manner such that the powers sum to zero, so that the node at which the walk lands when translating the H1 subgroup identifies a particular coset, in which all the probability amplitudes of the elements are eventually equal.
The conjugate value is stored in the register.
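A hypothetical sketch of that register update, again modeling the values in Q8 (the character-to-element map `f` is a toy choice of mine, not Turing's one-way function):

```python
import numpy as np

# Q8 as 2x2 complex matrices.
I2 = np.eye(2, dtype=complex)
i = np.array([[1j, 0], [0, -1j]])
j = np.array([[0, 1], [-1, 0]], dtype=complex)
k = i @ j
elements = [I2, i, j, k, -I2, -i, -j, -k]

def f(c: str):
    """Toy stand-in for the one-way map: character -> group element."""
    return elements[ord(c) % 8]

def step(register, c):
    g = f(c)
    return g @ register @ np.linalg.inv(g)   # conjugate by f(c)

reg = k                       # some starting register value
for ch in "ATTACK":           # feed in plaintext characters
    reg = step(reg, ch)
```

One honest caveat: in Q8, conjugation can only preserve an element or flip its sign, so this toy register stays within a ± pair; whatever Turing's actual update rule is, it would have to be richer to reach all 4 values.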
Since the system is working with only 8 values, we can perhaps assume any one wheel can process 3 streams at a time, having 3 lots of 8 pins = 24 pins (out of 26, on a typical Enigma wheel). Perhaps the quantization of the analogue value is stored in 3 levels, each of 1 bit, and thus there are 3 H1 registers (one per quantization level).
So the actual output stream would consist of the states to which the walk had proceeded at that point in the input stream. But note that the wave function guiding the selection of the output value is itself evolving, as a function of H1. Thus we get an operator-based averaging process (acting on the probability amplitudes, related as proportions), but we also get a directed walk that is a function of the plaintext, where that function is evolving the terms in the wave function, forming the Dirac oscillations that characterize the final density.
OK, so we get to exploit quantum logic: the quantum channel uses conjugation, but the distinguishability only gets worse as the inner product space compresses and the norm of the compressed space gets close to zero, that is, as the angle alpha between the two vectors goes to zero and cos(alpha) goes to 1.
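The cos(alpha) point in a few lines (a generic illustration, not anything from the source): as the angle between two real state vectors shrinks, their overlap tends to 1 and the states become harder to distinguish.

```python
import numpy as np

def overlap(alpha: float) -> float:
    """Inner product <u|v> = cos(alpha) between two unit vectors."""
    u = np.array([1.0, 0.0])
    v = np.array([np.cos(alpha), np.sin(alpha)])
    return float(u @ v)

for alpha in (1.0, 0.5, 0.1, 0.01):
    print(alpha, overlap(alpha))   # overlap climbs toward 1 as alpha -> 0
```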
Now, whereas the quantum random walk is modeled as a tensor product of the coin and position vectors, Turing's system feels more like a group ring, where the 2-bit H1 helps generate a unique codebook. But the point is that the system (i.e., the wave function) is continually evolving, guiding which output wire is active. After a burn-in period, there is no predictability between two ciphertext characters, unless one has the trapdoor that allows one to regenerate the evolution of the functional/wave function.
In his summary-of-quantum-mechanics lecture (which is rather clearer than the original course!), Susskind nicely introduces unitary operators working on wave functions parameterized by time, distinguishing them from Hermitian operators (such as X and P) and the world of representations based either on position or on momentum, upon which such operators induce actions.
He also nicely characterizes just what an inner product space is, in terms of distinguishability. He also motivates what a wave function is, and what a linear functional is (when it transforms the function itself). He makes it clear that state space, in the world of computer science, is phase space, whose eigenvalues just happen to align nicely with the "points" on the unit circle, which sounds awfully like the pins/pads on an Enigma wheel!