From qubits back to vector lengths, and on DES backdooring in the 1980s.

I’m starting to get a feel for the “reasoning” required to understand Turing’s quantum algorithms, based on information processing by Enigma-style wheels. Modern explanations help in two ways: they explain what Turing was saying, and they show how Turing used older “geometric” language to say things now expressed in the terminology of qubits.

First, consider a nice, modern write-up concerning quantum algorithms, “constant functions” and “balanced functions”.


[excerpt from a Safari-hosted book]

Note how folks argue by distinguishing between two phase shifts. The first concerns the global phase induced by the entanglement of Bob’s and Alice’s quantum states: no shift (constant functions). The second is the “pi/2 radians phase shift”: half the bits are the constant x, the other half the constant not-x.

For the no-phase-shift case, Alice’s measurement of the qubit register yields 0 with certainty – since the amplitude of the component with all qubits in the 0-valued state is either 1 or –1, already consuming all the available amplitude. Any other (non-zero) qubit state therefore contributes nothing, having amplitude zero.

The all-zeros pattern has an interesting formulation, since that particular pattern eliminates any contribution to the register’s amplitude from the inner product of the state components – leaving the amplitude to reflect “purely” the value of f(x). No other state pattern has this property.
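The amplitude claim above (the modern Deutsch–Jozsa form of the argument, not Turing’s own formulation) can be checked numerically. A minimal NumPy sketch, where `final_amplitudes` is an illustrative helper: after H⊗n, a phase oracle, and H⊗n again, the amplitude of |0…0⟩ is (1/2^n) · Σ_x (–1)^f(x).

```python
# Minimal sketch of the standard Deutsch-Jozsa amplitude argument.
import numpy as np

def final_amplitudes(f_values):
    """f_values: the 0/1 values of f over all 2^n inputs, in order."""
    n = int(np.log2(len(f_values)))
    H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
    Hn = H
    for _ in range(n - 1):
        Hn = np.kron(Hn, H)
    state = Hn @ np.eye(2 ** n)[0]                           # H^n on |0...0>
    state = np.array([(-1) ** f for f in f_values]) * state  # phase oracle
    return Hn @ state                                        # final H^n layer

n = 3
constant = [1] * 2 ** n                    # f(x) = 1 everywhere
balanced = [x % 2 for x in range(2 ** n)]  # half 0s, half 1s

assert np.isclose(abs(final_amplitudes(constant)[0]), 1.0)  # all amplitude on |0..0>
assert np.isclose(final_amplitudes(balanced)[0], 0.0)       # |0..0> never observed
```

For a constant f the sign of the surviving amplitude is (–1)^f, matching the text’s “either 1 or –1”; in that case every other basis state’s amplitude is exactly zero.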

Let’s recall Turing’s version (or two versions, rather) of this argument:


Upon convergence (when no further change occurs in the probability distribution density “k” between time instants t+1 and t), we have what we would now call the “entanglement” situation. In Turing’s case, Alice is the party introducing entropy into the composite system and Bob is the operator/analyzer of the machine (i.e. Turing himself). By making the argument, Bob/Turing induces entanglement between his particular Boltzmann brain and the generators built into the wiring of the drums, via their interaction with Alice’s entropy-inducing plaintext.

The situation of the query register having n qubits, ALL of which are in the zero state, is equivalent to H1 having powers of generators that sum to zero, let’s say.

In Turing’s case, the value induced by the particular constant value taken by f(x) is only 1 (and cannot be –1); not that the elimination of the –1 case changes the core of the argument.

Since the effect of the wheels is to make points equidistant (the averaging process reduces the variance of the entropy encountered from the plaintext), all points in a given coset ultimately take a constant probability – i.e. that value of f(x) that is constant. That is, either all elements of the coset have probability 0, or all 4 positive values (say) have probability 1/4. Only the particular coset corresponding to the closed condition associated with the commuting operators has this non-zero f(x); all others are associated with non-commuting operators whose accumulation-point limit is 0 – not lying on the boundary of the Poincaré sphere and thus not having radius/vector-length 1.

We can also argue the other way: that Turing was also modeling the balanced case, but in a qudit sense of balance. Rather than there being 2 cosets with a 2-ary balance, there are n cosets with an n-ary balance, only one of which has all the amplitude. (This contrasts with an intuition that perhaps n/2 of the cosets would each carry a non-zero amplitude, constant for each.) This of course aligns nicely with the mental picture of the 7-point geometry, with the central circle being the one coset associated with commuting/normalizer operators, and the (3) others involving points off the circle – associated thus with non-commuting elements that introduce “volume” outside the circle, at a (radial) vector length necessarily >1.

Note how Turing’s balanced coin (the nature of H1) is not a Hadamard gate, which induces a skewed distribution on a quantum random walk. Turing uses the 7-point geometry in the same sense that folks in physics compute the interaction of multiple oscillators, each with angular momentum. Turing thinks more in terms of the Tunny rectangle, used for breaking wheels – where each cell of such a rectangle is an individual oscillator, tuned to the evolution of a particular Tunny wheel and a particular wheel bit – and their non-local interactions.
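The skew the Hadamard coin induces is a standard, checkable fact. A minimal sketch (assuming the usual discrete-time walk on a line; the conventions – coin |0⟩ steps left, walker starts at the origin with coin |0⟩ – are illustrative choices, nothing Turing-specific):

```python
# Discrete-time quantum walk on a line with a Hadamard coin: the position
# distribution drifts, unlike the symmetric classical random walk.
import numpy as np

def hadamard_walk(steps):
    size = 2 * steps + 1                      # positions -steps .. +steps
    amp = np.zeros((size, 2), dtype=complex)  # amplitude[position, coin]
    amp[steps, 0] = 1.0                       # origin, coin = |0>
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    for _ in range(steps):
        amp = amp @ H.T                       # Hadamard coin flip at each site
        new = np.zeros_like(amp)
        new[:-1, 0] = amp[1:, 0]              # coin |0> component steps left
        new[1:, 1] = amp[:-1, 1]              # coin |1> component steps right
        amp = new
    return (np.abs(amp) ** 2).sum(axis=1)     # position distribution

steps = 50
p = hadamard_walk(steps)
mean = (np.arange(-steps, steps + 1) * p).sum()
assert abs(p.sum() - 1.0) < 1e-9
assert abs(mean) > 5   # markedly skewed: a fair classical coin gives mean 0
```

The drift grows linearly with the number of steps, which is exactly the asymmetry being contrasted with a balanced coin.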

All of this suggests that the NSA could have introduced a subtle quantum walk into the “coin-tossing element” in the likes of DES – remember, the thinking about quantum algorithms above was not common back in 1977, and such would have been viewed as a secret, even as a possibility. For a suitable detector of such, squeezed or coherent states only apparent in quantum evolutions, one would want similar “non-local signals” to help infer the value of a certain number of DES key bits – with which to reduce the brute-force search surface, even back in 1980. One would need quite enormous matrices, and the likes of maximal-likelihood inference using the Baum–Welch algorithm, to infer when a DES key was created by a particular process – one associated with some US crypto vendor’s (now known to be compromised at birth) random number generator.
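As a toy, entirely hypothetical illustration of the key-bit-inference idea (the generator, the “hot” bit positions, and the bias level below are all made up, and the full HMM/Baum–Welch treatment a stateful generator would need is omitted): if a vendor’s key generator biases a few key bits, simple per-bit frequency counts over many observed keys expose them, shrinking the brute-force surface.

```python
# Hypothetical sketch: detect per-bit bias in sampled 56-bit keys.
import random

random.seed(1)
HOT, P_HOT = {0, 13, 42}, 0.9   # made-up biased positions and bias level

def biased_key(bits=56):
    return [int(random.random() < (P_HOT if i in HOT else 0.5))
            for i in range(bits)]

keys = [biased_key() for _ in range(5000)]
freq = [sum(k[i] for k in keys) / len(keys) for i in range(56)]
suspect = sorted(i for i, f in enumerate(freq) if abs(f - 0.5) > 0.1)
print("suspect key bits:", suspect)   # the three HOT positions stand out
```

Each recovered bit halves the search space; a memoryless bias like this needs only counting, whereas correlations across bits are what would call for the hidden-Markov machinery.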

We can also think back to the world of Tunny,

One sees clearly, above, that not only is the tensored (i.e. multi-bit) Hadamard transform migrating the data into the frequency domain, but that said domain is a superposition of Walsh (square-wave) functions.
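That identification is easy to verify directly: each row y of the tensored (unnormalised) Hadamard matrix is the Walsh function w_y(x) = (–1)^popcount(x AND y), a ±1 square wave over the 2^n inputs. A short sketch:

```python
# Verify: rows of the tensored Hadamard transform are Walsh functions.
import numpy as np

n = 3
H = np.array([[1, 1], [1, -1]])
Hn = H
for _ in range(n - 1):
    Hn = np.kron(Hn, H)

for y in range(2 ** n):
    walsh = [(-1) ** bin(x & y).count("1") for x in range(2 ** n)]
    assert np.array_equal(Hn[y], walsh)   # row y == Walsh function w_y
```

So a Hadamard-transformed register is precisely a weighted superposition of Walsh square waves, one per frequency index y.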

Then we recall how, in an r-dimensional qudit space, one can draw a parallel between Turing’s H1 generator-Hamiltonian (whose powers sum to zero) and the sum of the bulges (each character’s expected value over 1/r) accumulating to zero.
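The bulge half of that parallel is elementary to state concretely: over an r-letter alphabet, a character’s bulge is its observed frequency minus the flat expectation 1/r, and since the frequencies sum to 1 the bulges always sum to zero. A tiny sketch (the sample text is made up):

```python
# Bulges in the Tunny sense: deviations from uniform, summing to zero.
from collections import Counter

def bulges(sample, alphabet):
    r = len(alphabet)
    counts = Counter(sample)
    return {c: counts[c] / len(sample) - 1 / r for c in alphabet}

b = bulges("ABAACBBAAC", "ABC")      # made-up sample over a 3-ary alphabet
assert abs(sum(b.values())) < 1e-9   # bulges accumulate to exactly zero
```

Any one bulge can be large (that is what wheel-breaking exploits), but their total is constrained to vanish – the same kind of closure condition as generator powers summing to zero.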


and then


About home_pw

Computer programmer who often does network administration with a focus on security servers. Sometimes plays at slot-machine programming.
This entry was posted in crypto.