Turing’s global “quantum” measurement process for cryptanalysis


Our marked-up version of the slide shown below indicates how we learned to think like Turing about quantum cryptanalysis. For the first time, we feel we are cracking his secret code. This is not the secret of the classified process later revealed as Enigma-style banburismus (http://stoneship.org.uk/~steve/banburismus.html) or Fish-centric rectangling. Rather, it is the secret of how his mind worked in arriving at such definitions, grounded in the rather pure and very theoretical training he received in his Cambridge math courses. While Turing was clearly immersed in the physical-sciences mathematics of the day, note how he does NOT apply it directly to cryptanalysis. Rather, he applies the underlying logic of that mathematics.

image

4-2. Distinguishability

What we see above, in a style of Dirac notation that Turing did NOT use, is a formal model of a composite system in which the two sub-systems are dependent. The left-hand sub-system, denoted A, can be seen as the "measuring" apparatus, for a variety of measures. (Yes! That's measures as in "measure theory".) In particular, it will measure the "scores" of bigrams, trigrams, tetragrams, etc. (just as in Enigma cryptanalysis) found on the "input" Turing tapes, given a pointer to the current tape position. The reason the apparatus must support a "variety" of measures is that, in Enigma's particular manifold and geometry, there may be a different scoring table for each "overlap" (for certain n-grams).

Being a general apparatus, the measure-capable subsystem A can adapt to different scoring tables, as required. The right-hand subsystem, labeled B, is entangled with this "measuring potential" and with the measuring apparatus itself. One can think of B as the rotors in the rotor cage, whereas A is the indicator and the rest of the keying setup. The relevance here is that we can perform a "global unitary" operation on the composition, upon whose completion the measuring apparatus is prepared to do its job. It is only a global unitary acting specifically on the composition that allows the measure-potentials revealed by A to be reduced from potential to the actuality of a pointer moving on a dial.
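The point about global versus local operations can be made concrete in standard two-qubit quantum mechanics. The following is a minimal sketch, not anything from Turing's own notation: a global unitary (a Hadamard on A followed by a CNOT) entangles two independent subsystems, after which subsystem A, viewed locally, is maximally uncertain.

```python
import numpy as np

# Sketch: two qubits A and B begin in the uncorrelated product state |0>|0>.
# A *global* unitary, (CNOT)(H tensor I), entangles them. Afterwards the
# reduced state of A alone is 0.5 * identity: no local measurement on A
# reveals the correlations the global operation created.

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard on subsystem A
I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

psi0 = np.kron([1, 0], [1, 0])                 # product state |00>
psi = CNOT @ np.kron(H, I2) @ psi0             # Bell state (|00> + |11>)/sqrt(2)

# Reduced density matrix of A: trace out B.
rho = np.outer(psi, psi)
rho_A = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)
print(rho_A)   # 0.5 * identity: locally, A looks like a fair coin
```

The mapping of A to the keying setup and B to the rotor cage is the blog's analogy; the code only illustrates the textbook fact it leans on.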

Looking at the formalism, we see the "degree" of entanglement-potential characterized by the variable k. Remember, the state is simply being decomposed into a weighted set of k basis states. This role of k is interesting, as it turns up in Turing's own writings on the topic, albeit with different modeling goals. It's worth pointing out how this "k-ness" recurs across several mathematical concepts; there is a common notion underneath.
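In the standard formalism, "a weighted set of k basis states" is the Schmidt decomposition, and k is the Schmidt rank. A short sketch of extracting those k weights via the SVD, assuming the Bell state from textbook quantum mechanics (not any state Turing wrote down):

```python
import numpy as np

# Sketch: reshape the joint state into its coefficient matrix c[a, b] over
# basis pairs |a>_A |b>_B; the singular values of that matrix are the Schmidt
# weights, and the number of non-zero ones is k, the degree of entanglement.

psi = np.array([1, 0, 0, 1]) / np.sqrt(2)    # Bell state (|00> + |11>)/sqrt(2)
M = psi.reshape(2, 2)                        # coefficient matrix c[a, b]
alphas = np.linalg.svd(M, compute_uv=False)  # Schmidt weights

k = int(np.sum(alphas > 1e-12))              # Schmidt rank
print(alphas, k)                             # two equal weights, k = 2
```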

Let’s look at Turing’s most obvious ‘k’. Think of this K function as the second subsystem, B, which implies that the first subsystem, A, is modeled by Turing’s φ(n)(.).

image

https://yorkporc.wordpress.com/2013/11/29/mapping-turing-onto-quantum-walks-and-pianos/

We note that in changing notation from g() to K(), Turing is indicating that K concerns a probability distribution (related to the g it replaces: g was the "input distribution", while K is the entropy of the input distribution). Furthermore, K applied to the two-character sequence (y1 x) is the product of a scalar alpha with K applied to y alone. We have a quantum-style system in that a given distribution can be decomposed as a weighted combination of more basic distributions. If x is the current input byte on the Turing tape and y is one or more of the preceding characters (in the bigram, trigram, tetragram, ...), then alpha (for a single y character) is the result of measuring how well y predicts x. It carries the "information": the uncertainty, the mutual information. It is the conditional probability of x given y. The global unitary reveals, by considering the dependency between B and A, the uncertainty of A that is unmeasurable by local operations alone.
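The "alpha as conditional probability of x given y" reading can be illustrated with ordinary bigram counting. A minimal sketch, with a corpus and helper name invented purely for the illustration:

```python
from collections import Counter

# Sketch: estimate how well a preceding character y predicts the current
# character x, as the conditional probability P(x | y) from bigram counts.
# This is the flavour of "scoring table" the paragraph alludes to; the
# corpus below is made up for the example.

corpus = "attack at dawn attack at dusk"
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus[:-1])          # every character that has a successor

def alpha(y: str, x: str) -> float:
    """Estimated P(x | y): the weight attached to x given its prefix y."""
    return bigrams[(y, x)] / unigrams[y] if unigrams[y] else 0.0

print(alpha("a", "t"))   # 'a' is followed by 't' in 4 of its 7 occurrences
```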

Now why did Turing move from g() to K()? We can guess that it relates to a subtle point concerning complexity when performing acts of 'distinguishing'. In his arguments about norms and continuity, Turing does not model K directly as an upper bound on the norms that result from applying a transformation to a Hilbert space and its wave functions. He does use it indirectly, though, when pointing out that the ratio of the norms is 1 when considering the limiting, or 'kth', compression (the contraction of the angle between ever more indistinguishable vectors towards zero) in a CONTINUOUS space. For continuous spaces there will always be a minimum angle beyond which we cannot compress and further refine our measurements of the information content. (One should think of the Shannon limit here, without otherwise invoking that American intellectual product.)

Turing links the certainty/complexity bound on the distinguishability of states in a Hilbert space, expressed with a k when considering vectors, to his function K() when considering actions on Turing-tape characters. The limit of the inner product may well be a distribution whose character is uniform. But note that the mere fact that there is a certainty/complexity bound does not stop uniformity from occurring! It means that upon reaching uniformity one has reached the maximum potential for certainty in the distinguishing power, in a cryptanalytical sense.
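The "limit whose character is uniform" can be pictured with a standard mixing argument, which is our illustration rather than anything in Turing's text: repeatedly applying a doubly-stochastic matrix (invented here for the sketch) drives any distribution toward uniform, and the angle between the ever more indistinguishable vectors contracts toward zero.

```python
import numpy as np

# Sketch: T = 0.25*I + 0.25*ones(3,3) is doubly stochastic, so its fixed
# point is the uniform distribution; all other modes shrink by 0.25 per step.
# After enough steps the distribution is indistinguishable from uniform.

T = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])

p = np.array([0.90, 0.05, 0.05])     # a very non-uniform start
for _ in range(50):
    p = T @ p

uniform = np.full(3, 1 / 3)
cos = p @ uniform / (np.linalg.norm(p) * np.linalg.norm(uniform))
print(p, cos)                        # p is essentially uniform; cosine ~ 1
```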

Turing indirectly uses K in the sense of eigenvalues, too. But, being trained in 1930s topology by original thinkers like Newman, he thinks and reasons geometrically, in terms of complex geometry (Bloch spheres). In just a 2-d complex vector space he knows he is really just dealing with k points on the circle, which represent points in phase space over which his (measurement) system's dynamics can act. And it just so happens that he can now model the dynamics on such a point geometry using a Cayley graph, based on a suitable "circular" group whose non-Abelian nature captures the dynamics in (classical-physics) phase space. Turing simply recasts the k points as k terms in a permutation group whose factors are expressed in cycles/circles. The elements are isomorphic, in representation theory, to the eigenvalues the system can measure, which in turn are points in the phase space; all of which allows him mentally to move from classical to quantum-mechanical reasoning models. He thus has k elements in his group, over which his group operator can act. And the resulting actions are analogues of quantum mechanics in complex vector spaces, but over real fields (which simplifies away the adjoint operators, complex-conjugate worlds, etc.).

Of course, his geometric circle of phase points is just a reasoning model for something practical: an Enigma wheel, which can rotate/step. Such wheels can also be reflected. Between points, group elements, permutation groups that give us easy-to-calculate conjugates, rotation and reflection transformations, and invariance of distances, we seem to have group algebras in the making! All of this allows Turing to allude to using symmetry and asymmetry for cryptanalytical measurement purposes. We can even reach a bit further and see how multiplicities were applied, looking at the permutation group's automorphisms: those multiple eigenfunctions for a given point, all of which allow for certain (i.e. no-uncertainty) measurements.
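The rotation-plus-reflection picture is the dihedral group acting on k points of a circle. A small sketch, with the wheel size and helper names invented for the example, showing the non-Abelian composition and the "easy-to-calculate conjugate" (conjugating a rotation by a reflection inverts it):

```python
# Sketch: k points on a circle, an abstracted rotor. Rotations and a
# reflection are written as permutations (tuples mapping position i to p[i]).

k = 5

def rotate(step):
    """Rotation of the wheel by `step` positions."""
    return tuple((i + step) % k for i in range(k))

def reflect():
    """Reflection of the wheel about position 0."""
    return tuple((-i) % k for i in range(k))

def compose(p, q):
    """Apply q first, then p."""
    return tuple(p[q[i]] for i in range(k))

def inverse(p):
    inv = [0] * k
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

r, s = rotate(1), reflect()
# The dihedral relation: s r s^-1 = r^-1, a one-line conjugate calculation.
assert compose(compose(s, r), inverse(s)) == inverse(r)
print(compose(r, s) != compose(s, r))   # non-Abelian: the order matters
```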

Let’s now link the two pictures. Turing’s φ is subsystem A, and the α weighting of a basis vector, attached to the prefix of x, is the value of φ().

This is all backed up later when we see how each kth component of the locally unmeasurable A is recast as a probability weighting of the projection-operator form of the kth basis vector. That is, something of the form |φ⟩⟨φ|, which of course is a group-element product, to Turing, in his H1 group, and which reduces in value space to α1α1, since we have only real values to worry about (eliminating ordering issues and conjugates). We have to remember that a projection operator is… just a geometric projection (torches shining upon objects in the center of a room, with the resulting shadows falling on the two walls that form a corner: the ket and the bra!). Turing helps out a lot here, since H1 is probably a hint symbol indicating, first, that it is a subgroup (of H, as stated) and, second, that it is an entropy-style measurement (in an L1 space), where entropy is a kind of uncertainty measure. Furthermore, he specifically tells us that φ is, in any case, a projection onto a 1-d line, giving a real α as required, and thus a sequence of two α terms that, under homomorphism and the group product, become simple multiplication of real factors… which of course were used (when augmented with a Bayesian value logic) to calculate the scoring tables used when scoring the "predictive power" of bigrams, trigrams, … during Enigma cryptanalysis.
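The "multiplication of real factors plus Bayesian value logic" is, historically, Turing's deciban scoring from Banburismus: each observation contributes 10·log10 of a likelihood ratio, so multiplying probability factors becomes adding scores from a table. A sketch with invented per-bigram probabilities:

```python
import math

# Sketch of Turing-style additive scoring. Each bigram contributes a weight
# of evidence in decibans: 10 * log10 of the ratio between its probability
# under the "plain language" hypothesis and under a uniform/random model.
# The per-bigram probabilities below are made up for the illustration.

def deciban(p_language: float, p_random: float) -> float:
    return 10 * math.log10(p_language / p_random)

p_random = 1 / 26**2                     # uniform baseline over bigrams
scores = [deciban(p, p_random) for p in (0.004, 0.0031, 0.0007)]
total = sum(scores)                      # adding scores = multiplying ratios
print(scores, total)
```

The design point is that table lookup plus addition was feasible by hand, which is exactly why the scoring tables were precomputed in log form.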

This analysis is reinforced when you consider the very nature of projective spaces, in which necessarily P² = P (giving idempotency). But more than these facts, the space reduces the domain of the factor to 0 and 1 only: fully indistinguishable, or certainly distinct. We recall how, in the limit, Turing argues that the value is 1 (indistinguishable, being as far from orthogonal (cos 90° = 0) as possible), from which fact he can derive claims about the uniformity of the distribution (for all values in his coset that have as little conditional probability from/to each other as it is possible to have).
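Both poles (0 for fully distinguishable, 1 for indistinguishable) and the idempotency P² = P fall straight out of building |φ⟩⟨φ| as a real outer product. A minimal sketch with an arbitrary unit vector chosen for the example:

```python
import numpy as np

# Sketch: a projection operator |phi><phi| from a real unit vector.
# It is idempotent (P @ P == P), and <v|P|v> ranges from 0 (v orthogonal
# to phi: certainly distinct) to 1 (v == phi: fully indistinguishable).

phi = np.array([3.0, 4.0]) / 5.0          # real unit vector
P = np.outer(phi, phi)                    # |phi><phi|

assert np.allclose(P @ P, P)              # idempotency: P^2 = P
print(phi @ P @ phi)                      # 1.0 for phi itself
orth = np.array([-4.0, 3.0]) / 5.0        # a vector orthogonal to phi
print(orth @ P @ orth)                    # 0.0 for the orthogonal vector
```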

Our final k concerns Turing's example, based on the quaternion Cayley table, with its identity and 3 symbols (i, j, k) and their signed alternatives. Remember that H1 uses just the elements {1, k, -1, -k}. It is our conjecture that Turing is saying that we have 4 combinations, which aligns with a Hadamard transformation that changes one basis into a superposition basis, with a suitable weight for each symmetric ({1, -1} or {k, -k}) or anti-symmetric pairing (e.g. {1, -k}, {-1, k}).
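The Hadamard side of that conjecture is standard and easy to exhibit: it maps each basis vector to an equal-weight superposition, with the sign pattern giving the symmetric (+) and anti-symmetric (-) pairings. A sketch (the link to H1's elements is the blog's conjecture, not an established fact):

```python
import numpy as np

# Sketch: the 2x2 Hadamard transform. H sends |0> to the symmetric
# superposition (|0> + |1>)/sqrt(2) and |1> to the anti-symmetric one
# (|0> - |1>)/sqrt(2); it is its own inverse.

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

e0, e1 = np.eye(2)
print(H @ e0)                            # symmetric pairing of signs
print(H @ e1)                            # anti-symmetric pairing of signs
assert np.allclose(H @ H, np.eye(2))     # applying H twice returns the basis
```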

Ok. We see a clear and pretty convincing link between his Cambridge training, Enigma analysis, and what we know about scoring (for Enigma menu construction) and Tunny-era rectangling. All of which tells us to go and find out what Newman, the hidden figure, was doing in secret.
