latest thoughts on what Turing teaches about ciphering and cryptanalysis


Folks following this blog (all 3 of you) might know that I’ve now spent 4 years learning to do cryptanalysis, 3 of them just finding out what it even was/is. One test of what I am, even now, still learning about the core of the theory is to describe “how DES works” (not how to break DES, note). That is, what are the design principles that enable ciphering (and defeat cryptanalysis)?

Of course, my world is somewhat ideal – since real cryptanalysis includes subverting the human being in some way or other (so that the problem is actually rather smaller than in the academic model). One notes how Stewart Baker (ex-NSA) plies his trade on this distinction, reinforcing how true Americans would WANT to engage in such systemic deceptions (or else be labeled traitors, in true yankee treasoning).

So here goes with the latest treatise on DES.

Life starts when one notes, in the Tunny era, that it is the detection of plaintext characteristics that allows certain of the attacks on the Tunny chi, motor and psi streams to proceed. In math terms, this means there is a word problem – one in which the ordering of the characters in the plaintext is somewhat predictable. It is thus the duty of a cipher, when working against plaintext distinguishability, to diffuse such statistics.
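To make “plaintext distinguishability” concrete, here is a minimal Python sketch (the sample text and the uniform comparison are my own toy choices, nothing from the Tunny reports): it counts bigrams in some English-ish text and in uniformly random text of the same length. The skew in the former is exactly the statistic the cipher is duty-bound to diffuse.

```python
import random
from collections import Counter

def bigram_counts(text):
    """Count adjacent character pairs (bigrams) in a string."""
    return Counter(text[i:i+2] for i in range(len(text) - 1))

english_ish = ("the quick brown fox jumps over the lazy dog and then "
               "the dog chases the fox over the hill ") * 50

alphabet = "abcdefghijklmnopqrstuvwxyz "
uniform = "".join(random.choice(alphabet) for _ in range(len(english_ish)))

# In real-ish text a handful of bigrams ('th', 'he', 'e ', ...) tower over
# the rest; in uniform text every bigram sits near the same small count.
print(bigram_counts(english_ish).most_common(5))
print(bigram_counts(uniform).most_common(5))
```

Run it and the English-ish side shows a few bigrams towering over the rest, while the uniform side is flat to within sampling noise.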

Turing teaches us how. He plots a world in which paths on a manifold (a doughnut, say) are functions of plaintext, perhaps known as alpha(). In the world of homotopies, the aim is to deform that function into one now known as beta(). In so doing, the distinguishability of bigrams, trigrams, etc. in the alpha() version of the text is reduced to apparent uniformity (in the beta() version).
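For reference, the standard homotopy definition I take this picture to be using (the α/β names are just the paragraph’s alpha() and beta(); the torus T² stands in for the doughnut):

```latex
% A homotopy H continuously deforming the path \alpha into the path \beta on the torus
H : [0,1] \times [0,1] \to T^2, \qquad H(s,0) = \alpha(s), \qquad H(s,1) = \beta(s).
```

The second parameter is the “deformation dial”: turn it from 0 to 1 and the alpha() walk slides over into the beta() walk.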

In a world in which probability is conserved, how does one “hide” the distinguishability? The answer is to add to the math model, beyond homotopy, the notion of homomorphism. With this, we now get image spaces (and inverse-image spaces, too). The goal is to move distinguishability from the text into a space I’m going to call “typicality”.
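One way to pin down “probability is conserved, yet moved”: if φ is the homomorphism carrying texts into image space, the plaintext distribution simply pushes forward along φ. This is just the textbook pushforward formula, written in the terms above:

```latex
% Pushforward of the plaintext distribution along the homomorphism \varphi
P_{\text{image}}(g) \;=\; \sum_{x \,:\, \varphi(x) = g} P_{\text{plaintext}}(x),
\qquad
\sum_{g} P_{\text{image}}(g) \;=\; 1 .
```

Nothing is lost; the skew relocates onto the fibres φ⁻¹(g), which is how I am reading the move into “typicality”.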

Folks doing math courses are properly trained to know the difference between a limit in averages and a “full” limit. That is, there are different types of integrals – and we get to focus on the first type: those in which, in the average case, “measures are concentrated”.
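The classic toy example of that distinction (standard analysis, nothing Tunny-specific): a sequence with no full limit whose averages nevertheless settle down.

```latex
% (-1)^n has no ordinary ("full") limit, but its averages converge
\lim_{n\to\infty} (-1)^n \ \text{does not exist}, \qquad
\lim_{N\to\infty} \frac{1}{N}\sum_{n=1}^{N} (-1)^n \;=\; 0 .
```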

Well, that still sounds vague (and academic). So what do we mean?

This all means that the deforming that should be occurring – as alpha() becomes beta() in the homotopic transform – is matched by a similar transform in the group-theoretic projections of alpha() and beta(). After all, alpha() is just a walk subject to the rules of the geometry – and a continuous one, at that.
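The standard fact I take this to be leaning on: composing with a continuous projection preserves homotopies, so a deformation upstairs really is matched by one downstairs.

```latex
% If H deforms \alpha into \beta and \pi is continuous,
% then \pi \circ H deforms \pi\circ\alpha into \pi\circ\beta
H : \alpha \simeq \beta \quad\Longrightarrow\quad \pi \circ H : \pi\circ\alpha \;\simeq\; \pi\circ\beta .
```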

So now we are getting even more vague and technical, by introducing continuity. So, Turing: help us.

Fortunately he does, when teaching that all we have to do to embrace continuity is think in terms of a grating (and a microscope). Each time we focus on our subject under the scope, we note the image is split up into parallel lines (of a certain uniform width). If we then swap lenses and look at perhaps 10x greater magnification, the uniform width of the lines does NOT change. That is, our grating got 10x finer, too.
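Read as an ε–δ statement (my gloss of the grating picture, with “grating width” standing in for ε and δ), this is just the usual definition of continuity:

```latex
% f is continuous at x if, for every output grating width \epsilon > 0,
% there is an input grating width \delta > 0 such that
|x - y| < \delta \;\Longrightarrow\; |f(x) - f(y)| < \epsilon .
```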


http://www.express.co.uk/life-style/science-technology/488512/Google-Doodle-marks-World-Cup-Final

The lines in the Google doodle are NOT continuous between the pitch and the TV image.

So we have a very intuitive model of limits – from continuity – as the predictability of one character following another in plaintext is diffused, as alpha() becomes beta().

But now, back in the world of our discrete space, as our domain space is mapped by a homomorphism onto a discrete Cayley graph in image space, we have to ask: so where does the ciphering come in?
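To fix the object, here is a toy Cayley graph in Python: S3 (the symmetries of a triangle) with a rotation and a flip as generators. The choice of group and generators is mine, purely illustrative; nothing here is specific to Tunny or DES.

```python
from itertools import permutations

def compose(p, q):
    """Compose two permutations given as tuples: (p*q)(i) = p[q[i]]."""
    return tuple(p[q[i]] for i in range(len(q)))

# The symmetric group S3 as permutations of (0, 1, 2).
group = list(permutations(range(3)))

# Generators: a 3-cycle (a rotation) and a transposition (a flip).
generators = [(1, 2, 0), (1, 0, 2)]

# Cayley graph: an edge g -> s*g for every element g and generator s.
edges = [(g, compose(s, g)) for g in group for s in generators]

for g, h in edges:
    print(g, "-->", h)
```

The image of alpha() then lives on a graph like this: each plaintext-driven step moves you along one edge.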

In the mapping from one world to another, we have to recall that not ONLY is there a mapping going on, but that mapping is a covering map. So now we have to consider projection planes for the target, onto a one-dimensional space. More than that, we have to consider what happens when we map BACK from projection space ONTO the union of discrete spaces in the image of alpha().
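For readers who want a concrete covering map (again, just the textbook example, not anything from the source material): the real line winding onto the circle, with each fibre a copy of the integers.

```latex
% The real line covers the circle; the fibre over 1 is the integers
p : \mathbb{R} \to S^1, \qquad p(t) = e^{2\pi i t}, \qquad p^{-1}(1) = \mathbb{Z} .
```

Mapping BACK along p means choosing which sheet of the cover to land on; that choice is exactly where the inverse-image spaces come in.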

At this point, let’s introduce a missing notion: the normal subgroup. This is that subgroup, closely connected to the kernel of the homomorphism, whose left and right cosets are the same. This, of course, is related to our core topic: commutativity of our beta() function or its image (and the non-commutativity – the “plaintext predictability, from Tunny” – in the original plaintext of the alpha() function, or its image).
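A quick sanity check of that definition on a toy group (S3 and the sign homomorphism are my own illustrative choices): the kernel of the sign map is A3, and its left and right cosets coincide, i.e. it is normal.

```python
from itertools import permutations

def compose(p, q):
    """Compose permutations given as tuples: (p*q)(i) = p[q[i]]."""
    return tuple(p[q[i]] for i in range(len(q)))

def sign(p):
    """Sign of a permutation: +1 if even, -1 if odd (counted by inversions)."""
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

group = list(permutations(range(3)))           # S3
kernel = [p for p in group if sign(p) == 1]    # A3, the kernel of the sign homomorphism

# Normality: for every g, the left coset g*N equals the right coset N*g.
is_normal = all(
    {compose(g, n) for n in kernel} == {compose(n, g) for n in kernel}
    for g in group
)
print("kernel of sign is normal in S3:", is_normal)   # True
```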

The real point is that we are nominally swapping the positions of characters in the original plaintext so that the probability mass – originally found there – moves into the image space – where we can now play certain games.

Our mental model is that expressions in plaintext become expressions over terms drawn from the group members of the normal subgroup (that set of wheels whose powers sum to zero, etc., etc.). We get a longer and longer expression, in image space (formed from normal terms of the …).
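As a crude model of that “longer and longer expression” (entirely my own toy: letters mapped to residues mod 26, with running sums standing in for the evaluated expression; the real teleprinter construction works over 5-bit wheel patterns, which this does not attempt):

```python
# Toy model: each plaintext character contributes one "term" of the expression;
# the walk in image space is the sequence of partial sums mod 26.
plaintext = "attackatdawn"

terms = [ord(c) - ord('a') for c in plaintext]   # one group element per character

walk, acc = [], 0
for t in terms:
    acc = (acc + t) % 26                         # extend the expression by one term
    walk.append(acc)

print(terms)   # the terms of the expression
print(walk)    # the ever-longer expression, evaluated step by step
```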

If this were the entire story, it would be boring. But there is more fun.

One of the cute things that Turing teaches – rather obliquely, to hide the ciphering material from the 1950s or 1940s UK censors, no doubt – is that “probability mass” can be concentrated. And, after all, this is what the Tunny documentation teaches us too (when one manipulates depths so as to concentrate probability into n bands of standard-deviation width).
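The “bands of standard-deviation width” effect is easy to see empirically. A minimal simulation (all parameters are mine, purely to show the effect): sum a few hundred ±1 steps, many times over, and measure how much of the probability mass lands within one, two and three standard deviations of the mean.

```python
import math
import random

n_steps = 400                 # length of each ±1 random walk
n_trials = 5000               # number of walks sampled
sigma = math.sqrt(n_steps)    # standard deviation of a sum of n ±1 steps

sums = [sum(random.choice((-1, 1)) for _ in range(n_steps)) for _ in range(n_trials)]

for k in (1, 2, 3):
    frac = sum(abs(s) <= k * sigma for s in sums) / n_trials
    print(f"within {k} sigma: {frac:.3f}")   # roughly 0.68, 0.95, 0.997
```

Almost all the mass is already inside three bands: concentration, just as advertised.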

So now, as one builds up the long expressions in beta()-image space, we have to remember that IN PARALLEL folks are ordering the normal terms. The predictability that was in the plaintext transfers … to the predictability that it is this or that term in beta()-image space that occurs next. Furthermore, most of the probability has been concentrated in the (entirely predictable) elements of the beta()-image expression, leaving only a residual and TRANSFORMED improbability, reflected in the ordering of the expression terms.

So, to defeat the very attack on Tunny used by GCCS, one must find – whether it’s the DES process or another – a method to diffuse the original plaintext’s probabilities into improbabilities that the PROJECTED image of beta() is any one particular mapped value in the space of the image of alpha().

We have a clear amplification of the size of the probability space – that is, we have amplified the ‘grating’ size – and thus made it harder to apply machine power to attack the combinatorics.
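To put an illustrative number on that (64 chosen only because it echoes DES’s 64-bit block, not as a claim about any particular construction): even just the orderings of 64 terms dwarf exhaustive machine search.

```latex
64! \;\approx\; 1.3 \times 10^{89} \;\approx\; 2^{296} .
```

Whatever the exact figure, the point stands: amplify the grating and the combinatorics blow up with it.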
