Core ideas of cryptanalysis hardware


Turingismus was an early classified procedure for “decoding”. It was a form of graph searching: the analyst considered a multidimensional vector space whose field sizes were set according to the number of bits (cams) on the associated Tunny wheel. One first walked the neighbors of the chi-1 wheel (i.e. the 5-bit characters at each repeat length of the chi-1 cycle). Then one walked a second dimension, considering its neighbors (at the cycle repeat of the chi-2 wheel, say). Knowing that Hamming weight gives a scale-invariant, inner-product-like measure on that space of characters, folks assigned scores to each future inference chain as the tree of possible walks through the trellis evolved.
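
To make the walk-and-score idea concrete, here is a minimal sketch in Python of scoring candidate chains through a two-dimensional trellis indexed by the chi-1 and chi-2 periods (41 and 31). The stream handling and the scoring rule (preferring low Hamming weight in the differenced characters) are my own illustrative assumptions, not a reconstruction of the actual Turingismus procedure.

CHI1_PERIOD, CHI2_PERIOD = 41, 31   # cam counts of the chi-1 and chi-2 wheels

def hamming_weight(c):
    # number of crosses (set bits) in a 5-bit teleprinter character
    return bin(c & 0b11111).count("1")

def delta(stream):
    # difference adjacent characters: delta[i] = c[i] XOR c[i+1]
    return [a ^ b for a, b in zip(stream, stream[1:])]

def walk_dimension(stream, offset, period):
    # the "neighbors" of one wheel: the characters at each repeat of its cycle
    return stream[offset::period]

def score_chain(stream, chi1_offset, chi2_offset):
    # walk the chi-1 dimension, then the chi-2 dimension within it, and
    # reward chains whose differenced characters have low Hamming weight
    chain = walk_dimension(stream, chi1_offset, CHI1_PERIOD)
    chain = walk_dimension(chain, chi2_offset, CHI2_PERIOD)
    return -sum(hamming_weight(c) for c in delta(chain))

def best_branches(stream, keep=5):
    # score every branching choice and keep only the most promising walks
    scores = [(score_chain(stream, i, j), i, j)
              for i in range(CHI1_PERIOD) for j in range(CHI2_PERIOD)]
    return sorted(scores, reverse=True)[:keep]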

These days, we work with concatenated codes in which an inner code, fashioned from a Colossus-era style shift register with feedback/forward taps, is itself protected by an outer burst-error-correcting code. The latter backtracks the inner decoder when it wanders down the wrong path through the space of possibilities. It does the same kind of future path scoring that folks did with Turingismus, allowing the cryptanalyst to return to a branching point more likely to meet the constraints that define a well-formed codeword.
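
As a sketch of what that backtracking decoder might look like, here is a toy stack (sequential) decoder for a rate-1/2 feed-forward convolutional code. The generator taps, the hard-decision channel, and the plain agreement-count metric (a real sequential decoder would use something like the Fano metric) are all assumptions chosen for brevity.

import heapq

G1, G2 = 0b111, 0b101    # toy generator polynomials, constraint length 3

def encode_bit(bit, state):
    # shift one input bit into a 2-bit state register and emit two code bits
    reg = (bit << 2) | state
    out = (bin(reg & G1).count("1") & 1, bin(reg & G2).count("1") & 1)
    return out, reg >> 1    # new state drops the oldest bit

def stack_decode(received, n_bits):
    # received: list of (bit, bit) pairs from a hard-decision channel.
    # Paths sit on a priority queue keyed by score; when the head path starts
    # scoring badly its extensions sink, and the decoder automatically returns
    # to an earlier, more promising branching point.
    heap = [(0, 0, 0, [])]          # (-score, depth, state, decoded bits)
    while heap:
        neg_score, depth, state, bits = heapq.heappop(heap)
        if depth == n_bits:
            return bits             # first full-length path off the stack
        for b in (0, 1):
            out, new_state = encode_bit(b, state)
            # +1 per code bit that agrees with what was received
            gain = sum(o == r for o, r in zip(out, received[depth]))
            heapq.heappush(
                heap, (neg_score - gain, depth + 1, new_state, bits + [b]))
    return []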

Such techniques can be used against ciphers like DES, where the subkey dependency constraints guide the backtracking, and where each subround is a dimension – in the sense of Turingismus and Tunny. Though the dependency of subkeys on key bits is meant to add strength to the enciphering, the interrelationships of particular key bits to subkeys and parity bits within the key space aid the “decoder” in producing an avalanche of correct branching decisions during the search.
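
A sketch of how those dependency constraints can drive the branching: the toy key schedule below (eight master-key bits, three overlapping four-bit subkeys) is a hypothetical stand-in for DES's PC-1/PC-2 tables, chosen only to show how a subkey guess in one round fixes master-key bits and so prunes the guesses available in later rounds.

# round number -> which master-key bit feeds each subkey position (toy schedule)
SCHEDULE = {
    1: [0, 1, 2, 3],
    2: [2, 3, 4, 5],   # overlaps round 1 in master bits 2 and 3
    3: [4, 5, 6, 7],   # overlaps round 2 in master bits 4 and 5
}

def consistent(assignment, round_no, subkey_bits):
    # a subkey guess is admissible only if it agrees with every master-key
    # bit already fixed by guesses made in earlier rounds
    for pos, master_bit in enumerate(SCHEDULE[round_no]):
        fixed = assignment.get(master_bit)
        if fixed is not None and fixed != subkey_bits[pos]:
            return False
    return True

def search(score, assignment=None, round_no=1):
    # depth-first search over per-round subkey guesses, backtracking as soon
    # as the shared-master-bit constraints are violated or the caller-supplied
    # score says the branch no longer looks like a well-formed path
    assignment = dict(assignment or {})
    if round_no > len(SCHEDULE):
        yield assignment
        return
    width = len(SCHEDULE[round_no])
    for guess in range(2 ** width):
        bits = [(guess >> i) & 1 for i in range(width)]
        if not consistent(assignment, round_no, bits):
            continue                       # prune this whole branch
        trial = dict(assignment)
        trial.update(zip(SCHEDULE[round_no], bits))
        if score(trial, round_no):         # e.g. test against known round data
            yield from search(score, trial, round_no + 1)

With a score callback that always returns True, the search visits only the 2^8 consistent master keys instead of the 2^12 unconstrained subkey combinations – the "avalanche of correct branching decisions" in miniature.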

One should look at the DES subkeys as a codebook over the state space, with the previous round's output as the past code and the current round's output as the future code. In enciphering, the goal is to reach a Markov condition in which the future code relates only to the past code of the immediately preceding round. In cryptanalysis, the goal is to detect increasing Markov’ness and use it as a scoring system that guides path selection.
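
One way to turn "Markov’ness" into a number is to estimate, from sampled round outputs, how much the current output still depends on the output two rounds back once the previous round's output is known. The plug-in conditional mutual information below, computed over a few low-order bits, is an assumed estimator for illustration, not a claim about how any particular attack scores its paths.

from collections import Counter
from math import log2

def markov_score(samples, mask=0b111):
    # samples: list of (r_minus2, r_minus1, r_current) round-output words.
    # Returns an estimate of I(current ; r_minus2 | r_minus1) over the masked
    # bits: near zero means the chain looks Markov, larger values mean the
    # older past is still leaking through the previous round.
    xyz = Counter((a & mask, b & mask, c & mask) for a, b, c in samples)
    xy  = Counter((a & mask, b & mask) for a, b, _ in samples)
    yz  = Counter((b & mask, c & mask) for _, b, c in samples)
    y   = Counter(b & mask for _, b, _ in samples)
    n = len(samples)
    score = 0.0
    for (a, b, c), n_abc in xyz.items():
        # I(X;Z|Y) = sum p(x,y,z) * log2( p(x,y,z) p(y) / (p(x,y) p(y,z)) );
        # the factors of n cancel, so raw counts can be used inside the log
        score += (n_abc / n) * log2(n_abc * y[b] / (xy[(a, b)] * yz[(b, c)]))
    return score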

Finally, we must recall the use of the erasure channel back in the Tunny era, post-Turingismus. For a series of depths combined to create a noisy convolution of two wheel patterns, one eliminates from the state machine doing convergence/rectangling those wheel characters whose log-likelihood score is too low to contribute to improving the separation of the inner product space (the depths) into its two components (each wheel).
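
A minimal sketch of that convergence-with-erasure step, assuming the depth evidence has already been accumulated into a 41 x 31 rectangle of +1/-1 votes (R[i][j] voting that delta-chi1[i] and delta-chi2[j] agree). The +/-1 coding, the threshold, and the fixed iteration count are illustrative choices, not the wartime parameters.

CHI1_PERIOD, CHI2_PERIOD = 41, 31

def converge(R, iterations=10, threshold=4):
    # R: CHI1_PERIOD x CHI2_PERIOD matrix of integer votes.
    # Returns (chi1, chi2) with entries in {+1, -1, 0}; 0 means the character
    # has been erased and no longer contributes to separating the evidence.
    chi2 = [1] * CHI2_PERIOD                 # crude starting guess
    chi1 = [0] * CHI1_PERIOD
    for _ in range(iterations):
        # re-estimate chi1 from chi2, erasing weak (low-evidence) characters
        for i in range(CHI1_PERIOD):
            s = sum(R[i][j] * chi2[j] for j in range(CHI2_PERIOD))
            chi1[i] = 0 if abs(s) < threshold else (1 if s > 0 else -1)
        # re-estimate chi2 from chi1 the same way
        for j in range(CHI2_PERIOD):
            s = sum(R[i][j] * chi1[i] for i in range(CHI1_PERIOD))
            chi2[j] = 0 if abs(s) < threshold else (1 if s > 0 else -1)
    return chi1, chi2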

In the Tunny era, folks used a differential trial as the update rule, intended to strengthen the estimate of one wheel from the current estimate of the other wheel together with the depth evidence (itself strengthened by the update rule whenever the branching choices were correct). In DES, one uses its differential trails similarly.
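
For the DES side of that analogy, a differential trail is scored in the usual way: under an independence assumption the trail's probability is the product of the per-round transition probabilities, and its log gives a path score that can also decide when to abandon a branch. The probabilities and the cut-off below are hypothetical, not values drawn from DES's difference distribution tables.

from math import log2

def trail_score(round_probs):
    # probability of the whole characteristic and its log2, assuming the
    # per-round difference transitions are independent
    p = 1.0
    for pr in round_probs:
        p *= pr
    return p, log2(p)

def extend_trail(log_score, round_prob, floor=-20):
    # extend a partial trail by one round; returning None signals that the
    # branch has become too improbable and the search should backtrack
    new_score = log_score + log2(round_prob)
    return new_score if new_score >= floor else None

# e.g. a hypothetical three-round trail with probability 2**-8:
prob, score = trail_score([1/4, 1/16, 1/4])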

This search-and-backtrack approach is much more efficient than straightforward differential cryptanalysis.
