Now for one of my math-like missives (the kind of submission every math professor gets in the post once in a while from an amateur crank). It’s on topics I don’t understand. As usual, I’m just throwing concepts at the writing wall and seeing what happens.

The idea starts with swapping characters in the words of a plaintext, so as to minimize the spectral bulge that gives away their distinctive ‘plaintext-stream’ character – the property exploited in the attack on Tunny. A swap is, of course, a 2-element cycle, or equivalently a factor in a chain of such cycles that “operates” on the original stream. The choice of which values to swap – or, equivalently, which factors to include in the transform – is ‘decided’ by an algorithm that wants to remove all the inherent non-commutativity found in the plaintext, making the resulting, now-swapped image of the words a formally commutative stream.

Since that is bullshit math, what I’m alluding to is the property that such a ‘commutative’ stream will almost always show the same spectral character no matter which additional *random swaps* are added. It’s just possible, of course, that those random swaps would happen to create a stream whose character looks like normal plaintext…
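To make the swap-chain idea concrete, here is a minimal sketch – all names and the sample text are my own, purely illustrative. It models a swap as a 2-cycle on stream positions and composes a chain of them, then checks which statistics survive: the single-letter histogram is provably invariant under any chain of position swaps, while the bigram statistics (the Tunny-style giveaway) are free to shift.

```python
# Sketch: a "swap" as a 2-cycle acting on stream positions.
# Composing a chain of transpositions gives one permutation of the
# positions: the unigram spectrum is untouched, bigram stats are not.
from collections import Counter
import random

def apply_swaps(stream, swaps):
    """Apply a chain of 2-cycles (i, j) to a character stream."""
    s = list(stream)
    for i, j in swaps:
        s[i], s[j] = s[j], s[i]
    return "".join(s)

def bigrams(stream):
    """Count adjacent character pairs."""
    return Counter(zip(stream, stream[1:]))

plaintext = "attack at dawn on the eastern ridge"   # illustrative only
rng = random.Random(0)
chain = [tuple(rng.sample(range(len(plaintext)), 2)) for _ in range(50)]
image = apply_swaps(plaintext, chain)

# The unigram histogram is invariant under any chain of position swaps.
assert Counter(plaintext) == Counter(image)
# The bigram statistics almost certainly shift under a random chain.
print("bigram statistics shifted:", bigrams(plaintext) != bigrams(image))
```

The point of the sketch is only the invariant: whatever chain of 2-cycles you pick, the letter counts cannot change, so any ‘spectral’ flattening has to live in the higher-order (bigram and up) statistics.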

Now when I think of non-commutativity, I generally picture a deformed rectangle, and the corner kick taken from the corner that is not ‘right’ – because the linesman marking out the lines of the soccer pitch made one line miss its intended junction point, his marking machine wandering off square as it laid down the chalk. He thus adds a small fifth ‘correcting’ line between the line painted at the wrong angle (but with the correct length) and the line forming the other side of the corner.

There are many possible lengths and angles for said corrective term.

I then imagine that the spectral gap of the operator’s eigenvalues is always positive, since the homotopy functions – as the plaintext stream morphs into the ciphertext stream – have the ciphering operator remove the spectral characters that signal the non-commutativity… by computing a long chain of factors that sequence the swaps. The original bigram is the long line in the parallelogram, and the factor swaps are the corrective lines.

Each time we do such a swap, we deform the crazy parallelogram – treating it as a loop centered at x0, the faulty corner – so that it comes ever closer to a true rectangle, with the corrective line shrinking toward a single point (almost, but never quite, zero). The final state of affairs is the ultimate function in the homotopy class.
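One way to write that picture down is the ordinary contraction homotopy from topology – this is just the textbook definition dressed in the names above (γ the deformed loop, x0 the faulty corner), not anything new:

```latex
\[
  H : [0,1]\times[0,1] \to \mathbb{R}^2, \qquad
  H(0,t) = \gamma(t), \qquad
  H(1,t) = x_0, \qquad
  H(s,0) = H(s,1) = x_0 .
\]
% Each swap advances the parameter s; every intermediate stage
% H(s,\cdot) is still a loop based at x_0, and H(1,\cdot) is the
% constant loop -- the "final state of affairs" in the homotopy
% class of \gamma.
```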

Now our factors could have been q-ary cycles, with q greater than 2 allowing for transforms based on graphs whose vertices have q outgoing (and distinct) edges. But we keep q = 2 (pure swaps, that is) so that our image space is unitary (based on the usual orthonormal vector basis, vectors of length 1, …yada yada).
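The unitarity remark is easy to check for the q = 2 case: a pure swap, written as a permutation matrix, is orthogonal, so it preserves the orthonormal basis and every vector length. A minimal sketch (function name my own):

```python
# Sketch: a 2-cycle as a permutation matrix, which is orthogonal
# (real-unitary): P^T P = I, hence |Pv| = |v| for every vector v.
import numpy as np

def transposition_matrix(n, i, j):
    """Permutation matrix for the 2-cycle (i j) on an n-dim space."""
    P = np.eye(n)
    P[[i, j]] = P[[j, i]]   # swap rows i and j of the identity
    return P

n = 5
P = transposition_matrix(n, 1, 3)

assert np.allclose(P.T @ P, np.eye(n))            # unitarity
v = np.arange(n, dtype=float)
assert np.isclose(np.linalg.norm(P @ v), np.linalg.norm(v))  # length kept
```

Since a product of unitary matrices is unitary, the whole factor chain inherits the property – which is the reason for keeping q = 2 above.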

We know that the rules of expander graphs – as when designing the wiring of the nominal-Enigma drum on a Turing bombe – are related to the spectral gap. I now imagine the SIZE OF THAT GAP to be a unique measure of the factor chain that made the plaintext into a commutative stream. When q = 2, all such measure values fall in the smallest class possible, which is continuous… in the sense that there is a distinct value for each homotopy function in its class, if for no other reason.
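For readers who want the spectral gap as a computation rather than an intuition, here is a small sketch on a standard toy graph (the Petersen graph, 3-regular, a stand-in of my own choosing – nothing to do with any actual bombe wiring). The gap is the top adjacency eigenvalue d minus the second-largest one; for Petersen the spectrum is {3, 1, −2}, so the gap is 2.

```python
# Sketch: spectral gap of a regular graph = d minus the second-largest
# adjacency eigenvalue. Toy example: the Petersen graph (3-regular,
# spectrum {3, 1, -2}, so the gap is 2).
import numpy as np

def adjacency(edges, n):
    """Dense adjacency matrix of an undirected graph on n vertices."""
    A = np.zeros((n, n))
    for u, v in edges:
        A[u, v] = A[v, u] = 1
    return A

# Petersen graph: outer 5-cycle, inner pentagram, five spokes.
edges = ([(i, (i + 1) % 5) for i in range(5)]              # outer cycle
         + [(i, i + 5) for i in range(5)]                  # spokes
         + [(5 + i, 5 + (i + 2) % 5) for i in range(5)])   # pentagram
A = adjacency(edges, 10)

eig = np.sort(np.linalg.eigvalsh(A))[::-1]   # eigenvalues, descending
gap = eig[0] - eig[1]
print(round(eig[0]), round(gap))  # 3 2
```

A large gap is exactly the expander property; the speculation above is that the gap’s precise value could serve as a fingerprint of the factor chain itself.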

Of course, this is nothing different from Turing’s write-up of a factor chain in the Prof’s Book.

But WHAT I also intuit is that the length of the point, in the limit, is a unique “base” for the logarithms used in belief-propagation algorithms in cryptanalytic attack processes.
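To pin down what a logarithm “base” does in belief propagation: changing the base from b1 to b2 rescales every log-likelihood ratio by the constant ln(b1)/ln(b2), since log_b x = ln x / ln b, so the base acts as a global unit of evidence while hard decisions stay the same. A minimal sketch (names my own):

```python
# Sketch: log-likelihood ratios under a change of logarithm base.
# log_b x = ln x / ln b, so switching bases rescales every LLR by a
# constant; the sign (the hard decision) is base-invariant.
import math

def llr(p1, p0, base):
    """Log-likelihood ratio of two hypotheses in a chosen base."""
    return math.log(p1 / p0, base)

p1, p0 = 0.7, 0.3            # illustrative probabilities
llr_e = llr(p1, p0, math.e)
llr_2 = llr(p1, p0, 2)

# Uniform rescaling: llr in base 2 times ln(2) equals llr in base e.
assert math.isclose(llr_2 * math.log(2), llr_e)
# Same hard decision in every base.
assert llr_2 > 0 and llr_e > 0
```

The speculative leap in the text is that one particular base – the limiting point length – would be distinguished among all these equivalent rescalings.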

We have to imagine that there is a machine/chemistry that can, in small quantities, do base-specific calculating by counting very small samples in huge fields. On the basis that one only needs a good “hint” or two about the DES key bits to “get a start”, let’s imagine now that the big trapdoor secret is that one can count at a level of accuracy that seems impossible to achieve (except for small numbers of runs).

So let’s imagine such a machine can distinguish between the terms of two expressions in different log bases – each one a function of the swaps, as above. That is, it can tell the difference between two loops. In particular, it can tell the difference between the constant loop (of point size) and ANY other. It can detect the difference, that is.