Back at https://yorkporc.wordpress.com/2012/02/23/fundamenta-of-keystream-generation/ we took a look at sequency. Having created an additive signal from a set of individually weighted Walsh functions taken from the Hadamard matrix (i.e. orthonormal basis functions), one learns how the inverse WHT identifies the weightings.

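The first paragraph's point can be made concrete. Here is a minimal sketch (my own, not from the linked post) in which a signal is built as a weighted sum of Walsh functions – rows of a Sylvester Hadamard matrix, in natural Hadamard order rather than sequency order – and the inverse WHT recovers the weightings:

```python
# Minimal sketch: recovering Walsh-function weightings with the inverse WHT.
# The weights chosen below are hypothetical, purely for illustration.

def hadamard(n):
    """Sylvester construction: H(2m) = [[H, H], [H, -H]], starting from [[1]]."""
    H = [[1]]
    while len(H) < n:
        H = [row + row for row in H] + [row + [-x for x in row] for row in H]
    return H

H = hadamard(8)

# An additive signal: a weighted sum of three Walsh functions (rows 1, 4, 6).
weights = {1: 3, 4: -2, 6: 5}
signal = [sum(w * H[k][t] for k, w in weights.items()) for t in range(8)]

# Inverse WHT: H is symmetric with H.H = n*I, so H/n inverts it.
recovered = [sum(H[k][t] * signal[t] for t in range(8)) / 8 for k in range(8)]
print(recovered)  # nonzero exactly at indices 1, 4 and 6
```

The recovered vector is zero everywhere except at the indices of the Walsh functions that went into the sum, where it equals their weightings.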

Two other things now occur to me.

First, look at the form of the sequency matrix, above. Imagine that on the right and bottom sides you have Tunny wheel bits – those for Chi1 and those for Chi2, say. Now recall how convergence was performed. Take the guessed pattern for one side as the start, and perform an inner product with each row/column (of probability info). Use the result to update the reliability weighting attached to the corresponding bit on the other wheel. Also, if the weighting improved that bit (or whatever the rule was), update the row in the rectangle using a swap that more probably aligns the average value with the rules of the mechanism.
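That back-and-forth can be sketched in code. This is a toy reconstruction under my own assumptions – the names `rect`, `chi1`, `chi2` and the hard `sign()` decision are my simplifications; the actual procedure carried graded reliability weightings rather than hard signs:

```python
# Hedged sketch of "rectangle" convergence: alternately infer one wheel's
# bit signs from the other via inner products with the rectangle's rows
# and columns of dot/cross excess counts.

def sign(x):
    """Hard decision: +1 (dot-favouring evidence) or -1 (cross)."""
    return 1 if x >= 0 else -1

def converge(rect, chi2, rounds=5):
    """rect[i][j]: excess of dots over crosses observed for the pairing of
    Chi1 bit i with Chi2 bit j; chi2: guessed +/-1 pattern for one side.
    Alternate row-wise and column-wise inner products until stable."""
    for _ in range(rounds):
        # inner product of each row with the Chi2 guess -> Chi1 estimate
        chi1 = [sign(sum(rect[i][j] * chi2[j] for j in range(len(chi2))))
                for i in range(len(rect))]
        # each column against the Chi1 estimate -> updated Chi2
        chi2 = [sign(sum(rect[i][j] * chi1[i] for i in range(len(rect))))
                for j in range(len(chi2))]
    return chi1, chi2

# Toy demo: a noiseless rectangle built from two true patterns, started
# from a Chi2 guess with one bit wrong, converges back to both patterns.
true1, true2 = [1, -1, 1, -1, 1], [1, 1, -1, 1]
rect = [[3 * a * b for b in true2] for a in true1]
chi1, chi2 = converge(rect, [1, 1, -1, -1])
print(chi1 == true1, chi2 == true2)  # True True
```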

Now look at the form of the sequency picture. One can imagine that the dependency between wheel bits, with respect to the cipherstream, is represented by the area as one's eye moves from bottom right to top left. The area of probability increases as more and more bit values come into play and affect the average – in that coordinate.

As I looked at it, I thought to myself: just look at the rate of zero-crossings in the high-order bits. Don't they remind one of the columns of the Tunny alphabet, as ordered here?

https://yorkporc.wordpress.com/2012/02/23/fundamenta-of-keystream-generation/

Tunny has the following sequence of zero-crossings, moving from left to right: (1, 2, 16, 4, 8) – remember, this ordering is NOT there for wheel breaking but to help do 16-counts, 4-counts, etc., so that one may compare the proportion of dot flows to cross flows (and see whether the proposed SETTING of that Chi wheel is correct).
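For concreteness, a zero-crossing here is just a change of value between adjacent positions in a column. A throwaway helper (mine, purely illustrative) to count them:

```python
def zero_crossings(col):
    """Count value changes between adjacent entries of a +/-1 (or dot/cross) column."""
    return sum(1 for a, b in zip(col, col[1:]) if a != b)

print(zero_crossings([1, -1, 1, -1]))  # 3: alternates at every step
print(zero_crossings([1, 1, -1, -1]))  # 1: one change, in the middle
```

A column that toggles more often racks up more crossings, which is the sense in which the (1, 2, 16, 4, 8) ordering ranks the columns.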

When counting, this means taking 16 chars at a time and producing 2 outputs.
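In code terms – this is my reading of that sentence, with hypothetical names – one tallies dots against crosses in fixed-size blocks of the stream, so each block's proportion can be checked against what a correctly-set Chi wheel would predict:

```python
# Toy sketch: tally dots (0) vs crosses (1) in fixed-size blocks of a stream.

def block_counts(stream, block=16):
    """stream: list of 0 (dot) / 1 (cross); returns one (dots, crosses)
    tally per complete block of `block` characters."""
    out = []
    for start in range(0, len(stream) - block + 1, block):
        chunk = stream[start:start + block]
        crosses = sum(chunk)
        out.append((block - crosses, crosses))
    return out

# 32 characters, counted 16 at a time, produce 2 outputs:
print(block_counts([0] * 10 + [1] * 6 + [1] * 12 + [0] * 4))  # [(10, 6), (4, 12)]
```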

For the next count, one has to imagine the rectangle is wrapped around a stick and gummed one side to the other, so that the 1/4 contributions of the first/last columns (in this plane) combine.

The latter thought reminds one of gumming both edges to each other, making a donut – recalling the relationship between that complex torus and factorization.

See https://yorkporc.wordpress.com/2014/03/15/colossus-runtc-tonal-centroid/.

Fingerprinting devices… using acoustics – the kind the microphones in phones would be picking up! Can we assume that an intel agency would be tuning the video signal uniquely for each CPU/motherboard to facilitate device identification in the IoT? Sounds like Intel! to me (pun pun).