We are used to the idea that the codewords of a given codebook can be defined as those vectors whose product with a parity-check matrix is zero. Values in the ambient vector space that fail this constraint are NOT “codewords”.
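As a minimal sketch of this idea, take the standard Hamming(7,4) parity-check matrix: a 7-bit word is a codeword exactly when its product with the matrix (mod 2) is the all-zero syndrome.

```python
# Hamming(7,4) parity-check matrix H; a word c is a codeword exactly
# when H . c == 0 (mod 2).
H = [
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def syndrome(word):
    """Product of H with the word, mod 2; all-zero means 'codeword'."""
    return [sum(h * b for h, b in zip(row, word)) % 2 for row in H]

codeword = [0, 1, 1, 0, 0, 1, 1]    # a valid Hamming(7,4) codeword
corrupted = [1, 1, 1, 0, 0, 1, 1]   # first bit flipped

print(syndrome(codeword))   # [0, 0, 0] -> in the code
print(syndrome(corrupted))  # non-zero  -> NOT a codeword
```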

So it’s interesting to see that, in a stochastic world, one may similarly define codewords that are “constrained to be zero”: for example, the product of a “distribution” and the DES E function (the expansion from 32 to 48 bits). In this “phase space”, the value type is “distributions”, and the role of the “parity constraints expressed as a parity matrix” is played by the E function.
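The analogy works because E, like a parity matrix, is linear over GF(2): it merely selects (and duplicates) input bits. A sketch, using the standard DES E bit-selection table (bit numbering 1..32, MSB first, per the FIPS 46 convention):

```python
# The DES E bit-selection table: expands 32 bits to 48 by duplicating
# the edge bits of each 4-bit block.
E_TABLE = [
    32,  1,  2,  3,  4,  5,   4,  5,  6,  7,  8,  9,
     8,  9, 10, 11, 12, 13,  12, 13, 14, 15, 16, 17,
    16, 17, 18, 19, 20, 21,  20, 21, 22, 23, 24, 25,
    24, 25, 26, 27, 28, 29,  28, 29, 30, 31, 32,  1,
]

def expand(x32):
    """Apply E to a 32-bit integer, yielding a 48-bit integer."""
    out = 0
    for pos in E_TABLE:
        out = (out << 1) | ((x32 >> (32 - pos)) & 1)
    return out

# E is linear over GF(2): expanding an XOR equals the XOR of the
# expansions, which is why E can be treated like a rectangular
# parity-style matrix acting on its input space.
x, y = 0x12345678, 0x9ABCDEF0
assert expand(x ^ y) == expand(x) ^ expand(y)
```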

Next, one can be interested in the “expansion” properties of graphs, or subgraphs. One can look at the codewords defined in some subspace, and then at a larger subspace defined as those codewords PLUS the non-codewords on the very boundary between the codeword space and the non-codeword space. There is an “expansion” in size between the elements of the smaller subspace and this larger isoperimetric set (which now includes the boundary elements, too).
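A toy sketch of this boundary-growth, assuming the usual vertex-boundary definition (neighbours of a set S that lie outside S), on a small 6-cycle graph:

```python
from itertools import chain

# Adjacency of a 6-cycle: each vertex neighbours its two ring neighbours.
adj = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}

def boundary(S):
    """Vertices outside S that are adjacent to some vertex in S."""
    return set(chain.from_iterable(adj[v] for v in S)) - set(S)

S = {0, 1}
larger = set(S) | boundary(S)   # S plus its boundary: the isoperimetric set
print(boundary(S))              # {2, 5}
print(len(larger) / len(S))     # expansion ratio 2.0
```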

Given the idea of isoperimetric expansion, one can be interested in the graph-expansion of the first set (DES-E x distribution-value). So what would the larger set be? (i.e. what boundary exists that augments the space to make the larger set?) Could it be the product of the SPN and the “distribution” of the (xor’ed) outputs of neighboring sboxes over several rounds (as outlined by Murphy)?

The interesting idea is that just the right expansion indirectly specifies a Markov-model transition matrix with certain properties. The right expander may induce particular “hysteresis loops” within the generator of the transition matrices that happen to concoct diminishing/decaying partitions of the keyspace. This might be creating “condensation points” – the result of a stochastic/quantum limiting process that “controls mutual information” and acts to reduce conditional-probability relationships within the plaintext, substituting them with dependency on the keybits. Alternatively, the PURPOSE of the expansion is to approximate bit independence (within the Hamiltonian) by approximating cycle-free constraints.
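A toy illustration of the “controls mutual information” intuition (the matrix values are illustrative, not drawn from any cipher): repeated application of a doubly stochastic transition matrix drives any input distribution toward uniform, which is the sense in which a good mixer erases the conditional structure of its input.

```python
# A doubly stochastic 3-state transition matrix (rows and columns
# each sum to 1); its second eigenvalue is 0.25, so mixing is fast.
P = [
    [0.50, 0.25, 0.25],
    [0.25, 0.50, 0.25],
    [0.25, 0.25, 0.50],
]

def step(dist):
    """One Markov step: dist' = dist . P."""
    return [sum(dist[i] * P[i][j] for i in range(3)) for j in range(3)]

dist = [1.0, 0.0, 0.0]   # fully determined initial state
for _ in range(20):
    dist = step(dist)
print(dist)              # ~[1/3, 1/3, 1/3]: initial information washed out
```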

When designing a cipher, one can be focused on using one nibble as the means to calculate a “single-bit distribution” and its partner nibble as the means to calculate a larger distribution cued to the input/output relationship across the non-linear sbox. In a multi-round cipher, one might then swap the roles of the nibbles (in the “next” round), so “mixing” in a “pastry-dough” manner. For any one round, the single bit acts as the parity constraint upon the multi-bit value (which is the analog of the n-component column vector normally acted upon by a parity matrix). Thus, the “matrix” has only 1 row (acting as a single constraint).
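A hedged sketch of this one-row “parity matrix” (the toy rule and names are illustrative only): one nibble supplies a single 4-bit parity row h, and the partner nibble v is “valid” when their inner product mod 2 vanishes.

```python
def parity_ok(h_nibble, v_nibble):
    """Single parity constraint: inner product mod 2 of two nibbles."""
    return bin(h_nibble & v_nibble).count("1") % 2 == 0

h = 0b1011                                  # constraint row from nibble 1
valid = [v for v in range(16) if parity_ok(h, v)]
print(len(valid))   # 8: one linear constraint halves the 16-element space
```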

One can also be interested in the 1945-era cryptanalytic method in which one defined 6-bit “ANALYTICAL” characters (from a Tunny machine based on 5-bit characters). There, one was interested in forming a “composite” character made up of 1 bit of “limitation” L and 5 bits of output from some component function (e.g. Chi), for (statistical) analysis purposes.
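A minimal sketch of forming such a composite character; the packing order (L as the high bit above the 5 Chi bits) is an assumption for illustration, not the historical convention.

```python
def composite(limitation_bit, chi5):
    """Pack 1 limitation bit L and a 5-bit Chi output into 6 bits.
    Packing order (L high) is an illustrative assumption."""
    assert limitation_bit in (0, 1) and 0 <= chi5 < 32
    return (limitation_bit << 5) | chi5

print(composite(1, 0b10110))   # 0b110110 = 54
```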

It’s of interest that, given 6 bits, one can look at them two ways: 1 + 5, or 5 + 1. In the first case, one can look at this as generating an identity transition matrix, whereas in the second (say) one lets THIS class of identified message states define a “keyed-decay” transition matrix – one that permutes the message states in such fashion that the (collective) mutual distance of the sampled set within the class then reduces.

Generally, we can look at a round function as one that acts in cooperation with another round function. There is the one addressing rounds 2 and 3, and the one addressing rounds 1 and 2. Of course, 2 and 3 play the role of 1 and 2 (when one advances the schedule by one), and this swap of roles is important. When “generating parity-checks” we can use round 1 (say) as a means to calculate a bit (of parity) that constrains the codewords that are “valid” in the space characterized by the bits of round 2 (say).