Studying quantum random walks has shown us that the “function of a coin” is to help choose a direction for the walk. If you are at node x, having come from ‘x, then the coin helps choose the next step from the edges x1, x2, …, each one associated with a member of the (symmetric) generator set.
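As a minimal sketch of the coin-as-generator-selector idea (the group Z_12 and the symmetric generator set {+1, -1, +2, -2} are illustrative choices, not anything from the source):

```python
import random

# Hypothetical Cayley graph of the cyclic group Z_12 with the
# symmetric generator set {+1, -1, +2, -2} (each generator's
# inverse is also present).  The coin is therefore 4-ary.
N = 12
GENERATORS = [1, -1, 2, -2]

def coin_step(x, rng):
    """Toss the d-ary coin: pick one generator uniformly at random
    and follow the edge it labels out of node x."""
    g = rng.choice(GENERATORS)
    return (x + g) % N

x = coin_step(0, random.Random(0))
# the next node is always one of 0's neighbours under the generators
assert x in {(0 + g) % N for g in GENERATORS}
```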

When Turing and others discuss the compression of the lattices underlying point spaces using round functions, they characterize the notion of ‘x (the node FROM which the token came before landing on x) as the space of constant functions: the volume of the current space in which the coin has no effect on the future. They then characterize the spaces orthogonal to that constant space: the volumes that are not currently constant, in which a coin toss might still affect the future evolution of the system.

For a Cayley graph and its generators – which define the edges a token might follow on its random walk – we can think in terms of differential cryptanalysis. If each edge is just one of the possible “directions” leading to the subspace that results from partially differentiating the cryptosystem in that direction, then the set of edges over which the token might travel can be seen as a d-ary coin. A toss of such a coin lands on one of its d sides and thus selects the associated direction-edge on the graph. The future of the token belongs to the associated subspace – into which the token now moves. That is, the action of the coin is to choose an outgoing edge of the graph and – due to the “de-correlating effect” of what is an act of measurement – to influence the distribution seen in the future.
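A sketch of how repeated coin tosses shape “the distribution seen in the future” (again on an illustrative Cayley graph of Z_12; the generator set, step count, and sample size are arbitrary choices):

```python
import random
from collections import Counter

# Illustrative Cayley graph of Z_12 with symmetric generators
# {+1, -1, +2, -2}.
N = 12
GENERATORS = [1, -1, 2, -2]

def walk(steps, rng, start=0):
    """Follow `steps` coin tosses from `start` and return the endpoint."""
    x = start
    for _ in range(steps):
        x = (x + rng.choice(GENERATORS)) % N
    return x

rng = random.Random(1)
samples = 20_000
counts = Counter(walk(8, rng) for _ in range(samples))
dist = {node: counts[node] / samples for node in range(N)}
# After a handful of steps the empirical distribution is already
# close to uniform (1/12 per node): the coin tosses have washed
# out the memory of the starting node.
```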

We also know that resistance to differential cryptanalysis comes from ensuring that there is little or no memory of which choices were taken, once the next round compresses the lattice of its own point space – with its own newly-concocted set of directions pointing to its own partially-differentiated subspaces. “No memory in any sense” takes the form of ensuring maximum (mutual) distance from __any__ linear structure. This results in there being no predictability in the output value (v), given the “relatively-constant” input value (c). Said quantitatively, each output bit should flip with a 50% chance, given the impetus provided by a flip in the input.
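The 50% criterion can be checked empirically. Below is a sketch that flips one random input bit and tallies which output bits flip; `toy_round` is a hypothetical 16-bit mixer invented for the illustration, not a real round function:

```python
import random

MASK = 0xFFFF  # work with 16-bit words

def toy_round(x):
    """Hypothetical 16-bit mixer (xorshift/multiply style).
    Purely illustrative -- NOT a real cipher round."""
    x &= MASK
    x ^= (x << 5) & MASK
    x ^= x >> 3
    x = (x * 0x2545) & MASK
    x ^= x >> 7
    return x

rng = random.Random(2)
trials = 5_000
flips = [0] * 16
for _ in range(trials):
    x = rng.getrandbits(16)
    bit = 1 << rng.randrange(16)           # the input "impetus"
    diff = toy_round(x) ^ toy_round(x ^ bit)
    for i in range(16):                    # tally which output bits flipped
        flips[i] += (diff >> i) & 1
rates = [f / trials for f in flips]
# For a well-mixed round each rate should hover near 0.5; a single
# pass of this toy mixer will visibly fall short for some bits.
```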

We can look at each distribution in polar terms, as a compression of the circles (representing the distribution of weights assigned to each generator, in the configuration space that results from applying each stage of the lattice, Rf_{n}). During “functional space” compression, the mutual “geometric” distances are altered, tending to a uniform mutual distance. Folks are simply applying a property of the L_{2} norm – one that distinguishes it from the L_{1} norm.
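One way to see the L_{2} property at work: a doubly-stochastic averaging step (a stand-in for the compression stage Rf_{n}, chosen purely for illustration) never increases the L_{2} distance of a weight distribution to the uniform one:

```python
# A doubly-stochastic averaging step over a cycle of 8 "generators":
# an illustrative stand-in for one stage of compression.
n = 8
uniform = [1.0 / n] * n

def step(p):
    """Average each weight with its two cyclic neighbours (doubly stochastic)."""
    return [(p[i - 1] + p[i] + p[(i + 1) % n]) / 3.0 for i in range(n)]

def l2_dist(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

p = [1.0] + [0.0] * (n - 1)   # all weight on one generator to start
dists = [l2_dist(p, uniform)]
for _ in range(6):
    p = step(p)
    dists.append(l2_dist(p, uniform))
# mutual distances tend toward uniformity: the L2 gap shrinks each stage
assert all(a >= b for a, b in zip(dists, dists[1:]))
```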

It is “lemma (a)” that induces this particular quality in Turing’s mental model. And note how he thinks in “modern” quantum terms, implying one should be measuring correlations (i.e. wave functions cohering or destructively interfering) in different volumes of space.

It depends on the mean weight of the (now stochastic) operator’s outgoing edges being 1 at each stage of the lattice. This induces what would be a growth rate in the correlations of the “plaintext operator”

to stay consistent with the correlations of the (non-measured) functional space, i.e.

as and when, for a two-step walk on a Cayley graph, the first step’s generator was some a – now considered to be the constant space c.
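Reading the weight condition above as the operator being row-stochastic (each node’s outgoing weights summing to 1), it can be sketched directly; the 4-node operator W below is hypothetical:

```python
# Hypothetical 4-node stochastic operator: each row lists the weights
# on a node's outgoing edges, and each row sums to 1.  That condition
# keeps total probability mass constant from stage to stage, which is
# what lets correlations be compared across stages of the lattice.
W = [
    [0.25, 0.25, 0.25, 0.25],
    [0.5,  0.0,  0.5,  0.0 ],
    [0.0,  0.5,  0.0,  0.5 ],
    [0.25, 0.25, 0.25, 0.25],
]
assert all(abs(sum(row) - 1.0) < 1e-12 for row in W)

def evolve(p, W):
    """One stage of the walk: push the distribution p through W."""
    n = len(p)
    return [sum(p[i] * W[i][j] for i in range(n)) for j in range(n)]

p = [1.0, 0.0, 0.0, 0.0]
for _ in range(3):
    p = evolve(p, W)
    assert abs(sum(p) - 1.0) < 1e-12   # mass preserved at every stage
```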