Turing and Markov chains and operator norms


[images: excerpts from the essay linked below]

http://us.docsity.com/en-docs/Markov_Chains_Contraction_Approach_-_Essay_-_Mathematics

See the essay linked above: what is interesting is to see how Turing models the same concepts (and how he does not, too).

[image: excerpt from the Turing archive page linked below]

http://turingarchive.org/viewer/?id=133&title=28

Above, given the rules of Markov chains that let one calculate the chain's stationary distribution, we now understand why he points out that g(a) >= 0 (non-negative) and g(a) > 0 for some values in the input vector. We can take the statement that the f-bar is one as the stochastic condition that induces the contraction of the norms, as the operator acts on the function space of g vectors. In thinking that aligns closely with Markov's original reasoning, we see how conditional independence is brought into the argument when considering, specifically, two hops on the graph. Of course, the 7-point geometry is "all about" 2 hops and their properties.
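Here is a minimal numeric sketch of that reading (my own NumPy illustration, not Turing's notation; the matrix P and the vector g are invented for the demo): a non-negative, mass-one g is pushed toward the stationary distribution each time the stochastic operator acts on it.

```python
# Sketch (not Turing's notation): a row-stochastic matrix P acting on
# non-negative vectors g with g(a) >= 0 and g(a) > 0 somewhere, normalized
# so the total mass ("f-bar") is one. Repeated application contracts the
# distance to the stationary distribution.
import numpy as np

P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.3, 0.3, 0.4]])   # rows sum to 1: the stochastic condition

g = np.array([0.0, 0.0, 1.0])    # g >= 0 everywhere, g > 0 somewhere
g = g / g.sum()                  # total mass ("f-bar") is one

# The stationary distribution pi solves pi P = pi, i.e. it is the left
# eigenvector of P for eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi = pi / pi.sum()

for step in range(6):
    print(step, np.abs(g - pi).sum())   # L1 distance shrinks geometrically
    g = g @ P                           # one hop of the chain
```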

[image: excerpt from the source linked below]

http://blog.kleinproject.org/?p=280

Where Turing talks about a certain self-conjugate subgroup, more modern folks talk about signed measures, which balance. It is specifically that "balance" that is reflected in the "must sum to zero" condition, since any movement left cancels with any movement right.
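A small sketch of that balance, using the same invented P as above (again my own illustration, not anything in Turing's text): the difference of two probability vectors is a signed measure summing to zero, and the stochastic operator maps that balanced subspace to itself, which is exactly where the contraction lives.

```python
# Sketch: the difference of two probability vectors is a signed measure
# that sums to zero ("balances"), and a row-stochastic P preserves this:
# the image of a zero-sum vector still sums to zero (up to float rounding),
# so the contraction happens entirely inside that balanced subspace.
import numpy as np

P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.3, 0.3, 0.4]])

p = np.array([0.7, 0.2, 0.1])
q = np.array([0.1, 0.3, 0.6])
mu = p - q                      # signed measure: components cancel
print(mu.sum())                 # 0.0: movement left cancels movement right
print((mu @ P).sum())           # still ~0.0: P maps the balanced subspace to itself
```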

[image: excerpt from the Turing archive page linked below]

http://turingarchive.org/viewer/?id=133&title=29

Now we understand that K is the unique stationary distribution: an eigenvector of the operator with eigenvalue 1. Turing is reasoning in terms of operator norms, where eigenvectors correspond to eigenvalues; the eigenvalue-1 eigenvector is the fixed point, and the size of the remaining eigenvalues controls how fast everything else contracts.
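A sketch of that reading (assuming the standard eigendecomposition picture, with the same invented P): eigenvalue 1 carries the stationary eigenvector K, and the remaining eigenvalue moduli, all strictly below 1, set the contraction rate.

```python
# Sketch: the stationary distribution (Turing's K, on this reading) is the
# left eigenvector of P for eigenvalue 1; the speed of the contraction is
# governed by the second-largest eigenvalue modulus.
import numpy as np

P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.3, 0.3, 0.4]])

eigvals, eigvecs = np.linalg.eig(P.T)       # left eigenvectors of P
order = np.argsort(-np.abs(eigvals))
print(np.round(np.abs(eigvals[order]), 4))  # leading modulus is exactly 1

K = np.real(eigvecs[:, order[0]])
K = K / K.sum()                             # normalize to a distribution
print(K, np.allclose(K @ P, K))             # K P = K: a fixed point of the operator
```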

More formally:

[images: excerpts from the lecture notes linked below]

http://www.stat.umn.edu/geyer/8112/notes/markov.pdf

Turing also makes oblique references to properties of the uniform distribution as it interacts with the operator. Could it be a reference to this:

[images: excerpts from the source linked below]

http://blog.kleinproject.org/?p=280

It is also interesting to wonder whether the beta above influenced the Tunny folks and their notion of proportional bulges. Anyway, getting back to facts:

Turing seems to be reasoning, indirectly, that the conditional dependence of terms from the plaintext (as represented by how an input term is mapped to its output value, and similarly for the next term) is averaged out and contracted, so that the fixed-point attractor draws the flow toward the uniform point. It is almost as if he is interested in convergence of averages (in a mostly closed space), since he doesn't specifically mention any concept that aligns with aperiodicity. To calculate the stationary distribution one uses a beta (and 1 - beta): the corrective factor needed for, and induced by, the non-abelian group operator, which brings the overall system into regularity.
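A sketch of the regularizing effect of that beta / (1 - beta) split (my illustration of the idea, not a reconstruction of Turing's calculation): a pure 2-cycle is periodic and never settles, but mixing in a stay-put branch with weight (1 - beta) makes the operator aperiodic, and the flow converges to the uniform point.

```python
# Sketch: a pure 2-cycle is periodic, so distributions oscillate forever;
# mixing in a "stay put" branch with weight (1 - beta) -- the corrective
# factor -- makes the chain aperiodic, and iterates converge to uniform.
import numpy as np

cycle = np.array([[0.0, 1.0],
                  [1.0, 0.0]])           # periodic: g just swaps components
beta = 0.8
P = beta * cycle + (1 - beta) * np.eye(2)

g = np.array([1.0, 0.0])
for step in range(8):
    print(step, g)
    g = g @ P                            # flows toward the uniform point (0.5, 0.5)
```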

Of course, we are familiar with the overall form of the equation (weighted by (1 - b) on one branch and by b on the other) from the study of symmetric channels.
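For comparison, a sketch of that familiar form in a binary symmetric channel (standard textbook material; the crossover probability b is invented for the demo): the output distribution is (1 - b) times the input plus b times its bit-flip.

```python
# Sketch: the same (1 - b) / b weighting appears in a binary symmetric
# channel, whose transition matrix flips each bit with probability b.
import numpy as np

b = 0.1
BSC = np.array([[1 - b, b],
                [b, 1 - b]])

x = np.array([0.9, 0.1])                 # distribution over input bits
print(x @ BSC)                           # output: (1 - b)*x + b*flip(x)
```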

We have to note that, while the example text establishes the contraction in the L1 norm, Turing's example is in the L2 norm. But then, Turing is really interested in two steps on the graph. His contraction serves as a model for the conditional dependency between two terms of the plaintext, which in Tunny terms is like cryptanalyzing the next bit in the Tunny wheel (for Hamming weights in binary), or the most likely second character in a bigram (for Hamming weights in q-ary fields, where the number of fixed points is the number of changes in the uprights' component-wise mappings).
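A sketch of that two-step view (my illustration, with the same invented P): measuring the deviation from stationary in both the L1 and the L2 norm, the two-hop operator P^2 shrinks it by roughly the square of the one-hop factor, so either norm tells the same contraction story.

```python
# Sketch: the contraction can be measured in either norm; over two hops
# (the operator P @ P) the deviation from stationary shrinks by roughly
# the square of the one-hop factor, in L1 and in L2 alike.
import numpy as np

P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.3, 0.3, 0.4]])
P2 = P @ P                              # two hops on the graph

eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi = pi / pi.sum()

g = np.array([1.0, 0.0, 0.0])
for step in range(5):
    d = g - pi
    print(step, np.abs(d).sum(), np.sqrt((d ** 2).sum()))  # L1 and L2 deviations
    g = g @ P2                          # apply the two-hop operator
```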

We see that there are various notions of independence:

[image: excerpt from the source linked below]

http://terrytao.wordpress.com/category/teaching/254a-random-matrices/
