More on Turing at Princeton


In an earlier article (https://yorkporc.wordpress.com/2013/02/17/barclay-rosser-princetonidansa/) we took a look at the cipher-theoretical work Turing was doing at Princeton – while pretending otherwise. Having since studied both classical and quantum expanders, we can expand a little on what we said back then. It turns out that the core concepts of what is now called Shannon information theory were really known earlier than Shannon’s publications on communication models. Of course, whether Shannon, working in secret, was part of their invention… we don’t know.

So before there was Shannon and his theories about the capacity of channels and rates of signaling, there were the theoretical topics of topology, measure theory, representation theory, and metric spaces. There were also studies of the design of wheels for Enigma, Typex and ECM-style cipher devices. While it’s assuredly true that a mathematician of Shannon’s standing was perfectly competent in the topics just mentioned, the legacy claimed for him as the “father of information theory” does *not* really hold up. Though he may well have concocted an ideal formulation of the problem, now that a half century of academic reporting restrictions on cipher studies has been lifted, we are seeing that more fundamental studies were already ongoing – precursors, in many a case. These study areas now appear to have been more general than mere “information theory” and pertain to current studies of quantum computation models. It is as if we have re-discovered Turing’s (et al.’s) private models of quantum computing, now that we have some quantum technology to play with (and the growth of the electronic computing model is slowing, rapidly).

We can build up our model without using many math symbols. That way, any 12 year old can get to grips with the core ideas in quantum ciphering.

From set theory, one easily gets to group theory. And from such groups one gets to graph theory – wiring diagrams between the pins and pads of cipher wheels. From graphs, one gets to Cayley graphs, introducing the notion of generators (or a set thereof). From a Cayley graph, such as that of a group G, we get to the idea that we can compose graphs (or compose together two wiring plans… or two wheels). In the world of the 1930s, folks were fascinated with modeling things without reference to any particular coordinate system – and thus modeled things in terms of their invariants.
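
To make that concrete, here is a minimal Python sketch (my illustration, nothing from the period): the Cayley graph of the cyclic group Z_26 for a chosen generator set, standing in for the wiring of a 26-contact wheel, plus the composition of two hypothetical wheel wirings. The generator set and wiring formulas are assumptions picked purely for the example.

# A toy Cayley graph: vertex g connects to g + s (mod 26) for each generator s.
from itertools import product

n = 26                    # contacts on a cipher wheel
generators = [1, 3, 7]    # hypothetical generator set (an assumption)

edges = [(g, (g + s) % n) for g, s in product(range(n), generators)]

# Composing two "wheels" is composing two permutations of the contacts.
wheel_a = [(5 * i + 1) % n for i in range(n)]   # hypothetical wirings
wheel_b = [(3 * i + 8) % n for i in range(n)]
composed = [wheel_a[wheel_b[i]] for i in range(n)]
print(len(edges), composed[:5])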

From general analysis we also have the notion of k-designs, or the ability to look at only k of n sources and test for independence in the k, expecting it to hold for the n. This gets us to the idea of asymptotic limits, in which one can generate arbitrary levels of accuracy (or bound the difference of any k-result from the predicted n-result). From norm theory, we know that a norm is a means of distinguishing generic points (which may be limits, as just discussed). Norms can be stronger or weaker than other norms; in design terms they exist as topics of analysis in order to characterize distinguishing features. From norms, one reaches for measures, such as the Haar measure. This leap allows us to compare a computation model based on operators against a model of when things are certain.
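
As a hedged illustration of that last leap (my gloss, not anything in the period sources): on a finite group the Haar measure is just the uniform distribution, and a norm (total variation, here) measures how far a walk driven by a generator set still is from that "certain" uniform state. The group and generators below are assumptions chosen for the sketch.

import numpy as np

n = 26
generators = [1, 2, 5]                     # assumed generator set
P = np.zeros((n, n))
for g in range(n):
    for s in generators:
        P[g, (g + s) % n] += 1 / len(generators)   # one step of the walk

haar = np.full(n, 1 / n)                   # Haar (uniform) measure on Z_26
dist = np.zeros(n)
dist[0] = 1.0                              # start with certainty at position 0
for step in range(1, 31):
    dist = dist @ P
    tv = 0.5 * np.abs(dist - haar).sum()   # total-variation norm
    if step % 10 == 0:
        print(f"step {step}: TV distance to Haar = {tv:.6f}")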

In the world of information systems, it’s easy to see graph theory as supporting general models of computing. For example, thinking as those limited to the pre-computing-equipment era of the 1930s had to, the “program” of a group is its action – upon a vector space (an “input data file”…). Perhaps it’s easier to look at a subgroup H of a group G, and realize that H provides just such an “action.” This is, after all, the FIRST reason why we are so interested in subgroups. Subgroups are, however, not the only actions/programs we might formulate, and we quickly realize that there are thus classes of programs associated, if nothing else, with different ways of developing actions. From this we get to the associated concept of complexity classes for such classes of action-inducing algorithms.
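
A small Python sketch of "action as program" (my example; the data and the subgroup are chosen purely for illustration): a permutation from a cyclic subgroup acts on an "input data file", and composing such actions is running the program.

data = list("ATTACKATDAWN")
n = len(data)

def act(perm, xs):
    # Apply a permutation of positions: output position i takes input position perm[i].
    return [xs[perm[i]] for i in range(n)]

# A cyclic subgroup H = <shift-by-3> inside the symmetric group on 12 points.
shift3 = [(i + 3) % n for i in range(n)]

print("".join(act(shift3, data)))                 # one "instruction"
print("".join(act(shift3, act(shift3, data))))    # composition = running the program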

Though subgroups act, more powerful actions and computing models abound. For example, the semidirect product of two Cayley graphs provides a very 1930s quantum computing model. Any coordinate in this graph space is a couplet, one singlet from each graph. But when one then uses the group operator on two couplets, one sees the “functional action” come into being, in a manner not seen with simpler subgroup actions. One sees how the second Cayley graph and its singlet coordinate act almost as an instruction code, directing which action is to be applied. The set of possible actions is specified by the first Cayley graph and its singlet coordinate. In general, one sees a notion of choice – in the sense that now our program has notions of statement flow and general notions of branching. Furthermore, one sees how the next step in the program MAY itself be computed, as one of the results of the current calculation.
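
A hedged sketch of that couplet idea in modern terms (my formulation, not a period source): pairs (a, b), with a in Z_26 as the "data" coordinate and b a unit mod 26 as the "instruction" coordinate that selects which automorphism twists the data when couplets are composed.

n = 26

def compose(x, y):
    # Semidirect-product rule: (a, b) * (a', b') = (a + b*a' mod n, b*b' mod n).
    a1, b1 = x
    a2, b2 = y
    return ((a1 + b1 * a2) % n, (b1 * b2) % n)

def act(elem, point):
    # The couplet acting on a data point: x -> b*x + a (mod n).
    a, b = elem
    return (b * point + a) % n

step1 = (3, 1)    # "add 3", with the identity instruction
step2 = (0, 5)    # pure instruction: "scale subsequent actions by 5"
print(compose(step1, step2), act(compose(step1, step2), 4))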

Wreath products are a particular type of semidirect structure. They are interesting to the 1930s Turing because they are a rather theoretical model of computing and calculating, by what we would now call “programming”. But more than (just) that, they also get us to the notion of a logarithm function calculated on graphs; and, from that, to the notion of the core entropy of information, when communicated (through the actions of groups).
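
A back-of-envelope Python illustration of that last link (mine, not the post's): the wreath product Z_2 wr Z_n, the "lamplighter" group, has order n * 2^n, so the log of the group order, i.e. the entropy of a uniformly chosen element, grows essentially linearly in n.

import math

# |Z_2 wr Z_n| = n * 2**n: one lamp state per position, plus the pointer position.
for n in (4, 8, 16, 32):
    order = n * 2**n
    print(f"n={n:3d}  order={order:12d}  log2(order)={math.log2(order):7.2f} bits")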

We know Turing was interested in the relationship between a graph G and a coset-graph, understanding that the one “covers” the other. Of course, the cosets are tied most easily to a subgroup H of G, each coset being a translate of H in space. Coverings capture a core computing notion: that perhaps the graph is only something really “developed” from the coset-graph, in the sense that a compiled program is developed from its source code, with each containing the same “information” in some sense. And we can conjecture that Turing also knew that the quantum of information in the covering graph, as a unique metric for that information-transfer network, cannot be greater than that of the coset-graph being covered; recognizing thereby notions of redundancy (in G, versus H). He knew that a series of norms can measure such graphs by “refining” the very notion of information quantum-ness, getting to a minimum – aligned with the order of expansion of the graph, or the log of the diameter of a sequence of those graphs that form an “expander family”.
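
A toy covering, as a hedged Python sketch (my example): the Cayley graph of G = Z_12 with generator 1 covers the coset graph of G modulo H = {0, 4, 8}; every coset vertex downstairs is hit by |H| = 3 vertices upstairs, which is the redundancy of G versus H just mentioned.

n, q = 12, 4                                # G = Z_12; cosets of H = {0, 4, 8} keyed by g mod 4
cover = {g: g % q for g in range(n)}        # the covering map G -> G/H

up_edges = [(g, (g + 1) % n) for g in range(n)]            # Cayley graph of G, generator 1
down_edges = {(cover[a], cover[b]) for a, b in up_edges}   # its image: the coset graph
fibres = {c: sum(1 for g in range(n) if cover[g] == c) for c in range(q)}

print(sorted(down_edges))   # the 4-cycle on the cosets
print(fibres)               # each coset is covered by 3 vertices upstairs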

We also know that Turing was able “to compile” – which takes the form of knowing how to diagonalize a graph. He knew that the signal-processing “transform” of particular inner-product spaces could be represented in a parallelized form – amenable to faster processing. He knew that there were transforms of transforms – that made the parallelization of computation truly possible, upon parallel compilation. And of course, we see these ideas perpetuated both in the design of the Turing bombe and then in the motivation for the ACE. The discrete Fourier transform was an important study topic in “early compilers”.
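
As a concrete version of "to compile is to diagonalize" (my reading of the paragraph, not anything documented about the bombe or the ACE), here is a hedged numpy sketch: a circulant operator, i.e. a circular convolution such as a fixed wheel wiring applied at every offset, is diagonalized by the discrete Fourier transform, so applying it becomes independent, parallelizable multiplications.

import numpy as np

n = 8
c = np.array([1.0, 3, 0, 2, 4, 0, 1, 2])                 # first column of a circulant
C = np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])

x = np.arange(n, dtype=float)
direct = C @ x                                           # the operator applied directly
via_fft = np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)).real   # the diagonalized ("compiled") form
print(np.allclose(direct, via_fft))                      # True: same computation, rearranged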

Finally, let’s recall that the notion of the subroutine comes from a theoretical basis, aligned with the above. One stacks, one computes, one unstacks. But in a non-von-Neumann model, stacking and unstacking may not be as we think of them today. We may be performing a zig-zag product (of two Cayley graphs), where the zig and the zag are the generalization of stack/unstack. And of course the zig-zag is only a special case of the wreath product, which is merely one instance of a semidirect product of (just the right) graphs.
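
Here is a hedged sketch of the zig-zag step in modern (Reingold–Vadhan–Wigderson) terms, which the post reads back into the 1930s; the particular graphs K_5 and C_4 are my choice for illustration. Vertices of the product are pairs (v, k), and one step is a zig inside the small graph, a jump along the big graph, then a zag: the stack / compute / unstack pattern just described.

def rot_complete(n):
    # Rotation map of K_n: the i-th edge at v, plus the label of that same edge back at w.
    def rot(v, i):
        w = sorted(u for u in range(n) if u != v)[i]
        j = sorted(u for u in range(n) if u != w).index(v)
        return w, j
    return rot

def rot_cycle(d):
    # Rotation map of the d-cycle (2-regular): label 0 steps back, label 1 steps forward.
    def rot(k, a):
        return ((k - 1) % d, 1) if a == 0 else ((k + 1) % d, 0)
    return rot

def zigzag_step(rot_G, rot_H, v, k, a, b):
    k1, a1 = rot_H(k, a)     # zig: small step inside the cloud at v
    w, k2 = rot_G(v, k1)     # jump: follow the selected edge of the big graph
    k3, b1 = rot_H(k2, b)    # zag: small step in the destination cloud
    return (w, k3), (b1, a1)

rot_G = rot_complete(5)      # K_5 is 4-regular, so each cloud has 4 vertices
rot_H = rot_cycle(4)         # C_4, 2-regular on those 4 cloud vertices
print(zigzag_step(rot_G, rot_H, v=0, k=2, a=1, b=0))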
