tying convolution to Fourier space, to multiplication, and to complex oscillators

image

http://www.science20.com/jon_lederman/fourier_transform_diagonalizing_convolution_operator

Posted in coding theory

Cryptography in Security Engineering; piano-playing dog playing the DES tune

Cryptome publishes a reference to a famous author’s book – on Security Engineering – now available for online download. If we review the cryptography chapter, what do we think?

The author shows in his style of writing that which you see in natural thinkers – the ability to cut to the chase and "lay out," in a flowing but simple style, how to think about the subject matter. THINK like this, and the topic holds together. Passing academic exams is trivially easy (and scripts of such students are even easier to score). Think otherwise, and you may be adding more than is needed to engineer.

At the same time, we must recognize that the "method of research" verily requires thinking "in a manner" beyond what's known (and nicely expressed). In crypto methods, furthermore, it's that very tiny oscillator due to non-commutative relations that is the thing we don't fully understand. It necessarily upsets all nice flows, just like the random particle from the sun that bombs your disk drive's data blocks. What we do understand is that the "threat" to uniformity is critical, and this lies at the heart of the nature of indeterministic reasoning and algorithm design.

The author does a good job of presenting a modern and non-dumbed-down view of cipher-designing methods. He has moved beyond telling the stock story delivered in undergrad courses and a hundred books in the 1990s and 2000s. One gets a glimpse of what researchers knew at the time (but didn't know "how to" deliver to undergrad audiences, at that point). We now get to see back in time, that is – as folks realized just what the "design rationales" behind DES were, figuring things out from the evidence alone.

In classified WWII war work, as scrubbed for unclassified publication in 1948, Shannon apparently provided mathematical reasoning models for US cryptanalysis – which later became methods for cipher design based on first non-constructive and then constructive proofs. In short, the world of conditional probabilities used in detector theory received a more abstract treatment focused on the notion of typical sets – whose "volume" forms a ratio to the volume of containing sets – such as those for all sequences of a certain bit length. Furthermore, the limit of, and the rate of change of, the volumes of such sets was treated as forming a ratio – much like the angle on a geodesic plane between two vectors, one projecting upon the other – that is interesting specifically in a world of asymptotic reasoning that can look at a bit as a random variable in its own right and, then, also as a component in a multi-bit vector space, itself modeled as a random variable. One knows that there is an exponential relation for the volume ratios, viewing the vectors as volumes of a hyperplane. Given that, one can start playing with bounds (as a cipher designer) so that particular limiting processes demand more time, more storage, OR MORE POWER (in the volume/density sense) than can be brought to bear. In particular, if search space is seen as a resource, one may need to search more space than exists… if one is missing the key. A designer may use his function-synthesizer to present you, the attacker, with a sound whose music mixes with your own to always make a cacophony.
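For a concrete toy of that exponential volume-ratio claim (a minimal sketch, assuming nothing beyond the standard typical-set bound: roughly 2^{nH(p)} typical sequences out of 2^n):

```python
import math

def typical_set_ratio(p: float, n: int) -> float:
    """Ratio of the typical-set size (~2^{n*H(p)}) to all 2^n sequences
    for an i.i.d. biased-bit source with Pr[1] = p."""
    h = -p * math.log2(p) - (1 - p) * math.log2(1 - p)  # binary entropy H(p)
    return 2.0 ** (n * (h - 1.0))                       # 2^{nH} / 2^n

# The exponential collapse a designer can exploit: search "volume" per block length.
for n in (8, 64, 256, 1024):
    print(n, typical_set_ratio(0.75, n))
```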

DES is quite musical, in many ways; and of all the cipher concepts it is the one that most appeals to a (two-handed) piano player – since two themes interlock. If one looks at 3 rounds of DES, there are 5 terms in the algebra that can be viewed as notes: L, R, f1, f2, f3. In the key of C, its dominant tone is G. So use your "musical ear" to calculate (rather than the cognitive part of the brain). If like me you earned 2 performance degrees at 17 and 18 in piano playing (roughly 1st- and 2nd-year music college exams) after 12 years of performing, your ear is very tuned and can hear/reason with tonality and convolutions of tonal systems. And you can do it for several tonal rule systems (Bach, Beethoven, or Bartok!). Any 3 rounds of DES are just a tune, though you MUST syncopate for the sound to render well "musically". The logic of the pattern, in the music sense, is the way that the theme implies its correct chordal basis (one that satisfies the polyphonic rules of the various music styles).

Hmm. Probably sounding like a nut at this point. But it works. L is B, R is C, f1 is D, f2 is E, f3 is F. If you read the DES "expression tree" for three rounds right to left, in infix notation, and use a swing style with 1 note (i.e. the first beat is f1, preceded by a quick R) you can continue in that pattern: CDDDDD (i.e. f1(R)), BEEEEE (i.e. f2(L)), CFFFFF (i.e. f3(R)), CDDDDB (i.e. f1(R)L), CDDDDD (i.e. f1(R)), BEEEEE (i.e. f2(L)), C (i.e. R).
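For those without the musical ear, here is the same five-term algebra expanded symbolically – a toy Feistel expansion only, ignoring DES's expansion, S-boxes and permutations entirely:

```python
def feistel(L: str, R: str, rounds: int = 3):
    """Symbolically expand a Feistel network: L_i = R_{i-1}, R_i = L_{i-1} xor f_i(R_{i-1})."""
    for i in range(1, rounds + 1):
        L, R = R, f"({L} + f{i}({R}))"   # '+' standing in for xor
    return L, R

L3, R3 = feistel("L", "R")
print("left half :", L3)   # (R + f2((L + f1(R))))
print("right half:", R3)   # ((L + f1(R)) + f3((R + f2((L + f1(R))))))
```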

I spent about 3 months studying all this, when I was 21 – mixing computer science, an expert rule system doing probabilistic inference, and rules for chordal progressions. That was in the days when you could not even buy a PC program that would print music! How things have changed!

Posted in crypto

looking at crypto history’s development 1

One way to apply historiographical methods (to crypto) is to analyze the development of skills over a lifetime. So, let's see what we can infer from studying Ueli Maurer (from this limited perspective). Let's see how he changed, over the years. Within his change lies a reflection of other changes – in society, in open knowhow, in hardware power, etc.

The notion of local randomness and (theoretical) definitions of "perfect security" etc. were once very much in vogue (circa 1989):

image

image

image

image

image

image

image

image

 PDF link

One notes how these (allowed) professionals never say the word "diffusion" (or "differential") in this era. And one notes just how _linear_-coding-centric their train of thought is.

Posted in crypto

Reusing DES CFB

It's tempting to reason that DES could be released because its design is, like a candle's diffusion reactor, limited in scaling symmetry. This means that, without folks knowing how or rather why DES works, it could not be extended.

That rationale applies only to ECB, however. From the outset, the (other) modes of DES, building on DES ECB, were intended to turn DES into a cryptosystem. Unlike a raw cipher, a cryptosystem applies to security services, delivering concrete security semantics.

With DES in 1-bit feedback mode, each block computation produces but 1 bit of ciphertext – to be viewed as the output function of a 'higher' state machine with a publicly known state transition function. To apply this to make a new cipher, one has to know certain things about (once classified) cipher design methods. By default, the standard DES CFB mode just provides a security mechanism suiting certain cryptosystem applications.

If one uses 16 (say) such DES CFB-style computations (each producing 1 bit), they can form a 16-bit input to a binary function (such as parity/xor). Furthermore, one could arrange the feedback memory of the collection of such computations (designed/synced computations) to fashion a particular dependency relationship between the last n bits of collective output and the latest bit. This should be designed so as to effectively create an MD4-class MAC'ing function, giving one a compression (collision-avoidance) function as a result.
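A toy sketch of the two paragraphs above, with a stand-in keyed function in place of real DES (the toy_block_encrypt below is purely illustrative, NOT DES; and note that true CFB feeds back the ciphertext bit, which here – with no plaintext – is just the output bit, i.e. the all-zero-plaintext case):

```python
import hashlib

BLOCK_BITS = 64

def toy_block_encrypt(key: bytes, block: int) -> int:
    """Stand-in 64-bit keyed pseudorandom function (NOT DES; purely illustrative)."""
    digest = hashlib.sha256(key + block.to_bytes(8, "big")).digest()
    return int.from_bytes(digest[:8], "big")

def cfb1_bit(key: bytes, register: int):
    """One step of 1-bit feedback: encrypt the register, emit its top bit,
    then shift that bit back into the register (ciphertext bit = output bit here)."""
    out_bit = toy_block_encrypt(key, register) >> (BLOCK_BITS - 1)
    register = ((register << 1) | out_bit) & ((1 << BLOCK_BITS) - 1)
    return out_bit, register

def parity_of_16(keys, registers):
    """Run 16 independent CFB-1 style generators and fold their bits with xor (parity)."""
    bit, new_regs = 0, []
    for k, r in zip(keys, registers):
        b, r = cfb1_bit(k, r)
        bit ^= b
        new_regs.append(r)
    return bit, new_regs

keys = [bytes([i]) * 8 for i in range(16)]   # illustrative per-instance keys
regs = list(range(1, 17))                    # illustrative initial feedback registers
stream = []
for _ in range(32):
    bit, regs = parity_of_16(keys, regs)
    stream.append(bit)
print(stream)
```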

NOW the trick is to reason how all this, in a ciphering context, secures against a random oracle – delivered by protocol flaws, rigged hardware power signatures, etc. – predicting the state-transition output for any given ciphertext's (partial) stream output. There lies the art of making ciphers.

Posted in cloud

Wiki crypto

http://en.wikipedia.org/wiki/Symmetric makes a cryptographic point.

Why not take the knowhow in DES and make a 'bigger' cipher?

A (single wick) soft wax candle scaled to the height of a tree collapses … under its own weight. This limits the time its wicking effect can act as a diffusion reactor, countering any benefit from attempted rescaling.

Posted in crypto

covariance; balance; walsh spectral analysis

We now have a solid handle on notions of covariance and those matrices that allow one to build up a model of a pseudo-noise sequence leveraging the notion of joint Gaussian-ness – suitable for QAM demodulation, for example.
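A small numpy sketch of that covariance-matrix view (generic, not tied to any particular PN design): pick a joint-Gaussian model, draw samples, and recover the covariance empirically.

```python
import numpy as np

rng = np.random.default_rng(0)

# A chosen 2x2 covariance: unit-variance components with correlation 0.6 -
# the kind of joint-Gaussian model a QAM demodulator reasons over (an I/Q pair).
cov = np.array([[1.0, 0.6],
                [0.6, 1.0]])

samples = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=50_000)

# The empirical covariance should come back close to the model we put in.
print(np.cov(samples, rowvar=False).round(2))
```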

 

image

https://yorkporc.wordpress.com/2012/09/04/pca-to-variance-to-eigenvalues-to-mutual-information-and-projections/

We also now have a solid understanding of the relationship between balancedness in truth tables and power spectral densities – and in particular those spectral forms calculated by (i) taking a truth table, (ii) expressing it in the sign basis (what Seberry and co call its sequence form), (iii) calculating auto-correlation and cross-correlation inner products (in that sign basis), and (iv) comparing functions "in particular directions" with each of the rows of the Hadamard matrix – to quantify the relationship of the function against a linear support space of a certain dimension. See https://yorkporc.wordpress.com/2011/10/15/seberry-zhang-zheng-cryptographic-methods/
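To make the sign-basis bookkeeping concrete (a generic sketch, not Seberry's own notation): move a truth table to ±1 form and correlate it against every row of the Hadamard matrix; balancedness is exactly a zero spectral value at the constant row.

```python
import numpy as np

n = 3
# Truth table of a 3-variable Boolean function, listed for inputs 000..111.
truth_table = np.array([0, 1, 1, 0, 1, 0, 0, 1])   # here: x0 xor x1 xor x2

sign_form = (-1) ** truth_table                     # 0/1 table -> +/-1 "sequence" form

# Sylvester-Hadamard matrix: its rows are the linear functions in +/-1 form.
H = np.array([[1]])
for _ in range(n):
    H = np.block([[H, H], [H, -H]])

spectrum = H @ sign_form                            # Walsh-Hadamard spectrum
print("spectrum:", spectrum)                        # correlation with each linear function
print("balanced:", spectrum[0] == 0)                # zero at the constant row <=> equal 0s and 1s
```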

While that was a good starter paper, a much better one uses our facility now with arguments cast all in terms of inner products of vector spaces:

image

http://www.jucs.org/jucs_1_2/the_relationship_between_propagation/Seberry_J.html

Now the notion of differential cryptanalysis is giving way to analysis of what one needs to dominate, as a cipher designer, to address the underlying issue set.

We also understand from the Inria folks how to go one step further than the above, and fashion almost-bent functions that have the balancedness property too, when the Walsh-Hadamard transform has 3 spectral points.

image

image

Between Gallager and the PCA folks giving us a practical understanding of finite-energy distribution/density/power functions and how to reason with them, Susskind and Turing giving us the means to think directly in terms of circular functions, Seberry's great teaching style on topics of the sign basis when analyzing non-linear boolean functions, and the Inria stuff pulling it all together, we are establishing a solid theoretical baseline.

Posted in crypto

home_pw@msn.com:

One can do largely the same thing with WIF and forms-auth tickets, where the frontend web host supporting the store app (the WIF SP) shares ticket minting/verifying config with its backend services site. Frontend code can make API calls citing the ticket that it mints in a header, playing the role of the access token.

Or one can pass WIF-minted session cookies, similarly.

Passing master session cookies around distributed-systems components is an old story.

The issue is politics, only. Who is the session authority, and what happens when your site falls out of favor with the big dog trying to run the show?

Originally posted on leastprivilege.com:

The last days I’ve been researching some of the new security features in Windows 8. One of the biggest changes in Windows is definitely the fact that you can now login using your Microsoft Account.

I will describe the details how this works in another post, but the net outcome is, that after you logged in using your Microsoft Account, you can access services that are secured using the Microsoft online infrastructure in a single sign-on fashion. And that’s a pretty compelling feature for a number of scenarios.

Windows itself uses the Microsoft backend e.g. to synchronize your Windows and app settings between your various devices. Other built-in apps use Microsoft provided services like Skydrive, contacts or calendar using the same technology.

Of course, you can do the same – e.g. since every Microsoft Account comes with 7 GB of Skydrive storage, you could take advantage of that for document/data…

View original 586 more words

Posted in coding theory

Applying Md4 era differentials to policy goals of the era

MD4 is a modified block cipher. There is a super-large key space compared to the state space.

There are also differential trails.

Were they intended?

In the era, one wanted strong crypto primitives for digesting that did not – even indirectly – deliver strong encryption.

Assume MD4 wanted to hobble any misapplication of hashing outputs – used as an additive noise stream, for example. The differentials would deliver that hobbling. But the collisions would not (arguably) interfere with its intended role in a message authentication service.

If one accepts the collisions/differentials argument, then one asks how long the security lasts given an attack on the complexity (of useful collision finding) in real time – and is it much longer than the intended lifetime of the MAC?

Assume MIT/IRTF had a two-tier analysis, with trusted US folk (only) being in on the intended side effects, with intent to lie (by omission or by classification).

Posted in coding theory

Turing conjugation

Enigma study was where I first encountered the notion of conjugation of two permutations. We learned also about classes of conjugates, involving invariants of cycle lengths. We can now reduce this to simple mental models.

Sure, an enigma wheel's wiring might be given as an upright U, listing the wiring of pins and pads. But we know it's just a function of 26 inputs to 26 outputs.

Furthermore we know that U-specified functions use the same alphabet for inputs and outputs. Cycle notation is born. (1 2 3) is a compressed form of the table of three rows: 1→2, 2→3, 3→1. It's just the specification of a function, however.

View that function as an open box, with an input wire and an output wire.

On each wire, there is a mapper. These mappings are specified using the conjugating permutation.

Function u conjugated by c = (2 3) means (2 3) is the input mapping – reading left to right – and its inverse (3 2) is the output mapping – reading right to left.

If u = (1 2 3) then we also know what the pre-mapped inputs are – from the domain side of the u table. They are just 1, 2 and 3!

For a user-selected choice of 2, then, 2 is placed on the input wire, which maps it before starting the function u. So 2 becomes 3 on input mapping, with said 3 now being acted on by the u table. There, 3 has a function value of 1. Before 1 is output, the output wire itself does a mapping, mapping the 1 from the table lookup to 1, since the (inverted) conjugate spec applying to output remappings has an implied (1 1) cycle, nothing being listed for 1.

So conjugation gives a simple machine spec, with inputs, outputs and one table. It's nothing more.

Written out in function notation it's just g(2). What is g?

Well, it has three components: input rule, function rule, output rule. Obviously the input rule happens first. So g is (h (2 3))(2). What is h? Well, next the function lookup should happen, so eliminate h:

g is (i (1 2 3) (2 3))(2).

What is i? Well its the output rule – the final step, obviously.

So g is ((3 2)(1 2 3)(2 3))(2).

Remember to think right to left when evaluating the 4 steps – now including writing down how the 3 cycles apply to (2), or to the other u domain members. Do read leaf cycles left to right, though.
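A tiny sketch of the machine view, with permutations as lookup tables (the names u and c follow the worked example above):

```python
def cycle(*elems):
    """Build a permutation (as a dict) from one cycle, e.g. cycle(1, 2, 3) maps 1->2, 2->3, 3->1."""
    return {a: elems[(i + 1) % len(elems)] for i, a in enumerate(elems)}

def apply(perm, x):
    """Apply a permutation given as a dict; unlisted points are fixed (implied 1-cycles)."""
    return perm.get(x, x)

u = cycle(1, 2, 3)        # the box's own table
c = cycle(2, 3)           # the conjugating permutation: the input mapper
c_inv = cycle(3, 2)       # its inverse: the output mapper ((2 3) happens to be its own inverse)

def g(x):
    """u conjugated by c: input mapping, then the u table, then the output mapping."""
    return apply(c_inv, apply(u, apply(c, x)))

print(g(2))   # 2 -> 3 (input map) -> 1 (u table) -> 1 (output map): prints 1
```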

What is the betting Turing wrote his leaf cycles using the right-to-left convention, confusing everyone (with its logical and efficient sense)!

The enigma rod square just applies the above, lots of times, choosing the domain elements in diagonal order, where each row takes the output of the first calculation, using it as an input for the row's second, etc.

Not hard, huh?

Posted in coding theory

trace to table (and back); cyclotomic; galois fourier transform

Gong gives us a good diagram of the relationships between a bit sequence, binary functions captured in polynomial form, and good old (non-boolean) truth tables over an n-digit binary field:

image

image

Correlation of Boolean Functions – MIT – Massachusetts Institute

Using trace representations as “specification” of truth tables, we glimpse at a world in which we are in two calculating machines at once (almost like being in the amplitude and frequency worlds at the same time, in the world of signal processing).

image

Trace representation – Universitetet i Bergen

It is  interesting that to get from table to trace, we go via the “cyclotomic coset” things:

image

Trace representation – Universitetet i Bergen

Now what we don’t have yet is a good mental model of what’s going on when getting from binary sequence back to the trace“specification”. So lets think like Turing, Newman, and co.

First there is a (binary) sequence – for which we want an actual mechanical-machine generator and associated math model. Let's say we are in the world of Tunny wheels. Obviously any wheel is periodic. If one takes 2 Tunny wheels stepping together, as in the Tunny attack, one gets a multi-wheel (with a longer period). Ok. Said formally, this accounts for Gong's F and F2 – or GF(2^n) and GF(2), rather. The 2 is the pin setting on a pinwheel, and the 2^n aligns with the number (n) of wheels acting in concert.

image

Now in the special calculating plane of the Galois fields, we do comprehend that from a function on alpha to some power i (where alpha is a "primitive element" of the n-digit binary field GF(2^n), and n is the number of Tunny wheels), one gets each ith line of the joint-wheel truth table: f() of alpha^0 is…, alpha^1 is…, alpha^2 is…

Ok, this aligns with the first part of the Bergen paper. Each bit in the sequence being generated as output is some function in the polynomial calculus, for a given input expressed in terms of alpha to the ith power – giving the ith bit in the sequence.
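A small sketch of that evaluation, over GF(2^4) with the primitive polynomial x^4 + x + 1 (my choice of small field, not Gong's example): the trace Tr(x) = x + x^2 + x^4 + x^8 of successive powers of alpha spits out the ith bit of a period-15 binary sequence.

```python
REDUCE = 0b10011   # x^4 + x + 1, a primitive polynomial for GF(2^4)

def gf_mul(a, b):
    """Carry-less multiply in GF(2^4), reducing modulo x^4 + x + 1."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        a <<= 1
        if a & 0b10000:
            a ^= REDUCE
        b >>= 1
    return result

def trace(x):
    """Tr: GF(2^4) -> GF(2), the sum of the Frobenius conjugates x, x^2, x^4, x^8."""
    total, power = 0, x
    for _ in range(4):
        total ^= power
        power = gf_mul(power, power)   # repeated squaring
    return total                       # lands in the prime subfield {0, 1}

alpha = 0b0010                         # the element 'x' is primitive for this polynomial
seq, elem = [], 1
for i in range(15):                    # alpha^0 .. alpha^14: the 15 nonzero field elements
    seq.append(trace(elem))
    elem = gf_mul(elem, alpha)
print(seq)                             # a period-15 binary (m-)sequence
```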

Gong tells us a bit more about the “polynomial calculus”, next:

image

indicating that the ith bit in the binary sequence is not simply some and/or combination of the truth table input for the ith line. Rather, it's a function of the cyclotomic coset leaders and the number of elements in each coset, where each coset is our old favorite from the world of cyclotomy. This world is, rather like the Fourier world, able to turn the world inside out.

We seem to be basically saying that f(), in the inside-out world view, is a linear combination of several terms involving that alpha^i.

image

We can either describe an n-bit value (in the GF(2^n) world) – which is the binary label for the row giving the ith bit in the sequence; or we can define a term in polynomial-calculus space (fashioned in terms of a sum of primitives to some ith power).

We already have a simple mental model giving a rationale for using all the Galois stuff – being a means of doing "polynomial"-based modular arithmetic. And we can see how a sum of k trace-function outputs is a "sum" of individual wave functions – in some "polynomial" orthonormal basis. The matrix A seems to be in there as an encoding of the combination requirements/constraints limiting calculations to an algebra in which all expressions are in the calculation basis of the discrete field.

Now something Gong points out, in THIS context, helps us recognize a little that the function (and its auto-correlation properties) looks, to all intents and purposes, like the assumptions that Shannon & co made when modeling the theorems of their communication theory:

image

It isn't quite what we wanted to get out of this study (namely, how the trace function is "deduced"), but the argument does help reinforce that DES processes are, round after round, performing a self-convolution to get to the kth-order propagation – which addresses differential cryptanalysis – but only in a world in which k-resiliency addresses the linear cryptanalysis, too. In the course of the Feistel-type process, one is generating an "effective" bent function – if analyzed as such – as measured statistically using the auto-correlation metrics. One is also inducing dependency of any one output bit on all input (and key) bits (a necessary component of the propagation goal).

Getting back to "how to figure" the trace function (given, for a field such as GF(2^5), its primitives), Gong shows us the indicator function (having, as Bergen says, got to the spectral form). We get one step closer! (And I could see WHY any 1940s Testery material hinting at an early version of this topic would still be seen as "pseudo-sensitive," even today – since it would indicate the rationale for folks having focused in the 1970s/1980s etc. on developing custom DSP hardware for this particular function, which one may want to still deny exists, etc., etc.)

image

At least we are getting a hookup with hand-method wheel breaking AND the last paper we read from the Zhang/Zheng/Seberry camp (which gave us a math for correlating signs/sequencies, in terms of the sign basis). We start to see how Testery folk may be "constructing" their rectangles too (rather than using depth material), to act as a filter giving the linear functionals they want (to use a Gallager-ism).

Hmm. Still don't understand too much about the trace formula, or what A and B do, but we are getting onto firmer ground.

Posted in coding theory, crypto

The first Turing bombe (1939). Was it an original design?

Folks in Poland are calling for better attribution for their WWII contribution to (enigma) cryptanalysis – at the intellectual level, that is. So we can ask: what evidence is there for "UK" thinking in the first (UK) bombe design? How much *original* cryptanalytical thought was there, and how much can be attributed to being merely a refinement of the Polish bomba? Can we see through the usual UK deception campaigns, history corrupted by years of propaganda (and the general policy of self-puffing)?

Now, it’s clear that UK engineering of even its first bombe far exceeded the engineering of any Polish bombe manufactured to that point. But it’s from an analysis of the engineering that we can test what – beyond manufacturing prowess – the UK added (if anything). Building a bigger bridge is engineering, not “intellectual” achievement. (Being an engineer, personally I have the highest respect for the former class of achievement, too; “but “academically” its irrelevant).

So what were the design features of the Turing bombe (before the diagonal board was added)?

Well, first it used 2 sets of rotors per manufactured unit (and each case had many "units" to be linked up in a menu constraints checker), each set pointing at the other through a static rotor performing the role of the reflector in the design under test. But we know the Poles had already done the same (albeit in rather slower operational form) with their own stopping-centric cyclometer. So, unpacking an enigma reflected rotor-set into 2 rotor sets was not a breakthrough. Someone had already pointed out the great value in doing that! Was implementing it at n trillion revs per second an absolutely crucial cryptanalytical engineering feat worthy of mention? Of course – but not under "academic" criteria.

Next, the operational practice of the Turing bombe's setup required pre-analysis of depths of ciphertext, for an assumed plaintext crib. Given the cycles, the inherent constraints helped limit the search AND acted as a detector/stopping-condition. But was this novel? After all, we know that rows of depths were formed in Polish hands too, to figure the particular cycles built out of the 3 cycles produced by the practice of double-enciphering the indicator. The French technique of getting out the (non-steckered) enigma wheels surely used the same depth-centric, cribbing method, no? So, it was not a "breakthrough" (to try something from one cryptanalytical area in another). We need to be shown that the "logical" language (motivating menu-making and menu-applying) when testing for "contradictions" that implied stopping rules really was what was driving the design concept. After all, the Poles' machine stopped automatically, too!

Finally, we should remember Turing "engineered" his bombe very quickly, after joining GCCS (though I suspect he was more than involved in cryptanalytical pure research from 1932, associating in secret with the GCCS/GCHQ types – as do many of even modern Cambridge math types). He joined a Dilly Knox-era organization, remember – one which had created a tradition of honing intuitions while crypto-puzzling; but none of these are really applied in the first bombe design concept aimed at "wheel setting" (only).

When you think of the first bombe, it really is just a large implementation of the Polish bomba – and I cannot see what "original" breakthroughs it really had. It's just a bigger mousetrap.

Now, I can be shown to be wrong here, given some of the comments in the Prof's book on the topic of the spider – a refined bombe looking at phase differences (that never gets to working operation, apparently). Here, we start to see that there IS original thought going on (which quickly manifests in the Welchman bombe improvements), even if the ideas did not manifest in the first model, particularly.

Posted in early computing, enigma

x-raying assange

In the US, it's getting more common for professionals to x-ray homes, during the home inspection processes used in buying/selling property. The x-ray (which might not use x-rays in fact, but other "rays") does for your house (and whatever happens to be in it at the time of raying) what a GE 3-machine's 3d magnetic resonance scan does for your insides, in medicine; or an electron microscope does for the chip-under-inspection. It builds a 3d picture… that one can then walk through, much as one walks through video-game worlds. For home inspection purposes, one walks through the places in the structure performing a virtual (detailed) inspection – looking for non-insurable issues (like evidence of water damage, mold buildup…). On a million-dollar negotiation, the $10k invested in the x-ray can bring down the price considerably… as a negotiating ploy. Even the threat of the x-ray can do wonders, since its results can IMMEDIATELY invalidate the seller's insurance…! (Meaning no realtor can wander by… since there is no liability coverage for slips.)

In the case of Assange and his self-imprisonment with Ecuador, if I were an HMC bean counter I'd have arranged to be illuminating his "living" space from the roof area of the building housing the Ecuadorian tenancy (mostly to save the cost of having a live copper cover each potential bolt hole, as designed or as scratched into the drywall – or soft plaster, rather, in Edwardian London houses – usable by the cowering rat).

Now, even a rat has rights. Why not ANALYZE the (various) semi-secret techniques being used for surveillance on one, as one sits in one's rat maze? The national-security lot within HMC absolutely hates having its spooky "methods" discussed, probed or analyzed openly. One doesn't need an exposé, note, to engender a reaction. One just needs the experiment, the hypothesis about the methods, the publicity of the attempt to catch a spook/Fed, and the stats collection that shows the reaction as folks attempt to hide the methods.

If you are a UK foreigner (albeit merely an Australian one), one has to learn to play the UK in its security-culture terms. That culture is similar (in its asinine aspects) to the one you know (the "even more" asinine Australian national security doctrine). In the US, natsec folks openly lie, culturally (when creating some cover story, as normal practice). In the UK, the security culture is one of "it doesn't exist" – a type of lie which requires a heavier hand to enforce when the uppity classes get stroppy. Of course, one can now measure the heavier hand being applied, as evidence… from which to make inferences. As a hacker trained in Australian asininity (application of the "doctrine of ninnies" – or ruling over those too ignorant to realize how the game is played), one should have no problem now "playing the UK".

If you take something from the US (e.g. an x-ray process) and apply it in the UK, one can upset the cultural assumptions. Perhaps someone in the UK can lend Assange an x-ray… and he can go scan the "surroundings". Bet you there are some interesting metal objects to be found, "physically within" the beams supporting the roofing!

Posted in assange

mathematica and cryptology

image

http://www.scribd.com/doc/7178895/Fundamentals-of-Cryptology-a-Professional-Reference-and-Interactive-Tutorial9780792386759

Perhaps it's time to get a copy of Mathematica and start actually doing these cryptanalytical techniques (rather than just reading about them, for ever). The math notation and argument form is no longer threatening, which helps!

Posted in crypto

cyclotomic cosets and the shannon limit (measure theory in the limit)

The Georgia Tech context is conventional:

image

 

image

see LDPC codes based on cyclotomic cosets

With insert from Turing, at http://turingarchive.org/viewer/?id=133&title=11

Now, consider the context for Turing: the conditions under which his search process rules in/out exceptional [factor] groups.

Ok. So this dates classified versions of BCH before 1954…

we can take Turing’s (later in the manuscript) proof that the cycles create a compact set of uniform probability densities (one per factor cycle) as equivalent to doing shannon limit analysis. Turing treats the problem algebraically, rather than numerically. What Turing wants is that each and every factor, under any possible re-labeling or re-factorization is considered (in the exceptional group) and that to each is assigned a probability density *function*.

For Turing, one requires the weak-law reasoning of measure theory, in which the momentum of the measuring device does not influence the system under observation – and which gets one to a good estimate which, provided it's beyond the (estimation-scale) threshold, enables one to conclude that the accurate convergence would also hold.

Ok, the key to seeing the correspondence was this slide:

image

 

image

see LDPC codes based on cyclotomic cosets

 

Here we see the crypto argument,  in the minimal math modeling required.

What we needed to know was that the means of generating the partitions is founded in polynomial reduction processes (the polynomial form of remainders, taught at age 10). Given that, the partitions have certain forms, each of which aligns with a type of cycle decomposition in Turing's reasoning. These decompositions align somewhat with the way in which automorphisms – of particular [de-composed] cycle descriptions – can relabel structures such as the Fano plane (as analyzed by Turing himself).

What is interesting is Turing’s characterization of why a 4-cycle is not permitted


To capture the core: rising integers, multiplied by powers of two (mod n)

image

See Finite fields
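A minimal sketch of exactly that bookkeeping – rising integers, multiplied by powers of two, mod n – using n = 15 (the N of the worked example further down):

```python
def cyclotomic_cosets(n, q=2):
    """Partition {0, 1, ..., n-1} into q-cyclotomic cosets: {i, i*q, i*q^2, ...} mod n."""
    seen, cosets = set(), []
    for i in range(n):
        if i in seen:
            continue
        coset, x = [], i
        while x not in coset:
            coset.append(x)
            x = (x * q) % n
        seen.update(coset)
        cosets.append(coset)
    return cosets

for coset in cyclotomic_cosets(15):
    print("leader", coset[0], "size", len(coset), ":", coset)
# yields {0}, {1,2,4,8}, {3,6,12,9}, {5,10}, {7,14,13,11}
```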

Was not Newman – Turing’s family-friend and mentor – a world expert in “harmonics”?

image

http://en.wikipedia.org/wiki/Gaussian_period

on Cyclotomy! (Was there not a story about Turing and bicycles, and counting…?)

image

image

See Cyclotomy and cyclotomic polynomials

The Turing angle seems worth pursuing more, given

image

http://turingarchive.org/viewer/?id=133&title=06

Though I think of U as an enigma wheel and the "conjugate by" R^-n / R^n as the relative (re)labeling of the rotating wheel's pins/pads (as they move), we can also try to see how Turing thinks of this abstract reasoning framework. He is "calculating" with permutations, and a cycle-graph of a given permutation may even be a generator (in the sense of being an element of the generator set for a Cayley graph).

In this line of thinking, one can make the connection between his notation for calculating-cycles and cyclotomic groups.

Can we see T (the number of rotors, or number of formal terms in the monomial) playing the role of N (where N would be 15, in the worked example above)?

Can we see the number of points on the wheel (q) as the GF(q), where q = 2 in the worked example?

Can we see m as playing the role of d (that number of powers of 2, each of whose values multiplies each term in the alphabet)?

When we look overall at the argument, we see Turing happening to be using coset groups to factorize, where properties of the permutations in the base group control certain (cycle) properties in the factor group.

In the BCH world, eventually one reduces the elements of the "constructed" group to those term-polynomials with certain power terms, which can be reduced to a descriptor-number of 146, say. Do we see Turing doing the analogous thing, in his unique notation, with (alpha, R^m.alpha, R^(m+p).alpha), where the R to the x is the "power term"?

Posted in crypto

From type i waveforms to type ii des

Underlying PAM and QAM is the general theory of Gaussian waveforms, linear functionals, notions of covariance and stationarity, and measure-theoretic probability theory. Used in an A2D modulator design, one can look at the resulting waveform as a kind of ciphertext.

It comes as no shock that there is ultimately a one-to-one correspondence between the plaintext and the demodulated waves. And we can view PAM and QAM as cipher methods, each method producing a unique signal point set per input.

For PAM and QAM to meet the requirements, they must not interfere with the presumptions of the general framework of Gaussian filtering by demodulators. Said otherwise, they specialize the framework. When considering such physical variables as power, position, energy and volume, their formulations enable such issues to be so balanced that the resulting medium of physics equations is isomorphic to the required mathematical treatment.

In particular, the physical rendition must have built in a specific relationship with Gaussian noise. This creates the conditions enabling convolution with filters and samplers to support added noise being separable from the original waveform.

In the world of DES, assume that k-resiliency can be recast as a set of k-from-n possible conditions on stationarity – defined in terms of the k pairwise distances having a certain covariance and a certain joint distribution. This particular certainness means that the probability of decoding error is set in line with the SNR of the plaintext in an assumed typically-normal noise model, and features a measure close to the difference of the log entropy from the Shannon limit (log to base e of 2, no?).

Let's now assume that the mechanisms for an SPN act pairwise on plaintext bits, through selection and distribution processes, by biasing codeword generation to have an excess of weight in each kth distance class that is a function of the measure unit.

Let's guess that the iterative process converges to this value by side-effect, having indirectly formed a Tunny-style rectangle with the initial chi1 set to the subkey and the round input bits randomly entered into the cells. The SPN then redistributes the cell values in an expanding matrix, as rounds multiply, aiming to meet the LDPC conditions that assure convergence at the required measure (and probability of error).

To get the ciphertext, take the final subkey through the rectangle.

This is just a mental model, assuming that the cipher-theory for type II ciphers is a discrete version of that used for type I waveform modulation.

Posted in crypto

Circular convolution

IMAGINE A CIRCLE, and on said circle you have  points labeled with a set of naturally ordered numbers. And in that set you have both positive and negative integers, only.

IMAGINE  now a second circle similar to the f-irst, but with a smaller g-lowing set.

Now consider inv(g).f.g – or f conjugated  by g

Draw f on paper. Where f is zero, draw g on transparent paper. Also label as g's zero the point of g where g intersects f at f's zero. Call this the origin. Rotate the paper that g is drawn upon so the point of g diametrically opposite its zero also intersects some point of f. Call this the co-origin.

Imagine now the circles’ points of intersection at the origin are toothed and geared. Rotate f so that the gearing causes  f minus 1 now to intersect g plus 1 at the origin, creating a difference of 2. At co-origin, each circle’s points also each rotate by 1 but their resulting numerical difference depends on the relative size of the two sets of labels and the numbers chosen for each.

Multiply the difference at the co-origin by the value of f at the origin and by the difference at the origin, and accumulate this product. Move the gearing another turn, and accumulate its product similarly. Repeat until f is zero at the origin.

Obviously we have a state space, as we wander through the lattice, with two unit generators or, better, one 2-dimensional complex unit generator.

Update your firmware now and replace the circles with Riemann spheres, doing the same convolution in n-space.
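And, tying back to the idea that convolution becomes multiplication in Fourier space, a short numpy check that the geared-circle (circular) convolution is pointwise multiplication of the two circles' spectra (the smaller g-set here is simply zero-padded to match f):

```python
import numpy as np

f = np.array([1.0, 2.0, -1.0, 3.0, 0.0, -2.0])   # labels around the bigger circle
g = np.array([2.0, -1.0, 0.5, 0.0, 1.0, -0.5])   # labels around the smaller circle, padded to match

def circular_convolve(f, g):
    """Direct circular convolution: accumulate products as the geared circles turn."""
    n = len(f)
    return np.array([sum(f[k] * g[(i - k) % n] for k in range(n)) for i in range(n)])

direct = circular_convolve(f, g)
via_fft = np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)).real   # convolution theorem

print(np.allclose(direct, via_fft))   # True
```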

Posted in coding theory | Leave a comment

From G8 to automorphisms

G8 is a fun group – mostly because it's easy to see how the state machine description aligns with simple math. But from this simple case we get to see powerful concepts, also related to graphs of state machines generally: inner and outer automorphisms.

We saw Turing analyse a particular random matrix using Markov chain ideas. For inputs to a rotor machine described by an ensemble of random variables (one per next input), his Cayley graph of the Fano plane for octonions specifies the multiplication operator for each case, specifying the action in general. The distribution of inputs and outputs is unchanged. Given 2 terms to be multiplied, the plane (oriented with clockwise polarity, say) indicates the product.

In permutation theory, we commonly encounter 3-cycles. His 2 generators i and j were the gating symbols for all the states in his machine, computing the possible following states, of course. For G8, with 1, 3, 5, 7 as the generators, for two terms in a 3-cycle it is easy to compute their product: 3*5 is 15, which mod 8 is 7. From any two, we get the third. And as for Turing's case, one can picture the state-space version of the specifications as a state machine in state 3 getting gating input 5 (vs 7 on the other branch), which leads to state 7.
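A quick sketch of that mod-8 bookkeeping (reading G8 here as {1, 3, 5, 7} under multiplication mod 8):

```python
elements = [1, 3, 5, 7]

# Cayley table for multiplication mod 8: from any two non-identity elements we get the third.
for a in elements:
    print([a * b % 8 for b in elements])

print(3 * 5 % 8)                               # 7, as in the text
print(all(a * a % 8 == 1 for a in elements))   # every element is its own inverse (Klein four-group)
```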

When we look at an automorphism, we just think of inner automorphisms as simple relabeling. If there were 16 generators, we might call '10' the hex letter 'A', for all it matters to the labels for states. (We must simply be consistent.)

For outer automorphisms, it's more a case that we are relabeling the edges. Now, the rules on re-labeling are more involved, as there are combinatorial repercussions and constraints, since multiple edges flow from a state, one per generator, and we have "trajectories" whose labeling must retain consistency. There are more constraints, that is, almost multi-dimensional in nature: which makes re-labeling rules harder to find. But having found them, they are then more powerful acts of relabeling, useful for searching combinatoric spaces in some preferred (e.g. optimal) ordering.

Posted in crypto

non singularities; densities; joint Gaussians; Newman topology for cryptanalysis

A singularity is something you hear about in scifi. But in crypto, it's less abstruse. Either you can invert a matrix or you cannot. If you can, then your matrix was non-singular. Once you can invert, your model of a deterministic process can now logically do such things as "roll back" (by doing all the steps taken to date, backwards). Inverting the matrix gives you what you need to go backwards: a step back at a time. Of course, all this going back and forward now sounds a lot like the world of ciphering.

What we have learned is that the model given above is true — up to a point. But, one can fudge. To understand the notion of fudge and reduce it to a concept we have to grasp the notion of linear independence – and contrast that property with statistical independence.

If a row of a matrix is linearly dependent on other rows, this just means that we have more rows than we need – were each to contribute something necessary and in some sense row-unique to the definition of the Borg-matrix collective's deterministic process. We could refactor things (to remove a Borg "drone" row) to get to a smaller set, that is. If one inverts a matrix with more rows than it strictly needs, then in fudge land the result is a quasi-inversion whose resulting dimensionality will be LESS than the dimensionality of the source matrix. Such results are more fun than they might sound, especially in crypto land.
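A small numpy illustration of the "fudge" (plain linear algebra, nothing cipher-specific): a matrix with a linearly dependent row has rank below its row count, and the Moore-Penrose pseudo-inverse is the quasi-inversion that lives in that smaller-dimensional space.

```python
import numpy as np

# Three rows, but the third is just row0 + row1: one Borg "drone" row too many.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 3.0, 1.0]])

print(np.linalg.matrix_rank(A))        # 2, not 3: the matrix is singular

# A true inverse does not exist; the pseudo-inverse is the "quasi-inversion".
A_pinv = np.linalg.pinv(A)
print(np.linalg.matrix_rank(A_pinv))   # also 2: the fudge lives in the smaller-dimensional space

# It still acts as an inverse on the part of the space the matrix actually spans.
print(np.allclose(A @ A_pinv @ A, A))  # True
```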

One of the fun things one can do with a ciphering process is to model it as a random process – represented concretely using a matrix and matrix operations. This doesn't mean the output is non-deterministic or "random"! It means we use Gaussians as the wave functions; and from those we then consider possible joint Gaussian-ness as a kind of inner product space. In this computational framework we are now dealing with statistical independence, and the notion of covariance – that fudgy thing, from above.

Joint Gaussian-ness is fun because it's very Turing-esque – it's that special set of "constraining constraints" that guides a cryptanalytical search – having encoded constraint rules constraining other constraints (e.g. a Latin square…) so as to eliminate huge parts of the topology under investigation. Joint Gaussians are very good at specifying how to eliminate search spaces!… when a "menu" of enigma contradictions now specifies a particular "joint Gaussian specification" that configures a general-purpose machine performing as a "probability-detector". Of course, the more famous name for this is the Turing/Welchman Bombe.

In the move from Enigma to Tunny, of course, GCCS got to benefit from Newman's background – a master of topology theory. At this point, folks get to construct custom topologies in specially-crafted manifolds (just ways of always measuring, intrinsically). Crypto search spaces are now regions of such topologies. The density function resulting from joint Gaussian modeling takes the resulting constraints and forms up a custom algebra (over a custom decibannage, when doing the numerical stuff) – for ruling regions in or out for more detailed searching (or at least deciding in which order to search the combinatorial explosion, given the likelihoods).

Posted in colossus, crypto, early computing, enigma

SCVMM compliance for PCI auditing evidences

The WSUS integration with SCVMM cloud takes some getting used to. It’s what you know, but different.

First, one performs the typical role installation, and one starts the role wizard. But DON'T FINISH IT. What needs to happen is that there is an IIS listener offering an API. But one doesn't want the standard wizard's configuration of the provider.

Second, having configured a listener, one configures SCVMM to recognize the new service provider by configuring an "update service" in the SCVMM console:

image

In the course of configuring it, one performs many of the same steps as the usual wizard, i.e. identify which categories of update package to scan for… and download from Microsoft.

Next, we get to compliance – a concept alien to the raw WSUS world. It's just the way to give a tag name to groups of updates, by "compliance" goal. One designs the compliance goal noting that it's "hosts" and fabric servers that one is targeting – not general-purpose Windows servers. And that's the key difference. Just use standard WSUS culture for the latter!

For cloud "fabric" servers/services, use the update baselines to tag sets of updates, and otherwise order a resync with the Microsoft repository:

image

First, tag particular updates:

image

and then assign the now tagged updates set (compliance profile) to particular hosts, using scopes:

image

Note that all infrastructure may be targeted, but not VMs.

Finally, we start to assess "compliance" by ordering a compliance scan (compare updates on machines against the tagged updates lists):

image

image

Finally, order the manual compliance "remediation" (apply the damn updates to a host, or designated host group):

image

In a nice feature (that's MUCH nicer than the mainstream WSUS process), one specifies audit-centric exceptions:

image

Once you have remediated, it will do another scan. You'd expect it to say compliant, no? Well, it doesn't.

If one does a manual windows update of the same update, and one then performs another compliance scan,

image

 

It still doesn’t recognize compliance.

I wonder if things are set up to consider material ONLY ONCE the host has bothered to report its Windows Update status – which happens only infrequently? Or something…

Posted in system center

Turing tiles

Let’s continue the analytical line that 1950s Turing is constrained by an inner compunction to respect his secrecy promises concerning WWII crypto – even  as he is persecuted and hounded by the UK government (probably at the beck and call of the US and its various 1950s era pograms). This means his advanced intuitions about crypto are reflected only indirectly in his work. With hindsight, we can see how he solved what be to him a mere ‘puzzle’ – of using but not formally revealing secret knowhow. Of course, this would only generate paranoia-feedback and this induce further repression (something Turing was socially incapable of factoring into his mindset); from the folks who were wont to see red-scares in the attitude itself. But then, compared to policies of planned genocide, what’s one person’s persecution?

So what has a non-singular matrix got to do with flowers and crypto? (Flowers of the petal kind not the human kind –  in case you were wondering.)

The key is to look at the 1943-era Bell Labs work on SIGSALY. Today, we would call that digital modulation – that world in which one is playing with moving points around a circle (phase transitions), orthogonal expansions (pointwise convergence), and pulse functions (circulant matrices).

It is likely that the training in communication engineering Turing got from his tour of the US voice-security systems in 1943 refined his intuitions considerably (well done US, that is). Post-war, he returns to these intuitions (obviously now well-enhanced by all the GCCS theoretical work on Tunny), which take continuous-form modulation and better understand the discrete-model equivalents used in coding or cryptanalysis – preparing the ground for the computer software age, of course; and driving the first 5 years of computer hardware design, in actuality. When it comes to using a software simulation and association to model why plant leaves bud in various symmetry-breaking ways around a stem, he gets to apply some of his crypto intuitions.

In his diffusion-reactor, Turing gets to play with non-linear dynamics. Whereas in the Tunny era the rectangle process of getting out the wheel patterns is ultimately a refinement of what he knows from his Fellowship paper (the notion of being jointly Gaussian, which comes down as a concept enabling one to distinguish types of correlation spaces), in modeling non-linear dynamics he has moved on to what non-singular matrices offer.

Earlier, we looked at his rotor-era model for constructing specifically Heisenberg uncertainty – leveraging sigma-algebras. This ultimately gets reduced to a particular adjacency matrix for the graph and the associated "designs", and the matrix can now represent the "action" of this group in a manner implementable by a Turing machine. But now that we have such a mental model, we can also consider other matrices, and the general classes thereof – of which Tunny and Markov rectangles are examples. Of course, he was on track for an engine that would characterize indeterministic algorithms.

Getting back to plant leaves on stems and the more abstract notion of being jointly Gaussian (which is the basis of cryptanalytical method), we recall how Turing built, in complex space, a lattice of parallelograms to model points on the (rotational) surface of the stem-column (off of which would bud leaves, according to the "program" of the particular plant given to its diffusion-reactor). Using his intuitions of how information must diffuse in ciphering processes (and how constraints must be imposed to induce appropriate diffusion), he took the same notion and aligned it with the symmetry-breaking properties of "programmed" leaf budding.

Modulation is fun, because one has to model not only noise (whose variance constrains the energy per quantum) but shifts in the phase of one or more noise Gaussians. Modeling such phase – and the variance of a set of such phase-offsets – was built into his plant theory, given his complex point surface over which the parallelograms of the lattice "group" could be drawn.

Now, starting to sum up: when moving from a world of phase-alignment to phase-offsets, one needs to translate a (unit) rectangle into its geometrically-shifted form (and back). For this the non-singular matrix comes to the rescue, since the lengths of the sides of the parallelogram that results (on the complex point plane) are not on the linear scale (they are geodesic, in basis). But no matter! For in Euclidean geometry they look like those a 12-year-old would draw – which fits with intuitions. Now, all one has to do is understand that the matrix that induces the unit rectangle to be so tilted and scaled (and shifted in the periodic phase space, as they wander up the plant stem) so as to make an (apparently to our eye) linear parallelogram is a non-singular matrix – formed such that the values in its columns relate to the resultant (geodesic) lengths of the parallelepiped.
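A numpy sketch of that translation (plain linear algebra, not Turing's actual construction): a non-singular 2x2 matrix whose columns are the parallelogram's side vectors maps the unit square onto the tilted, scaled cell, and its inverse maps it back.

```python
import numpy as np

# Columns are the two side vectors of the lattice parallelogram (complex plane read as R^2).
B = np.array([[2.0, 0.5],
              [0.3, 1.5]])

print(np.linalg.det(B))                # nonzero, so the map is invertible (non-singular)

unit_square = np.array([[0, 1, 1, 0],  # corner x-coordinates
                        [0, 0, 1, 1]]) # corner y-coordinates

cell = B @ unit_square                 # unit square -> tilted/scaled parallelogram
back = np.linalg.inv(B) @ cell         # and the inverse translates it back

print(np.allclose(back, unit_square))  # True
```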

Assume you have spent months looking at two columns of a Tunny rectangle (formed from cipher depths, recall), wondering how to measure their pairwise nature (trying to calculate the energy per crypto-symbol for the cipherstream – beta/bulge in decibans). Before that, assume you have spent months looking at two streams of enigma ciphertext, trying to devise a test to see if they are in depth (i.e. one might detect the degree to which they are out of phase). Eventually one might look at three columns, each two of which have a pointwise relationship, and one might look at the numbers as an encoding of geometric points – forming up sets of parallelograms rooted in column 1, spanning column 2, and culminating in a point in column 3.

It doesn’t seem hard to get inside Turing’s head. He really is NOT that smart – and not in the genius-math league. He IS incredibly astute and able to see multiple branches of math all at the same time as “representations” of the same thing. Transform between one and the other, you get a power differential – absolutely consistent with Von Neumann operator theory, sigma-algebras, self-replicating streams, etc. His particular genetic genius lay not in a special cognitive talent for math symbol fiddling but in a natural and innate ability to look inside-out – which is kinda what you need in science. It probably came from a brain abnormality, existing between the two hemispheres, with the inter-hemisphere chemical/diffusion process being slightly out of phase (giving rise to reflections within his own neural net).

Bullshit alert. In fact, red level.

Posted in crypto

ulam

One thing Turing was known to be wanting to do in his crypto work of 1939 was look for models that identified an "initial condition" (i.e. enigma wheel settings) with the notion of a measure. With luck, this combination would enable him to distinguish between one enigma ciphertext and another containing the first's plaintext 'at a slide.' Having depths of cipher for the same plaintext, one would go off and construct 'menus' of constraints that would eliminate enigma settings and whole classes of potential settings such as rotor orders. To this end, he would have naturally sought to apply graph theory (as in Cayley graph theory). Let's now look for pertinent theory in this area; it is bound to be full of 1930s metrics, Lie groups, measures, and probably not a little physics too!

image

image

http://people.clarkson.edu/~ebollt/Papers/stochas17.pdf

citation for following quotes, below

Our applied-math training from Susskind and Gallager has enabled us now to read arguments in measure theory. So, if one wants raw theory about "ensembles of initial conditions" we seem to be on the right track when looking at:

image

From the deterministic delta (a theoretical function, recall), folks move on to the stochastic equivalent, getting to

image

and we get to explicitly see inner products whose terms are PMFs, over some basis

image

Perhaps just read the paper… rather than my summary! Every line is good!

If it's too hard, start here:

image

http://www.scholarpedia.org/article/Hyperbolic_dynamics

Posted in crypto | 1 Comment

rotor machines as table shufflers; anderson hashing throwaways

Having finally got a bit of a handle on what rotor machines are (or were) all about, the following line on RC4 threw me:

image

image

image

 

image

see Protocols

If you take a rotor-machine to be an enigma-class design, then I suppose it's right. All the mechanism does is choose rows in a compound rod-square – whose per-wheel squares are just pre-computed permutations (or shuffles) of 26 chars, "computed" by the conjugation of the wheel wiring by the fixed diagonal (or steckered diagonal).

But it's now fun to think of RC4 as a giant table, being shuffled, a pair at a time!
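To make the "giant table being shuffled a pair at a time" view literal, a bare-bones RC4 sketch (standard textbook RC4, shown only for the shuffling structure, not as an endorsement of RC4):

```python
def rc4_keystream(key: bytes, nbytes: int) -> bytes:
    # Key scheduling: one full pass of pairwise swaps over the 256-entry table.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]

    # Output generation: keep shuffling the same table, one swap per output byte.
    out, i, j = [], 0, 0
    for _ in range(nbytes):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

print(rc4_keystream(b"Key", 8).hex())   # first keystream bytes for the key "Key"
```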

Concerning a 1-round hash function (note constraints) as a specially-crafted block cipher, I don't recall: for a hash function, does a 1-bit change in the input ensure 50% of bits change in the output?

The particular picture reminds me of the kind of things the folks in Ottawa do.

Posted in crypto

M-209 as counter

Crypto AG is infamous as a company that produced broken cipher designs. Reportedly, it produced endless design variations to order – abusing its trusted relationship with the customer to "orient" the design towards one with particular cryptographic properties – as deceptively engineered in concert with the US. Now, whether that 'internet' story is true or not, I have no idea (and apologies up front, if it's a gross mis-characterization). If it is even a tenth true, shame is in order – given the impunity.

So why lead this article with a comment on Crypto AG if the machine we look at is the mid-1940s-era M-209? Because the company goes on, post-WWII, to use the same principles of operation and design. So let's look at the grand-daddy, and see what was worth having.

 

image

From http://www.nf6x.net/2009/02/converter-m-209-b/

Let’s use Tunny-era formalisms to characterize the machine’s internal operations. Obviously, its external operation is to produce a stream of numbers from 1-27, each used with a ceaser cipher to shift the alphabet by that number of places, for each input character.

Internally, the 6 code rotors are obviously 6 pin wheels – each one essentially the same as a basic Tunny chi wheel, and having rather similar "wheel patterns". We thus have a delta-X alphabet of 2^6 = 64 characters (rather than the 2^5 = 32 in the Tunny machine). Now assume a late-Colossus-era attack… which aims ultimately at enabling a person to look down an ordered list of characters indexing the 64 numbers of the 64-count – having used several conditional-probability "runs" to home in on the "characteristic" count, founded in a correlation-metric theory.

Assume next that the 64-count’s row characters, when listed vertically in binary, have some obvious gray-code ordering – when looking vertically at each of the 6 impulses of the “index” set (the input codewords) onto the counters (the output codewords) As during Tunny-era, the reason for said ordering of delta-D counts is to enable the human to figure which assumed settings – of any particular Xijk pinwheels – are not mutually consistent with the amplitudes and relative-placement of amplitudes of the count’s 64 components. As in Tunny-era, gain, this gives one a hint of which runs or which next-best counts (from a previous run) to choose as the next most likely setting candidate – given slides and anti-slides within the wheel-patterns.

Now, it would be tempting to model the M-209 drum in Tunny Phi-wheel terms, where the combination of the Chi action and the lugs acts as 27 TMs (total motors). But I don't. I choose to model each bar of the drum in sequence as the next of 27 Y Tunny-cipherstream inputs. We do this because the basis of Colossus is of course conditional counting (and, from that, computation of the spectral density for a given Bayesian network). And what do the combined actions of the M-209's processes do?… but ultimately just add those 27 "motor" 1-bit outputs; forming two sums: the number of 1s, and the number of 0s in the binary form of the integers 1..27.

Well if that doesn’t suggest that we go look at the “excess of dots” in a stream of cipher (I mean M209 additive key output), I don’t know what does!

We view it as cipherstream (vs. motor) because, in the Colossus-supported attack era, we know that setting/breaking did not ultimately rely on depths (or plaintext cribbing). From either pure key or cipherstream, one could ultimately mount the spectral attack on the correlations – forming up thereby the custom filter that modeled and constrained the possibilities (enabling the optimal-decoding world of most-likely detection to apply). So, let’s take from history what works — and reapply it; since this is probably what happened!

Now, in an M-209 attack based on Tunny-attack principles, we know that given a successful break-in attack on X1X2 one can thereafter view X1X2 as “parity bits” for the unknown X3X4X5 information bits yet to be broken. This starts to sound a little more like modern coding theory, of course. But now, in cryptanalytical vs. coding terms, switch one’s viewpoint, whereupon the X1X2 constraints further constrain the assumptions made about X4X5 (giving multi-dimensional conditional runs and thus multi-dimensional conditional densities). The Colossus-era filter was a detector for the (target) machine-induced correlations that its mechanisms or operator practices introduced between the underlying impulse streams in a delta’d stream (key or cipher) – but ONLY IN THE CONTEXT of known, generic impulse-stream correlations in the (a priori) plaintext alphabet (once delta’d, of course). In Bayes form, P(Xi | Y) ∝ P(Y | Xi)·P(Xi). To this one does the usual odds trick to construct a distinguisher, reducing all this to the log-likelihood scale – from which one then computes the counting set-total – for n sigmas, etc., etc., for the N characters of actual cipher.
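
A sketch of that odds trick, with illustrative numbers only (the 0.55 bulge, the run length, and the dot count are made up): each character contributes a log-likelihood increment, the increments are summed over the N characters, and the raw dot excess is also expressed in sigmas of a uniform stream.

```python
# The usual odds trick: per-character log-likelihood ratio between "setting right"
# (biased toward dots) and "setting wrong" (uniform), summed over N characters.
import math

p_right = 0.55        # assumed dot probability if the candidate setting is right (illustrative)
p_wrong = 0.50        # dot probability under a wrong setting (uniform)

def llr_score(dot_count, n):
    cross_count = n - dot_count
    return (dot_count  * math.log(p_right / p_wrong) +
            cross_count * math.log((1 - p_right) / (1 - p_wrong)))

N, dots = 1271, 680   # illustrative run length and dot count
print(f"log-likelihood score: {llr_score(dots, N):.2f}")
sigma = 0.5 * math.sqrt(N)                       # sd of the dot count under uniformity
print(f"dot excess in sigmas: {(dots - N/2) / sigma:.2f}")
```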

Now, doesn’t this imply that, for the M-209, one should look at each sequence of 27 additive characters as a kind of depth? Hmm.

Now, would collecting such radio signals and doing such deciphering be worth it (given the tactical-grade communications)? That is surely the same question as: was it worth bothering to collect tactical Enigma messages (in WWII and after)? If the goal is to build an intelligence picture, then the sum of the tactical fact-flow gives you a picture of the “support” (think of W3C culture producing low-grade, poor-compliance technology releasing personal facts in all directions – aligned with Google searching and Brin’s background in nearest-neighbor decoding – rather than WWII tank #3’s bearings needing an oil job with no local oil supply, please send some…). When you use that fact-picture and combine it with a set of inferences about resource constraints and overall capabilities, intent, industrial capacity, etc., one sees the “intelligence” value (and its inferential math!). If tank #3 has no supply of oil, oil production at the best of times is O, and we just bombed the oil tankers and the Romanian synthetic-oil plants, then… the war is shortened by a factor of zeta (probably), given the evidence.

Finally, what is interesting is that ultimately the M-209 is a counter. Given a current number X in base 64, and given 27 binary functions from base 64 to base 2, where the (only-)2-lug model constrains the inner product of X with the subspace of 27 such function-vectors, we then get a correlation measure as X is computed against each of the 27 binary functions – much as one can project a vector onto each of the vectors of a defined subspace to get a measure of non-linearity (non-correlation with a linear model).
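
A toy model of that “counter” view, with made-up lug settings (two lugs per bar, chosen at random here): the 6 pins form a 6-bit vector X, each of the 27 bars carries lugs against 2 of the 6 wheels, and a bar contributes 1 to the displacement whenever one of its lugged wheels shows an active pin.

```python
# Toy "M-209 as counter": displacement = how many of the 27 bars engage, given pin state X.
import random

bars = [random.sample(range(6), k=2) for _ in range(27)]   # illustrative 2-lug settings

def displacement(x):
    return sum(1 for lugs in bars if any((x >> w) & 1 for w in lugs))

x = 0b101100                                               # one 6-bit pin state
print("displacement:", displacement(x))                    # an integer in 0..27
```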

Hmm. Perhaps the “mechanism” has a cryptanalytical role too. Could one imagine a million Chinese soldiers in the Chinese/American war (aka the Korean war) acting as a million counters – in the evening time! – using some simple-to-manufacture-locally (from twisted grass, or extruded rice, or something) emulator?

Why not? People power always wins, ultimately.

Now what was I thinking recently, about some particle collider driving down the ability to detect ever-smaller probabilities of error, so that counting the explosion of (constrained) combinations is reduced to a polynomial-time search?

Posted in crypto

more on Signin

image

https://yorkporc.wordpress.com/2011/04/29/nsa-signin-via-cryptome/

We have more context now to understand the technical information released in the signin disclosures.

Let’s assume the phrase “mixed wiring” (for the separator) means that unlike all other (removable) rotors, 1 input from the left maps to multiple outputs on the right.

This gives us a system with a sub-system on the left and another on the right – of different concepts and design contributions.

On the left, we have a character input being transformed using a compound rod-square, where each conjugating rotor has, as its stepping “motor-rotor”, the rotor to its left. The motor concept is one in which, for each 5 transitions FROM a notched position in the motor, the controlled rotor steps once, being otherwise “limited” by the motor. There is ultimately a “periodicity” of stepping, built from a cascade of notch transitions. This machine is NOT like Tunny (trying to create an aperiodic convolution) and it is not like the T.52 (using impulse permutation to interdict plaintext cribbing). It’s much more basic and classical. On the other hand, its rotor-cage concept is more “involved”, and feels (when combined with the concept on the right) like a mini-Sigaba concept.

On the right, we now have more than 5 inputs – due to the “mixed-wiring” effect of the separator. Presumably this is a Rowlett feature, reusing a little of Sigaba design theory (in a much smaller machine!) in order to create a stream of PMFs that input to the right-hand side. This contrasts with the left-hand side, whose inputs were “characters”, note. In Sigaba, the PMFs control stepping. Here, folks are creating an additive keystream directly (rather than creating a “motor” for the Sigaba enciphering rotor cage).

Now the mechanism on the right is two fast rotors flanking a couple of Enigma rotors. Recall that an operator is to reject any indicator (conveyed under an orthogonal latin-square indicator-protection cipher) that would set the 3 fast rotors to the same setting. The purpose of the fast rotors is to stecker – and thus multiply the period of the next inner rotor.

What you see is someone in the Rowlett school still dominated by Sigaba-type theory, even as they all play with features of the T52 captured in N. Africa. However, note how their instruction for permuting the “band” of output didn’t feature what the T.52 had – which interdicted the compromise of long plaintexts from breaking the wheels, which NEVER change in the T.52 (but then they didn’t need to, given the premise of the function and the strength of the permutation generator). In both Tunny and Sturgeon, the 1930s-era German designers are clearly thinking more in terms of modern theory (of convolutional coding, etc.), whereas Rowlett and co. are still thinking in terms of PMFs. Of course… we also know which theory won that round of the cipher wars, though!

Posted in crypto

Turing’s next engine; beetle research

In his 1950 works, it’s clear Turing was striving to define his next class of universal computational engine. Another theoretical machine (necessarily), it was to capture the principles of the computational processes only discovered in that timeframe. Since there was considerable overlap with crypto and ciphers, was that enough for the US to kill him like anyone else who didn’t contribute to the state? (yes…!)

In his rambling but entertaining book, Mainzer gives us a highly intuitive comprehension of non-linear science. In the part devoted to the period of Turing’s 1950s work, he builds a model of life processes – from a reaction-diffusion system. This engine is a step ahead of the improbability drive, and falls into the same category of theorizing. Of course, in crypto, such is not mere theory.

image

image

image

I find the theory very much a “crypto” model. We have channel-coding gains, losses due to noise, and errors due to incorrect copying. We have random variables that are proportions, that calculate locally, but whose calculations are then subject to re-normalization against the average computed over all other members of the RV set. A motive is given for the macro-level constraints on the set of RVs as a whole, in terms of conserving energy. Then we have at least 3 different types of generator scheme: one for producing energy itself, another for fabricating calculating-engine nodes, and those running calculations through the engine. The self-reproducing property knows how to replicate at least those three groups of multipliers. Self-reproduction is due to the self-balancing diffusion of concentrations, which gives rise to the micro/macro interactions that drive the dynamic, graph-based computation theory.
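
For flavour only, here is a minimal one-dimensional reaction-diffusion step in the Gray-Scott form – not Mainzer’s model, and the parameters are just the usual illustrative ones – showing the “local calculation plus diffusion” engine the paragraph describes:

```python
# Minimal 1-D Gray-Scott reaction-diffusion sketch: local reaction plus diffusion.
import numpy as np

n, Du, Dv, F, k = 200, 0.16, 0.08, 0.035, 0.060
u, v = np.ones(n), np.zeros(n)
u[n//2 - 5 : n//2 + 5] = 0.5              # seed a small local disturbance
v[n//2 - 5 : n//2 + 5] = 0.5

def laplacian(a):
    return np.roll(a, 1) + np.roll(a, -1) - 2 * a

for _ in range(5000):                     # diffuse and react, repeatedly
    uvv = u * v * v
    u += Du * laplacian(u) - uvv + F * (1 - u)
    v += Dv * laplacian(v) + uvv - (F + k) * v

print(np.round(v[::20], 3))               # the concentration profile that has self-organized
```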

One sees Turing’s crypto experience all over this, down to the fixed-point attractors found in Enigma-era beetles!

Posted in crypto

a Turing text book

 

Turing’s university text book – prep for “principles of communication” (and crypto):

image

image

image

http://archive.org/stream/calculusofobserv031400mbp#page/n19/mode/2up

You can see the genesis of the thinking (and notation) on Tunny:

image

We don’t think too much about difference tables and interpolation these days (unless your first tutor was called Samet…). The general form can be reduced to (Tunny impulse) bit strings:

image

Just look at 10110 (the first 5 digits of my binary value u) as 1.0110, where the 4 digits after the point are “4 digits’ worth of accuracy” (requiring 3 difference columns to describe them).

Remembering Babbage’s difference engine (for formulating such tables), we can even see a bit of a Turing machine’s read head (moving left and right on the tape, helping to print out the tables) as E-1 and E:

image

Of course, in binary, this works quite LITERALLY (where delta is xor). It’s interesting to see the title of the section, “symbolic operators” – consistent with (i) the operator theory of the era, and (ii) the focus on logical foundations too.
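
A tiny sketch of such a difference table over GF(2), where delta really is XOR, run on the example string 10110:

```python
# Difference table in binary: each row is the XOR of neighbours in the previous one.
def delta(bits):
    return [a ^ b for a, b in zip(bits, bits[1:])]

row = [1, 0, 1, 1, 0]                 # the example string 10110
while row:
    print("".join(map(str, row)))     # prints 10110, 1101, 011, 10, 1
    row = delta(row)
```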

One can now sort of see Chi1 [m] + Chi2 [n] = 1271… for Colossus scanning, etc.:

image

It makes very “intuitive sense” that (1 + delta) to the power x (applied to a function) is now a “specification”, where x is the movement on the tape that helps “generate” the magical multiple (a log…).
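
The standard symbolic-operator identities behind that phrase (a sketch of the textbook relations, not a quotation from the book):

```latex
% Shift operator E and forward difference \Delta, with E = 1 + \Delta,
% so that E^x expands binomially (Newton's forward-difference formula).
\[
  E\,f(a) = f(a+1), \qquad \Delta f(a) = f(a+1) - f(a), \qquad E = 1 + \Delta,
\]
\[
  f(a+x) = E^{x} f(a) = (1+\Delta)^{x} f(a)
         = \sum_{k \ge 0} \binom{x}{k}\, \Delta^{k} f(a).
\]
```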

Interesting to see logs (and probabilities of error) and Turing tapes/readers come from a 1925… book! Even 1 + delta (the American AH form of BP’s (.5 + bulge)). As we say, power series (which is what difference tables are all about) come naturally in terms of the form (1 + x).

It’s also interesting to see how Turing was taught to think about probability (in terms of differences, and Newtonian methods). There is no notion of probability per se – it’s just a distribution function, to be used in Newtonian differencing methods (which iteratively calculate integrals, without using algebraic transforms).

image

Of course, this all comes back when looking at both communication theory (wanting a numerical solution to conditional-probability modeling) and thus the crypto channels between wheel patterns and depths…

While we cannot really assert an “instrumental” connection between the following and the 32-count of the Colossus-assisted method of attack on Tunny (particularly since in WWI cryptanalysis folks already applied frequency/spectral attacks)…

image

…we can perhaps note how Turing was oriented by the *argument* – that the deviations are the sum of lots of smaller deviations (per factor). Of course, we saw him playing with these ideas in his fellowship dissertation.
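
The argument in miniature, as a quick numerical sketch (uniform per-factor deviations are an arbitrary choice here): sum many small independent deviations and the total settles toward the bell curve with the predicted spread.

```python
# Sum of many small per-factor deviations: mean ~ 0, spread ~ sqrt(factors/3) for uniform(-1,1).
import random, statistics

def total_deviation(factors=100):
    return sum(random.uniform(-1, 1) for _ in range(factors))

samples = [total_deviation() for _ in range(20_000)]
print("mean  ~ 0    :", round(statistics.mean(samples), 3))
print("stdev ~ 5.77 :", round(statistics.pstdev(samples), 3))
```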

It takes a little getting used to, looking at calculus formulae and “thinking” probabilities! But, armed with Hilbert spaces and L2 functions, it’s not too hard!

Posted in crypto

Nash life neighborhoods pointed out by cryptome; Talbot and Welsh

image

2013-0071.png

One of the fun things about real analysis is that it teaches one that the Fourier transform acts on formulae, much like a programming macro. It builds in one an easy intuition that such transforms are acting on functional forms themselves – in the style of Laplacians in general. This contrasts with schoolboy intuitions about mere Fourier series – being merely a sequence of sin and cos terms that sum to a number.

Now, given we are talking of sums, we reflect on what we have learned: that the sum may be a sum of tiny functions rather than numbers; and said functions may be probability mass functions. They can still sum up (in a functional adding calculus). In particular, a mass function might factorize a density function – describing an independent sequence of bits.
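
In its simplest independent-bits form, a minimal sketch (the per-bit probabilities are illustrative): the joint mass function over bit-vectors is just the product of the per-bit mass functions, and it still sums to one.

```python
# A joint PMF over 3-bit vectors, factorized as a product of per-bit PMFs.
from itertools import product

p_one = [0.5, 0.7, 0.1]                    # illustrative P(bit_i = 1)

def joint(bits):
    prob = 1.0
    for b, p in zip(bits, p_one):
        prob *= p if b else (1 - p)
    return prob

total = sum(joint(bits) for bits in product([0, 1], repeat=3))
print(joint((1, 0, 1)), "; all outcomes sum to", round(total, 10))   # 0.015 ; 1.0
```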

In the Nash concept, in the quoted picture displayed above, the designer’s combiner algebra builds upon the idea of a functional calculus. In his concept, he partitions a whole up into tiny functions by treating and computing “neighborhoods” – the functional forms for probability measures, or functions thereof. To this he adds associated notions of distance and energy. Of course, this is all modern crypto – playing with indeterminism.

With all this in mind, one can conceive of a general cipher “method” as one that exploits an automorphism group (e.g. that of the Fano plane). In support, such a method might call for generators of automorphisms. These are a type of functional program that chooses the order of functions. It permutes them, in other words, or calculates a function that does the permuting.

In the cryptome-distributed book from Talbot and Welsh on Complexity and Cryptography, basic computer science is taught (in the style suiting math students). In one part, we get a good analytical presentation of what we are trying to say, above:

image

 

image

We must remember that the classical Turing machine’s tape is split into a 1-d list of squares, but ANY single grapheme can be written into a square. The particular grapheme could be a “single” 15-digit, multi-dimensional vector for all it matters, encoded in the 45 strokes of a single Japanese character that distinguishes that grapheme-represented value from all others in the same value-type.

Using Turing-era terms from his crypto period of 1939 (vs. his coding period of 1932), assume a permutation group identifies the classical cycles within an Enigma rotor’s alphabet/upright. Such values may be multiplied or conjugated, rather, so that the composite value has factor cycles. Expression trees exist, where one considers each “terminal node” in such a factor graph to be a small permutation mass function in a Markov space – where a cycle such as A-D-T-Y is now a trajectory description within a state space bearing a probability measure, etc., and where the rules of the space are defined by a particular Turing machine whose state sets, i.e. equivalence classes of states, control different types of convergence and limiting functions.

One can match all this apparatus with (log-)likelihood thinking – a particular type of measure. Having modeled the data sequence of plaintext in terms of its “intrinsic” log-likelihood metric, can one now express a “metric” (of an input bit-sequence) as the “component metrics” for each bit, re-ordered in a manner controlled by each round key according to the permutation-program generated by the previous round, and whose output is a permutation-program for the next round that permutes the permutation of the state? (Isn’t permutation group theory wonderful – all math concepts being… a permutation!) In computer-science terms, the oracle can replace a term with a program that computes the term. The trick is to ensure that the means used to compute that program (a permutation) is itself polynomial in time complexity, and that its execution also uses polynomial time – to get to Turing-reducibility.
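
Here is a toy sketch of that permuted-permutation round idea – purely illustrative, not DES or any real cipher: each round applies the permutation handed to it by the previous round, and deterministically derives the permutation the next round will use from that permutation plus the round key.

```python
# Toy rounds: apply this round's permutation to the per-bit "metrics",
# then derive the next round's permutation from (permutation, round key).
import random

def derive_next_perm(perm, round_key):
    rng = random.Random(f"{perm}-{round_key}")   # deterministic in its inputs
    nxt = perm[:]
    rng.shuffle(nxt)
    return nxt

state = list(range(8))             # eight per-bit "metrics" (just labels here)
perm  = list(range(8))             # start from the identity permutation
for round_key in (17, 42, 99):
    state = [state[i] for i in perm]              # permute the state
    perm  = derive_next_perm(perm, round_key)     # ...and generate next round's permutation
print(state, perm)
```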

For Nash, he uses the infamous Life neighborhood concept to generate such generators – aiming to produce those conditions that induce the fractal conditions (defined ultimately as inducing perfectly self-replicating patterns).
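
For reference, the Life neighborhood rule itself, in miniature (shown only to pin down what the “neighborhood concept” means mechanically – no claim that this is Nash’s construction):

```python
# One step of Conway's Life on a small toroidal grid: each cell's next state is a
# function of its 8-cell Moore neighborhood.
def life_step(grid):
    h, w = len(grid), len(grid[0])
    def neighbours(r, c):
        return sum(grid[(r + dr) % h][(c + dc) % w]
                   for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0))
    return [[1 if (grid[r][c] and neighbours(r, c) in (2, 3))
                  or (not grid[r][c] and neighbours(r, c) == 3) else 0
             for c in range(w)] for r in range(h)]

glider = [[0,1,0,0,0], [0,0,1,0,0], [1,1,1,0,0], [0,0,0,0,0], [0,0,0,0,0]]
print(life_step(life_step(glider)))     # the glider, two generations on
```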

If you just think in terms of permutation groups, it’s really a lot easier – as that notation lets any term be a lambda expression – being an upright cycle, a Fano-plane design, an automorphism, or an expression in the automorphism group. It’s important that any term is also a function, which is also a term (whose transmutability is the basis of the Church/Turing/Kleene lambda calculus, of course).

Now, in Turing’s time, with L2 spaces (a space of mass functions from [-T/2, T/2] → C^n), the indeterministic algorithm would factorize the density in terms of an L2 decomposition. For Nash, he wanted to move forward to consider also the energy distance assigned to (or generated by) different neighborhoods. Thus he used a permutation-generator to choose which neighborhoods are applied, and which at the next stage of processing act to “generate a further permutation program” that chooses the next round’s neighborhoods – for each point on that next round’s plane. Thus the energy decomposition is a function of the distances being “extrinsically generated”; and ultimately this all drives the fractal generator to converge.

You see the same thing in DES, where each coset generator is working on the dots or the crosses and estimating its extrinsic metric – a vector of real numbers (logically) that then goes into a convolutional-code generator to spread the distance from the estimate to the real metric. Multiple rounds generate maximal distance in the dual coding world, inhabited now by a cascade of error terms, creating a unique noise-sequence that is not itself describable in terms of orthogonality or non-orthogonality – being a dynamic sequence!

The Welsh book stays away from anything too controversial (and useful) to actual crypto! It doesn’t give any crypto-centric examples of oracles in the wild (such as those used in the Tunny era, during which reasonable-time estimation could do in time t, with confidence level d, what could not be done by brute force in the time available). And it certainly doesn’t apply that to modern realtime analysis of security-protocol messages, to exploit their (IETF designed-in) flaws (the whole IPsec half-IV scandal, used as a CBC oracle).

Posted in crypto, DES

fried fish

Written in March 1944,

image

http://www.codesandciphers.org.uk/documents/wffishnotes/wf001.htm

It’s interesting to note the American (army, not navy) observer’s throwaway line.

Reading this OCR, rather than the manuscripts, is like code breaking – given all the missing pieces!

We see the core practical nature of the orientation, teaching the knowhow behind the core skills:

image

http://www.codesandciphers.org.uk/documents/wffishnotes/wf001.htm

This implies BP training was drill-based, not academic.

Next, we see another throwaway remark, implying AH is very much involved in Tunny analysis (itself).

image

http://www.codesandciphers.org.uk/documents/wffishnotes/wf001.htm

We see that the monthly (at that time) breaking of Chi wheel patterns used methods known to the US – being Turingery, of course. Let’s assume that this refers to the detection of random vs. non-random signatures in streams.

image

http://www.codesandciphers.org.uk/documents/wffishnotes/wf001.htm

Next, we hear the author implicitly comparing what he sees – concerning the complexities and the reasons for them – with things he already knows… rooted in key management:

image

http://www.codesandciphers.org.uk/documents/wffishnotes/wf001.htm

In motor breaks relevant to a stopping/stuttering effect, one sees reference to “old” US skills, and the use of erasure channel theory:

image

http://www.codesandciphers.org.uk/documents/wffishnotes/wf001.htm

We see the log scale and bulges reasoned without respect to Tunny (just raw dot excess):

image

http://www.codesandciphers.org.uk/documents/wffishnotes/wf006.HTM

We see the rationale given for using the 1 + lambda form (vs. .5 + excess):

image

http://www.codesandciphers.org.uk/documents/wffishnotes/wf007.HTM

We get the American/Lincoln power-centric way of thinking: terms basically of s²/N for probabilistic decoding:

image

http://www.codesandciphers.org.uk/documents/wffishnotes/wf007.HTM

Interesting to see someone clearly familiar with American practice mixing newly-learned British terminology with his local vernacular:

image

http://www.codesandciphers.org.uk/documents/wffishnotes/wf007.HTM

This next set of phrasing tells one that the US is not reading the signals from the wire, and that the techniques described were not gleaned from thinking about Tunny specifically.

image

http://www.codesandciphers.org.uk/documents/wffishnotes/wf008.HTM

Concerning motor rectangles, one sees some of the core design logic (based on column matching) – treating the components of a wheel pattern as a channel:

image

http://www.codesandciphers.org.uk/documents/wffishnotes/wf011.HTM

One sees the author’s assumption that some US machines can do “3 wheel runs” (count …) and that the terminology being used is not unfamiliar to his US readership (with little familiarity with UK terms, no doubt).

image

http://www.codesandciphers.org.uk/documents/wffishnotes/wf014.HTM

We see reference to the “film machine” – irrespective of the US means of generating Tunny streams using their geared thingy:

image

Posted in colossus, crypto, early computing

cross here, cross there, dot everywhere

The following made no sense when I first read it – a year ago:

image

http://www.codesandciphers.org.uk/documents/wffishnotes/wf001.htm

The text follows a description of those at BP who were taught Testery mental arithmetic. For a few hours, I then could not fathom how the method stated in the material quoted matched up. And of course, it doesn’t. I wrongly assumed that the two trainee groups were learning the same method (by two different teaching techniques, befitting folks’ aptitudes). Of course, the second group is being trained for converging rectangles!

What a difference a year makes!

Posted in colossus

cryptome not pissed for once; actually funny

http://cryptome.org/2013/01/spooky-emanations.htm

It’s hilarious (and well done).

The NSA facility has similar issues – since the architecture and the location geography affect the socialization aspects of movement – and thus offer tracking capabilities due to concentration. Look at the NSA facility as the key, and sensor data about traffic patterns from RFIDs sensed at road junctions as the periodic depths of (connected) ciphertext to be reverse-inferred…

Put your sensor car in the (slow-moving) traffic lane, and use its DSP to scan the RFIDs in the tires of (probable) NSA employees’ cars (as folks go to shift work, each day); figured by location and timing cross-correlation. Alternatively, bury the sensor and a $10 solar array for the power source in the dirt somewhere along a funnel road… and wander by it (and its wifi) in your car every so often to pick up the data dump….

Now do the same at major junctions, in all the bedroom community towns, and you can trace the paths, figure shift patterns, guess roles of folks even, and generally do intel (much like was done in 1941-45 on the Tunny and T.52 “nets”)

But I’m not inventing anything, Peter. Stop pretending. It’s already there, done, and working! (It’s just someone else’s boxes…)

The one thing Tunny taught us was that cryptanalysis by itself is kind of useless unless (a) it’s tied to device/people tracking up front, to get the a priori likelihoods sorted, and (b) there is intelligence evaluation after the fact, so all the bother is worth doing. Of course the after-the-fact eval feeds back to target the devices/people you MOST care about!

Posted in rant