generating minimum distance and t-resiliency, for channel reliability


It turns out to be useful to re-review some of our year 3 curriculum on coding, channels, sampling, decibels, etc., now that we have the perspective afforded to us by studying year 4 topics. Our math is strong enough now to really understand the grad-level course that Forney, in particular, was trying to teach.

https://yorkporc.wordpress.com/2012/08/05/turing-repetition-codes-and-codebook-generators/ captured what we knew then about the world of (n, k, d) codes and RM(r, m). At https://yorkporc.wordpress.com/2013/01/02/union-bound-estimate-pairwise-probability-of-error-in-tunny/ we got to understand the engineering, seeing how to take the world of Hamming bits and project it onto the Euclidean plane and the associated world of finite-energy functions. The two worlds – “math generation” on one side, and ensuring closure of the calculations when working in energy space on the other – aligned nicely.
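To make the projection concrete, here is a minimal sketch, assuming the usual 2-PAM convention (0 → +1, 1 → −1, unit energy per symbol); under that map the squared Euclidean distance between two codewords is simply 4 times their Hamming distance. The (8, 4, 4) example words are illustrative choices, not taken from the post.

```python
# Minimal sketch: mapping Hamming-space codewords to Euclidean (signal)
# space under the assumed 2-PAM convention 0 -> +1, 1 -> -1 with unit
# energy per symbol.  Squared Euclidean distance = 4 * Hamming distance.

def to_signal(codeword):
    """Map a binary tuple in Hamming space to a +/-1 point in R^n."""
    return tuple(1.0 if bit == 0 else -1.0 for bit in codeword)

def hamming_distance(a, b):
    return sum(x != y for x, y in zip(a, b))

def squared_euclidean_distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

# Two words of RM(1, 3), the (8, 4, 4) extended Hamming code:
c1 = (0, 0, 0, 0, 0, 0, 0, 0)
c2 = (1, 1, 1, 1, 0, 0, 0, 0)

s1, s2 = to_signal(c1), to_signal(c2)
dH = hamming_distance(c1, c2)             # 4
dE2 = squared_euclidean_distance(s1, s2)  # 16.0 = 4 * dH
print(dH, dE2)
```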

The engineering also showed us how n, k and d work in both generator land and engineering land. And we learned how to model probability of error and SNR.norm. While Forney’s teaching orientation was focused on reducing the gap to capacity, we were more interested in seeing how DES ECB mode works – in that L2 world – assuming that its coding structure, leveraging random permutation theory, Markov models for Cayley graphs, generators, conditional probability calculations, etc., was working to drive the probability of error below 2**-64, ensure t-resiliency AND get, in 1975, close to the Shannon limit of a 1-bit channel.
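For the “probability of error and SNR.norm” part, a hedged sketch of the usual course-style union bound estimate (UBE) for a binary linear (n, k, d) code with 2-PAM on the AWGN channel: nominal coding gain γc = (k/n)·d and P(E) ≈ Kmin · Q(√(2·γc·Eb/N0)). The RM(1, 3) parameters and the 6 dB operating point are my illustrative choices, not figures from the post.

```python
# Hedged sketch of union-bound-estimate quantities for a binary linear
# (n, k, d) code with 2-PAM on AWGN, following the usual conventions:
# nominal coding gain gamma_c = (k/n) * d and
# P(E) ~= K_min * Q(sqrt(2 * gamma_c * Eb/N0)).

import math

def Q(x):
    """Gaussian tail probability."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def ube_error_estimate(n, k, d, K_min, EbN0_dB):
    """Union bound estimate of error probability at a given Eb/N0 in dB."""
    EbN0 = 10.0 ** (EbN0_dB / 10.0)
    gamma_c = (k / n) * d            # nominal coding gain
    return K_min * Q(math.sqrt(2.0 * gamma_c * EbN0))

# RM(1, 3) = (8, 4, 4), which has 14 minimum-weight codewords (K_min = 14):
print(ube_error_estimate(8, 4, 4, 14, EbN0_dB=6.0))
```

The error coefficient Kmin is what drags the effective coding gain below the nominal γc on those P(E)-versus-SNR plots.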

It was fun re-reading those posts and the referenced material, seeing in particular how E.kmin was used and characterized, along with minimum Euclidean distance and the approach toward a capacity limit. One saw ghosts of phase-space theory, ensuring that the projection onto suitable geometries within the Hamming cube could retain the weight differences and minimum weights that translated into a closed calculation world.
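A small enumeration sketch of how those quantities connect: build RM(1, 3) from a generator matrix, read off the minimum weight and the count of nearest neighbours, and convert into minimum squared Euclidean distance under the same assumed unit-energy 2-PAM map as above. The generator-matrix ordering is the standard evaluation-code one; this is an illustration, not the post’s own worked example.

```python
# Sketch linking "generator land" to "engineering land": enumerate
# RM(1, 3) from its generator matrix, extract d_min and the nearest-
# neighbour count K_min, then report the minimum squared Euclidean
# distance 4 * d_min under the assumed unit-energy 2-PAM map.

from itertools import product

# Generator matrix of RM(1, 3): the all-ones row plus the three
# coordinate functions on F_2^3, in standard evaluation order.
G = [
    [1, 1, 1, 1, 1, 1, 1, 1],
    [0, 0, 0, 0, 1, 1, 1, 1],
    [0, 0, 1, 1, 0, 0, 1, 1],
    [0, 1, 0, 1, 0, 1, 0, 1],
]

def encode(message, G):
    """XOR together the generator rows selected by the message bits."""
    word = [0] * len(G[0])
    for bit, row in zip(message, G):
        if bit:
            word = [w ^ r for w, r in zip(word, row)]
    return tuple(word)

codewords = {encode(m, G) for m in product((0, 1), repeat=len(G))}
weights = sorted(sum(c) for c in codewords if any(c))

d_min = weights[0]               # 4
K_min = weights.count(d_min)     # 14
d_E_min_sq = 4 * d_min           # 16, per unit-energy symbol
print(d_min, K_min, d_E_min_sq)
```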

It was also interesting to see how the “factor” was characterized, reminding me of Tunny-era scoring systems – where even back then one designed DIFFERENT scoring systems for different assumptions about the probability of error (when separating Psi streams from Chi streams, say), the dottage of the day – which would influence whether the SNR had sufficient bias to allow the averaging processes – over lots of depth, such as the DES CBC blocks favored in the NSA-influenced IESG – to actually converge once one started totting up scores.
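A hedged sketch of that convergence point (my framing, not the wartime arithmetic): the accumulated score of a ±1 stream drifts like N·(2p − 1) while its noise grows like √N, so the averaging only separates hypotheses once N is on the order of 1/(2p − 1)². The dottage values and sample count below are illustrative.

```python
# Sketch of why the bias ("dottage") governs whether totting up scores
# over depth converges: mean drift ~ N * (2p - 1), noise ~ sqrt(N).

import random

def accumulated_score(p_dot, n_samples, rng):
    """Sum of +1 for each 'dot' and -1 for each 'cross'."""
    return sum(1 if rng.random() < p_dot else -1 for _ in range(n_samples))

rng = random.Random(1944)
for p_dot in (0.50, 0.55, 0.60):        # 0.50 = no usable bias
    score = accumulated_score(p_dot, n_samples=2000, rng=rng)
    drift = 2000 * (2 * p_dot - 1)
    print(f"p_dot={p_dot:.2f}  score={score:5d}  expected drift={drift:.0f}")
```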

It’s at the end of the https://yorkporc.wordpress.com/2013/01/02/union-bound-estimate-pairwise-probability-of-error-in-tunny/ post that one gets to see just how close to these ideas folks were, when wheel-breaking, back in 1944. One even sees how calculating in the sign basis (the hard numbers of Hamming space) was justified as a means to calculate the same “factor” that Forney identifies (given a particular engineering plot of P.error against SNR.norm/rho).
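On the sign-basis point, a tiny sketch of the equivalence being leaned on: map 0 → +1 and 1 → −1, and “agreements minus disagreements” between two streams becomes a plain dot product, i.e. a correlation score. The sample streams are made up for illustration.

```python
# Sketch of the sign-basis trick: counting agreements minus disagreements
# in Hamming space equals the dot product of the +/-1 (sign-basis) images.

def to_signs(bits):
    return [1 if b == 0 else -1 for b in bits]

def agreements_minus_disagreements(a_bits, b_bits):
    return sum(1 if x == y else -1 for x, y in zip(a_bits, b_bits))

def sign_basis_dot(a_bits, b_bits):
    return sum(x * y for x, y in zip(to_signs(a_bits), to_signs(b_bits)))

a = [0, 1, 1, 0, 0, 1, 0, 1]
b = [0, 1, 0, 0, 1, 1, 0, 0]
assert agreements_minus_disagreements(a, b) == sign_basis_dot(a, b)
print(sign_basis_dot(a, b))
```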

One also sees how, from practical cryptanalysis methods, the known-good hits of the model against the depths in a couple of rows under evaluation give a score contribution – and how known-good fails contribute some score, too.
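One simple way to see why fails carry evidence too is a log-likelihood weighting – my framing, not a claim about the exact 1944 score tables: under an assumed error probability p, a hit adds log((1 − p)/0.5) and a miss adds log(p/0.5), so misses contribute a (negative) score rather than being ignored.

```python
# Hedged sketch: both hits and misses contribute to the score under a
# simple log-likelihood-ratio weighting with assumed error probability p.

import math

def score(observations, p_error):
    """Sum log-likelihood ratios over a run of hit/miss observations."""
    hit_weight = math.log((1.0 - p_error) / 0.5)
    miss_weight = math.log(p_error / 0.5)
    return sum(hit_weight if hit else miss_weight for hit in observations)

# A row of depth with 7 hits and 3 misses, assuming p_error = 0.3:
obs = [True] * 7 + [False] * 3
print(score(obs, p_error=0.3))
```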

