cryptographic games – China vs US – removing the UK’s game rules


China would be well advised to consider the relevance of the “playing ball” phrasing.

Any good politician talks through both sides of her mouth – letting two audiences hear what they wish from the same words. Google plays ball, with words.

To Google, as to any American firm, this “is” a game (to be won and lost, and competed over). It has to be kept a “game”, since there IS no simple political solution. Being upbeat Americans, they get to choose between a mindless, senseless and failure-inducing world … or a game (in which perhaps politics keeps the ball at least rolling).

Of course, Google knows that the standards groups are rigged. They know that the IESG might as well be made up of US officials, beholden to the American dogma. Of course they know that policies, practices and standards are set to meet the commodity market (not the state-secrets market), and that this means the security works ONLY in the areas it’s supposed to work (and not in areas “out of scope”).

According to the American game, that both Google and Microsoft play well, the game rules are set so as to keep anything of much importance in the “out of scope” space – where normal spying techniques work.

Where China vs US seems to be enabling a solid China win is in the area of vendors, e.g. Google, who look increasingly shrill as they attempt to deny the nature of the game – and their own political doublespeak. Yes, there is NO backdoor to my electronic house locks (and no agreement with the local police to allow covert entry, with special lock codes); but two decent sledgehammer blows to the *casing* of any common, wooden house door make it “unnecessary to pick the lock”. The lock resists penetration by Harvard crypto scientists for 2h! (says the Google security marketing). It’s just “not Google’s problem” that the casing resists for only 20s against apes wielding stone-age clubs.

And so it is with Google and Microsoft Crypto.

Both firms know precisely how to sledgehammer areas of their product other than crypto, making the crypto ineffective. And of course, they spend large amounts of money marketing UK-style deceive-and-deflect security standards, ensuring that “those areas” are “out of scope” in the “mind” of the public.

Posted in rant

math and cryptanalysis – some notes

three observations about math and crypto.


1. Quaternion “Algebras”

It’s fun to look at quaternions as a special kind of polynomial sum, with terms weighted as is usual. Then, it’s interesting to see how to abstract H, the quaternions, in quaternion algebras – for different basis sets.

In light of the theory of abelianization of streams, one can feel, quite intuitively, just how crucial this is to even modern cryptanalysis.






The main point is that F can be anything (including huge number systems) good for cryptanalytical discriminant finding.
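To make the “polynomial sum with weighted terms” view concrete, here is a minimal Python sketch of my own (not from any lecture): a quaternion is just a 4-term coefficient list over some field F, and multiplication distributes term-by-term through a basis table. Swapping F (floats, rationals, whatever) is exactly the abstraction to a quaternion algebra over a different base.

```python
from fractions import Fraction

# Multiplication table for the basis {1, i, j, k}, as (sign, basis index):
# i*i = j*j = k*k = -1; i*j = k, j*k = i, k*i = j; swaps anti-commute.
TABLE = {
    (0, 0): (1, 0), (0, 1): (1, 1), (0, 2): (1, 2), (0, 3): (1, 3),
    (1, 0): (1, 1), (1, 1): (-1, 0), (1, 2): (1, 3), (1, 3): (-1, 2),
    (2, 0): (1, 2), (2, 1): (-1, 3), (2, 2): (-1, 0), (2, 3): (1, 1),
    (3, 0): (1, 3), (3, 1): (1, 2), (3, 2): (-1, 1), (3, 3): (-1, 0),
}

def qmul(a, b):
    """Multiply two quaternions given as 4-term coefficient lists over F."""
    out = [0, 0, 0, 0]
    for m, am in enumerate(a):
        for n, bn in enumerate(b):
            sign, idx = TABLE[(m, n)]
            out[idx] += sign * am * bn
    return out

# i * j = k, but j * i = -k: the non-commutativity is in the table
print(qmul([0, 1, 0, 0], [0, 0, 1, 0]))   # [0, 0, 0, 1]

# the coefficients can come from any field F, e.g. the rationals
half = [Fraction(1, 2), 0, 0, 0]
two = [Fraction(2), 0, 0, 0]
print(qmul(half, two)[0])                  # Fraction(1, 1)
```

The point of keeping `TABLE` separate from `qmul` is that a different basis set (a different quaternion algebra) is just a different table.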


2. Wave “functions”


Also fun to review even the basics of quantum mechanics, so well reviewed by Susskind.

He was able to put succinctly how a wave function is just a function of x (just as are the functions we all learn about, aged 12). It’s just that the calculation formulation for that function is an inner product, where x varies (just as it does in the functions we all…).

What he doesn’t do well is just say what Turing said: a wave function is just an array of proportions!
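Turing’s “array of proportions” line can be shown in a few lines of Python (a sketch of my own, with made-up amplitudes): normalize the array, and both the Born-rule proportions and the “value at x is an inner product” claim fall out.

```python
import math

# a "wave function" sampled at discrete x: just an array of amplitudes
psi = [1.0, 2.0, 1.0, 0.0]
norm = math.sqrt(sum(a * a for a in psi))
psi = [a / norm for a in psi]            # normalized: squared entries sum to 1

probs = [a * a for a in psi]             # Born rule: each entry is a proportion
print(probs)                             # the proportions sum to 1

# "the value at x" is just the inner product with the x-th basis vector
e2 = [0.0, 0.0, 1.0, 0.0]
assert sum(a * b for a, b in zip(e2, psi)) == psi[2]
```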


3. hyperbolic geometries and generating algebras


It’s from hyperbolic geometry that we get a glimpse of how Turing saw quantum modeling and simulation AT AN INTUITIVE level. In particular, in his On Permutations paper, we see how he leveraged just 2 and 3 (2 steps between 3 points) to create normed spaces (of 6 elements). He then argued how energy functions (i.e. Hamiltonians evolving the energy component of quantum systems over time) can be represented simply in terms of permutation groups, leveraging the projection of those functions in a hyperbolic geometry onto a projective plane that supports calculation in terms of long expressions of (ordered) swaps, using just Newman’s core topological knowledge in foundational groups and homotopy equivalence.


We got to see how conjugation in a hyperbolic geometry is the reflection operation (when the geometry is the interior, or inner space, of the unit circle). Similarly, we got to see how external points relate to the circle too, and how this quickly gets us not only to the notion of duality but to an external product space where constraints between two intertwined systems create a Hilbert space in which quadratures and spreads are preserved; and in which one can do quantum calculations using only proportions.

Posted in crypto

developing cryptographic intuitions for the quantum era

we learned from the description of the US 1945 5205 cryptographic process how, despite all its engineering complexity, the machine counted how many times certain (high scoring, distinguishing) characters appeared in a Tunny stream. For example, compute chi (5 bits) xor ‘e’ (5 bits), then count the hits in cipher. ‘e’ will have less uncertainty than it would have, were it to be found in a random cipher stream.
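A toy version of that counting process in Python (my own sketch – the 5-bit code for ‘e’ and the bias figure are made up for illustration; real Baudot codes and Tunny statistics differ). With the right chi guess, stripping chi exposes the ‘e’ bulge against the 1/32 random baseline:

```python
import random

random.seed(1)
E = 0b10000                        # made-up 5-bit code for 'e' (illustrative only)

# biased "plaintext" stream: 'e' appears far more often than chance (1/32)
plain = [E if random.random() < 0.3 else random.randrange(32) for _ in range(10000)]
chi = [random.randrange(32) for _ in plain]        # the chi wheel stream
cipher = [p ^ k for p, k in zip(plain, chi)]

# the 5205-style count: strip the chi guess, score hits on 'e'
hits = sum(1 for c, k in zip(cipher, chi) if (c ^ k) == E)
print(hits / len(plain))           # well above the 1/32 ≈ 0.031 random baseline
```

A wrong chi guess would leave the de-chi’d stream looking uniform, so the hit count itself is the distinguisher.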


we have seen that, in quantum mechanics, an inner product is, on the one hand, a measure of the distinguishability of two states and, on the other – should one score ‘e’ – a measure of the transition probability. Perhaps e is just a representative of its weight class, and perhaps one goes about counting any distinction equivalent to a weight difference.
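As a tiny numeric sketch (my own, not from any text): orthogonal states have transition probability 0 (fully distinguishable), while a state and an equal superposition overlap with probability 1/2.

```python
import math

e0 = [1.0, 0.0]                          # two perfectly distinguishable states
e1 = [0.0, 1.0]
s = [1 / math.sqrt(2), 1 / math.sqrt(2)]  # equal superposition of the two

dot = lambda a, b: sum(x * y for x, y in zip(a, b))

# |<a|b>|^2 : transition probability; 0 means fully distinguishable
print(abs(dot(e0, e1)) ** 2)             # 0.0
print(abs(dot(e0, s)) ** 2)              # 0.5 (up to rounding)
```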

Posted in crypto

Realty ws-trust IDP interworking with AAD token issuer, in saml bearer grant



Using the fiddler proxy, we were able to craft delivery of custom metadata from our IDP whose endpoint addresses now meet the expectations of the Microsoft ADAL libraries’ saml-bearer grant flow.


The only code change we made to this service was to add a nameid format property to the subject field (of value unspecified). But I’m really not convinced that that has anything to do with the sudden interoperability. Making our active and passive STS have configurable values for that property doesn’t seem particularly useful, though it is vaguely more correct.

Posted in AAD


signup using chrome (not IE) to get a (windows) cert




create microblog and channel, and first post.


in IE metro mode, note how to login to site






after a second prompt to select the cert, we do get a modern UI




not successful on windows phone 8.1, having loaded up certs/keys, etc; probably because the TLS handshake doesn’t validly induce the certificate selector. Too many assumptions being made in cert naming, no doubt.

Posted in webid

k-order propagation in hyperbolic reasoning calculation spaces




if we think in terms of Wildberger’s universal hyperbolic geometry, the case for defenses against linear cryptanalysis is one of ensuring that a wheel of points is assumed to have at each point a binary value, and one looks back from the current pointer sufficient pads to get the components of the codeword that has the requisite number of hamming bits. In the case of differential cryptanalysis, one looks back enough component terms so that one has k supports.

Posted in crypto

hamming weights, correlation immunity, proportional bulge algebras




Correlation of Boolean Functions – MIT – Massachusetts Institute



the original conception of Golomb is far more intuitive than others, particularly when taking into consideration



This guy does a great job of reasoning much like Turing and co reasoned in 1943, using ratios in a hyperbolic computation graph space that reasons with correlations (contrasting null points on the distinguished circle – like enigma rotor points – with points on the overlying triangle – which represent non-unitary correlations linking plaintext to ciphertext).


Now it becomes very obvious why Turing, in On Permutations, is so adamant about setting the mean of vector length to be 1. He is entirely reasoning in proportions.
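A small Python sketch of what “reasoning in proportions” looks like for Boolean-function correlation (the example function is my own, not from the linked MIT notes): the correlation of f with a linear function is just agreements minus disagreements, as a proportion of all inputs.

```python
from itertools import product

def f(x):
    """Example 3-variable Boolean function: x0*x1 xor x2."""
    return (x[0] & x[1]) ^ x[2]

def lin(a, x):
    """Linear function a . x over GF(2)."""
    return (a[0] & x[0]) ^ (a[1] & x[1]) ^ (a[2] & x[2])

def correlation(a):
    """Proportion-style correlation: (#agree - #disagree) / 2^n."""
    s = sum(1 if f(x) == lin(a, x) else -1 for x in product((0, 1), repeat=3))
    return s / 8

print(correlation((0, 0, 1)))   # f agrees with x2 on 6 of 8 inputs: (6-2)/8 = 0.5
print(correlation((0, 0, 0)))   # f is balanced against the zero function: 0.0
```

Low correlation with every linear function is exactly what correlation immunity asks of a combining function.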

It will be fun to see if Wildberger can get us all the way from here – as he says, the elementary stuff that is key to “thinking different” – to fourier transforms computed in proportional algebras.

We know folks in the cryptanalytical attack on Tunny made exactly that leap.

Posted in crypto

Microsoft Azure blog and AAD



you cannot comment on the microsoft azure blog using a live id OR an organizationalID.


you have to wonder what goes on in the minds of some folks. Or perhaps comments are just a throwaway that no one really wants – but they have to be seen to provide them.

Posted in AAD

wordpress and SSO culture; letting google eliminate Trulia

It’s been a while since I analyzed WordPress – looking at how it has taken up or set a trend.



we see that the MISSION of Google enabling SSO to the site is so that it can sell linking services – indirectly. Google wishes to be in the enforcement business – selling visibility of your linking circles. To this end, it wants you (and your wordpress site) to have API credentials back on the IDP’s ports, so that you (or your site) can exploit the linking APIs to publish references.

it is interesting to see JUST HOW EFFECTIVELY wordpress were able to showcase the real driver behind oauth (for websites). Whereas Microsoft’s Graph API concept is very much about personalizing a site, for Google the equivalent service is all about outsourcing lead generation, where each circle is a targeted marketing group for corporations.

At the same time, Google, I totally get it. I could easily sell this concept to realtors (as one example of a lead-generation based industry). Google could then easily attack Trulia with this.

So, if I want to defeat Trulia and its attack on the establishment MLS world, perhaps all I have to do is release the Google Kraken, letting realtors individually use their google circles to do the same lead marketing that Trulia wants to outsource. Rather than give Google streams of MLS listings, we just grant API access…

Then amplify the attack by allowing LinkedIn to do the same… enabling them to showcase how little Trulia is doing (that folks cannot get for free).

Posted in RETS

defcon money-making non-romp, avoiding discussion of internal subversion by NSA et al

One thing the US govt gave up on, a few years ago, was using the CISSP program to “embed” trusted souls in the heart of corporate America – as it moved to the internet and away from private circuits. These were the folks who were to lead the production of a local capability to “rewrite” communications systems, on demand. Born as the precursor to mandatory wiretapping capabilities in technology, this was the response to the “gap”. The gap is filled by people, indoctrinated and educated to deceive about who is the paymaster. The pay comes in the “accession” to certain committees (with stipends and travel budgets, etc), or “job mobility”.

This didn’t work too well, long term (for reasons I can explain to those with CISSPs or equivalent). Though it did fill the gap in the 1996 era, when it was “most needed”.

Its replacement was the pentester – the creation of an entire “independently minded” group of folks with intimate knowledge of the vulnerabilities of corporate systems.

Events such as defcon are breeding grounds for the “culture” of pen testers whose ultimate paymaster is not the client but those with a desire to covertly subvert corporate systems, having got an insider’s view (from the pentester’s work).

Defcon is a good business for its founder; and one notes now how little dissent is tolerated in its papers and format. Dissent that notes how defcon culture itself is subverted, is entirely vulnerable, and has in fact been wholly penetrated is NOT ALLOWED.

Once upon a time defcon could enjoy a good public rant (and enjoy a good humiliation-inducing ribbing (drubbing, in English English) of folks such as me who would come down and say the above). But no more. It’s too frightened. Not even the fake beer works any more. The chains of Chinese girls being paraded and then farmed out to the elite hackers are to be no more (not being able to get visas, thanks to the China/US cyber spat going *too* public).

The little side “contract” from “certain US agencies” (a favorite phrase of the defcon cognoscenti as they do their James Bond impression) is at risk. And that’s my new Lexus car payment (says the defcon subverted).

So now someone will hack my American password (which will take about 14s, since it’s made and protected using American technology) on wordpress, and put up a puerile pwning statement. We are SO GOOD! What we won’t show is how to do it in 2s (if only you had lots more computers, assuming defcon technical types could network their brains together to actually cooperate against the “internal” threat).

I wonder if General Keith will be paid to do a walk-on, to help his retirement fund get to 100 million?

I wonder if stories about my own “sexual urges” are to be given a public working over, and whether we can see who lies behind the “initiative” as we see the “raw” face of pen testing culture come to the fore. Bet we don’t see any papers at defcon on the links between the participants and those who “hold the little black books” on all of us.

Posted in rant

a ws-trust IDP emulating ADFS for use with AAD oauth bearer grant

!10733&authkey=!AD_ZOBRkh010sdo&

Posted in AAD

align AAD with ADFS

# authenticate to the Microsoft Online (MSOL) service with an admin credential
$msolcred = Get-Credential -UserName  -Message “password for netmagic is FRED!”
Connect-MsolService -Credential $msolcred -ErrorAction Stop

$cert = “MIIC5jCCAc6gAwIBAgIQELcx5ZetWLxLu947oG+anjANBgkqhkiG9w0BAQsFADAvMS0wKwYDVQQDEyRBREZTIFNpZ25pbmcgLSBwZXRlcnZtMzIucmFwbWxzLmluZm8wHhcNMTQwNzAyMTg0NTIwWhcNMTUwNzAyMTg0NTIwWjAvMS0wKwYDVQQDEyRBREZTIFNpZ25pbmcgLSBwZXRlcnZtMzIucmFwbWxzLmluZm8wggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDlrHS0Y6LIak6OFJulzl0EWAm/1CZbBTnGuh+geRt/Wllegtwfl7b+v8irjyDYeB59q0SlLb58H0Q4j5iYzDiuyXUwwNioTv4DKmDPsUQV3nbmV7HUqG0Ddc68Y46VRiLTN914QqKxgtaxLXx1qtxhOrrjlEGgyfpk4hDJv7njsCkI3YxBhBpNYdOQI23EX/LUP0Y2UiUdPlxueAqcoN0syR6WArBaVHARa35zJABevjm5jM9Qh8LiD+E3IhXjfMisi7xYYHiq3IogLZDg+y6hx4Nu/xpHuWHYqa+yNeB9GLjKrFpkK9xcWjMZkxeZ14Rd7GbhG7dGquRQIFomvdf1AgMBAAEwDQYJKoZIhvcNAQELBQADggEBAI/tJE+eFaFA7GsMYRzMWP2BiNY2MyaJfErSovq9IuvrQ7iqitJUkbCHcEjxhd2VmSo/EC+56feoyaoowwgq5q2qBA8oWg7LOg0UIP1EWCbeWbM6u3asU0rnyD6QD1RWjSkDI6G0G3NffbTPPjnOqAl99ZO1SYA50nwbM85dubHvMU2R1beFw1VXXVYjtyFft7c0Ksrg15jpYhQqKSjKkez/tGv9YIkt1mOHcJSO2QKuv/vyQjrqMM/bwxIet+hJ095RjoxaVnboWNZN2DHB/DVP0hBZlKkBOYRabJm6tWQMuxSzc4wqog1YYN5judGiORmaNJq5danHw98Sia7DLP4=”;

# point the federated domain at the ADFS signing certificate, then read back the settings
Set-MsolDomainFederationSettings -DomainName -SigningCertificate $cert

Get-MsolDomainFederationSettings -DomainName

$localhostcert = “MIIB0TCCATqgAwIBAgIQMB03MGH0sZRBtyF2QpuRSzANBgkqhkiG9w0BAQUFADAUMRIwEAYDVQQDEwlsb2NhbGhvc3QwHhcNMTQwNzExMTgyNTIwWhcNMTkwNzExMDAwMDAwWjAUMRIwEAYDVQQDEwlsb2NhbGhvc3QwgZ8wDQYJKoZIhvcNAQEBBQADgY0AMIGJAoGBANxvf+N7ApYPEIZKB4yGbZTdN9vPz112Y19/rnZKrPbswvaS+4fJLrFEDTowbHCULo9QfCZLtKxGlnBxw+huZV6NWYlqfmauX9h2nW+j5yA8bqXgi3bmuwOgUgzceqx7Hpn/TnX2p2NNu/aRodb8pCyCl42hoA5ZYll9zscVQLeBAgMBAAGjJDAiMAsGA1UdDwQEAwIEsDATBgNVHSUEDDAKBggrBgEFBQcDATANBgkqhkiG9w0BAQUFAAOBgQAQhBKsWjhrjGMO563Alqh7UJo8M57Qjgej0P43LdeEUDiNa+HBo1tRr6BIxoWpAlxOXOCAF7G7yvrSb5MTZ5Oxn846cKGeZUP7aexpP//TOi3buiGR5dUsxaAn0BBPHDPsb2x3RoQHyW96eyhdAq4aI/Y01SqCTakZnztXjKI2IQ==”;

# switch the domain to the localhost (test) signing certificate, then verify
Set-MsolDomainFederationSettings -DomainName -SigningCertificate $localhostcert

Get-MsolDomainFederationSettings -DomainName

Posted in AAD

Torshioning – and On Permutations from Turing

While playing Norman Wildberger’s second video lecture on the “fundamental group” I heard material I have never addressed before – despite it being 20 years after I got Cameron’s course for computer scientists on fields, rings, groups, etc. It’s just fascinating to see/hear the geometric angle. This would make a far better second year engineer-math component of a science degree than the typically awfully taught linear algebra course.


It’s just fascinating to see Wildberger, after touching on multiplication and inverses, apply the technique to the abstract torus – where there are 2 dimensions/generators to the (constant) loop.


At this point we start to think more about Turing’s On Permutations model – having seen that Turing has simply added another geometry to those that Wildberger gives as examples.


The theory gives us some quite “involved” modeling terms to work with commutativity, powers of path-functions, commutators and sets of generators on a particular surface (that constrains loops).

We even see something clearly modeled (algebraically by Turing) in the use of the projective plane as the base space for a covering domain argument. This addresses the special case in which powers of a generator can only equal the identity term.


The course element has given us several examples of “torshioning” (contraction).

We should remember, now, that Turing’s main point in his manuscript was that certain wiring plans for the Turing bombe drums produce a surface and a supporting algebra (and yes, an unwanted bombe in 1950 would have allowed him to run several wheels in series!). The U terms play the role of Alpha (to some power). Of course, the symbol of “U to some power” can be expanded as the U wheel being rotated forward or backward by the power – as in setting an enigma box – such that the alphabets of two wheels conjugate.

When a set of such rotated wheels have powers that sum to zero, we have the commuting “normal” case, of course. When such powers sum to zero, we see also that we have the “definition” of the abstract cycle (vs a not-cycle, that is) – as taught in the world of higher-dimensional homotopic spaces.
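The “powers summing to zero” case can be sketched directly (a toy of my own, treating each wheel as a simple rotation so that everything commutes; Turing’s wheels are of course more general permutations):

```python
def rot(k, n=26):
    """A wheel rotated by k steps, as a permutation of n contacts."""
    return [(i + k) % n for i in range(n)]

def compose(p, q):
    """Apply q first, then p (stacking two wheels in series)."""
    return [p[q[i]] for i in range(len(p))]

# powers 3, 5, -8 sum to zero: the composite is the identity wheel
wheels = [rot(3), rot(5), rot(-8)]
composite = wheels[0]
for w in wheels[1:]:
    composite = compose(composite, w)

print(composite == rot(0))   # the closed loop: a "cycle" in the homotopy sense
```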

Posted in enigma

uk university prototypes for NSA university courses (in cyberwar)

So UCL-CS, one of the foremost internet colleges in the UK, with about as fine an internet pedigree as it gets, is now at the (semi-secret) forefront of interfacing academia to GCHQ. This both hurts and instills pride in me (since I worked there, and know lots of the folks now “fully involved”). I have to admit that I did something similar while working at UCL-CS (taking cash in my salary from the UK DERA agency, as it continued its ongoing (from 1940+) campaign to outdo its sister agency – GCHQ – by doing things THAT ACTUALLY WORK, when spending “strategic funds”). In my case those funds sought to continue a world of “commodity crypto” and secure PEM/S-MIME email – that class of deployment that doesn’t really work but helps educate.

So, without naming names or department heads, we can look at the structure – and see from it just “what is wanted”. We can assume US universities, but NOT those already hosting schools that are just funded-proxies for certain agencies, will go the same way.

First, beyond the CS dept, UCL happened to already have a police forensics-related department, long involved in the “science” of producing reliable reports. Think of it as a training ground for folks who will start in the CIS program. This department brings into the leadership group a hard core, fully indoctrinated scientist – who asks no moral questions (of himself) while imputing all manner of stuff about anyone else.

Second, UCL also happened to have long term affiliations with the US DARPA program – the intent being to use intelligent routing and the like to keep the US ahead of the next sputnik. This program is a gross deception (aiming to ensure that theoretical national regulation boundaries on where data is present, when captured, are trivially subverted, on formal grounds). You subvert the foreign router so the packets are on US soil…

Third, UCL also happened to have certain individuals who belong to a certain elite club of early computer scientists who were part of the US/UK liaison world in the 1950s. This was a continuation of the successful, if only half intended, liaison in the crypto world of the 1930s based on, from the UK, the exchange of Wylie and Turing to Princeton. This is part of the oh-so-English “old boy club” whose clubby rules transferred easily into the spooky world. One sees this tradition continuing, with UCL folks being given access to certain American forums – that special US/UK relationship.

Back to what apparently it is that GCHQ want… from a “cyber center of excellence” (some mish-mash of senior academics from various cooperating departments) we can list:

1. psychologists (like those who have produced the classical work that has already shown up in leaked GCHQ documents that aim to subvert the internet, from “within the mind”). This is a mix of indoctrination of the elite (a little like the boyhood-to-manhood training in the Hitler Youth “thinking”, in 1930s Europe) and “engaging” in an out-thinking war (with the targets).

2. folks good at phone (in)security, and SIP in particular. This has to focus on putting power into American institutions (vs international forums).

3. folks who are good at academic-math-crypto (vs crypto/cipher design); producing prestige and wide-ranging summarization abilities to track overall trends of knowhow and capability dissemination.

4. how to leverage routing protocols to bias the “flows” of data in different types of network conditions (including wars).

5. how to indoctrinate folks passing through general engineering school programs so that their expectations of what can and should be achieved with commodity “security designs” are quite limited – to only the classical orange book features. This is the Victorian attitude of don’t educate the working class (since they might get expectations…); or the American race-hating program of don’t educate the “backward” black man (lest he vote).

Above all, the “cyber center of excellence” must be willing to “project the story” – and refuse to talk to the likes of me who speak up (else lose the funding that defines one’s “professorship”). In the GCHQ (vs the DRA) culture, one has to find me and my kind embarrassing; and expect folks to use all the “high-educated” techniques to “mis-characterize” the very class of thinking that I represent. One has to think here of Franco (in 1970, as he aged) or the English (failed) attempts to subvert Leninism in the 1920s, through an early “information war” campaign.

I find it quite amazing JUST HOW effective the GCHQ indoctrination of the folks I once knew has been, and how little time it took to pervert academic liberalism and produce not one but several tinpot strutters (with mouths willing to spit venom, when no one thinks they are being overheard and reported on). All to keep a few dollars flowing!

But this is GCHQ at heart – an organization that relies on the manufacture of social discord to keep its little windows open.

Posted in rant

GCHQ manipulating wordpress site statistics

I’ve half-expected GCHQ to have been manipulating my wordpress site’s apparent statistics for a couple of years now (as I drivel on half-assed – but pointedly so – about crypto). Looking at the stats each day, there were just too many patterns about which articles garnered interest. It was LIKE someone was trying to engage in 1930s-style impression management – but ended up looking like (and having the negative association of) Dr. Goebbels.

I would not be in the least surprised to find that my site is strangely only partly visible, or otherwise “disadvantaged”, outside the US. At the same time, I have to say: WHY would one go to the trouble (since it’s mostly drivel)?

The point is that those who can, do. They feel entitled. The American version even feels exceptional (furthermore). Between them, they get the “masters of the universe” complex. The information war (against any dissent) has to perpetuate itself – to keep the spooky fix’s high coursing through the veins of the indoctrinated.

It’s a social disease; and one that spread quickly to product managers of crypto-stuff in the cloud vendor community.

in my case, folks just want to “put me in my place” (some lower-than-low english class, in that social system that doesn’t exist…).

Posted in dunno

adfs v3 configuration for application

using visual studio 2013 pro, update 2, we used the c# wizard to make an sso-enabled project, which at project creation time we configured as shown next:


This app is hosted on the IIS express service of the same windows host running ADFS v3.


Above, we show how we configured the RP – at the IDP. One MUST take the option and turn ON ws-fed, and one MUST enter the RP site’s ACS endpoint.


To make windows integrated authentication actually work, we had to turn on forms authentication, for all RPs, as shown above. I suspect this just resets the service somehow.

This gives us confidence that ADFS is now set up to be a simple IDP for the (not public) domain.


Now, to ensure this IDP is set up properly to cooperate with the FP relay at microsoftonline, we make the organizationid variant of the same project.




At this point, we have lots of confidence that our ADFS is working well and cooperates well with a MicrosoftOnline STS/FP to land on a registered application of the domain (and the portal site, too, not shown).

This allows us to showcase that INDEED the IDP –> FP to saml bearer flow at the oauth endpoint, from the so-called headless client use case, DOES work.



See!10656&authkey=!AC_k8xmZ6kPpZXo&ithint=file%2c.saz for fiddler trace.

Posted in ADFS


Set-MsolDomainFederationSettings -DomainName -FederationBrandName -ActiveLogOnUri -IssuerUri -PassiveLogOnUri -LogOffUri


Set-MsolDomainFederationSettings -DomainName -FederationBrandName -ActiveLogOnUri -IssuerUri -PassiveLogOnUri -LogOffUri -MetadataExchangeUri

Posted in AAD

more DES (crypto rubbish by Peter)

Now for one of my math-like missives (the kind of submission from a crank that every math professor gets in the post once in a while from an amateur math nut). It’s on topics I don’t understand. As usual I’m just throwing concepts at a writing wall, and seeing what happens.

The idea starts with swapping of characters in the words of plaintext, so as to minimize the spectral bulge giving away their unique ‘plaintext-stream’ character – as used in the attack on Tunny. Of course a swap is a 2-element cycle, or a factor in an expression chain of such cycles that “operate” on the original stream. The choice of which values to swap – or, equivalently, which factors to include in the transform – is ‘decided’ by an algorithm that wants to remove all the inherent non-commutativity found in the plaintext, making the resulting, now-swapped image of the words be, formally, a commutative stream.

Since that is bullshit math, what I’m alluding to is the property that such a ‘commutative’ stream will almost always show up with the same spectral character no matter which additional *and random* swaps are added. Of course, it’s just possible that those random swaps would happen to create a stream whose character is normal plaintext…
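A quick Python illustration of the invariance I’m gesturing at (my own toy, nothing to do with real Tunny processing): any chain of 2-cycles leaves the unigram spectrum of a stream exactly unchanged, while the word-order (bigram) structure is generally scrambled.

```python
import random
from collections import Counter

random.seed(7)
stream = list("attack at dawn attack at dusk")

swapped = stream[:]
for _ in range(100):                 # a chain of 100 random 2-cycles (swaps)
    i = random.randrange(len(swapped))
    j = random.randrange(len(swapped))
    swapped[i], swapped[j] = swapped[j], swapped[i]

# the unigram spectrum is exactly invariant under any chain of swaps
print(Counter(stream) == Counter(swapped))

# the bigram (word-order) statistics, by contrast, are generally destroyed
print(Counter(zip(stream, stream[1:])) == Counter(zip(swapped, swapped[1:])))
```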

Now when I think of non-commutativity, generally, I think of a deformed rectangle, and the corner kick from that corner that is not ‘right’ – because the linesman marking out the lines of the soccer pitch made one line miss the intended junction point, having wandered off square as his marking machine laid down the chalk. He thus adds a small fifth ‘correcting’ line between the (painted at the wrong angle but with the correct actual length) line and that other line forming up the other side of the corner.

There are many lengths and angles for said corrective term.

I then imagine that the spectral gap of the eigenvalues of the operator is always positive, since the homotopy functions – as the plaintext stream morphs into the ciphertext stream – have the ciphering operator remove the spectral characters that signal the non-commutativity… by computing a long sequence of factors that sequence the swaps. The original bigram is the long line in the parallelogram, and the factor swaps are the corrective lines.

Each time we do such a swap, we deform the crazy parallelogram, treating it as a loop (centered at X0, the faulty corner), which increasingly comes close to a true rectangle, with the corrective line limiting to a single point (that is almost, but not, zero). The final state of affairs is the ultimate function in the homotopy class.

Now our factors could have been q-ary cycles, with q greater than 2 allowing for transforms based on graphs in which vertices have q outgoing (and distinct) edges. But we keep q=2 (pure swaps, that is) so that our image space is unitary (based on all the usual orthonormal vector basis, mean length of vectors being 1, … yada yada).

We know that the rules of expander graphs – when designing the wiring of the nominal-enigma drum on a turing bombe – are related to the spectral gap. I now imagine the SIZE OF THAT GAP to be a unique measure of the factor chain that made the plaintext into a commutative stream. When q=2, all such measure values will be in the smallest class possible, which is continuous… in the sense that there is a distinct value for each homotopy function in its class, if for no other reason.
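For what a spectral gap even is, numerically, here is a minimal sketch of my own (assuming numpy is available; nothing bombe-specific): a ring graph has a tiny gap between its top two adjacency eigenvalues, while a complete graph – the extreme “expander” – has a large one.

```python
import numpy as np

def spectral_gap(adj):
    """Gap between the two largest eigenvalues of a symmetric adjacency matrix."""
    ev = np.sort(np.linalg.eigvalsh(adj))[::-1]
    return ev[0] - ev[1]

n = 8
cycle = np.zeros((n, n))
for i in range(n):
    cycle[i, (i + 1) % n] = cycle[i, (i - 1) % n] = 1   # 2-regular ring graph

complete = np.ones((n, n)) - np.eye(n)                  # (n-1)-regular, max expansion

print(spectral_gap(cycle), spectral_gap(complete))      # small gap vs large gap
```

A larger gap means a random walk on the graph mixes faster, which is why expander wirings diffuse statistics so aggressively.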

Of course, this is nothing different from Turing’s write-up of a factor chain, in the Prof’s Book.

But WHAT I also intuit is that the length of the point, in the limit, is a unique “base” for the logarithms used in the belief propagation algorithms used in cryptanalytical attack processes.

We have to imagine that there is a machine/chemistry that can, in small quantities, do base-specific calculating by counting very small samples in huge fields. On the basis that one only needs a good “hint” or two of the DES key bits to “get a start”, let’s imagine now that the big trapdoor secret is that one can count at a level of accuracy that seems impossible to achieve (except for small numbers of runs).

So let’s imagine such a machine can distinguish between the terms from two expressions in different log bases – each one a function of the swaps, as above. That is, it can tell the difference between two loops. In particular, it can tell the difference between the constant loop (of point size) and ANY other. It can detect the difference, that is.

Posted in crypto

latest thoughts on what Turing teaches about ciphering and cryptanalysis

Folks following this blog (all 3 of you) might know that I’ve spent now 4 years learning to do cryptanalysis. 3 of them were spent finding out what it even was/is. One test of what I am even now still learning about the core of the theory is: to describe “how DES works” (not how to break DES, note). That is, what are the design principles that enable ciphering (and defeat cryptanalysis)?

Of course, my world is somewhat ideal – since real cryptanalysis includes subverting the human being in some way or other (so that the problem is actually rather smaller than in the academic model). One notes how Stewart Baker (ex NSA) plies his trade on this distinction, reinforcing how true Americans would WANT to engage in such systemic deceptions (or else be labeled traitors, in true yankee treasoning).

So here goes with the latest treatise on DES.

Life starts when one notes, in the Tunny era, that it is detection of plaintext characteristics that allows certain of the attacks on Tunny chi, motor and psi streams to proceed. In math terms, this means there is a word problem – in which the ordering of the characters in the plaintext is somewhat predictable. It is thus the duty of a cipher, when working against plaintext distinguishability, to diffuse such statistics.

Turing teaches us how. He plots a world in which paths on a manifold (a doughnut, say) are functions of plaintext, perhaps known as alpha(). In the world of homotopies, the aim is to deform that function to one now known as beta(). In so doing, the distinguishability of bigrams and trigrams etc in the alpha() version of the text is reduced to apparent uniformity (in the beta() version).
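That diffusion can be seen numerically with a crude stand-in for the alpha()-to-beta() deformation (my own sketch – XOR with a uniform keystream, nothing Turing-specific): the index of coincidence of the plaintext collapses to the uniform baseline.

```python
import random
from collections import Counter

random.seed(3)

def index_of_coincidence(s):
    """Probability that two random positions in s hold the same symbol."""
    n, counts = len(s), Counter(s)
    return sum(c * (c - 1) for c in counts.values()) / (n * (n - 1))

plain = ("the quick brown fox jumps over the lazy dog " * 50).encode()
key = bytes(random.randrange(256) for _ in plain)       # uniform keystream
cipher = bytes(p ^ k for p, k in zip(plain, key))

# plaintext is far from uniform; the cipher stream sits near 1/256
print(index_of_coincidence(plain), index_of_coincidence(cipher))
```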

In a world in which probability is conserved, how does one “hide” the distinguishability? The answer is: add to the math model, beyond homotopy, the notion of homomorphism. With this, we now get image spaces (and inverse image spaces, too). The goal is to move distinguishability from the text to a space I’m going to call “typicality”.

Folks doing math courses are properly trained to know the difference between a limit in averages and a “full limit”. That is, there are different types of integrals – and we get to focus on the first type: those in which, in the average case, “measures are concentrated”.

Well, that sounds vague (and academic). So what do we mean?

This all means that the deforming that should be occurring – as alpha() becomes beta() in the homotopic transform – is matched by a similar transform in the group-theoretic projection of the alpha() and the beta(). After all, alpha() is just a walk subject to the rules of the geometry – and a continuous one, at that.

So now we are getting even more vague and technical, by introducing continuity. So, Turing! Help us.

Fortunately he does, when teaching that all we have to do to embrace continuity is think in terms of a grating (and a microscope). Each time we focus on our subject under the scope, we note the image is split up into parallel lines (of a certain uniform width). If we then swap lenses and look at perhaps 10× greater magnification, the uniform width of the lines does NOT change. That is, our grating got 10× finer, too.


The lines on the Google doodle are NOT continuous, between pitch and tv image

So we now have a very intuitive model of limits – from continuity – as the predictability of one character following another in plaintext is to be diffused, as alpha() becomes beta().

But, now, back in the world of our discrete space, as our domain space is mapped by homomorphism onto a discrete Cayley graph, in image space, we have to ask: so where does the ciphering come in?

In the mapping from one world to another we have to recall that not ONLY is there a mapping going on but that the mapping is a covering map. So now we have to consider projection planes for the target, onto a one-dimensional space. More than that, we have to consider what happens when we map BACK from projection space ONTO the union of discrete spaces in the image of alpha().

At this point, let’s introduce a missing notion: the normal operator. This is, somewhat connected to the kernel of the homomorphism, that subgroup whose left and right cosets are the same. This, of course, is related to our core topic: commutativity of our beta function or its image (and the non-commutativity, or “plaintext predictability, from Tunny”, in the original plaintext of the alpha() function, or its image).
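A minimal sketch of the coset point (my own example, nothing from Turing’s text): in S3, the alternating subgroup A3 has identical left and right cosets, while the subgroup generated by a single transposition does not:

```python
from itertools import permutations

def compose(p, q):
    # (p * q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(q)))

def parity(p):
    # inversion count mod 2: 0 for even permutations
    return sum(1 for i in range(len(p)) for j in range(i + 1, len(p))
               if p[i] > p[j]) % 2

def left_cosets(G, H):
    return {frozenset(compose(g, h) for h in H) for g in G}

def right_cosets(G, H):
    return {frozenset(compose(h, g) for h in H) for g in G}

S3 = list(permutations(range(3)))
A3 = [p for p in S3 if parity(p) == 0]   # index 2, hence normal
H2 = [(0, 1, 2), (1, 0, 2)]              # generated by the (0 1) swap: not normal

print(left_cosets(S3, A3) == right_cosets(S3, A3))   # True
print(left_cosets(S3, H2) == right_cosets(S3, H2))   # False
```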

The real point is that we are nominally swapping the positions of characters in the original plaintext so that the probability mass – originally found there – moves into the image space – where we can now play certain games.

Our mental model is that expressions in plaintext become expressions over terms of the group members in the normal subgroup (that set of wheels whose powers sum to zero, etc etc). We get a longer and longer expression, in image space (formed from normal terms of the …).

If this were the entire story, it would be boring. But there is more fun.

One of the cute things that Turing teaches, rather obliquely (to hide the ciphering material from the 1940s or 1950s UK censors, no doubt), is that “probability mass” can be concentrated. And, after all, this is what the Tunny documentation teaches us too (when one manipulates depths so as to concentrate probability into n bands of standard-deviation width).

So now, as one formulates the long expressions in the image of beta() space, we have to remember that IN PARALLEL folks are ordering the normal terms. The predictability that was in the plaintext transfers … to the predictability that it’s this or that term in beta-image() space that occurs next. Furthermore, most of the probability has been concentrated in the (entirely predictable) elements of the beta-image expression, leaving only residual and TRANSFORMED improbability – reflected in the ordering of the expression terms.

So to defeat the very attack on Tunny used by GCCS, one must find – whether it’s the DES process or another – a method to diffuse the original plaintext’s probabilities into improbabilities that the PROJECTED image of beta() is any one particular mapped value in the space of the image of alpha().

We have a clear amplification of probability-space size, that is – having amplified the ‘grating’ size – making it harder to apply machine power to attack the combinatorics.

Posted in crypto

adfs default claim rules





c:[Type == ""]

=> issue(store = "Active Directory", types = ("", ""), query = "samAccountName={0};userPrincipalName,objectGUID;{1}", param = regexreplace(c.Value, "(?<domain>[^\\]+)\\(?<user>.+)", "${user}"), param = c.Value);

c:[Type == ""]

=> issue(Type = "", Value = c.Value, Properties[""] = "urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified");
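The regexreplace() in the first rule is just stripping the NETBIOS domain prefix from a `DOMAIN\user` value, via named capture groups. A quick illustration of the same transform (Python spells the groups `(?P<name>…)` where the .NET rule syntax uses `(?<name>…)`):

```python
import re

def strip_domain(value):
    """Mirror of the claim rule's regexreplace: keep only the bare user name."""
    return re.sub(r"(?P<domain>[^\\]+)\\(?P<user>.+)", r"\g<user>", value)

print(strip_domain("CONTOSO\\alice"))   # alice
```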

Posted in AAD

ws-trust, office online, and AAD

Hopefully, the following sample will give us all some orientation on how to make the ws-trust STS/AAD integration actually work.


The project comes with a client and service, along with the “usual” instructions on how to configure the cloud projection records of these processes. So, having downloaded the sample code, we make our client seek to talk to our (existing) todoservice’s API:




On our ADDS/ADFS v3 server, we created a user, bound to a domain that is already certified in our netmagic.onmicrosoft.domain AAD tenant. Our goal is thus to challenge the user “”, intending that ADFS checks the name and password credentials.




To the ADFS config, we guess, we need to add a relying party with name urn:federation:MicrosoftOnline – guessing at the required claims.



We know our ADFS setup is good, since we can emulate ADAL’s use of ws-trust – which also lets us easily see “what” ADFS does (for example, it uses sha256).



When we try the sample now, we DO get further:


On actually creating the AAD user, in the right tenant, we see that we can at least get the issued token processed, properly, by AAD – which then objects to the content defining the subject.


To get that far we had to (obviously) align the ADFS parameters with the federationdomain record in AAD. Next, we will actually create a user, under that domain.


$msolcred = Get-Credential -UserName  `
                            -Message "password for netmagic is FRED!"
Connect-MsolService -Credential $msolcred -ErrorAction Stop

    $setfed = Get-MsolDomainFederationSettings -DomainName
    $aActiveLogOnUri = $setfed.ActiveLogOnUri
    $aFederationBrandName = $setfed.FederationBrandName
    $aIssuerUri = ""

    $aLogOffUri = $setfed.LogOffUri
    $aMetadataExchangeUri = $setfed.MetadataExchangeUri
    $aPassiveLogOnUri = $setfed.PassiveLogOnUri
    $aSigningCertificate = "MIIC5jCCAc6gAwIBAgIQELcx5ZetWLxLu947oG+anjANBgkqhkiG9w0BAQsFADAvMS0wKwYDVQQDEyRBREZTIFNpZ25pbmcgLSBwZXRlcnZtMzIucmFwbWxzLmluZm8wHhcNMTQwNzAyMTg0NTIwWhcNMTUwNzAyMTg0NTIwWjAvMS0wKwYDVQQDEyRBREZTIFNpZ25pbmcgLSBwZXRlcnZtMzIucmFwbWxzLmluZm8wggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDlrHS0Y6LIak6OFJulzl0EWAm/1CZbBTnGuh+geRt/Wllegtwfl7b+v8irjyDYeB59q0SlLb58H0Q4j5iYzDiuyXUwwNioTv4DKmDPsUQV3nbmV7HUqG0Ddc68Y46VRiLTN914QqKxgtaxLXx1qtxhOrrjlEGgyfpk4hDJv7njsCkI3YxBhBpNYdOQI23EX/LUP0Y2UiUdPlxueAqcoN0syR6WArBaVHARa35zJABevjm5jM9Qh8LiD+E3IhXjfMisi7xYYHiq3IogLZDg+y6hx4Nu/xpHuWHYqa+yNeB9GLjKrFpkK9xcWjMZkxeZ14Rd7GbhG7dGquRQIFomvdf1AgMBAAEwDQYJKoZIhvcNAQELBQADggEBAI/tJE+eFaFA7GsMYRzMWP2BiNY2MyaJfErSovq9IuvrQ7iqitJUkbCHcEjxhd2VmSo/EC+56feoyaoowwgq5q2qBA8oWg7LOg0UIP1EWCbeWbM6u3asU0rnyD6QD1RWjSkDI6G0G3NffbTPPjnOqAl99ZO1SYA50nwbM85dubHvMU2R1beFw1VXXVYjtyFft7c0Ksrg15jpYhQqKSjKkez/tGv9YIkt1mOHcJSO2QKuv/vyQjrqMM/bwxIet+hJ095RjoxaVnboWNZN2DHB/DVP0hBZlKkBOYRabJm6tWQMuxSzc4wqog1YYN5judGiORmaNJq5danHw98Sia7DLP4="

    $newuri = ""

    # note: -PassiveLogOnUri takes the passive URI, and -SigningCertificate the cert read above
    Set-MsolDomainFederationSettings -DomainName -FederationBrandName $aFederationBrandName -ActiveLogOnUri $newuri -IssuerUri $aIssuerUri -PassiveLogOnUri $aPassiveLogOnUri -LogOffUri $aLogOffUri -SigningCertificate $aSigningCertificate -MetadataExchangeUri $aMetadataExchangeUri
    Get-MsolDomainFederationSettings -DomainName

$name = "andy"
$upn = $name + ""
$displayname = $name + "_at_rapmls_info"
$guidstring = "12e48fe7-96be-40a9-9ffb-8fd1e91891d3"
# immutableID is the base64 encoding of the GUID's byte form
$base64ofstring = [System.Convert]::ToBase64String(([Guid]$guidstring).ToByteArray())
new-msolUser -userprincipalname $upn -immutableID $base64ofstring `
    -lastname At_Rapattoni -firstname $name -Displayname $displayname `
    -BlockCredential $false

where we get the “official” guid for the immutableid field from the directory record for the user:
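For the curious, a sketch of the same GUID-to-immutableID transform outside PowerShell (my own illustration): .NET’s Guid.ToByteArray() emits the first three GUID fields little-endian, which Python’s `bytes_le` matches, so the two should agree byte for byte:

```python
import base64
import uuid

def immutable_id(guid_string):
    """Base64 of the GUID in .NET Guid.ToByteArray() order (bytes_le)."""
    return base64.b64encode(uuid.UUID(guid_string).bytes_le).decode("ascii")

guid = "12e48fe7-96be-40a9-9ffb-8fd1e91891d3"
iid = immutable_id(guid)

# round-trip: the original GUID is recoverable from the immutableID
assert uuid.UUID(bytes_le=base64.b64decode(iid)) == uuid.UUID(guid)
print(iid)
```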


Since this is all difficult, we also installed AAD sync (beta) – and it seems worth figuring this out now. So far, we can make it sync only a couple of strange accounts!



more to come on this, I feel.

PS: we fiddled some more, deleting the user and adding him anew, this time with the known same immutableID as is being asserted by the issuer.



Posted in AAD

deleting an AAD tenant: microsoft deceptive language


it can be done (finally!)

wonder what happened to the argument, and the person who used to make it, about why it was “inappropriate”?

Of course, it’s an American database – so you can be assured “nothing is REALLY deleted” (so the NSA can snoop it, and the FBI can get a warrant).

I think Microsoft should be chastised for its duplicity on “deletion”.

I know it’s hard living up to the exceptionalism claim, each and every day. But really? Does anyone believe an American corporation any more, over deletion?

Posted in Azure AD

algebraic topology

I like this course on topology as I feel like I’m sitting in Turing’s 1920s shoes while still at school – when he got roughly the same lecture from some now infamous Cambridge math lecturers. Our current teacher has a nice, non-formalist manner to his lecturing. It aligns nicely with the approach taken in 1900s math – before the very formalism suited to machines took over the humans doing math.


Of particular interest is the material on constant functions.

Turing’s language game suggests he is assuming that any reader would be part of an ongoing course and thus would have context that helped support each argument point. One notes that the rationale is simply NOT stated, being presumably contextually defined. In particular, his arguments concerning uniformity, continuity and word problems all make a lot more sense (as does the one-dimensional function ‘alpha()’).

Turing’s argument form, in his On Permutations manuscript, is interested in relations between groups and their (distance) invariants, where the groups happen to be those used in Enigma machines and the like (doing polymorphic encoding based on the Friedman square formed by the rotation and translation of suitable groups).

We are used, in coding theory, to the trivial code being part of the basis set – one of the atomic blocks out of which certain entire families of other codes can be formed. And here we see two further notions, related to this: 1) the mapping of various points in (I,I) to the traversal of a single path, in image space, and 2) an indicator function, taking 1 when one’s image point is on said line and 0 otherwise.

Back on Turing’s 7-point geometry, in one particular coset the “particles” wandering (in parallel) along the geometry are on fixed circuits. One can look at them as 3 independent cycles (each supporting one particle’s motion) or as 1 cycle (in 3-dimensional space) with a single electron that manifests itself in 3 places at once (based on the magic of quantum mechanics).

Posted in coding theory, early computing

exceptionally gifted certs


One of the hidden agendas of the NSTIC is, I suspect, to have the cloud vendors – already sending our log files to NSA for “critical infrastructure protection purposes” – replace the X.509-encoded cert with a JWT encoding (of the same thing). And it’s hard to argue why NOT (since the old format is a bit dated). Little changes, one might think – with obvious benefits.

What the old format is doing, not that we intended it this way, is keeping the current web consistent with the attitudes and knowhow of the 1995-era web. Rather than the militarized web planned by the US, with the connivance of the cloud vendors.

This is WHY it is JUST SO IMPORTANT to dump the old format – because with it goes the old stuff that is “hard to spy on”.

It also allows all the old PKI ideas to come back (now in the guise of JWTs) – and THIS TIME folks are “going to design” it the way PKI was supposed to be done (which is not the way the web did it).

I wonder if the Microsoft line engineers KNOW that they are part of a bigger plan? Obviously, the indoctrinated product managers do, and presumably use management and communication skills to “ensure all ‘keep the faith’” – and uphold the cover stories.

Posted in NSTIC

How to manipulate NSA. Axiomatize it!



Well that was easy!

Posted in dunno

cryptanalytical mechanisms not based on silicon; Turing light cones




The case for a detector (for very tiny discriminants, from differential trails) NOT being based in silicon chips but being based in nuclear knowhow is STRONG.

The case for a cryptanalytical search engine, working at an accuracy of 10 points per zillion, is yet stronger when one looks at the PREDECESSOR technology of Super-K. One does see, with Super-K, a clear relationship with Turing’s interest in certain light cones and projective-geometry theorems (from the pure math taught up to 1945), as they relate to known American methods for radiation/crypto detection based on photodiodes – which all harkens back to the very early counters used at GCCS (before electronic computers were applied).

So do NSA and GCCS (and France, and Russia… etc.) all have cryptanalytical “detectors” based on huge vats of cadmium chloride, argon production, and counters of radioactive argon? Or are things more like Super-K?

Assuming Turing the chemist (vs Turing the mathematician) had figured out the theoretical basis of the next generation of cryptanalytical search engine, one can see why the Americans would – being exceptional types – arrange for his early death, once he had been hounded by the UK and become an uncontrollable risk (as he reacted to the hounding by becoming ever more alienated and “likely to seek foreign friends”).

Posted in crypto

assertion obtained from IDP and sent on to AAD /token endpoint



using Microsoft.IdentityModel.Clients.ActiveDirectory;
using System;
using System.Threading.Tasks;

namespace ConsoleApplication1
{
    class Program
    {
        static void Main(string[] args)
        {
            try
            {
                // authority URL elided in the original post
                AuthenticationContext ac = new AuthenticationContext("", false, null);
                var y = getitsync(ac);
                while (true) ;   // keep the console alive while the task runs
            }
            catch (Exception ex)
            {
                Console.WriteLine(ex);
            }
        }

        async static Task<AuthenticationResult> getitsync(AuthenticationContext ac)
        {
            var y = await getit(ac);
            return y;
        }

        static async Task<AuthenticationResult> getit(AuthenticationContext ac)
        {
            // resource URI and UPN elided in the original post
            AuthenticationResult x = await ac.AcquireTokenAsync("", "a6e4ee63-87f3-45af-a5db-05099ab9f001", new UserCredential("", "1234"));
            return x;
        }
    }
}
Console application, built against the latest ADAL .NET library from NuGet.

Posted in AAD

Turing and markov chains and operator norms




What is interesting is to see how Turing models the same concepts (and how he does not, too).


Above, we now understand why – given the rules of Markov chains that enable one to calculate the indeterminate’s stationary distribution – he mentions g(a) >= 0 (not negative) and g(a) > 0 for some values in the input vector. We can assume that the statement that the f-bar is one is the stochastic condition that induces the contraction of the norms, as the operator acts on the function space of g vectors. In thinking that aligns closely with Markov’s original thinking, we see how conditional independence is brought into the argument when considering specifically 2 hops on the graph. Of course, the 7-point geometry is “all about” 2 hops – and their properties.


Where Turing talks about a certain self-conjugate subgroup, more modern folks talk about signed measures – which balance. It’s specifically the “balance” that is reflected in the “must sum to zero” – since any movement left cancels with any movement right.


Now we understand that K is the unique stationary distribution and is an eigenvector. Turing is reasoning in terms of operator norms, where eigenvectors correspond to eigenvalues.
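A minimal numerical sketch of that claim (my own toy chain, not Turing’s example): repeatedly applying a stochastic matrix contracts any starting distribution onto the stationary K, which is exactly the eigenvector with eigenvalue 1:

```python
def step(dist, P):
    """One application of the row-stochastic operator P to a distribution."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

P = [[0.9, 0.1],
     [0.3, 0.7]]           # toy 2-state chain

dist = [1.0, 0.0]          # arbitrary starting distribution
for _ in range(200):       # power iteration: the operator contracts toward K
    dist = step(dist, P)

K = dist
print(K)                   # ~[0.75, 0.25]
assert all(abs(k - s) < 1e-9 for k, s in zip(K, step(K, P)))   # K P = K
```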

more formally



Now Turing also makes oblique references to properties about the uniform distribution, as it interacts with the operator. Could it be a reference to this:



It’s also interesting to wonder if the beta, above, influenced the Tunny folks and their notion of proportional bulges. Anyway, getting back to facts:

Turing seems to be indirectly reasoning that the conditional dependence of terms from the plaintext (as represented by how an input term is mapped to its output value, along with the next term, similarly) is averaged out and contracted, so that the fixed-point attractors draw the flows towards the uniform points. It’s almost as if he is interested in convergence of averages (in a mostly closed space), since he doesn’t specifically mention any concept that aligns with aperiodicity. To calculate the stationary distribution, one uses a beta (and 1 – beta) – the corrective factor needed for, and induced by, the non-abelian group operator that brings the overall system into regularity.

Of course, we are familiar with the overall form of the equation (weighted by (1 – b) on one branch and by (b) on the other) from the study of symmetric channels.
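A quick sketch of that symmetric-channel form (my own illustration): with flip probability b, two hops compose to another symmetric channel, with flip probability 2b(1 – b):

```python
def bsc(b):
    """Binary symmetric channel: (1-b) on the 'stay' branch, b on the 'flip'."""
    return [[1 - b, b],
            [b, 1 - b]]

def compose(P, Q):
    """Matrix product: two hops through the channel."""
    n = len(P)
    return [[sum(P[i][k] * Q[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

b = 0.1
two_hop = compose(bsc(b), bsc(b))
# two hops flip overall with probability 2b(1-b): still symmetric
print(two_hop[0][1], 2 * b * (1 - b))
```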

We have to note that, while the example text says the L1 norm is what works in producing the contraction, Turing’s example is in the L2 norm. But then, Turing is really interested in two steps on the graph. His contraction occurs as a model for the conditional dependency of two terms in the plaintext – which in Tunny terms is like cryptanalyzing the next bit in the Tunny wheel (for hamming weights in binary), or the most likely second character in a bigram (for hamming weights in q-ary fields, where the number of fixed points is the number of changes in the uprights’ component-wise mappings).

We see that there are various notions of independence:


Posted in coding theory

binding with issue/cancel

The dotNet ADAL library expects to learn the endpoint of the active STS able to verify a user’s challenge data from the metadata – to be recovered by the library from an endpoint associated with the “federated” account information also retrieved, by the library, from AAD. This is unlike other practices, in the Microsoft online world, in which the endpoint is learned from the IDP record, for the certified domain (for the user).

based on this code, we see generally how the “oauth” server (by exposing a port that just “happens”



Image | Posted on by

It’s nice to finally find modern material directly on topic – with respect to Turing’s On Permutations manuscript. It’s also interesting to take the modern presentation and compare it to Turing’s world, of probably 1938. For Turing, the writing shows his orientation (and assuredly the context of his writing): cryptanalysis.

We can assume that Turing went on a GCCS course in 1938ish and learned the lingo. In particular, he probably learned to think in terms of “beetles” and uprights/rods and drums.

There is just something rather unpretentious about the writing style of Diaconis – especially when you consider that it’s dealing with what is, if written without the obliqueness-inducing academic language, a verboten topic: cryptographic wheel wiring. Rather like Turing, whose obliqueness was perfunctory, doing little to hide the context – analyzing the German Enigma, Italian Hebern and American SIGABA machines – Diaconis in 1996 was perhaps reflecting the times, when crypto walls were tumbling down.


A second paper does us proud, too:


Random Walks on Finite Groups: A Survey of Analytic Techniques. P. Diaconis L. Saloff-Coste, Prob. Meas. on Groups XI, H. Heyer (ed.), World Scientific Singapore, pp. 44-75. [PDF]

We get a simple presentation of how Markov chains relate to measure functions and probability distributions.



We see a simple presentation of “detailed balance”.
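For concreteness, a toy check of detailed balance (my example, not the paper’s): for the random walk on a weighted undirected graph, pi_i P[i][j] = pi_j P[j][i] holds exactly, since both sides equal the edge weight over the total weight:

```python
# symmetric edge weights of a small undirected graph
w = [[0, 2, 1],
     [2, 0, 3],
     [1, 3, 0]]

deg = [sum(row) for row in w]
total = sum(deg)
P = [[w[i][j] / deg[i] for j in range(3)] for i in range(3)]  # walk kernel
pi = [deg[i] / total for i in range(3)]                       # stationary dist

balanced = all(abs(pi[i] * P[i][j] - pi[j] * P[j][i]) < 1e-12
               for i in range(3) for j in range(3))
print(balanced)            # True: the chain is reversible
```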


We see folks heading for the Fourier basis and invariant theory.


we also see why one is interested in calculating under different norms


In general, we now see how “normalized counting” is part of the 1936+ era “secret sauce” – turning all this theory and algebra into practical ways of mounting cryptanalytical attacks. One assumes that Turing was taught the isomorph attack on Enigma, for example, in 1938.

Concerning cosets, normal subgroups and aperiodicity and An, we see


We see now WHY Turing was concerned to search out those uprights which were exactly Sn or An.

Next, we see another specific Turing component – undiscussed in his work (since he is talking to an elite audience).


In Turing’s work, this was a simplification of his first thought (concerning delta), modeled by Diaconis (above).


(We thought the 1 was there to count up how many edges landed on each output pad of the wheel, in the Fano plane!)

Specifically, we also see an An argument pertinent to understanding the Turing manuscript:


Just as interesting is the material on class functions (and bi-invariancy):


Posted in coding theory