SIPRnet certs/cards and Ping Federate (or rather JBOSS) SSL

Since PingFederate also has an (entirely standard, Java) .jks-based trust store for the certs used by its SSL endpoints (actually hosted by JBoss), everything we see here could work there too. Indeed, the Ping Federate console does a better job, nicely hiding the need to use the Java tools. If the SSL is using a server-side SSL offloader card (built into the NIC), then the console can be loading keys directly into its trusted boundary.


Assume it does.

Note how that OCSP root is not imported into the trust store of the SSL engine – meaning the SSL engine is not enforcing the (out-of-band) OCSP process for the layer-4 client entity certs used for peer-entity authentication.

And note that the CSP that talks to the card’s crypto boundary to pull the attribute fields (cert values) is not mapping the intermediate certs into the right Windows store. That should make you rather suspicious of the quality level of all this.

But, as it says clearly: this is some half-assed SIPRnet (not NIPRnet) thing. This is feel-good stuff.

Posted in pingfederate

quantum walks–for quotient groups (slide show) (slide show with context…) (paper version)

Does a great job of putting into simple geometric diagrams the notions of eigenspectra and the change of “representation” that occurs when one uses the quantum apparatus of analysis.


This is actually a return to the 1930s – when folks were far more comfortable “modeling” on the complex (unit) circle.

Perhaps it’s inappropriate to project this material back onto interpreting what’s going on in Turing’s On Permutations manuscript, but using this (excellent) minimal set of concepts we can do so. The parallels are quite amazing.

First, I’ll take it as undoubted that Turing dominated graph theory and the theory of halting, and that he also dominated the central limit theorem. In this topic set, rather than define a Turing machine that interprets graphs and halting on “marked states” as a number whose binary fraction recurs forever (once one has reached the stopping/halting state), we just work with the graphs themselves. The random walk, using normal cbits, allows us to model a stationary distribution – and capture homogeneity. Where Turing said K is the constant density, folks here say: call it 1; it’s the largest eigenvalue in the spectrum. Much as in linear cryptanalysis, every other component-density making up the overall stationary distribution is (on the “eigenvalue scale”) some distance from K (or 1). In cbit space, one is projecting onto the constant vector.
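
To pin down that cbit picture (standard Perron–Frobenius facts, my notation rather than the slides’): for a stochastic transition matrix $P$ on a connected, aperiodic graph,

\[ \pi P = \pi, \qquad \lambda_1 = 1 > |\lambda_2| \ge \dots \ge |\lambda_n|, \]

so any starting density $\mu = \pi + \sum_{k \ge 2} c_k v_k$ decays onto the stationary distribution, each component dying at rate $|\lambda_k|^t$. The “K (or 1)” above is $\lambda_1$; projecting onto the constant vector is just keeping the $\pi$ term.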

When one now uses a quantum walk (instead of a random walk), one is thinking in terms of qbits (rather than classical bits, or cbits). Now we need unitary processes (i.e. reversible processes). Of course, Turing built such a model out of a sequence of enigma-style rotors, considering the sequence of several inputs and outputs in a chain, to give him his model of a unitary process.

Now what is interesting about the modern material is that the author constructs a “channel” – an “interpolation” under the weight function of probabilities that either the marked density is in operation (P’) or the unmarked density is in operation (P). The joint density (of the channel) can then be modeled (as Pi(s)), and one can project onto this “state vector”.

Now, it’s highly intuitive that the density of U and that of M are a bit like what we saw in the applied math we did at 16 – the sin() and cos() contributions – allowing projection distances to be calculated.

Now, “thinking in phase space”, we can leave behind the cbit analogy and just THINK in terms of phase angles – since we just MOVE TO a measure of the set (of marked/unmarked vectors) which is in the eigenvalue basis. Now the angular rotation from the stationary distribution of the marked covering graph is the “distance” of the projection onto the current state of the joint density of this channel.
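
A gloss on the phase-angle move (the standard quantum-walk eigenphase relation, my addition rather than the slides’): each classical walk eigenvalue $\lambda_j$ shows up in the unitary as a pair of eigenphases $e^{\pm i\theta_j}$ with

\[ \theta_j = \arccos \lambda_j \approx \sqrt{2(1-\lambda_j)} \quad \text{for } \lambda_j \approx 1, \]

so spectral gaps become rotation angles, and distances shrink as square roots – the quadratic relationship picked up in the next paragraph.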

Then we see what we saw in Turing, considering the quadratic relationship. Turing was actually focused on making a code (rather than on the corresponding cryptanalysis problem). So we see him using particular properties of normal subgroups to create a particular set of marked nodes, and he wants to show that indeed, in superposition space, there is a uniform probability of any state evolving, at some limiting distribution of his linear functional.

Ok, that essay gets 10 for content, 2 for style!

Posted in coding theory

Debunking OAUTH2; SAML in drag; Cleopatra’s asp



OAUTH2 and OpenID have become a US government contractor cash-box fiesta – or a nice little earner, to use a London expression. And the US taxpayer pays. Above, I show the true face of OAUTH2. It’s just what you had before (but now it no longer interworks). Of course it “WILL” in the interim future (when filthy lucre flows more liberally).

Why would the US be complicit in this charade? Because it doesn’t want the old stuff to interwork. It wants only the new stuff to interwork – and only when it is subject to “new” rules on interworking that have no technical requirements. (They do have political requirements, concerning how the internet is to be indirectly used for US warfare planning processes.) Of course, the trusted vendor maintains this series of parenthetical charades.

The picture comes originally from …; the annotations are mine.

Note how in my “rendition” the authorization server is now a classical guard, governing access to resources. It’s precious little different to a Unix kernel’s ACLs guarding the attributes in the password file! Note how the only OAUTH-style resource anyone is interested in today is… ahem, the user record, whose properties might enable some website to customize its behaviour (Hi Peter!) without having to ask me to fill out an account-signup form. In other words, the combination of an Authorization Server and a Resource Server is an “IDP” – the party that does user challenges by one means or another, and then issues one or more assertions containing a list of name/value pairs. Just as SAML allowed 19 parties to divvy up the formal roles on the formal casting list, so does OAUTH2. Of course, only 2 matter: the Audience and the Company.

The website seeking to borrow the user record from the IDP is just the SP, dressed in complicated new clothing. Yes, it’s an old joke; it’s all sold by the OAUTH2 tailor (in the eyes of the Emperor, anyway), and yes, the public can see through the joke.

Now, OAUTH does have some element of modernization in how this or that party at the protocol bash interacts. It even gives them all new sixties names (groovy “authorization grants”). Arguably, the new names are better than the old ones (the 20-year-old Liberty “name federations”). But a spade is a fork is a digger – a muck-raking tool, at the end of the day.

Now, given the nature of the folks I saw running the shebang, it really doesn’t surprise me that folks reinvented the wheel, playing global politics like Philip (father of that Greek song-and-dance act, Alexander). When you conquer Persia you might as well start wearing Persian robes like the local boys, dispose of the annoying eunuchs in the old political class with an impromptu head-on-pole party, and generally take on the mantle of the former ruler.

Quite how this OAUTH-metaphor-as-history ends – getting to the Ptolemies ruling Egypt and getting it all on with Caesar, Mark Antony, Augustus and Cleopatra in the great “.asp ending” to the play – I don’t know. But let’s wait and see how the OAUTH2 saga gets to tell the same old story. The ending is assured.

Posted in oauth

from IPSO to STU-III

Those who designed the early internet we still live with were businessmen – perfectly happy to rig the national and military telco infrastructure to suit the interests of their (US-centric) business models. This means folks would use “R&D” to “make the case” for what they wanted anyway – revenue-generating and long-term contracting practices that would underwrite huge capital investments, which in turn gained them access to huge loans (for other business-sector ventures applying the same kind of thing now to, say, the intelligence, or academic, or commercial internet).

One has to look at IPSO and STU-III as two outliers in this space.


Let’s assume the above simply talks about end-user certs – bearing security labels – and device certs loaded in a trusted store on the device, so device “capabilities” limit what user certs might seek to have the device be used for.

This is rather a different world to the “secure IP phone” of the 1970s, in which the phone’s DCE/DTE interface could label the outgoing IP packet with an IPSO marking (much as today’s internet apps mark ethernet frames with priority markings, which the intelligent switch then acts upon – or not, depending on whether the marking device is trusted).
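
For the modern analogue, a minimal .NET sketch of an app marking its own traffic (the IP Type-of-Service byte, where DSCP lives; the 802.1p frame priority is the switch-side equivalent). Whether the network honors the marking remains, as noted, a trust decision made elsewhere:

using System.Net.Sockets;

class PriorityMarkingSketch
{
    static void Main()
    {
        var socket = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);

        // Set the IP Type-of-Service byte; DSCP occupies its top 6 bits.
        // 0xB8 = DSCP 46 (Expedited Forwarding), the conventional "voice" marking.
        socket.SetSocketOption(SocketOptionLevel.IP, SocketOptionName.TypeOfService, 0xB8);

        // The downstream switch/router decides whether to honor this marking,
        // depending on whether the marking device is trusted - the IPSO parallel.
    }
}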

Posted in dunno

Acoustic cryptanalysis – valicert root keys

A little-known fact is that I was MC at the generation of three CA/SSL root keys still in wide use. The CPUs were spied on by a truck/trailer placed in the car park of the adjacent building. My assumption at the time was that what folks wanted was the primality-testing evidence. The ceremony was observed by Price Waterhouse. I always assumed that one of the individuals, not acting for the firm but for others to whom the firm owed a ‘higher duty,’ participated in facilitating recording activities, at the required fidelity. It was interesting to watch the charade (on video playback). Not citable. No permission granted for linking or downloading content. This is marked ‘peter fouo’. You may not precis, paraphrase or quote even 1 word.

Posted in coding theory

Increasing the Signal-to-Noise Ratio With More Noise | Azimuth

Posted in coding theory

Access Token “validation”–by Ping Federate acting as Resource Server; ASP.NET account linking

We extended our ASP.NET OAUTH2 provider so that its GetUserData method, called as part of ValidateAuthentication, talks to the Ping Federate authorization server’s STS endpoint, asking it to validate the access token received earlier (and presumably passed via a web-service call from the resource “client”). In response, it presents a new token type (Ping Federate proprietary), which contains a field called (happenstance) “access token”. Parsed as a JSON object, it’s an array of name/value pairs – sourced ultimately from the Ping Federate access-token attribute-mapping screen.
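
In sketch form, the round-trip inside GetUserData looks something like the following. The host name is a placeholder, and the grant-type URN is my recollection of Ping Federate’s proprietary “validate” grant (the docs of this era call it validate_bearer; this series calls it validate_token) – treat both as assumptions to check against your install:

using System.Collections.Specialized;
using System.Net;
using Newtonsoft.Json.Linq;

// Sketch: ask the Ping Federate STS endpoint to validate a bearer access token.
static JObject ValidateAccessToken(string accessToken, string clientId, string clientSecret)
{
    using (var client = new WebClient())
    {
        client.Credentials = new NetworkCredential(clientId, clientSecret);
        var form = new NameValueCollection();
        form.Add("grant_type", "urn:pingidentity.com:oauth2:grant_type:validate_bearer"); // assumed URN
        form.Add("token", accessToken);

        byte[] raw = client.UploadValues("https://pingfederate.example:9031/as/token.oauth2", form);
        var response = JObject.Parse(System.Text.Encoding.UTF8.GetString(raw));

        // Per this post: the proprietary response carries a field (happenstance)
        // named "access_token", itself a JSON set of name/value pairs sourced
        // from the access-token attribute-mapping screen.
        return (JObject)response["access_token"];
    }
}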


This allows us – finally! – to say we have built an ASP.NET OAUTH2 provider that talks to Ping Federate, since we see the final attempt to do some account linking based on the message exchange.



Well, that was not too bad. What was it, about 5h all in all?

Well done Microsoft (ASP.NET team). Well done andrew@dotnetOpenAuth. And well done Ping Identity (for showing what to do). And well done me, for being a second-class type who won’t give in to the American putdown labeling attached to me (and 5.5 billion others who are not “exceptional”).

Posted in pingfederate

ASP.NET OAUTH2 provider to Ping Federate’s Authorization Server – part 4

In the post “ASP.NET OAUTH2 provider to Ping Federate’s Authorization Server – part 3” we made an ASP.NET website, acting as SP, initiate a request against an OAUTH authorization service. Our provider class got back the authorization code – a one-time code with which to go get the very first access token. Getting the latter is now our mission:



There is not really a lot to say… except: do the above. All I changed from the ACS implementation was the endpoint (to /as/token.oauth2), and I added the SSL ignore-cert-validation-errors hack (because Ping Federate is still operating with a self-signed SSL server cert).
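
For reference, the two changes amount to something like this (host name is a placeholder; the certificate bypass is strictly a debug-only hack):

using System.Net;

// DEBUG ONLY: accept Ping Federate's self-signed SSL server cert.
// This disables server authentication entirely - never ship it.
ServicePointManager.ServerCertificateValidationCallback = (sender, cert, chain, errors) => true;

// Endpoint swapped from the ACS value to Ping Federate's token endpoint:
var tokenEndpoint = new Uri("https://pingfederate.example:9031/as/token.oauth2");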

Obviously, the next stage is to do the same again, this time using Ping Identity’s proprietary validate_token access grant type. In the callback framework that means calling


Posted in pingfederate

ASP.NET OAUTH2 provider to Ping Federate’s Authorization Server – part 3

In the post “ASP.NET OAUTH2 provider to Ping Federate’s Authorization Server – part 2”   we made an ASP.NET website, acting as SP, initiate a request against an OAUTH authorization service.

By suitable selection of the port in the code, we get a round trip in which the response is delivered to our handler, as desired. It has all that it needs to be handled by the waiting provider (instance). Obviously, it also has the authorization code, as shown.


Now, the return handler expected to see an /oauth2 component in what ASP.NET programming calls the PathInfo property. In this build – for .NET 4.5, and perhaps due to a change of ASP.NET version – the “page” identified by the returnURI supplied by the dotnetOpenAuth/ASP.NET framework I’m using on this host does not, by default, carry the .aspx suffix. Thus the PathInfo (the path fragment after the “page”) is not determined by the pipeline upon response processing (since the paradigm has changed). To fix this (in a hacky manner), we simply amend the redirectURI in our class to add .aspx to what we assume to be the (old-paradigm) page name – and update the corresponding Ping Federate vendor record to include the .aspx component. Obviously, you would do better in production code!
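
In code, the hack amounts to something like this sketch (the page name here is hypothetical – substitute whatever page your dotnetOpenAuth callback actually lands on):

// Hacky fix: restore the old-paradigm ".aspx" page name so the pipeline
// computes PathInfo again, then re-append our /oauth2 discriminator.
// Remember to register the amended URI in the Ping Federate vendor record.
var builder = new UriBuilder(returnUrl);
builder.Path = builder.Path.Replace("/RegisterExternalLogin", "/RegisterExternalLogin.aspx");
builder.Path += "/oauth2";   // the pathinfo element our response recognizer keys on
returnUrl = builder.Uri;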

Now we can start to debug our response processing! What our (bug-fixed) initial response handling does is simply unpack the OAUTH-style state into the form in which dotNetOpenAuth/ASP.NET wants it handled.


Finally, the method in our own provider class for verifying the authentication (response) is invoked. It duly invokes its base-class method, which in turn calls our subclass’s appropriately named QueryAccessToken method. Its job will be to exchange the authorization code for a token, minted by the STS feature of Ping Federate!

This isn’t hard, is it!!




Let’s do token minting (and then attribute collection) later!

Posted in pingfederate

ASP.NET OAUTH2 provider to Ping Federate’s Authorization Server – part 2

In the post “ASP.NET OAUTH2 provider to Ping Federate’s Authorization Server – part 1” we got ourselves a basic implementation of a provider class for OAUTH2. It cooperates with a set of ASP.NET page handlers that build upon the events and messages of the OAUTH2 authorization_code procedure. In short, those add value-added “request formation” and “response handling” procedures to the OAUTH protocol, where the latter involves lots of post-protocol account linking and database work.

We invoke the OAUTH process by clicking the button that chooses our “Ping Federate” server as the authorizing/authenticating partner of this SP website.



The page handlers invoke the dotnetopenauth framework – which finally calls the first method in our provider class. The output of this method is the returnURI address, with suitable local parameters. We see two local parameters (at 1) added to tie the incoming response back to the correct outstanding protocol state block (waiting for responses from PingFederate). They identify the class of provider and the unique instance/session identifier, or “SID”. The handling at 2 allows this site to be a protocol bridge, where the returnURL parameter stored in a cookie (storing this address “even lower” in the stack of return values) will allow the indicated OAUTH response message handler in turn to invoke the next outstanding layer of protocol activity retrieved from the (cookie) stack – i.e. perhaps generate a ws-fedp response.

Rant alert: for privacy reasons, this information is hidden from the nosy American IDP and its (probable) policy of sharing information on who is associating with whom with NSA or DHS or some contractor proxy in the UK or US (to skirt local laws). Sigh (at the duplicity… of IDP vendors).


Note that the enforcement of port 80, above, may need to be some other port value – depending on how you debug or deploy. You also may not DESIRE to downgrade from an https indication…

The next step is to learn the address of the OAUTH authorization server (i.e. a Ping Federate endpoint). This happens in the GetServiceLoginURL method. In short, it formulates the request message to the OAUTH authorization service, moving pseudo-state information FROM the returnURI to its correct position in the request. This is important – especially when working with conforming OAUTH protocol servers (since they handle the latter form of the state information correctly, and typically ignore forms that attempt to leverage parameters on the redirectURI).
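
Roughly, the method looks like this sketch (/as/authorization.oauth2 is Ping Federate’s authorization endpoint; clientId is our stored vendor id, and the host is a placeholder):

using System.Web;

// Sketch: form the authorization request, carrying the session/anti-CSRF
// state in the standard "state" parameter - not as baggage on the redirectURI.
protected override Uri GetServiceLoginUrl(Uri returnUrl)
{
    string state = HttpUtility.ParseQueryString(returnUrl.Query)["state"] ?? "";
    var authorize = new UriBuilder("https://pingfederate.example:9031/as/authorization.oauth2");
    authorize.Query = "client_id=" + HttpUtility.UrlEncode(this.clientId)
                    + "&response_type=code"
                    + "&state=" + HttpUtility.UrlEncode(state)
                    + "&redirect_uri=" + HttpUtility.UrlEncode(returnUrl.GetLeftPart(UriPartial.Path));
    return authorize.Uri;
}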



Since Ping Federate is programmed to return messages only to registered returnURI addresses, we must update the vendor’s list (of potential return endpoints):


Note how in the GetServiceLoginURL call we actually altered the return URI prepared by the earlier method, adding /oauth2 as a pathinfo element. And note how we register this in PingFederate, too. Between the registration, the indication on the request, and the additional signal represented by the “oauth2” pathinfo, state handling can be completed upon return. We see how this happens later.

We now see request handling get all the way through the various checks, so that an IDP challenge occurs. Clearly we are doing SOMETHING right.


We will address the second half of “response process” in the third part of the series.

Posted in pingfederate

DES initial/final permutation. A rationale?


Presumably, IBM had to consider the same argument – given the amount of silicon taken up.

Perhaps we see that there were criteria other than cryptographic strength. There are also cryptographic CONTROL-plane aspects one wants to enforce. And one may want to make a stacked die (even back in 1975) – assuming that the same silicon is also going to be driving a crypto-search machine (not an encipher/decipher function machine). Remember, IBM’s largest customer was NSA – for the language/sorting function of the 1970-era agency. It’s only by the 1990s that SUN gets a hold.

It also occurs to me that one really doesn’t want to use DES ECB with a source that has any language characteristics (i.e. is other than IID). The original (1977-era) model of (triple) DES ECB for 57-bit (yes, 57) key wrapping is one use; as a cipher one has a different use – given that initial permutation.

Posted in DES

hagelin cipher (1975) and linear programming

Whether a 1950s-era Hagelin cipher is implemented on a lug/rotor contraption or a hand-held calculator makes no difference. It’s not chaotic, and does not seek to base its strength on the notion of the distinctiveness of 2 evolutions of the probability densities. Rather, it’s very linear – and therefore susceptible to linear-programming attacks engaged in “approximation”.
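
Schematically (my formulation, not the cryptolog source’s): the lug/pin machinery produces a keystream that is a sum of 0/1 pin variables weighted by lug counts,

\[ z_t \equiv \sum_{w=1}^{6} a_w\, p_{w,\; t \bmod \ell_w} \pmod{26}, \]

with $p_{w,j} \in \{0,1\}$ the pins on wheel $w$ of period $\ell_w$ and $a_w$ the lugs facing it (overlap corrections omitted). Recovering pins and lugs from observed $z_t$ is then a system of linear constraints over bounded integers; relax $p_{w,j}$ to $0 \le p_{w,j} \le 1$ and you have exactly a linear program doing “approximation”.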

And so when we see the NSA (cryptolog) phrase “manual cipher”, read: Hagelin machine (circa 1970/1980, not 1945). Believe the company when it says its protections were fulfilled as designed – which is not to say that the tweaks made were not sent to NSA, or that the micro was not irradiated by Motorola so as to induce emissions that a sensitive detector could pick up (later).

To be fanciful, imagine that the Swiss calculator (with chip dies made in the “custom” Siemens ICC fab) is now in one’s super-secret Iranian military-intelligence facility… full of well-indoctrinated code clerks doing best practice. But they also go home, taking “information” with them. Of course it decays pretty fast, so one needs one’s sensitive detector to be scanning the human “carrier” – who is a damn sight more predictable and recurring in his habits than the algorithm. Use the unsuspected “easy vector” to compromise the hard vector (pun).

More practically, assume that the micro is induced to make errors in its fetch-execute cycle which subtly alter the path of the “software” finding “lugs”, so as to leave a poker-style tell (that reduces the search, if you know the tell).

Now, at the same time one has to be careful – since all those 20-year-old calculators still exist (and our friends in Iran can go test them TODAY, with our collective and different level of appreciation and compute power). Our Iranian friends (and there are some, amongst the religious nuts of that and related regions of the planet) can even today GO BACK in time. They can find out now how hard it would be to perform the search, assuming there are tells. So the first thing to do is FIND the tell(s) – for now, you assume they exist.

There is nothing that the highly indoctrinated subgroup within the US hates more than to be bested in the secrecy world of double-think (even 20 years later!). If you want to “annoy” the old men, then “showcase” the spying (of 20/30/40 years ago) – leading to increased distrust (today). Now you are playing Kayla-style (fictional) politics (even with formally irrelevant crypto history). But it still has power; and it’s entirely legitimate POLITICAL power.


Let’s assume that the block on releasing the likes of the 1945-era Tunny report came to an end by 2000 simply because the era of reading “manual ciphers” was just over – by that point. Those who were going to be duped had been duped, and new techniques were now relevant. So consider that perhaps we have the Iranian people’s revolution against the local dictator to thank – for putting people before empire. We can also assume that continuing blocks on Testery reports from 1945-ish are still in place because they hint at “the human skills” needed when puzzling over even modern cryptanalytical solutions.

Well, let’s take all that sci-fi and fantasy now, and go find some reality:



giving us, with some anti-clockwise rotates:


Now I wish I could remember where I was reading, just the other day, something on normed spaces, in which one wanted to know which of the linear vectors was “nearest”. This contrasted with picking the nearest point. Perhaps it was in the PCA eigenspectrum-decomposition material I was reading from UCL.

Posted in crypto

crypt-oAG and Arizona


Hardly! I have good reason to believe that the reference is to Motorola GSTG (as it was, and no longer is). And I worked for it – and for certain persons who, 10 years before, were “fully indoctrinated”.

The fun times in that job were two visits. On one, I went and looked at lots of old military boxes (presumably full of circuits) from the Caneware projects. The other was a closet holding the root key (phone unit) for the worldwide civilian secure phone system (the civilian version of the STU-III, with different electronics and ciphering). Next door (in a non-compartmented area) were the folks selling the Type I LAN encryptors (with their red-book distributed-TCB concept) and the “secure” satellite phone system.

What was MOST interesting was the engineering method in use.

I remember meeting the general manager. It was quite fascinating meeting someone who had spent a lifetime doing military contracts (in a classified facility). I don’t think he knew that another world existed. All they really knew was that NSA had collapsed from within, and the old world had ended.

Posted in crypto

le grand saga de crypt-oAG

Boy does the internet not make a hash of things.

First, realize that in 1990 a SUN 3 workstation just happened to have the ability to support custom boards – with lots of custom-programmed (in the VLSI sense) LUTs. Your job as a cryptanalyst, having been briefed by the cryptologicians, is to “intuitionistically” break this or that variant of a Hagelin cipher. If you have the information about the tweaks made to the general mechanism for country X, so much the better. Your job is to sit there, in a manner remarkably similar to the 1945 Colossus cryptographers (in the UK nomenclature), and let your human brain do what the computer cannot. But DON’T underestimate the combination of human-driven (computer) search! There was a time when I could tell you WHICH (concert) pianist was playing (on a major label’s recording)! I could hear the intonations of his/her favorite piano, and the way his/her particular muscles were tuned to its touch and the response of its particular mechanism, for the particular way the hand/arm would have to move – to play certain note sequences.

So assume the cryptographer can do the same thing, with his/her favorite cipher “music”, as it intones its way on an FPGA “instrument”, with display on the SUN III framebuffer! It’s an art, similar in some ways to reading x-rays or MRIs.

Now that I think about it, I had great fun also re-learning to play on a (high-end) electronic piano, with no traditional mechanism, no soundboard, no vibrating strings that buzzed at your eardrum. Playing with 16-bit polyphony was great fun too (and seemed more than the brain could deal with, back then), as was using a PC to design one’s own PCM-encoded waveforms that could be uploaded. It was also fun re-learning to “touch” a keyboard thinking in terms of inducing the responsive computer to change the attack and decay filter for the particular waveform – quite a different musical instrument to the true mechanical piano (even though it looked like one).

Posted in crypto

ASP.NET OAUTH2 provider to Ping Federate’s Authorization Server – part 1

In an earlier post we set the stage for integration of an ASP.NET webforms application with the Ping Federate OAUTH2 authorization server. We showed that the application could talk to some other oauth-like provider (Google). Let’s get to the next stage and try to develop a “provider” class that plugs into the ASP.NET/dotnetopenauth framework, for those OAUTH2 websites seeking to talk to Authorization Servers, IDPs, and graph endpoints (web services supplying user records!)

Remember! Don’t get frightened. The language of OAUTH2 is all designed to intimidate you, making it seem ultra complex such that only “professionals” and “experts” have much of a say. It’s VERY simple, in fact. Anyone can do this (even me, I hope)!

First, to our host let’s add the windows identity framework (and SDK).


Then, to our source code project we can add a reference to the “microsoft” identity DLL, enabling us to work with “claims”:


Since we are starting our “OAUTH2 provider” from the class we already built to talk to the Azure ACS-enabled Authorization Server, we imported the prototype class into our project. Now we can use the associated packages. First we resolve the identity class (above), and then resolve the reference to the NewtonSoft JSON DLL.

Since access tokens from Ping apparently come in JSON form, we will add a nuget package for handling the parsing of JSON objects:


We also added references to two standard dotNet DLLs: system.runtime.serialization and system.identity. These round out JSON and Claims object support.

We now have a compiled provider class (not that it works). It basically exposes the right interface – enabling vendor-specific behavior to specialize the dotnetOpenAuth framework integrated with ASP.NET.


Before we can register this provider, we need to collect some data from the Ping Federate configuration: endpoint addresses for (1) the STS that mints access tokens and (2) the authorization server that administers the consent and issuing of “persistent” grants, plus the application-client’s “vendor” credentials:



This allows us to specify a registration (again noting that the data is neither perfect nor working at this point). But we are making SOME progress. We have a view of the landscape on which we can now paint the actual players.


The last piece we need, recalling what we learned from a similar exercise talking to the Azure ACS OAUTH2 procedures, is some code added to the suggested ASP.NET page handlers. This handles the syntax of the OAUTH2 “state” information (containing the ASP.NET provider name AND an anti-CSRF value) and the authorization code. For this we add to the handler a recognizer routine that detects the authorization_code coming back from Ping Federate. It basically intercepts the response and simply prepares the fields so that they can be handled either by the provider class or by the ASP.NET provider framework, ensuring the message goes to the correct provider class instance.


Note: the Convert.ToInt16 may want to be more general. For example, Convert.ToInt32() will allow for a wider range of valid port numbers.
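
Concretely (portField standing in for whatever string the handler parses):

// Convert.ToInt16 overflows for ports above 32767, but ports are legal up to 65535.
int port = Convert.ToInt32(portField);      // was: Convert.ToInt16(portField)
// or, tighter still: ushort port = ushort.Parse(portField);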

Since this all compiles, we can tomorrow start to debug it all – and make it all fit with the Ping Federate way of doing things. Let’s cross our fingers and hope we can make something work that plays the role of the “ac_client” in the OAuth2Playground web app – which we have outgrown.

Posted in pingfederate

Ping Identity and OAUTH and SAML…

One of the things that just stands out about Ping Identity is the engineering. Boy do they understand what they are doing. So it doesn’t help when they have to deal with the likes of me (who is not exactly in the first class league). But, I does what I can. It really helps when the likes of Ping clear up OAUTH, and make it obvious that it ain’t NO different to what I has been doing for years now (with SAML2). There is just a new window dressing.

So for years we have had SPs and IDPs. User visits SP, and no session is found; so a websso interaction then occurs (to get the assertion from which the SP session is minted). Ok so far; being precious little different to exactly what happens in SSL (when one arrives without an SSL session, so an SSL handshake happens to get one, from which an SP’s cookie-session is minted!) I gets it, when not soaked in Kentucky brine.

What is the websso interaction? Well, it’s just a redirect to the login-page flow on the IDP site. The process concludes when the SP receives back a posted assertion containing the “persistent” name of the party who just passed the IDP’s user challenge. (A persistent name is little more than a salted hash of your account name!) Well, only some few million ASP.NET websites do that, using forms-auth cookies! And only half of them in the last year have upgraded to the variant of the same thing, using ws-fedp websso.
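
To see how little magic is involved, a sketch of such a pseudonym (my illustration only, not any vendor’s actual scheme):

using System;
using System.Security.Cryptography;
using System.Text;

// Illustrative: a per-SP "persistent name" as a salted hash of the account
// name, so two SPs cannot correlate the same user by her identifier.
static string PersistentName(string accountName, string perSpSalt)
{
    using (var sha = SHA256.Create())
    {
        byte[] digest = sha.ComputeHash(Encoding.UTF8.GetBytes(perSpSalt + ":" + accountName));
        return Convert.ToBase64String(digest);
    }
}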

This would be the conclusion of it all, except for the fact that in some cases we want to make ANOTHER trip to the IDP, this time to collect user attributes (in the IDP-maintained user record, for the user just named in the last interaction). For this, for years we have used the SAML2 “artifact” binding process. That is, the SP site does another redirect to the IDP (which already has a user session), which automatically returns a reference code in the URL, by return. The SP uses the reference code (known as an artifactId) to make a backroom server-to-server web-service call on the IDP to pick up a signed record full of attributes about the referenced/named user record. Perhaps the two servers authenticate each other, too – using “client management” (i.e. uid/password!)

Ok. So to OAUTH.

Well, it’s the same as the above, except everything got renamed. The artifactId is now the access-token “reference” – obtained as one swaps an authorization_code – obtained via a browser interaction with, urr, an IDP website that, urr, authenticates and challenges users (just as above) to mint IDP and SAML/OAUTH sessions. The “de-referenced reference” is itself an access token (of a type OTHER THAN a “reference type”), including such blob formats as: proprietary-ping, SWT, JWT, or something else some American company invents for no other reason than to hold on to a customer base via last-mile blob-toolkit lock-ins.

Well, at least Ping taught me that all is well in OAUTH land, since absolutely nothing has really changed. With lots of US government money, folks have succeeded in reinventing the wheel, and the taxpayer gets little (or nothing) they didn’t already pay for (when the same government paid for SAML2)! Now it’s an all-“new” American wheel, all shiny and ready for a big marketing campaign. Or not – since today OAUTH systems don’t actually do that little matter known as “interwork”. Since this is kind of important (even at my second-class status, I knows that), I might as well just use the SAML equivalent process (since at least vendor systems interwork!)

Presumably there is method to the US government madness in funding OAUTH. Probably, next generation spying or citizen surveillance issues lie at the heart of it all. All change on the blob front gives the opportunity to change the policy, with vendors “leading the charge”.

Posted in pingfederate, SAML, spying

Ping Federate’s OAUTH access token grants/types

Earlier we documented the work we did evaluating the Ping Federate OAUTH support. Originally, the idea was that purchasing it would enable us to talk to the APIs of Office 365 (Exchange Online and Sharepoint Online, in particular). While having 2 OAUTH implementations actually interworking may (or may not) be in the plans of Ping and Microsoft, apparently it’s useless for THAT purpose, today.

So what does it do?

Well, I suppose we should look at the area where it’s most likely that interworking would fail – the format of the access token. So what does Ping Federate (v6.something) even do?

Well, we can now see, using a new run of the ac_client “vendor/client”, what is exchanged for the authorization_code granted by the authorization server:


If one makes yet ANOTHER round trip to the STS, using a Ping Identity proprietary grant type, one can swap the opaque token for a proprietary parseable token (with attributes):


Presumably, at some point Ping will sell a “token kit” that outputs the same kinds of access tokens used by others (e.g. the SWT or JWT also used by Microsoft products, based on semi-standards).

Well, I think I’m at least understanding the features and current limits of the product and its security concept – and this aligns with the rather predictable way that the OAUTH marketplace is shaping up in the US (with yet another round of proprietary tokens, with upgrade fees if you want standards and thus the ability to actually interwork!). Strange that the US just will not offer standards-based security (probably because there is then no way to make any money!)

Posted in pingfederate

Office 365 federated domains–working with metadata (mex)

At this skydrive file we see the public metadata that Ping Federate (eval edition) exposes to the world about our Ping Federate-implemented ws-trust endpoint. This is presumably consumed by the Office 365 process of configuring the SP-STS.

Not having used this feature before, let’s make some notes (before I forget it all):


produces (partial)


Posted in pingfederate



evidently not xtal balls…



Building interpretation is evidently turning into strange source:-


and the xtal balls have multiplied, as the problem got bigger…

The following absolutely captures the folks I encountered (fun… in its most strange form!). In English, they would be called trainspotters (apart from the married-with-kids bit, and the MSc/MA!).


We even see perpetuation of the Coventry myth (total bullshit… note) in yet another pro-anglo moment: (Do you think the average yank could give a damn about Coventry, having seen Berlin!?)


The censor missed one… remembering this is 1978.


A wonderful article for computer history, not dissimilar to Samet’s story.

In general, one can see why the modern NSA is so proud to publish this particular magazine – it’s a people story. But murder is in the air…



So what happened to Mr X (or Ms XXX, these days)?

The pictures get ever more interesting (especially once annotated, by me). Now we know the “src” of the balls on the FANX… not that the mass/volume ratio actually computes…


Squared (not) analysts make their poetic mark on 1920s ideas that NSA math types are struggling to adopt:


The “interplay” between the moments problem and Pearson Curve fitting.


The former is DEFINITELY worth some follow-up, being what I’ve come to expect.


leading to


Let’s get back to browsing Cryptolog, rather than wandering the web!

We see that lots of NSA math is focused on COLLECTING the signal given raw data (not just attacking the cipher):


Perhaps I should re-read the military cryptanalysis books, if only because apparently they were training material for a whole generation (of folks who were not computer-science oriented). One really sees that lots of NSA problems are management (of people), rather than technology.


Interesting use of language by an insider. Must think of STU-III as a data modem.


Posted in dunno

Ping Federate ws-trust to Office 365–attempt #1

Building on the previous post, let’s finish evaluating Ping Federate and Office 365 by looking at how the ws-trust component works. In short, we will deploy a ws-trust STS and Outlook 2010 (trying to be more like the typical office worker).

We fill out the ws-trust configuration parameters, thereby augmenting the SP connection that already existed. The pertinent instructions seem to be:


(“” in our case, omitting the https:// scheme AND delimiter)

Before saving we see:




Then we installed Outlook (on a host that is NOT a member of a domain, note):


Next, we ensure that the Office 365 account is not only “registered” (using the New-MsolUser cmdlet) but also has a “subscription” attached to it (so the SP services work, in addition to the FederationGateway). For this task, log on as the main domain administrator (, in my case), and use the web interface to assign a “license”:


We tested Outlook Web Access, using the menus. Next, we want Outlook (thick client) to work, so we can determine that the Ping Federate ws-trust setup is correct. Thus, we view:


Hmm. By no means easy – since the autodiscover process prompts for credentials.

Sounds like another thing Ping “failed” to document – probably because it’s not seamless.

Posted in pingfederate

Protected: tiltman on secrets

This content is password protected. To view it please enter your password below:

Posted in crypto, early computing

talking Realty IDPs to Office 365 (via Ping Federate)

Ping Identity disclosures show how a ws-fedp IDP (talking modern ws-fedp with suitable claims) can talk to the Microsoft Online SSO federation gateway – a multi-tenant SAML2/ws-fedp FP that supports the 3 SPs (and any additional “enterprise SPs”) located in (or attached to) each Office 365 tenant’s cloud: Exchange, Sharepoint and Lync.


So that we can talk powershell to our Office 365 instance, we installed the usual bits of administration middleware: the online sign-in assistant service, at, and the powershell module (and supporting command window) at

Obviously we connect and can list our users:

$msolcred = get-credential
connect-msolservice -credential $msolcred


Using a GUID-making site and the advice here, let’s PREPARE TO make a new user at the SP:

new-msolUser -userprincipalname -immutableID ZWM1ZDc1YmQtNzcwMS00YjRhLWI0ZjEtMjVjOGE3MGJiYjFh -lastname Williams2 -firstname Peter2 -Displayname "Peter2 Williams2 User" -BlockCredential $false

Before we can execute this successfully, we have to establish a websso connection between the Microsoft Online FP/SP gateway and our IDP – which we prepare by creating a prototype attribute contract and endpoint at the IDP:

Attribute Contract (1 per user, in this demo!)


The endpoint’s link information (“SP connection” in Ping Federate terminology) is


This gives us our metadata link:


In terms of SSL compatibility, we maximize the probability of interworking with Microsoft Azure https clients:


To setup the other side (i.e. the SP’s IDP connection), we use the usual power shell commands – also used in the ADFS case:

New-MsolDomain -Name -Authentication Federated


Name                                Status         Authentication
                                    Unverified     Federated

To produce the DNS-centric verification information we run the following (and then run off to get a public CNAME TXT record published).

Get-MsolDomainVerificationDns -DomainName

CanonicalName :
ExtensionData : System.Runtime.Serialization.ExtensionDataObject
Capability    : None
IsOptional    :
Label         :
ObjectId      : fe8b277b-6665-477a-82a5-13d12093c912
Ttl           : 3600

Once the CNAME is published, we will run something like

$domainName = ""

$issuer = ""
$idp = ""

$brandName = "Rapattoni SSO Portal"

$cert = "MIIBzDCC…ATmgAwIBA"

Confirm-MsolDomain -DomainName "$domainName" -FederationBrandName "$brandName" -IssuerUri "$issuer" -PassiveLogOnUri "$idp" -SigningCertificate $cert

On running the above referencing our own (WIF-library) IDP, however, we cannot make this or any variant be accepted by Office (for now). It seems that Office 365 wants a real STS to exist, with a real queryable metadata endpoint (which we never implemented). So, to make some progress today, let’s just make Ping Federate work as an IDP!

Simply do what the instructions say to do (though we avoided the LDAP and ws-trust config, just using the HTML IDP adaptor for now), along with a couple of IMPORTANT caveats:

Confirm-MsolDomain -DomainName

-FederationBrandName “rapattoni”

-ActiveLogOnUri “

-IssuerUri “urn:idp:pfrapattoni”

-PassiveLogOnUri “

-LogOffUri “

-SigningCertificate $cert 

-MetadataExchangeUri “

The record resulting from the “Get-MsolDomainFederationSettings” command is:


Get-MsolDomainFederationSettings -DomainName

Note the ‘issuerUri’ field. You must NOT use the value for the name suggested in the Ping Federate documentation (as essentially it is already registered… by someone else). Use some variant, therefore! (This wasted an hour, since the Microsoft error response only said “something”… was already in use; and the Ping Federate documentation is its usual miserable self that hides or understates or gives little context on the last 1% of technical info.)

To be fair to Ping (on the issue above), there is text – comprehensible ONCE you know the whole issue! (Which is why you are reading this blog on Ping Federate rather than talking to a for-fee solutions architect, no!?)


Next! The evaluation edition of Ping Federate doesn’t seem to allow one, out of the box, to add attributes to the websso attribute contract in just any old namespace. Using a variety of namespaces for attribute types is required for Office 365 interworking, of course!

Let’s use the know-how documented deep in the help file:


The file noted there starts out life as:


And so we amend it, logically, so that the console will also expose the “” namespace when you add named attribute types (with unusual namespaces) to the attribute contract:


Thus we can now complete the websso attribute contract definition (also specifying some hard-coded values, note).


We are almost ready for a trial against Office! – to be invoked using – but, though we have a “connected domain”, we have yet to create a user in said domain as an Office-licensed user. And we have yet to update THAT record to denote its (base64-encoded) GUID… in its immutableID field!

We do that, using the command given (way) above:


Typing at the prompt induced the Microsoft sign-in controls to show the following


which gives… (after some fiddling with local DNS resolution, within my naming domain):


leading to success (I think!)


which mechanism we get to see in


OK. Tomorrow, we get to make our own IDP work! Well done Ping!

Posted in ADFS, pingfederate, SSO | 1 Comment

ASP.NET and PingFederate OAUTH2 Authorization Service

Let’s make ourselves a simple OAUTH-friendly testing environment. Install the free Microsoft Visual Studio for Web evaluation package and create a web project (in C#) using the web-forms technology. To arm the raw OAUTH/OpenID capabilities of this site’s login flow, simply uncomment the Google provider in the AuthConfig.cs file, as shown. Run it to make sure that at least it all works – out of the box. Now we know that the dotNetOpenAuth framework is up and running, working with at least the built-in “google” provider.


This gives our users a login challenge experience:


Having completed the Google side of the user-challenge process, Google’s IDP sends back a result message to our site. This then does local “account linking” of the Google name to a local account (whose value just happens to be the name from the first IDP to perform this process). Later, we will see ourselves binding a second name to this local account – authorized by the Ping Federate server using OAUTH2 protocols and procedures.


(Note we had to run the test twice, while the mdb account-linking db got set up properly.)

This work merely sets the stage. It enables us to add in our own OAUTH2 provider class – and use it to talk to Ping Federate’s authorization-server endpoints (rather than Google endpoints). Obviously, we want Ping Federate endpoints to play the roles of Authorization Server and the (access) token-issuing STS. The former gives out the so-called “authorization_code” as a result of the user’s expression of consent, and the latter mints a “bearer” access token for the site that can show it has the authorization code. Given a bearer token, one logically makes API calls citing the token. In the case of Ping Federate, one can make a particular “api” call – a second call to the STS inviting it to take the bearer token as input and issue in response a new, more “informative” token – with the various attributes of the user’s record.

Posted in pingfederate | 1 Comment

from sboxes to chaotically arranged fixed points

The linked piece reasons about DES starting with Shannon’s abstract rotor machine. It then goes on to use Feistel’s original “step” cipher concept to consider what a data-flow machine really does.

Let’s summarize what we have learned about coding theory, assuming that the topics relate to the ECB mode of DES. First let’s look at LDPC, then Turbo, and then Feistel/DES ciphers.

We know from Turing/Newman that folks took “1930s decision procedures” concerned with abstract detection and decoding and turned them into the likes of Colossus-supported rectangle convergence and Banburismus (for solving the naval enigma indicator system). In each case, math-architecture notions of ‘measure’ were at the heart of the thinking. This mixed 1920s thinking about the math of early nuclear physics with the then-ongoing move towards formalizing the mechanics of proof in math, via logic. Of course, the limits of logic became apparent, as did the application of limit theory to the updated forms of Newton’s methods of doing calculation by approximating functions with series.

With measure theory we see that any space can have both intrinsic and extrinsic information – or internal and externally-defined coordinate systems upon which the statements of motion are fixed. And one might have multiple such descriptions, much like a building has internal dynamics (of its steel frame), internal pressure (of its cooling system), external volume (on the sidewalk) and area (on the skyline). In coding theory, we have multiple measures. What is more, the measures combine to define new measures that leverage chaotic ideas. In particular, with the likes of Turbo codes, two generated sequences measuring a common set of randomly permuted information bits may interact WITH EACH OTHER to produce a type of “dynamic measure” that can “calculate” – as the measures, acting as inner Phi and Chi streams, converge to decode an outer stream (and correct some errors!)

Ok! So in the LDPC world we have a “re-application” of Colossus sum/product thinking, in which the outer “information bit” measure interacts with the outer “parity bit” measure, each on one figurative side of the permutation matrix (the Colossus-era covariance rectangle). By passing messages in much the same way as a centrifuge cascade refines gas to produce nuclear fuel, the mutual information is refined by the diffusion process of “two interacting measure systems”.

With Turbo codes, we see a more elaborate but similar architecture, in which two shift registers lie at the sides of the permutation matrix. Rather than the matrix being the ping-pong table as in LDPC, now we have a two-level process in which each side’s shift register implements its own message-passing run (using BCJR, vs sum/product) before the information-bit side (say) passes the result of its refining operations across the table to the other side, which also runs BCJR, but with drivers from the parity-check bits.

Now, Feistel’s original data-flow machine also showed a very Turbo-code-like nature. Rather than have parity bits driving a BCJR shift register, he has key bits wander through a shift register. Rather than have a sparse adjacency matrix with cyclic-code permutation blocks as in LDPC, his data-flow machine used the principles of sboxes and mod-2 bit flipping to create a large random permutation whose subspaces would, each round, be used to help refine the production of an encoded plaintext.

Ok, so the SP network and the one-way function are really now helping us understand DES. It’s better to think in terms of chaotic processes, in which the DES output is the code produced once the algorithm converges to a fixed point, due to the data-flow machine producing custom fixed-point attractors. Looking at how DES derived from the Feistel step cipher, and given the way that LDPC/Turbo codes work when decoding, we can see the principles of ciphering (vs decoding).
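
For the record, the Feistel step being generalized here, as a sketch (textbook form; the F is a stub, not DES’s actual round function):

using System;

// One textbook Feistel round: swap halves, folding the round function in by XOR.
// Iterating this is the "data-flow machine" whose fixed-point behavior is at issue.
static void FeistelRound(ref uint left, ref uint right, uint roundKey, Func<uint, uint, uint> F)
{
    uint next = left ^ F(right, roundKey);
    left = right;
    right = next;
}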

Ok, it now makes sense when an NSA type says that DES is unique in using sboxes. It’s just that the s-box is a novel incarnation of wider principles, one that is very much tied to binary boolean algebra. It’s one that nicely shows how to parameterize the chaotic process to create the fixed-point attraction basins one desires – as one creates a map of the surface of the code.

Posted in crypto

Enterprise Application for Office 365

There are quite a few moving pieces in the OAUTH-to-Office-365 experiment. But one is new: the ability to install a new SP (on a third-party web host) that cooperates with the 3 standard SPs: Sharepoint, Exchange and Lync. After all, when an Exchange email shows content with a tel: address or a Sharepoint document, one wants an SSO experience between them all, no?

And similarly, when you augment that backoffice with your own “enterprise application”:



Posted in oauth

Aiming for PingFederate to Office 365 Exchange APIs, via OAUTH tokens

Now, the reason we are interested in either ACS or Ping Federate’s OAUTH2 support is that we want to use the new Office 365 Exchange client APIs – which are apparently OAUTH2-guarded these days.

It seems sensible to create an Office 365 tenant – giving us a set of API endpoints with which to test the results of Ping Federate’s work.

Will it work?  Does this pattern have the right flows? Does OAUTH2 really induce compatibility and interworking?

Let’s find out! The goal is to invoke Sharepoint Online APIs and/or Exchange Online APIs using Ping Federate as the Authorization Server and (JSON) token-minting site.

Sign up for Office at …! I can now advise (rewriting this after some weeks) that one SHOULD use the Enterprise trial option rather than the Small Business edition option shown below:



Posted in oauth, pingfederate | 1 Comment

Comparing Ping Federate v6.10 OAUTH features with Azure ACS v2

Back here we reported on how we used Microsoft Azure’s ACS OAUTH2 feature set. We were able to write a web client and web service that generated and consumed OAUTH tokens. In support of these entities, an Azure ACS tenant did “middleware work”… implementing the so-called “authorization_code grant”.

The authorization code grant assumes a world of devices supporting an internet browser and “apps” – downloaded to augment the platform. The grant type is specifically involved in the task of “provisioning” apps – ensuring that the downloaded app also has the necessary security and personalization configuration, built from information supplied by the user via the browser experience that induces the initial download of the (well-provisioned) app. Downloading may give the user a new app on the platform and the code, to be entered when the app first starts. With that code, the app is able to complete its provisioning and access user information on remote web sites.

The user – someone with an account at the vendor – visits the vendor’s website with a browser, wanting “to connect up” said account with their IDP-managed membership record. This simply avoids account proliferation, and eases lifecycle management of accounts (and the first-time provisioning of those non-browser apps discussed above).

The (app) vendor’s enrolling and thereafter “supporting” website should also become entitled, post connect-up, to make web-service calls to the IDP… to pull or update the remote membership record. To authorize this ongoing server-to-server hookup, OAUTH2 gets involved – delivering a consent-UI flow using web pages and a browser, asking: do you want your IDP membership record to flow to vendor X, and should the IDP assign some of your read/write powers on that remote record to the vendor? So, both inter-site connections and authorization_codes are delivered so that sites can support each other when delivering data to apps – which provision themselves, given the code.

If we recall, we created in our Azure ACS tenant a so-called “service principal” record for the vendor – known more generally as “client management”. That is… we configured a per-vendor clientid/password pair. We also created an SP in our ACS tenant – arming and deploying an STS that will mint “access” tokens – in some or other blob format. Of course, it will be the authorized vendor – working to add some kind of value to the IDP’s membership record – who uses this STS to convert the “authorization_codeword” minted by the consent process into the first signed token; upon its return by the STS, the vendor will thereafter attach it to its server-initiated web calls to read/write the membership record.

We also recall adding a consent.aspx page to the membership-management component of our IDP. This delivered the one-time “do you want to connect up…” GUI experience to the user… as he/she goes about connecting up the vendor site with the IDP membership site’s oauth-guarded data service. And we recall seeing how, upon gauging user consent, the consent.aspx page would itself make a web call to ACS – to create a “delegation record” (recording the user’s connecting-up assent). The result from ACS was the one-time “authorization_code”, minted specifically for this newly minted delegation record. It is passed back to the vendor site via various browser-based redirects; the vendor-site flow in charge of the user’s “connect-up experience” can swap it for a real token by calling the SP/STS token-minting endpoint.

Let’s try and make Ping Federate server do the same thing. Then we can compare our integration experiences.

First, install the Ping Federate server and its OAUTH playground, per the instructions. Once you have installed the license, you can change the console uid/password. Now launch the OAUTH playground website (also hosted on the same jboss host as Ping Federate itself) and use the settings button on the page (1) to auto-configure your shiny new Ping Federate OAUTH configuration “for a vendor”:


We see the result of the site invoking web services for remote OAUTH client configuration/management, using the admin/2Federate credentials required for said calls. This populates 6 records, as shown. We are interested in just the “ac_client” vendor – since it showcases the equivalent of the authorization_code grant work from our Azure ACS exercise.

Back at the Ping Federate console, we see what the playground site just did, having got to this screen from the (new to Ping Federate) console:



OK. So we have accomplished the equivalent of creating a service principal in an Azure ACS tenant for a new vendor, known as “ac_client”. Whereas in the Azure ACS world we used the ACS management API to register the uid/password/redirect, here the site’s configuration pages used the Ping Identity API to Ping Federate instead. (As with ACS, one can alternatively use the management console to manually fill out a form communicating the same information fields.)

Note that configuring access controls on the API port (for remote client management) requires one to set up a validator (a particular repository of vendorid/password pairs):



noting how the above multi-screen differs from the older concept of “application authentication” for the other web-service ports offered by Ping Federate!


This leads us to understand the core configuration screen for the new OAUTH2 authorization service component of Ping Federate:


At 1, we see the management concept of “scopes” being configured – defining a description for the unnamed/default scope, and defining additional “named scopes” relevant to the IDP-managed resources. Remember, these are like the custom rights defined in ADRM – saying, perhaps, that you can or cannot print or forward… this kind of marked paragraph.

At 2, we see how to configure a couple of behavior parameters of the Ping Federate authorization-code element of the service: how long a code will be good for (i.e. the deadline by which the vendor needs to cite it, to get back the first token from the associated STS), and how the code itself is to be generated (to address code spoofing/guessing).

At 3, we see some advanced features we can come back to… MUCH later!

Let's head back to the playground website, noting that we need just a little more per-flow setup in order to see something happen. We are interested in invoking the authorization code demo! To make the simulation of the vendor work, go back to the main screen and choose the authorization code link (and read the tutorials too, if you wish).


At 1, above, we see what we recall from our own OAuthClient provider class work (that we plugged into the ASP.NET OAUTH framework for vendor websites). We clearly see the vendorid and the indication of "code" – inducing the Ping Federate-hosted OAUTH2 authorization service to invoke the "authorization_code" flow (vs alternatives).

At 2, we note first that these 2 parameters are optional in the Ping Federate world (whereas they were not in the Azure ACS world). ACS required the caller to cite the redirect URI address to the authorization server's consent page, requiring that it align with the address recorded in the service principal record.

At 3, we see the ability to request that a particular list of (pre-registered) custom named scopes be associated with the authorization code "grant" (as finally visible in any token minted by the STS).

And at 4, we see the anti-CSRF support (that caused us so much pain when writing the OAUTH2Client class, initially). Fortunately, what we did back then to make an OAUTH2 client provider will serve us here, too!

At 5, we see some PingFederate "value-add" – in which, should no user session exist at the authorization server during the consent page flow, a websso request can be sent off to the desired IDP to induce an authentication session at the authorization server itself (acting as a pseudo-SP). And, as is typical in PF land, one gets to indicate particular querystring parameters on the URI that will induce Ping Federate to initiate a websso flow with the IDP: identifying the IDP connection (of this pseudo-SP) and the IDP adapter that the IDP should use. The whole request is sketched below.
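Here is a minimal sketch of the authorization request the playground page builds, mapped to the numbered items above. The host/port and parameter values are illustrative; the parameter names are the standard OAUTH2 ones, and /as/authorization.oauth2 is my understanding of the PF authorization endpoint:

    // a sketch of the authorization request, keyed to the numbered items above;
    // host, port, and values are illustrative placeholders
    using System;

    class AuthzRequestSketch
    {
        static void Main()
        {
            string url = "https://pingfed.example:9031/as/authorization.oauth2"
                + "?client_id=ac_client"                  // (1) the vendorid...
                + "&response_type=code"                   // (1) ...inducing the authorization_code flow
                + "&redirect_uri=" + Uri.EscapeDataString(
                      "https://vendor.example/callback")  // (2) optional in PF, required by ACS
                + "&scope=read_profile"                   // (3) pre-registered named scope(s)
                + "&state=anti-csrf-nonce";               // (4) the anti-CSRF state value
            // (5) PF value-add: extra querystring parameters naming the IDP
            //     connection and adapter would be appended here to force websso.
            // On success, the code comes back on the redirect_uri, with the
            // same state value echoed for the vendor to check.
            Console.WriteLine(url);
        }
    }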

Note how PingFederate's otherwise nice, modern capability to use the authentication-context switching feature of the SAML2 protocol – as a means of choosing IDP adapters – is missing here. Hmm.

Anyways, since no such values are supplied by this particular playground page, the default IDP connection is applied – as we see when we hit the go button! A fiddler trace shows the request being sent and a login challenge page being rendered. Note there is no sign of the usual ping URL initiating websso on the IDP.


Once the user authenticates to the IDP via websso, we see the expected post-authentication consent screen being rendered. Once the user consents to continue, the authorization process returns the "code":




Fiddler trace:


Follow-up token minting can then be completed by the vendor site. In this case, the STS correctly refuses to mint one, since we supplied the code too late, making it an "expired" code:
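A sketch of that follow-up token-minting call and of the refusal, assuming the standard OAUTH2 error shape; the endpoint path and values are illustrative placeholders:

    // a sketch of redeeming a stale code and reading the STS's refusal;
    // endpoint and values are illustrative
    using System;
    using System.Collections.Specialized;
    using System.IO;
    using System.Net;

    class ExpiredCodeSketch
    {
        static void Main()
        {
            var form = new NameValueCollection
            {
                { "grant_type",    "authorization_code" },
                { "code",          "the-stale-code" },    // cited too late
                { "client_id",     "ac_client" },
                { "client_secret", "vendor-password" }
            };

            using (var web = new WebClient())
            {
                try
                {
                    web.UploadValues(
                        "https://pingfed.example:9031/as/token.oauth2", form);
                }
                catch (WebException e)  // the STS answers 400 Bad Request
                {
                    using (var r = new StreamReader(e.Response.GetResponseStream()))
                        Console.WriteLine(r.ReadToEnd());  // {"error":"invalid_grant", ...}
                }
            }
        }
    }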


With this we can play some more tomorrow, perhaps seeing if we can replace the playground site. Our goal, from here, should be to apply our own OAUTH2Client provider for ASP.NET instead.

Posted in oauth

Looking at the Ellis Paper as if enigma


Let’s look at this through the eyes of someone steeped in the doctrine of rotor machines.

To a person of Ellis' generation, the table is "generated by" the rotation of the rotors – creating the infamous Friedman square. Just as a rotation takes the parity vectors and creates an orthonormal basis for linear approximations, so rotating an enigma wheel "vector" leverages conjugation to create the diagonally-polarized rod square – and its inverse, moreover.

Assume Ellis has been at GCHQ since c. 1960 (since he comes across as an "old-timer", speaking in generalities and in what were once cutting-edge demonstrations of academic math expertise). Thus, he is fundamentally inculcated with the theory of rotor machines and the class of cryptosystem that derives from their general use, and he speaks in terms of what he knows (generated tables). He probably has lots of WWII-era background too, knowing that for every "rod square" there is its inverse; it's part of the DNA of rotors.

Now let's say one is using a 1950s sigaba machine, primed with the daily key. The operator has proven the settings correctly entered by enciphering A 13 times (to prove the right "signature" ciphertext is output, as matched against the pre-computed signature on the daily cheat sheet). The operator now formulates a random number (coin tossing…) and communicates the ciphered version of this number to his peer, leveraging the now-synced sigaba.

Of course, this was all standard key management protocol – and analysts had been studying such "indicator protocols" – or key agreement methods, to use modern parlance – for 30 years. Ellis and co. are perfectly well aware of the American practice of using (orthogonal) latin-square cards for indicator protocol purposes – to authenticate the terminals/stations first, before engaging in the key agreement protocol. "Protocols" are part of the 1960s DNA of cryptosystems – particularly at GCHQ, with its institutional memory of the WWII-era hangups over the naval enigma indicator protocol's relative strength (compared to airforce and railway enigma) and its ultimate weakness (to probabilistic oracles).

OK. Using M2, the originator's plaintext then acts as the classical key, which is enciphered using a common (non-)secret value x. Clearly M1's (1d linear) function is generating a parameter that generates a derivative of M2. In rotor terms, assume x is used simply to *dynamically* move the "tire" of the enigma wheel – changing the sub-space mapped by the M2 wheel.

Assume M3 is just a reverse rod square, generated by rotors moving in the contrary manner to the encrypting rotors. These rotors' tires are offset by x, too – so they match the subspace of the enciphering machine.

Having agreed the subspace, one might be tempted to then start enciphering single characters. But that's the novelty: the plaintext used to generate the subspace agreement has already been communicated.

But, clearly M1 has to have a definition that is known to M2 and M3.
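For intuition only, here's a toy sketch of that reading – a small substitution table standing in for M2, its inverse standing in for M3, and the non-secret x offsetting both "tires" so decipherment lands back in the agreed subspace. This merely illustrates the rotor analogy; it is not Ellis's actual construction:

    // a toy table-and-inverse illustration of the rotor reading above;
    // NOT Ellis's construction, just the offset/inverse mechanics
    using System;

    class RotorTableSketch
    {
        const int N = 8;
        // one row of a toy substitution square, standing in for M2
        static readonly int[] M2 = { 3, 7, 0, 5, 1, 6, 2, 4 };
        static readonly int[] M3 = new int[N];  // the "reverse rod square"

        static void Main()
        {
            for (int i = 0; i < N; i++) M3[M2[i]] = i;  // build M3 as M2's inverse

            int x = 5;  // the common, non-secret value: it offsets the "tire"
            for (int p = 0; p < N; p++)
            {
                int c = M2[(p + x) % N];      // encipher in the x-shifted subspace
                int d = (M3[c] - x + N) % N;  // decipher with the matching offset
                Console.WriteLine("{0} -> {1} -> {2}", p, c, d);  // d == p
            }
        }
    }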


Now, an Ellis will also have known 1948-era methods of quadrature encoding – and the process of using large (hard to invert) matrices (aka tables) to cut down the decoding time needed to find the conditional probabilities by a "probability receiver" – one that expects to encounter noise due to multi-path propagation, interference, etc. Such tables are generated on a computer, with presumably computers also being used to perform the matrix calculations. (It doesn't seem beyond belief that a tape-based matrix calculator was being used, once some other computer had generated the tape bearing the table.) The "key" is the matrix.

Posted in crypto, enigma

Cocks speaking on NSA/GCHQ hiding drivers


Now, what Cocks does NOT reveal is that he went up to Cambridge in 1987 (to engender the next generation of spook techs, WWII-style).

I got to study the output of that “competition” between the students. This then gets tied up with IRTF, the MIT/RSA PEM project, and the DRA sponsoring UCL-CS to host “one of those students”.

It would be interesting now to see how much certs were involved in the Brent telephone design. I have a good mental model of how the STU-III algorithms and processes work (but keep shush, out of respect for my American hosts). They can reveal that when *they* feel it fit to do so. (Revealing that technical secret goes beyond the good host/good guest paradigm!)

Posted in crypto, early computing

Deleting HOST/[host] SPNs; addressing the last 20%

Deleting a host's HOST/ SPN from the CN=PING (computer) account is probably not a good thing to do, OPERATIONALLY. Doing it to make Ping Federate interwork for the FIRST time (so you don't go nuts) is fine. Once you know your Ping Federate setup is fundamentally sound (and you are not going nuts because, say, some US export issue is making things fail), you can then consider how to deploy a sound INSTALLATION.
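For reference, a sketch of the setspn commands involved (run with domain-admin rights); the CN=PING account name comes from above, while the host name is a placeholder:

    REM list the SPNs currently registered on the PING computer account
    setspn -L PING
    REM delete just the offending HOST/ entry (host name is a placeholder)
    setspn -D HOST/pingfed.corp.example PING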

In engineering, we distinguish between unit tests (in an idealized functional testing theatre) and system tests (in the intended deployment environment, or some close simulation thereof). I happen to be mostly involved in the former – which is to show it COULD work. Of course, the idea has to be sufficiently well researched that it's not going to founder… in the actual deployment theatre.

In my world, I experiment with the latest tools – and the unit testing operates under modern assumptions. But we deploy on ancient hosts, almost at end of life. Thus, whenever I counsel X publicly, don't forget that there is a private conversation going on, too – with the person paying for the advice. That is what takes a demo and turns it into a production capability.

Don't get hooked on the first hit. At the same time, since the American scene is designed to overwhelm the foreigner (and induce a slave-like response), don't forget to play the game. Use technology from 10 years ago (which no longer has much value to the hot-tempered, heavy-on-the-mindshare-marketing Americans). Go back and now compete by deploying 80% of the features for 1% of the cost. You will generally find that no one uses the top 20% of the features anyway.

So, be careful. The first hit is always free. But it has to be used right, to get long-term health care. To get care that is also affordable, and doesn't crush one, you also have to understand how the game is played on the cost side of the cost/benefit curve. So play it, intelligently! And play to win!

Posted in rant