Access Token “validation” – by Ping Federate acting as Resource Server; ASP.NET account linking

We extended our ASP.NET OAUTH2 provider so that its GetUserData method, called as part of ValidateAuthentication, talks to the Ping Federate authorization server’s STS endpoint, asking it to validate the access token received earlier (and presumably, in a full deployment, forwarded via a web service call from the resource “client”). In response, the STS presents a new token type (Ping Federate proprietary), which contains a field called (by happenstance) “access token”. Parsed as a JSON object, it’s an array of name/value pairs – sourced ultimately from the Ping Federate access token attribute mapping screen.

image
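For the record, a minimal sketch of the idea – assuming a hypothetical class name (PingFederateClient), eliding the client authentication to the STS, and using the validate_bearer grant URN from the Ping Federate documentation of the day. The real class derives from dotNetOpenAuth’s OAuth2Client, whose abstract GetUserData this stands in for:

using System;
using System.Collections.Generic;
using System.IO;
using System.Net;
using Newtonsoft.Json.Linq;

public partial class PingFederateClient
{
    // Overrides OAuth2Client.GetUserData in the real class: swap the access
    // token at the Ping Federate STS for the proprietary "validation" token,
    // then flatten its JSON name/value pairs for ASP.NET account linking.
    protected IDictionary<string, string> GetUserData(string accessToken)
    {
        var request = (HttpWebRequest)WebRequest.Create("https://localhost:9031/as/token.oauth2");
        request.Method = "POST";
        request.ContentType = "application/x-www-form-urlencoded";

        string body = "grant_type=urn:pingidentity.com:oauth2:grant_type:validate_bearer"
                    + "&token=" + Uri.EscapeDataString(accessToken);
        using (var writer = new StreamWriter(request.GetRequestStream()))
            writer.Write(body);

        using (var response = request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            // The response nests a field (happily named "access_token")
            // holding the attribute-mapped name/value pairs.
            var json = JObject.Parse(reader.ReadToEnd());
            var userData = new Dictionary<string, string>();
            foreach (var attribute in (JObject)json["access_token"])
                userData[attribute.Key] = (string)attribute.Value;
            return userData;
        }
    }
}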

This allows us – finally! – to say we have built an ASP.NET OAUTH2 provider that talks to Ping Federate, since we see the final attempt to do some account linking based on the message exchange:

image

image

Well, that was not too bad. About 5 hours, all in all?

Well done, Microsoft (ASP.NET team). Well done, andrew@dotnetOpenAuth. And well done, Ping Identity (for showing what to do). And well done me, for being a second-class type who won’t give in to the American putdown labeling attached to me (and 5.5 billion others, who are not “exceptional”).

Posted in pingfederate

ASP.NET OAUTH2 provider to Ping Federate’s Authorization Server – part 4

In the post “ASP.NET OAUTH2 provider to Ping Federate’s Authorization Server – part 3” we made an ASP.NET website, acting as SP, initiate a request against an OAUTH authorization service. Our provider class got back the authorization code – a one-time code with which to go get the very first access token. Getting the latter is now our mission:

image

image

There is not really a lot to say… except do the above. All I changed from the ACS implementation was the endpoint (now /as/token.oauth2); and I added the “ignore SSL cert validation errors” hook (because Ping Federate is still operating with a self-signed SSL server cert).
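A sketch of those two changes (class name and the vendor secret are placeholders; the POST shape follows the standard authorization_code swap, and the validation callback is a dev-only hack for the self-signed cert):

using System;
using System.Collections.Specialized;
using System.Net;
using System.Text;
using Newtonsoft.Json.Linq;

public partial class PingFederateClient
{
    // Dev-only: trust ANY server certificate while the eval install still
    // presents a self-signed SSL cert. Never ship this.
    public static void IgnoreCertificateErrors()
    {
        ServicePointManager.ServerCertificateValidationCallback =
            (sender, cert, chain, errors) => true;
    }

    // Overrides OAuth2Client.QueryAccessToken in the real class: swap the
    // one-time authorization code for the first access token.
    protected string QueryAccessToken(Uri returnUrl, string authorizationCode)
    {
        using (var client = new WebClient())
        {
            byte[] response = client.UploadValues(
                "https://localhost:9031/as/token.oauth2",   // was the ACS endpoint
                new NameValueCollection
                {
                    { "grant_type", "authorization_code" },
                    { "code", authorizationCode },
                    { "redirect_uri", returnUrl.AbsoluteUri },
                    { "client_id", "ac_client" },
                    { "client_secret", "<the vendor secret>" }, // placeholder
                });
            var json = JObject.Parse(Encoding.UTF8.GetString(response));
            return (string)json["access_token"];
        }
    }
}

IgnoreCertificateErrors() would be called once at startup (e.g. in Application_Start) while testing against the self-signed endpoint.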

Obviously, the next stage is to do the same again, this time using Ping Identity’s proprietary validate_token access grant type. In the callback framework that means calling

image

Posted in pingfederate

ASP.NET OAUTH2 provider to Ping Federate’s Authorization Server – part 3

In the post “ASP.NET OAUTH2 provider to Ping Federate’s Authorization Server – part 2” we made an ASP.NET website, acting as SP, initiate a request against an OAUTH authorization service.

By suitable selection of the port in the code, we get a round trip in which the response is delivered to our handler, as desired. It has all that it needs to be handled by the waiting provider (instance). Obviously, it also has the authorization code, as shown.

image

Now, the return handler expected to see a /oauth2 component in what ASP.NET programming calls the PathInfo property. In this build, for .NET 4.5 and perhaps a change of ASP.NET version, the “page” identified by the returnURI supplied by the dotnetOpenAuth/ASP.NET framework on this host does not, by default, carry the .aspx suffix. Thus the PathInfo (the path fragment after the “page”) is not determined by the pipeline upon response processing (since the paradigm has changed). To fix this (in a hacky manner), we simply amend the redirectURI in our class to add .aspx to what we assume to be the (old-paradigm) page name – and update the corresponding Ping Federate vendor record to include the .aspx component. Obviously, you would do better in production code!
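The hack, sketched (the page name is taken from the redirect URI we registered with Ping Federate; purely illustrative):

using System;

static class RedirectUriHack
{
    // Re-insert ".aspx" so the ASP.NET pipeline parses the trailing
    // "/oauth2" as PathInfo again (old-paradigm page addressing).
    public static Uri AddAspxSuffix(Uri returnUrl)
    {
        var builder = new UriBuilder(returnUrl);
        builder.Path = builder.Path.Replace(
            "/Account/RegisterExternalLogin/",
            "/Account/RegisterExternalLogin.aspx/");
        return builder.Uri;
    }
}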

Now we can start to debug our response processing! What our (bug-fixed) initial response handling does is simply unpack the OAUTH-style state into the form that dotNetOpenAuth/ASP.NET wants it handled in.

image

Finally, the method in our own provider class for verifying the authentication (response) is invoked. It duly invokes its base class method, which in turn calls our subclass’s appropriately named QueryAccessToken method. Its job will be to exchange the authorization code for a token, minted by the STS feature of Ping Federate!

This isn’t hard, is it!!

image


image

Let’s do token minting (and then attribute collection) later!

Posted in pingfederate

ASP.NET OAUTH2 provider to Ping Federate’s Authorization Server – part 2

In the post “ASP.NET OAUTH2 provider to Ping Federate’s Authorization Server – part 1” we got ourselves a basic implementation of a provider class for OAUTH2. It cooperates with a set of ASP.NET page handlers that build upon the events and messages of the OAUTH2 authorization_code procedure. In short, those handlers add value-added “request formation” and “response handling” procedures to the OAUTH protocol, where the latter involves lots of post-protocol account linking and database work.

We invoke the OAUTH process by clicking the button that chooses our “Ping Federate” server as the authorizing/authenticating partner of this SP website.


image

The page handlers invoke the dotnetopenauth framework – which finally calls the first method in our provider class. The output of this method is the returnURI address, with suitable local parameters. We see two local parameters (at 1) added to tie the incoming response back to the correct outstanding protocol state block (waiting for responses from Ping Federate). They identify the class of provider and the unique instance/session identifier, or “SID”. The handling at 2 allows this site to be a protocol bridge, where the returnURL parameter stored in a cookie (storing this address “even lower” in the stack of return values) will allow the indicated OAUTH response message handler to in turn invoke the next outstanding layer of protocol activity, retrieved from the (cookie) stack – i.e. perhaps generate a ws-fedp response.

Rant alert: for privacy reasons, this information is hidden from the nosy American IDP and its (probable) policy of sharing information on who is associating with whom with the NSA or DHS, or some contractor proxy in the UK or US (to skirt local laws). Sigh (at the duplicity… of IDP vendors).

image

Note that the enforcement of port 80, above, may need to be some other port value – depending on how you debug or deploy. You also may not DESIRE to downgrade from an https indication…

The next step is to learn the address of the OAUTH authorization server (i.e. a Ping Federate endpoint). This happens in the GetServiceLoginUrl method. In short, it formulates the request message to the OAUTH authorization service, moving pseudo-state information FROM the returnURI to its correct position in the request. This is important – especially when working with conforming OAUTH protocol servers (since they handle the latter form of the state information correctly, and typically ignore forms that attempt to leverage parameters on the redirectURI).

image

https://localhost:9031/as/authorization.oauth2?response_type=code&redirect_uri=http%3a%2f%2flocalhost%3a80%2fAccount%2fRegisterExternalLogin%2foauth2&client_id=ac_client&state=__provider__%3dPF_localhost-9031%26__sid__%3d8625dfade6204c19b8e54c7a5dbae4f8
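A minimal sketch of the method that assembles that URL (class and endpoint names are assumptions; in the real class this overrides dotNetOpenAuth’s OAuth2Client.GetServiceLoginUrl):

using System;
using System.Web;

public partial class PingFederateClient
{
    // Build the authorization request shown above: strip the
    // __provider__/__sid__ pseudo-state off the returnURI's query and
    // carry it in the OAUTH "state" parameter instead.
    protected Uri GetServiceLoginUrl(Uri returnUrl)
    {
        // e.g. "__provider__=PF_localhost-9031&__sid__=8625dfad..."
        string state = HttpUtility.ParseQueryString(returnUrl.Query.TrimStart('?')).ToString();

        var builder = new UriBuilder("https://localhost:9031/as/authorization.oauth2");
        builder.Query = "response_type=code"
                      + "&redirect_uri=" + Uri.EscapeDataString(returnUrl.GetLeftPart(UriPartial.Path))
                      + "&client_id=ac_client"
                      + "&state=" + Uri.EscapeDataString(state);
        return builder.Uri;
    }
}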

Since Ping Federate is programmed to only return messages to registered returnURI addresses, we must update the vendor’s list (of potential return endpoints):

image

Note how, in the GetServiceLoginUrl call, we actually altered the return URI prepared by the earlier method, adding /oauth2 as a PathInfo element. And note how we register this in Ping Federate, too. Between the registration, the indication on the request, and the additional signal represented by the “oauth2” PathInfo, state handling can be performed upon return. We see how this happens, later.

We now see request handling go all the way through to the various checks, so that an IDP challenge occurs. Clearly we are doing SOMETHING right.

image

We will address the second half of “response processing” in the third part of the series.

Posted in pingfederate

DES initial/final permutation. A rationale?

image

http://www.schneier.com/paper-blowfish-fse.html

Presumably, IBM had to consider the same argument – given the amount of silicon taken up.

Perhaps we see that there were criteria other than cryptographic strength. There are also cryptographic CONTROL-plane aspects one wants to enforce. And one may want to make a stacked die (even back in 1975) – assuming that the same silicon is also going to be driving a crypto-search machine (not an encipher/decipher function machine). Remember, IBM’s largest customer was the NSA – for the language/sorting function of the 1970-era agency. It’s only by the 1990s that SUN gets a hold.

It also occurs to me that one really doesn’t want to use DES ECB with a source that has any language characteristics (i.e. is other than IID). The original (1977-era) model of (triple) DES ECB for 57-bit (yes, 57) key wrapping is one use; as a cipher, one has a different use – given that initial permutation.

Posted in DES

hagelin cipher (1975) and linear programming

Whether a 1950s-era Hagelin cipher is implemented on a lug/rotor contraption or a hand-held calculator makes no difference. It’s not chaotic, and does not seek to base its strength on the notion of the distinctiveness of two evolutions of probability densities. Rather, it’s very linear; and therefore susceptible to linear programming attacks engaged in “approximation”.

And so when we see the NSA (Cryptolog) phrase “manual cipher”, read Hagelin machine (circa 1970/1980, not 1945). Believe the company when it says its protections were fulfilled as designed – which is not to say that the tweaks made were not sent to NSA; or that the micro was not irradiated by Motorola so as to induce emissions that a sensitive detector could pick up (later).

To be fanciful, imagine that the Swiss calculator (with chip dies made in the “custom” Siemens ICC fab) is now in one’s super-secret Iranian military intelligence facility… full of well-indoctrinated coding clerks doing best practice. But they also go home, taking “information” with them. Of course it decays pretty fast, so one needs a sensitive detector to be scanning the human “carrier” – who is a damn sight more predictable and recurring in his habits than the algorithm. Use the unsuspected “easy vector” to compromise the hard vector (pun).

More practically, assume that the micro is induced to make errors in its fetch-execute cycle which subtly alter the path of the “software” finding “lugs”, so as to leave a poker-style tell (that reduces the search, if you know the tell).

Now, at the same time one has to be careful – since all those 20-year-old calculators still exist (and our friends in Iran can go test them TODAY, with our collective and different level of appreciation and compute power). Our Iranian friends (and there are some, amongst the religious nuts of that and related regions of the planet) can even today GO BACK in time. They can find out now how hard it would be to perform the search, assuming there are tells. So the first thing to do is FIND the tell(s) – once you assume they exist.

There is nothing that the highly indoctrinated subgroup within the US hates more than to be bested in the secrecy world of double-think (even 20 years later!). If you want to “annoy” the old men, then “showcase” the spying (of 20/30/40 years ago) – leading to increased distrust (today). Now you are playing Kayla-style (fictional) politics (even with formally-irrelevant crypto history). But it still has power; and it’s entirely legitimate POLITICAL power.

image

http://en.wikipedia.org/wiki/Linear_programming

Let’s assume that the block on releasing the likes of the 1945-era Tunny report came to an end by 2000 simply because the era of reading “manual ciphers” was just over – by that point. Those who were going to be duped, were duped; and new techniques are relevant. So consider that perhaps we have the Iranian people’s revolution against the local dictator to thank – for putting people before empire. We can also assume that continuing blocks on Testery reports from 1945-ish are still in place because they hint at “the human skills” needed when puzzling over even modern cryptanalytical solutions.

Well, let’s take all that sci-fi and fantasy now, and go find some reality:

image

image

http://www.amazon.com/exec/obidos/ASIN/3540306978/thealgorithmrepo#reader_3540306978

giving us, with some anti-clockwise rotates:

imageimage

Now I wish I could remember where I was reading, just the other day, something on normed spaces, in which one wanted to know which of the linear vectors was “nearest”. This contrasted with picking the nearest point. Perhaps it was in the PCA eigenspectrum decomposition material I was reading from UCL.

Posted in crypto

crypt-oAG and Arizona

http://biphome.spray.se/laszlob/cryptoag/buehler-tape.htm

image

Hardly! I have good reason to believe that the reference is to Motorola GSTG (as it was, and no longer is). And I worked for it – and with certain persons who, 10 years before, were “fully indoctrinated”.

The fun times in that job were two visits. One went and looked at lots of old military boxes (presumably full of circuits) from the Caneware projects. The other was a closet holding the root key (phone unit) for the worldwide civilian secure phone system (the civilian version of the STU-III, with different electronics and ciphering). Next door (in a non-compartmented area) were the folks selling the Type I LAN encryptors (with the red book distributed-TCB concept) and the “secure” satellite phone system.

What was MOST interesting was the engineering method in use.

I remember meeting the general manager. It was quite fascinating meeting someone who had spent a lifetime doing military contracts (in a classified facility). I don’t think he knew that another world existed. All they really knew was that the NSA had collapsed from within; and the old world had ended.

Posted in crypto

le grand saga de crypt-oAG

http://biphome.spray.se/laszlob/cryptoag/buehler-tape.htm

Boy does the internet not make a hash of things.

First, realize that in 1990 a SUN 3 workstation just happened to have the ability to support custom boards – with lots of custom-programmed (in the VLSI sense) LUTs. Your job as a cryptanalyst, having been briefed by the cryptologicians, is to “intuitionistically” break this or that variant of a Hagelin cipher. If you have the information about the tweaks made to the general mechanism for country X, so much the better. Your job is to sit there, in a manner remarkably similar to the 1945 Colossus cryptographers (in the UK nomenclature), and let your human brain do what the computer cannot. But DON’T underestimate the combination of human-driven (computer) search! There was a time when I could tell you WHICH (concert) pianist was playing (on a major label’s recording)! I could hear the intonations of his/her favorite piano, and the way his/her particular muscles were tuned to its touch and the response of its particular mechanism, for the particular way the hand/arm would have to move – to play certain note sequences.

So assume the cryptographer can do the same thing with his/her favorite cipher “music”, as it intones its way on an FPGA “instrument”, with display on the SUN 3 framebuffer! It’s an art, similar in some ways to reading x-rays or MRIs.

Now I think about it, I had great fun also re-learning to play on a (high-end) electronic piano, with no traditional mechanism, no soundboard, no vibrating strings that buzzed at your eardrum. Playing with 16-bit polyphony was great fun too (and seemed more than the brain could deal with, back then), as was using a PC to design one’s own PCM-encoded waveforms that could be uploaded. It was also fun re-learning to “touch” a keyboard, thinking in terms of inducing the responsive computer to change the attack and decay filter for the particular waveform – quite a different musical instrument to the true mechanical piano (even though it looked like one).

Posted in crypto

ASP.NET OAUTH2 provider to Ping Federate’s Authorization Server – part 1

In an earlier post we set the stage for integration of an ASP.NET webforms application with the Ping Federate OAUTH2 authorization server. We showed that the application could talk to some other oauth-like provider (Google). Let’s get to the next stage and try to develop a “provider” class that plugs into the ASP.NET/dotnetopenauth framework for those OAUTH2 websites seeking to talk to Authorization Servers, IDPs, and graph endpoints (web services supplying user records!).

Remember! Don’t get frightened. The language of OAUTH2 is all designed to intimidate you, making it seem ultra complex such that only “professionals” and “experts” have much of a say. It’s VERY simple, in fact. Anyone can do this (even me, I hope)!

First, to our host let’s add the Windows Identity Foundation runtime (and SDK).

image

Then, to our source code project we can add a reference to the “microsoft” identity DLL, enabling us to work with “claims”:

image

Since we are starting our “OAUTH2 provider” from the class we already built to talk to the Azure ACS-enabled Authorization Server, we imported that prototype class into our project. Now we can use the associated packages. First we resolve the identity class (above), and then resolve the reference to the Newtonsoft JSON DLL.

Since access tokens from Ping apparently come in JSON form, we will add a nuget package for handling the parsing of JSON objects:

image

We also added references to two standard .NET DLLs: system.runtime.serialization and system.identity. These round out the JSON and Claims object support.

We now have a compiled provider class (not that it works yet). It basically exposes the right interface – enabling vendor-specific behavior to specialize the dotnetOpenAuth framework integrated with ASP.NET.

image

Before we can register this provider, we need to collect some data from the Ping Federate configuration: endpoint addresses for (1) the STS that mints access tokens and (2) the authorization server that administers the consent and issuing of “persistent” grants, plus the application-client’s “vendor” credentials:

image

image

This allows us to specify a registration (again noting that the data is neither perfect nor working at this point). But we are making SOME progress. We have a view of the landscape on which we can now paint the actual players.

image

The last piece we need, recalling what we learned from a similar exercise talking to the Azure ACS OAUTH2 procedures, is some code added to the suggested ASP.NET page handlers. This handles the syntax of the OAUTH2 “state” information (containing the ASP.NET provider name AND an anti-CSRF value) and the authorization code. For this we add to the handler a recognizer routine that detects the authorization_code coming back from Ping Federate. It basically intercepts the response and simply prepares the fields so that they can be handled either by the provider class or by the ASP.NET provider framework, ensuring the message goes to the correct provider class instance.

image

Note: the Convert.ToInt16 may want to be more general. For example, Convert.ToInt32() will allow for the full range of valid port numbers (which run up to 65535, beyond Int16’s reach).
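For the record, a sketch of the recognizer’s shape (hypothetical names; the real handler code differs):

using System.Web;

public static class OAuthResponseRecognizer
{
    // Spot a Ping Federate authorization_code response and unpack the
    // url-encoded "state" bundle, so the ASP.NET provider framework can
    // route the message to the right provider class instance.
    public static bool TryUnpackState(HttpRequest request,
                                      out string provider, out string sid)
    {
        provider = sid = null;
        if (request.QueryString["code"] == null || request.QueryString["state"] == null)
            return false; // not an authorization_code response

        // "state" itself decodes to "__provider__=...&__sid__=..."
        var state = HttpUtility.ParseQueryString(request.QueryString["state"]);
        provider = state["__provider__"];
        sid = state["__sid__"];
        return provider != null && sid != null;
    }
}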

Since this all compiles, we can tomorrow start to debug it all – and make it all fit with the Ping Federate way of doing things. Let’s cross our fingers and hope we can make something work that plays the role of the “ac_client” in the OAuth2Playground web app – which we have outgrown.

Posted in pingfederate

Ping Identity and OAUTH and SAML…

One of the things that just stands out about Ping Identity is the engineering. Boy, do they understand what they are doing. So it doesn’t help when they have to deal with the likes of me (who is not exactly in the first-class league). But, I does what I can. It really helps when the likes of Ping clear up OAUTH, and make it obvious that it ain’t NO different to what I has been doing for years now (with SAML2). There is just new window dressing.

So for years we have had SPs and IDPs. The user visits the SP, and no session is found; so a websso interaction then occurs (to get the assertion from which the SP session is minted). OK so far – it being precious little different to exactly what happens in SSL (when one arrives without an SSL session, an SSL handshake happens to get one, from which an SP’s cookie-session is minted!). I gets it, when not soaked in Kentucky brine.

What is the websso interaction? Well, it’s just a redirect to the login page flow on the IDP site. The process concludes when the SP receives back a posted assertion containing the “persistent” name of the party who just passed the IDP’s user challenge. (A persistent name is little more than a salted hash of your account name!) Well, only some few million ASP.NET websites do that, using forms-auth cookies! And only half of them in the last year have upgraded to the variant of the same thing, using ws-fedp websso.
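To make that parenthetical concrete, a toy sketch (my illustration, not any vendor’s actual scheme):

using System;
using System.Security.Cryptography;
using System.Text;

static class PersistentNames
{
    // Stable for a given SP (same salt), but unlinkable across SPs
    // (different salts) - the essence of a "persistent" name identifier.
    public static string For(string accountName, string perSpSalt)
    {
        using (var sha = SHA256.Create())
        {
            byte[] digest = sha.ComputeHash(Encoding.UTF8.GetBytes(perSpSalt + "|" + accountName));
            return Convert.ToBase64String(digest);
        }
    }
}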

This would be the conclusion of it all, except for the fact that in some cases we want to make ANOTHER trip to the IDP – this time to collect user attributes (in the IDP-maintained user record, for the user just named in the last interaction). For this, for years we have used the SAML2 “artifact” binding process. That is, the SP site does another redirect to the IDP (which already has a user session), which automatically returns a reference code in the URL, by return. The SP uses the reference code (known as an artifactId) to make a backroom server-server web service call on the IDP, to pick up a signed record full of attributes about the referenced/named user record. Perhaps the two servers authenticate each other, too – using “client management” (i.e. uid/password!).

Ok. So to OAUTH.

Well, it’s the same as the above, except everything got renamed. The artifactId is now the access-token “reference” – obtained as one swaps an authorization_code – obtained via a browser interaction with, urr, an IDP website that, urr, authenticates and challenges users (just as above) to mint IDP and SAML/OAUTH sessions. The “de-referenced reference” is itself an access token (of a type OTHER THAN a “reference type”), including such blob formats as: proprietary-Ping, SWT, JWT, or something else some American company invents for no other reason than to hold on to a customer base via last-mile blob-toolkit lock-ins.

Well, at least Ping taught me that all is well in OAUTH land; since absolutely nothing has really changed. With lots of US government money, folks have succeeded in reinventing the wheel, and the taxpayer gets little (or nothing) they didn’t already pay for (when the same govt paid for SAML2)! Now it’s an all-“new” American wheel, all shiny and ready for a big marketing campaign. Or not; since today OAUTH systems don’t actually do that little matter known as “interwork”. Since this is kind of important (even at my second-class status, I knows that), I might as well just use the SAML equivalent process (since at least vendor systems interwork!)

Presumably there is method to the US government madness in funding OAUTH. Probably, next generation spying or citizen surveillance issues lie at the heart of it all. All change on the blob front gives the opportunity to change the policy, with vendors “leading the charge”.

Posted in pingfederate, SAML, spying

Ping Federate’s OAUTH access token grants/types

At https://yorkporc.wordpress.com/2013/03/18/aiming-for-pingfederate-to-office-365-exchange-apis-via-oauth-tokens/ we documented the work we did evaluating the Ping Federate OAUTH support. Originally, the idea was that purchasing it would enable us to talk to the APIs of Office 365 (Exchange Online and Sharepoint Online, in particular). While having two OAUTH implementations actually interwork may (or may not) be in the plans of Ping and Microsoft, apparently it’s useless for THAT purpose, today.

So what does it do?

Well, I suppose we should look at the area where it’s most likely that interworking would fail – the format of the access token. So what does Ping Federate (v6.something) even do?

Well, we can now see, using a new run of the ac_client “vendor/client”, what is exchanged for the authorization_code granted by the authorization server:

image

If one makes yet ANOTHER round trip to the STS, using a Ping Identity proprietary grant type, one can swap the opaque token for a proprietary parseable token (with attributes):

image

Presumably, at some point Ping will sell a “token kit” that outputs the same kinds of access tokens used by others (e.g. the SWT or JWT also used by Microsoft products, based on semi-standards).

Well, I think I’m at least understanding the features and current limits of the product and its security concept – and this aligns with the rather predictable way that the OAUTH marketplace is shaping up in the US (with yet another round of proprietary tokens, and upgrade fees if you want standards and thus the ability to actually interwork!). Strange that the US just will not offer standards-based security (probably because there is then no way to make any money!)

Posted in pingfederate

Office 365 federated domains – working with metadata (mex)

At this SkyDrive file we see the public metadata that Ping Federate (eval edition) exposes to the world about our Ping Federate-implemented ws-trust endpoint. This is presumably consumed by the Office 365 process of configuring the SP-STS.

Not having used this feature before, let’s make some notes (before I forget it all):

image

https://officeping.rapmlsqa.com:9031/pf/sts_mex.ping?PartnerSpId=urn:federation:MicrosoftOnline&Type=mex

produces (partial)

image

Posted in pingfederate

woops

image

http://cryptome.org/2013/03/cryptologs/cryptolog_11.pdf

evidently not xtal balls…


image

http://cryptome.org/2013/03/cryptologs/cryptolog_34.pdf

Building interpretation is evidently turning into strange source:-

image

http://cryptome.org/2013/03/cryptologs/cryptolog_42.pdf

and the xtal balls have multiplied, as the problem got bigger…

The following absolutely captures the folks I encountered (fun… in its most strange form!). In English, they would be called trainspotters (apart from the married-with-kids bit, with MSc/MA!).

image

http://cryptome.org/2013/03/cryptologs/cryptolog_34.pdf

We even see perpetuation of the Coventry myth (total bullshit… note) in yet another pro-Anglo moment. (Do you think the average yank could give a damn about Coventry, having seen Berlin!?)

image

http://cryptome.org/2013/03/cryptologs/cryptolog_42.pdf

The censor missed one… remembering this is 1978.

image

image

http://cryptome.org/2013/03/cryptologs/cryptolog_42.pdf (wonderful article for the computer history, not dissimilar to Samet’s story)

In general, one can see why the modern NSA is so proud to publish this particular magazine – it’s a people story. But murder is in the air…


image

So what happened to Mr. X? (Or Ms. XXX, these days?)

http://cryptome.org/2013/03/cryptologs/cryptolog_47.pdf

The pictures get ever more interesting (especially once annotated, by me). Now we know the “src” of the balls on the FANX… not that the mass/volume ratio actually computes…

image

http://cryptome.org/2013/03/cryptologs/cryptolog_47.pdf

Squared (not) analysts make their poetic mark on 1920s ideas that NSA math types are struggling to adopt:

image

http://cryptome.org/2013/03/cryptologs/cryptolog_61.pdf

The “interplay” between the moments problem and Pearson Curve fitting.

image

http://cryptome.org/2013/03/cryptologs/cryptolog_117.pdf

The former is DEFINITELY worth some followup, being what I’ve come to expect.

image

http://en.wikipedia.org/wiki/Measure_(mathematics)

leading to

image

http://en.wikipedia.org/wiki/Ergodic_theory

Let’s get back to browsing cryptolog, rather than wandering the web!

We see that lots of NSA math is focused on COLLECTING the signal given raw data (not just attacking the cipher):

image

http://cryptome.org/2013/03/cryptologs/cryptolog_117.pdf

Perhaps I should re-read the military cryptanalysis books, if only because apparently they were training material for a whole generation (of folks who were not computer-science oriented). One really sees that lots of NSA problems are management (of people), rather than technology.

image

http://cryptome.org/2013/03/cryptologs/cryptolog_119.pdf

Interesting use of language by an insider. Must think of the STU-III as a data modem.

image

http://cryptome.org/2013/03/cryptologs/cryptolog_120.pdf

Posted in dunno

Ping Federate ws-trust to Office 365–attempt #1

Building on https://yorkporc.wordpress.com/2013/03/19/talking-realty-idps-to-office-365/, let’s finish evaluating Ping Federate and Office 365 by looking at how the ws-trust component works. In short, we will deploy a ws-trust STS and Outlook 2010 (trying to be more like the typical office worker).

We fill out the ws-trust configuration parameters, thereby augmenting the SP connection that already existed. The pertinent instructions seem to be:

image

(“officeping.rapmlsqa.com:9031/idp/sts.wst” in our case, omitting the https:// scheme AND delimiter)

Before saving we see:

image

producing

image

Then we installed Outlook (on a host that is NOT a member of a domain, note):

image

Next, we ensure that the Office 365 account is not only “registered” (using the New-MsolUser cmdlet) but also has a “subscription” attached to it (so the SP services work, in addition to the FederationGateway). For this task, log on as the main domain administrator (Administrator@oauthtest.onmicrosoft.com, in my case), and use the web interface to assign a “license”:

image

We tested Outlook Web Access, using the menus. Next, we want Outlook (the thick client) to work, so we can determine that the Ping Federate ws-trust setup is correct. Thus, we view:

image

Hmm. By no means easy – since the autodiscover process prompts for credentials.

Sounds like another thing Ping “failed” to document – probably because it’s not seamless.

Posted in pingfederate

Protected: tiltman on secrets

This content is password protected. To view it please enter your password below:

Posted in crypto, early computing

talking Realty IDPs to Office 365 (via Ping Federate)

Ping Identity disclosures show how a ws-fedp IDP (talking modern ws-fedp with suitable claims) can talk to the Microsoft Online SSO federation gateway – a multi-tenant SAML2/ws-fedp FP that supports the 3 SPs (and any additional “enterprise SPs”) located in (or attached to) each Office 365 tenant’s cloud: Exchange, Sharepoint and Lync.

image

So that we can talk PowerShell to our Office 365 instance, we installed the usual bits of administration middleware: the online sign-in assistant service, at http://www.microsoft.com/en-us/download/details.aspx?id=28177, and the PowerShell module (and supporting command window) at http://technet.microsoft.com/library/jj151815.aspx

Obviously we connect and can list our users:

$msolcred = get-credential
connect-msolservice -credential $msolcred

image

Using a GUID-making site and the advice here, let’s PREPARE TO make a new user at the SP:

new-msolUser -userprincipalname peter@rapmlsqa.com -immutableID ZWM1ZDc1YmQtNzcwMS00YjRhLWI0ZjEtMjVjOGE3MGJiYjFh -lastname Williams2 -firstname Peter2 -Displayname "Peter2 Williams2 User" -BlockCredential $false

Before we can execute this successfully, we have to establish a websso connection between the Microsoft Online FP/SP gateway and our IDP – which we prepare by creating a prototype attribute contract and endpoint at the IDP:

Attribute Contract (1 per user, in this demo!)

image

The endpoint’s link information (“SP connection” in Ping Federate terminology) is

image

This gives us our metadata link:

image

In terms of SSL compatibility, we maximize the probability of interworking with Microsoft Azure https clients:

image

To set up the other side (i.e. the SP’s IDP connection), we use the usual PowerShell commands – also used in the ADFS case:

New-MsolDomain -Name rapmlsqa.com -Authentication Federated


Name                                Status         Authentication
rapmlsqa.com                        Unverified     Federated

To produce the DNS-centric verification information we run the following (and then run off to get a public CNAME TXT record published).

Get-MsolDomainVerificationDns -DomainName rapmlsqa.com

CanonicalName : ps.microsoftonline.com
ExtensionData : System.Runtime.Serialization.ExtensionDataObject
Capability    : None
IsOptional    :
Label         : ms76622316.rapmlsqa.com
ObjectId      : fe8b277b-6665-477a-82a5-13d12093c912
Ttl           : 3600

Once the CNAME is published, we will run something like

$domainName = "rapmlsqa.com"

$issuer = "https://ssoportal.rapmlsqa.com/spinitiatedssohandler.aspx/vcrd"
$idp = "https://ssoportal.rapmlsqa.com/spinitiatedssohandler.aspx/vcrd/15"

$brandName = "Rapattoni SSO Portal"

$cert = "MIIBzDCC…ATmgAwIBA"

Confirm-MsolDomain -DomainName "$domainName" -FederationBrandName "$brandName" -IssuerUri "$issuer" -PassiveLogOnUri "$idp" -SigningCertificate $cert

On running the above referencing our own (WIF-library) IDP, however, we cannot make this or any variant be accepted by Office (for now). Office 365 seems to want a real STS to exist, with a real queryable metadata endpoint (which we never implemented). So, to make some progress today, let’s just make Ping Federate work as the IDP!

Simply do what the instructions say to do (though we avoided the LDAP and ws-trust config, just using the HTML IDP adaptor for now), along with a couple of IMPORTANT caveats:

Confirm-MsolDomain -DomainName rapmlsqa.com `
  -FederationBrandName "rapattoni" `
  -ActiveLogOnUri "https://officeping.rapmlsqa.com:9031/idp/sts.wsf" `
  -IssuerUri "urn:idp:pfrapattoni" `
  -PassiveLogOnUri "https://officeping.rapmlsqa.com:9031/idp/prp.wsf" `
  -LogOffUri "https://officeping.rapmlsqa.com:9031/idp/prp.wsf" `
  -SigningCertificate $cert `
  -MetadataExchangeUri "https://officeping.rapmlsqa.com:9031/pf/sts_mex.ping?PartnerSpId=urn:federation:MicrosoftOnline&Type=mex"

The record resulting from using the “Get-MsolDomainFederationSettings” command is:

image

Get-MsolDomainFederationSettings -DomainName rapmlsqa.com

Note the ‘IssuerUri’ field. You must NOT use the value for the name suggested in the Ping Federate documentation (as essentially it is already registered… by someone else). Use some variant, therefore! (This wasted an hour, since the Microsoft error response only said “something”… was already in use; and the Ping Federate documentation is its usual miserable self that hides, understates, or gives little context on the last 1% of technical info.)

To be fair to Ping (on the issue above), there is text – text that is comprehensible ONCE you know all the issues! (Which is why you are reading this blog on Ping Federate rather than talking to a for-fee solutions architect, no!?)

image

Next! The evaluation edition of Ping Federate doesn’t seem to allow one, out of the box, to add to the websso attribute contract an attribute in just any old namespace. Using a variety of namespaces for attribute types is required for Office 365 interworking, of course!

Let’s use the know-how documented deep in the help file:

image

The file noted there starts out life as:

image

And so we amend it, logically, so that the console will also expose the “http://schemas.microsoft.com/LiveID/Federation/2008/05/” namespace when you add named attribute types (with unusual namespaces) to the attribute contract:

image

Thus we can now complete the websso attribute contract definition (also specifying some hard coded values, note).

image

We are almost ready for a trial against Office – to be invoked using https://portal.microsoftonline.com. But, though we have a “connected domain”, we have yet to create a user in said domain as an Office-licensed user. And we have yet to update THAT record to denote its (base64-encoded) GUID… in its immutableID field!

We do that, using the command given (way) above:

image

Typing peter@rapmlsqa.com at the prompt induces the Microsoft sign-in controls to show the following

image

which gives… (after some fiddling with local DNS resolution, within my naming domain):

image

leading to success (I think!)

image

the mechanism of which we get to see in

image

OK. Tomorrow, we get to make our own IDP work! Well done, Ping!

Posted in ADFS, pingfederate, SSO

ASP.NET and PingFederate OAUTH2 Authorization Service

Let’s make ourselves a simple OAUTH-friendly testing environment. Install the free Microsoft Visual Studio for Web evaluation package and create a web project (in C#) using the web forms technology. To arm the raw OAUTH/OpenID capabilities of this site’s login flow, simply uncomment the Google provider in the AuthConfig.cs file, as shown. Run it to make sure that at least all works – out of the box. Now we know that the dotNetOpenAuth framework is up and running, working with at least the built-in “google” provider.

image
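From memory of the VS 2012 WebForms template (so treat this as approximate, not the screenshot’s exact contents), the uncommented registration amounts to:

using Microsoft.AspNet.Membership.OpenAuth;

public static class AuthConfig
{
    public static void RegisterOpenAuth()
    {
        // The template ships with these registrations commented out;
        // enabling Google is a one-liner.
        OpenAuth.AuthenticationClients.AddGoogle();
    }
}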

This gives our users a login challenge experience:

imageimage

Having completed the Google side of the user challenge process, Google’s IDP sends back a result message to our site. This then does local “account linking” of the Google name to a local account (whose value just happens to be the name from the first IDP to perform this process). Later, we will see ourselves binding a second name to this local account – authorized by the Ping Federate server using OAUTH2 protocols and procedures.

imageimage

(Note: we had to run the test twice, while the mdb account-linking db got set up properly.)

This work merely sets the stage. It enables us to add in our own OAUTH2 provider class – and use it to talk to Ping Federate’s authorization server endpoints (rather than Google’s endpoints). Obviously, we want Ping Federate endpoints to play the roles of the Authorization Server and the (access) token-issuing STS. The former gives out the so-called “authorization_code” as a result of the user’s expression of consent, and the latter mints a “bearer” access token for the site that can show it has the authorization code. Given a bearer token, one can logically make API calls citing the token. In the case of Ping Federate, one can make a particular “api” call – a second call to the STS, inviting it to take the bearer token as input and issue in response a new, more “informative” token – with the various attributes of the user’s record.

Posted in pingfederate

from sboxes to chaotically arranged fixed point

http://www.cs.uiuc.edu/class/fa05/cs498sh/slides/lecture8-crypto.pdf reasons about DES starting with Shannon’s abstract rotor machine. It then goes on to use Feistel’s original “step” cipher concept to consider what a data flow machine really does.

Let’s summarize what we have learned about coding theory, assuming the topics relate to the ECB mode of DES. First let’s look at LDPC, then Turbo, and then Feistel/DES ciphers.

We know from Turing/Newman that folks took “1930s decision procedures” concerned with abstract detection and decoding and turned them into the likes of Colossus-supported rectangle convergence and Banburismus (for solving the naval enigma indicator system). In each case, math-architecture notions of ‘measure’ were at the heart of the thinking. This mixed 1920s thinking about the math of early nuclear physics with the then-ongoing move towards formalizing the mechanics of proof in math, via logic. Of course, the limits of logic became apparent, as did the application of limit theory to the updated forms of Newton’s methods of doing calculation by approximating functions with series.

With measure theory, we see that any space can have both intrinsic and extrinsic information – or internal and externally-defined coordinate systems upon which the statements of motion are fixed. And one might have multiple such descriptions; much like a building has internal dynamics (of its steel frame), internal pressure (of its cooling system), external volume (on the sidewalk) and area (on the skyline). In coding theory, we have multiple measures. What is more, the measures combine to define new measures that leverage chaotic ideas. In particular, with Turbo codes, two generated sequences measuring a common set of randomly permuted information bits may interact WITH EACH OTHER to produce a type of “dynamic measure” that can “calculate” – as the measures, acting as inner Phi and Chi streams, converge to decode an outer stream (and correct some errors!).

OK! So in the LDPC world we have a “re-application” of Colossus sum/product thinking, in which the outer “information bit” measure interacts with the outer “parity bit” measure, each on one figurative side of the permutation matrix (the Colossus-era covariance rectangle). By passing messages, in much the same way as a nuclear cyclotron refines gas to produce nuclear fuel, the mutual information is refined by the diffusion process of “two interacting measure systems”.

With Turbo codes, we see a more elaborate but similar architecture, in which two shift registers lie at the sides of the permutation matrix. Rather than the matrix being the ping-pong table, as in LDPC, now we have a two-level process in which each side’s shift register implements its own message-passing run (using BCJR, vs sum/product) before the information-bit side (say) passes the result of its refining operations across the table to the other side, which also runs BCJR, but with drivers from the parity-check bits.

Now, Feistel’s original data flow machine also showed a very Turbo-code-like nature. Rather than have parity bits driving a BCJR shift register, he has key bits wander through a shift register. Rather than have a sparse adjacency matrix with cyclic-code permutation blocks, as in LDPC, his data flow machine used the principles of sboxes and mod-2 bit flipping to create a large random permutation whose subspaces would, each round, be used to help refine the production of an encoded plaintext.

OK, so the SP network and the one-way function are really now helping us understand DES. It’s better to think in terms of chaotic processes, in which the DES output is the code produced once the algorithm converges to a fixed point, due to the data flow machine producing custom fixed-point attractors. Looking at how DES derived from the Feistel step cipher, and given the way that LDPC/Turbo codes work when decoding, we can see the principles of ciphering (vs decoding).

OK, it now makes sense when an NSA type says that DES is unique in using sboxes. It’s just that the s-box is a novel incarnation of wider principles, and it’s one that is very much tied to binary boolean algebra. It’s one that nicely shows how to parameterize the chaotic process to create the fixed-point attraction basins one desires – as one creates a map of the surface of the code.

Posted in crypto

Enterprise Application for Office 365

There are quite a few moving pieces in the OAUTH-to-Office 365 experiment. But one is new: the ability to install a new SP (on a third-party web host) that cooperates with the 3 standard SPs: Sharepoint, Exchange and Lync. After all, when exchange email shows content with a tel: address or a sharepoint document, one wants an SSO experience between them all, no?

And similarly, when you augment that backoffice with your own “enterprise application”:


image

http://technet.microsoft.com/en-us/library/jj218623(v=exchg.150).aspx

Posted in oauth

Aiming for PingFederate to Office 365 Exchange APIs, via OAUTH tokens

Now, the reason we are interested in either ACS or Ping Federate’s OAUTH2 support is that we want to use the new Office 365 Exchange Client APIs – which are apparently OAUTH2-guarded these days.

It seems sensible to create an Office 365 tenant – giving us a set of API endpoints against which to test the result of Ping Federate’s work.

Will it work?  Does this pattern have the right flows? Does OAUTH2 really induce compatibility and interworking?

Let’s find out! The goal is to invoke Sharepoint Online APIs and/or Exchange Online APIs, using Ping Federate as the Authorization Server and (JSON) token-minting site.

Sign up for Office at… http://office.microsoft.com! I can now advise (rewriting this after some weeks) that one SHOULD use the Enterprise trial option rather than the Small Business edition option shown below:

image

image

Posted in oauth, pingfederate

Comparing Ping Federate v6.10 OAUTH features with Azure ACS v2

Back here we reported on how we used Microsoft Azure’s ACS OAUTH2 feature set. We were able to write a web client and web service that generated and consumed OAUTH tokens. In support of these entities, an Azure ACS tenant did the “middleware work”… implementing the so-called “authorization_code grant”.

The authorization code grant assumes a world of devices supporting an internet browser and “apps” – downloaded to augment the platform. The grant type is specifically involved in the task of “provisioning” apps – ensuring that the downloaded app also has the necessary security and personalization configuration, built from information supplied by the user via the browser experience that induces the initial download of the (well-provisioned) app. Downloading may give the user a new app on the platform and the code, to be entered when the app first starts. With that code, the app is able to complete its provisioning and access user information on remote web sites.

The user visits some vendor’s website with a browser – a user who, having an account with the vendor, wants “to connect up” said account with their IDP-managed membership record. This simply avoids account proliferation, and eases lifecycle management of accounts (and first-time provisioning of those non-browser apps discussed above).

The (app) vendor’s enrolling and thereafter “supporting” website should also become entitled, post connect-up, to make web service calls to the IDP… to pull or to update the remote membership record. To authorize this ongoing server-server hookup, OAUTH2 gets involved – delivering a consent-UI flow using web pages and a browser, and asking: do you want your IDP membership record to flow to vendor X, and should the IDP assign some of your read/write powers on that remote record to the vendor? So, both inter-site connections and authorization_codes are delivered so that sites can support each other when delivering data to apps – apps that provision themselves given the code.

If we recall, we created in our Azure ACS tenant a so-called “service principal” record for the vendor – known more generally as “client management”. That is… we configured a per-vendor clientid/password pair. We also created an SP in our ACS tenant – arming and deploying an STS that will mint “access” tokens – in some or other blob format. Of course, it will be the authorized vendor – working to add some kind of value to the IDP’s membership record – who uses this STS to convert the “authorization_code” minted by the consent process into the first signed token which, upon its return by the STS, the vendor will thereafter attach to its server-initiated web calls to read/write the membership record.

We also recall adding a consent.aspx page to the membership management component of our IDP. This delivered the one-time “do you want to connect up…” GUI experience to the user… as he/she goes about connecting up the vendor site with the IDP membership site’s oauth-guarded data service. And we recall seeing how, upon gauging user consent, the consent.aspx page would itself make a web call to ACS – to create a “delegation record” (recording the user’s connecting-up assent). The result from ACS was the one-time “authorization_code”, minted specifically for this newly minted delegation record. Passed back to the vendor site via various browser-based redirects, it can then be swapped for a real token by the vendor-site flow in charge of the user’s “connect-up experience”, by calling the SP/STS token-minting endpoint.

Let’s try to make the Ping Federate server do the same thing. Then we can compare our integration experiences.

First, install the Ping Federate server and its OAUTH playground, per the instructions. Once you have installed the license, you can change the console uid/password. Now launch the OAUTH playground website (also hosted on the same jboss host as Ping Federate itself) and use the settings button on the page (1) to auto-configure your shiny new Ping Federate OAUTH configuration “for a vendor”:

image

We see the result of the site invoking web services for remote OAUTH client configuration/management, using the admin/2Federate credentials required for said calls. This populates 6 records, as shown. We are interested in just the “ac_client” vendor – since it showcases the equivalent of the authorization_code grant work from our Azure ACS exercise.

Back at the Ping Federate console, we can see what that playground site just did, having got to this screen from the (new to Ping Federate) console:

image

image

OK. So we have accomplished the equivalent of creating a service principal in an Azure ACS tenant for a new vendor, known as “ac_client”. Whereas in the Azure ACS world we used the ACS management API to register the uid/password/redirect, here the site’s configuration pages used the Ping Identity API to Ping Federate instead. (As with ACS, one can alternatively use the management console to manually fill out a form communicating the same information fields.)

Note that configuring access controls on the API port (for remote client management) requires one to set up a validator (a particular repository of vendor-id/password pairs):

image

image

noting the difference of the above multi-screen flow from the older concept of “application authentication” for the other web service ports offered by Ping Federate!

image

This leads us to understand the core configuration screen for the new OAUTH2 authorization service component of Ping Federate:

image

At 1, we see the management concept of “scopes” being configured – defining a description for the unnamed/default scope, and defining additional “named scopes” relevant to the IDP-managed resources. Remember, these are like the custom rights defined in ADRM – saying, perhaps, that you can or cannot print or forward… this kind of marked paragraph.

At 2, we see how to configure a couple of behavior parameters of the Ping Federate authorization-code element of the service: how long a code will be good for (i.e. before the vendor needs to cite it, to get back the first token from the associated STS), and how the code itself is to be generated (to address code spoofing/guessing).

At 3, we see some advanced features we can come back to… MUCH later!

Let’s head back to the playground website, noting that we need to configure just a little more per-flow setup in order to see something happen. We are interested in invoking the authorization code demo! To make the simulation of the vendor work, go back to the main screen and choose the authorization code link (and read the tutorials too, if you wish).

image

At 1, above, we see what we recall from our own OAuthClient provider class work (that we plugged into the ASP.NET OAUTH framework for vendor websites). We clearly see the vendor-id and the indication of “code” – inducing the Ping Federate-hosted OAUTH2 authorization service to invoke the “authorization_code” flow (vs the alternatives).

At 2, we note first that these 2 parameters are optional in the Ping Federate world (whereas they were not in the Azure ACS world). ACS required the caller to cite the URI address of the authorization server’s consent page, requiring that it align with the recorded address (in the service principal record).

At 3, we see the ability to request that a particular list of (pre-registered) custom named-scopes be associated with the authorization code “grant” (as finally visible in any token minted by the STS).

And at 4, we see the anti-CSRF support (that caused us so much pain when writing the OAUTH2Client class, initially). Fortunately, what we did back then to make an OAUTH2 client provider will serve us here, too!

At 5, we see some Ping Federate “value-add” – in which, should no user session exist at the authorization server during the consent page flow, a websso request can be sent off to the desired IDP to induce an authentication session at the authorization server itself (acting as a pseudo-SP). And, as is typical in PF land, one gets to indicate particular querystring parameters on the URI that will induce Ping Federate to initiate a websso flow with that IDP: identifying the IDP connection (of this pseudo-SP) and the IDP adaptor that the IDP should use.

Note how Ping Federate’s nice, modern capability to use the authenticationContext-switching feature of the SAML2 protocol – as a means of choosing IDP adaptors – is missing here. Hmm.

Anyway, since no such values are supplied by this particular playground page, the default IDP connection is applied – as we see when we hit the go button! A fiddler trace shows the request being sent and a login challenge page being rendered. Note there is no sign of the usual ping URL initiating websso at the IDP.

imageimage

Once the user authenticates to the IDP via websso, we see the expected post-authentication consent screen being rendered. Once the user consents (to the continuance), the authorization process returns the “code”:

image

image


Fiddler trace:

image

Follow-up token minting can then be completed by the vendor site. In this case, the STS correctly refuses to give one, since we supplied the code too late, making it an “expired” code:

image

With this we can play some more tomorrow, perhaps seeing if we can replace the playground site. Our goal should be to apply our own OAUTH2Client provider for ASP.NET instead, from here.

Posted in oauth

Looking at the Ellis Paper as if enigma

http://jya.com/ellisdoc.htm

image

Let’s look at this through the eyes of someone steeped in the doctrine of rotor machines.

To a person of Ellis’s generation, the table is “generated by” the rotation of the rotors – creating the infamous Friedman square. Just as a rotation takes the parity vectors and creates an orthonormal basis for linear approximations, so rotating an enigma wheel “vector” leverages conjugation to create the diagonally-polarized rod square – and its inverse, moreover.

Assume Ellis had been at GCHQ since circa 1960 (since he comes across as an “old-timer”, speaking in generalities and in what were once cutting-edge demonstrations of academic math expertise). Thus, he is fundamentally inculcated with the theory of rotor machines and the class of cryptosystem that derives from their general use. Thus he speaks in the terms of what he knows (generated tables). He probably has lots of WWII-era background too, knowing that for every “rod square” there is its inverse. It’s part of the DNA of rotors.

Now let’s say one is using a 1950s sigaba machine, primed with the daily key. The operator has proven the settings were correctly entered by enciphering A 13 times (to prove that the right “signature” ciphertext is output, as matched against the pre-computed signature on the daily cheat sheet). The operator now formulates a random number (coin tossing…), and communicates the ciphered version of this number to his peer, leveraging the now-synced sigaba.

Of course, this was all standard key management protocol – and analysts for 30 years had been studying such “indicator protocols” – or key agreement methods, to use modern parlance. Ellis and co. were perfectly well aware of the American practice of using (orthogonal) latin-square cards for indicator protocol purposes – to authenticate the terminals/stations first, before engaging in the key agreement protocol. “Protocols” are part of the 1960s DNA for cryptosystems – particularly at GCHQ, with its institutional memory of the WWII-era hangups over the naval enigma indicator protocol’s relative strength (compared to airforce and railway enigma) and its ultimate weakness (to probabilistic oracles).

OK. Using M2, the originator’s plaintext then acts as the classical key, which is enciphered using a common (non-)secret value x. Clearly M1’s (1d linear) function is generating a parameter that generates a derivative of M2. In rotor terms, assume x is used simply to *dynamically* move the “tire” of the enigma wheel – changing the sub-space mapped by the M2 wheel.

Assume M3 is just a reverse rod square, generated by rotors moving in the contrary manner to the encrypting rotors. These rotors’ tires are offset by x, too – so they match the subspace of the enciphering machine.

Having agreed the subspace, one might be tempted to then start enciphering single characters. But that’s the novelty: the plaintext used to generate the subspace agreement has already been communicated.

But clearly, M1 has to have a definition that is known to M2 and M3.

Now, an Ellis will also have known 1948-era methods of quadrature encoding – and the process of using large (hard to invert) matrices (aka tables) to cut down the decoding time needed to find the conditional probabilities by a “probability receiver” – one that expects to encounter noise due to multi-path propagation and interference, etc. Such tables are generated on a computer, with presumably computers also being used to perform the matrix calculations. It doesn’t seem beyond belief that a tape-based matrix calculator was being used (once some other computer had generated the tape bearing the table). The “key” is the matrix.

Posted in crypto, enigma

Cocks speaking on NSA/GCHQ hiding drivers

http://www.zdnet.com/gchq-pioneers-on-birth-of-public-key-crypto_p2-3040090638/

image

Now, what Cocks does NOT reveal is that he went up to Cambridge in 1987 (to engender the next generation of spook techs, WWII-style).

I got to study the output of that “competition” between the students. This then gets tied up with IRTF, the MIT/RSA PEM project, and the DRA sponsoring UCL-CS to host “one of those students”.

It would be interesting now to see to what extent certs were involved in the Brent telephone design. I have a good mental model of how the STU-III algorithms and processes work (but keep shush, out of respect for my American hosts). They can reveal that when *they* feel it's fit to do so. (Revealing that technical secret goes beyond the good host/good guest paradigm!)

Posted in crypto, early computing

Deleting HOST/[host] SPNs; addressing the last 20%

Deleting a host's HOST/ping.rapping.com SPN from the CN=PING (computer) account is probably not a good thing to do, OPERATIONALLY. Doing it to make Ping Federate interwork for the FIRST time (so you don't go nuts) is fine. Now that you know your Ping Federate setup is fundamentally sound (and you are not going nuts because some US export issue is making things fail, say), you can consider how to deploy a sound INSTALLATION.
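For reference, the deletion (and the check of what remains on the account) is done with the standard setspn tool – the SPN and account name here are just this lab's values:

setspn -D HOST/ping.rapping.com PING
setspn -L PING

Remember that the HOST/ SPN is the default mapping many services rely on – which is exactly why this is an interop hack, not a deployment practice.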

In engineering, we distinguish between unit tests (in an idealized functional testing theatre) and system tests (in the intended deployment environment, or some close simulation thereof). I happen to be mostly involved in the former – which is to show it COULD work. Of course, the idea has to be sufficiently well researched that it's not going to founder … in the actual deployment theatre.

In my world, I experiment with the latest tools – and the unit testing operates under modern assumptions. But we deploy on ancient hosts, almost at end of life. Thus, whenever I counsel X publicly, don't forget that there is a private conversation going on, too – with the person paying for the advice. That is what takes a demo and turns it into a production capability.

Don't get hooked on the first hit. At the same time, since the American scene is designed to overwhelm the foreigner (and induce a slave-like response), don't forget to play the game. Use technology from 10 years ago (that no longer has much value to the hot-tempered, heavy-on-the-mindshare-marketing Americans). Go back and now compete by deploying 80% of the features for 1% of the cost. You will generally find that no-one uses the top 20% of the features anyways.

So, be careful. The first hit is always free. But it has to be used right, to get long-term health care. To get care that is also affordable, and doesn't crush one, you also have to understand how the game is played on the cost side of the cost/benefit curve. So play it, intelligently! And play to win!

Posted in rant

cryptanalytical background on binary/boolean functions

http://www.cs.cmu.edu/~odonnell/boolean-analysis/

image

Not just CS 101.

IN PARTICULAR, note the elementary introduction to approximate linearity.

image

http://www.cs.cmu.edu/~odonnell/boolean-analysis/lecture2.pdf

So much said, so simply.

image

http://www.cs.cmu.edu/~odonnell/boolean-analysis/lecture2.pdf
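To make "approximate linearity" concrete, here is a minimal Python sketch in the course's own terms (the example function f and all the names are my own illustration): the correlation of f with each parity χ_S is its Fourier coefficient, and a large coefficient means that parity is a good linear approximation to f.

from itertools import product

n = 4

def f(x):
    # an arbitrary example boolean function (majority-ish on 4 bits)
    return 1 if sum(x) > n // 2 else 0

def fourier_coefficient(S):
    # E[(-1)^f(x) * chi_S(x)] over the uniform hypercube, chi_S(x) = (-1)^(sum of x_i, i in S)
    total = 0
    for x in product((0, 1), repeat=n):
        chi = (-1) ** sum(x[i] for i in S)
        total += (-1) ** f(x) * chi
    return total / 2 ** n

# the best linear approximation is the parity S with the biggest |coefficient|
subsets = [tuple(i for i in range(n) if (mask >> i) & 1) for mask in range(2 ** n)]
best = max(subsets, key=lambda S: abs(fourier_coefficient(S)))
print(best, fourier_coefficient(best))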

Or, as someone said in the 1930s, if you ask an oracle with a known basis for fatalism any variant of the usual questions one asks oracles, one can use the stats to predict. Or, as NSA would call it, "searching for cryptographic keys intelligently".

The interesting part about NSA – over 1940s UK cryptographers – is how folks used all the same "decision procedures" for image enhancement and signal isolation, not only for finding a key. That typical 1970s "cryptanalyst" is not a cryptographer (focused on the math of ciphers). S/he is using the same processes to interpret the spy plane's imagery and the naval spying ship's signal scanning, or NASA's moon/Venus signal-bouncing work.

Posted in crypto

imagining a nuclear scattering cryptanalytical computer

 

image

http://www.phys.uri.edu/~gerhard/MSS/ms91.pdf

image

Posted in crypto

1930s background to 1940s (and 1970s!) cryptanalysis

http://arxiv.org/abs/1112.3501

image

 

image

image

Consider the previous memo, which fashioned a context looking at measures and limits in an array of signs acting in cliques, and its relationship with specific heat.

In Turing/Newman's mind, 1943-ish, one has interaction of signs (+ and –, 0 and 1, true and false) according to binary channels, and one has Bethe lattices. The "local field" is perhaps the interaction of 3 bits of ciphertext from the same wheel bit (previous, current, next), the same 3 bits from the wheel bit before the current wheel bit, and the 3 associated with the next wheel bit. The couplings are dependencies between Chi and Phi wheel bits (and their 3 derivatives).
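For reference (standard Ising notation – my gloss, not anything in the memo itself): with spins s_i, couplings J_ij, and external field h_i, the energy and the local field felt by one spin are

\[
H(s) = -\sum_{\langle i,j \rangle} J_{ij}\, s_i s_j \;-\; \sum_i h_i s_i,
\qquad
h_i^{\text{eff}} = h_i + \sum_j J_{ij}\, s_j,
\]

so the "local field" on one wheel bit is just the coupling-weighted sum of its neighbours' signs.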

image

http://arxiv.org/abs/1112.3501

image

One can look at the tree as a node and two branches (a type of local neighborhood). In that, one has a product of the node by the branches: the magnetized sign vs. the coupling(s) that influence sign flipping.

The ½(1 + m_i s_i) term is the bias away from the (normalized) mean of 0.5. What is interesting in this formulation, beyond what the Tunny documents said, is how the bias relates to the magnetization and the sign concepts. We didn't have that "element" of the model before.
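Spelling that out (my gloss, in standard spin notation): for a ±1-valued spin s_i with mean (magnetization) m_i,

\[
P(s_i = s) = \tfrac{1}{2}(1 + m_i s), \quad s \in \{-1, +1\},
\qquad\text{so}\qquad
P(s_i = +1) - \tfrac{1}{2} = \tfrac{m_i}{2},
\]

i.e. the magnetization is exactly twice the Tunny-style bias of the bit away from ½.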

Posted in crypto

modern cake icing

We looked at Markov random fields earlier, for example at https://yorkporc.wordpress.com/2012/04/08/from-tunny-to-the-semantic-web-via-conditional-random-fields-and-hidden-markov-models/

Let's go back to the 1930s and see things as they were, for which we have a good source.

image

image

This is also the citation for the following quotes.

Here we get to think in terms of magnetizations and fields. But it's actually a very intuitive model; one doesn't need advanced math skills. One gets to see how, in 1930s thinking, there are an infinite number of "measures" for (Turing-style) "configurations" – and any measure is a "convex" combination of a couple of pure phases.

We also get to see the notion of Q functions, for Gibbs functions and limiting distributions:

image

This develops into the notion of “spontaneous” magnetization

image

When we look at the timeline, it's interesting to see 1945-era work building on 1930s pure-research topics:

image

In general, we see that the very notion of "measure" is really quite intuitive – being some subgraph, and the energy attached. Different "measures" are different subsets (and the voltages induced by the potential applied to just those vertices in the graph – which induces an energy density, which can be contrasted with the energy density of the entire graph). Thus the Gibbs measure (or measures, rather) is equivalent to the Markov random field idea.
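For the record (standard definitions – my summary, not the source's wording): a Gibbs measure weights a configuration σ by its energy,

\[
P(\sigma) = \frac{e^{-\beta H(\sigma)}}{Z},
\qquad
Z = \sum_{\sigma} e^{-\beta H(\sigma)},
\]

and the Hammersley–Clifford theorem is the precise form of the equivalence claimed above: a strictly positive distribution is a Markov random field on a graph if and only if it is a Gibbs measure built from potentials on that graph's cliques.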

The transformation between states (0/1) vs. spins (-1/+1) is also interesting:

image
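The dictionary is worth writing down, since it is what connects the Tunny bit-world to the spin-world (a tiny Python check, my own illustration): bits map to spins by s = 1 − 2x, and XOR of bits becomes multiplication of spins.

def spin(x):
    # state 0 -> spin +1, state 1 -> spin -1
    return 1 - 2 * x

for x in (0, 1):
    for y in (0, 1):
        assert spin(x ^ y) == spin(x) * spin(y)   # XOR becomes product

This is why bit biases turn into magnetizations under the change of variables.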

Trees splitting 1 branch into two are interesting for several reasons, including the relationship to free groups and DES (in which 1 input bit gets to be "dependent" on 2 output bits).

image

The relationship between limit theorems and eigenvalues (and the second eigenvalue in particular) pops up again:

image

Posted in coding theory

openid connect–a right royal mess. The **censored**

 

image

http://openid.net/specs/openid-connect-implicit-1_0-07.html

So, an "authorization" server (doing simple "connection management" duties) can be doing authentication (now). Let's not confuse things, huh?

“ID token” is a blob with an authentication claim (aka statement).
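"Blob" deserves unpacking: an ID token is a signed JWT whose payload carries the authentication claims. An illustrative decoded payload (the claim names are from the OpenID Connect core spec; all the values here are made up):

id_token_claims = {
    "iss": "https://op.example.com",  # the issuer's https-scheme identifier
    "sub": "248289761001",            # the subject identifier
    "aud": "client-app-1",            # the RP the token was issued to
    "exp": 1400000000,                # expiry, epoch seconds
    "iat": 1399996400,                # issued-at
    "auth_time": 1399996000,          # when the authentication event happened
    "nonce": "n-0S6_WzA2Mj",          # ties the token to the RP's request
}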

A "Claims provider" is something that may "return" claims. It's not the same as an issuer that issues (and presumably also returns) claims (including authentication claims).

An RP doesn't receive claims from either a Claims provider or from an Issuer – but from an OpenID provider.

A particular type of OpenID provider (self-issued) is, however, an issuer. But it doesn't issue claims. It "issues" ID tokens.

Then there is the notion of identifier – which may or may not be a claim (I cannot tell).

Issuers have certain identifier forms (with https schemes). Quite what the relationship of the cert's names is to the identifier, I've no idea. I don't know if OpenID providers have that identifier form or not (and similarly I don't know if Claims Providers even have identifiers, of https form or otherwise).

This reminds me of the IRS tax code. 50 years of hacks that don’t actually make any sense to 90% of the population.

At the same time, I'm thinking of becoming a tax preparer (once the internet crashes). It's actually quite fun making sense of a mess. For all its mess, the IRS 1040 process is really quite simple – assuming you have done accounting 101. Perhaps in 50 years, OpenID security will also have reached that level of being a mess that CAN be fathomed not only by 1 person but by 10% of the population. It just has to survive that long.

Posted in OpenID

Lying and Manipulating vendors–in the age of cyberdefense

Personally, not being a sworn (or even a not-sworn!) American, I have no particular reason to do what sworn Americans do (like it or not): conform. In particular, I see no reason why China should not spy on America – given America spied on and spies on China. What America stole, by spying, so can China. That one stole behind some covert secret program makes no difference to a 5-year-old's logic: it's thieving. China has every right to redress the balance, till the benefits from American national policies on thieving have been equalized. America must obtain no long-term benefit (from its last-50-year policy of IP theft, through spying). Once the powers are equalized, we will see if America is mature enough to do a deal, as adults. (I doubt it, personally; it's just not in the mental makeup.)

China folks can use exactly the same populace-management techniques for effective spying on foreigners that America uses. This seems effective given the last 50 years of internal controls on trust (so do it, China!). There is, though Hillary might disagree, nothing actually exceptional about the American body politic. It's just another bunch of manipulative types, like 100 other countries, who get off on manipulating each other within the body (and other bodies outside, as lemmings). The high is higher, apparently, when you manipulate the "foreign" lemmings. A very special high is reserved for when you manipulate your own citizens worse than you manipulate the foreigners (all 5.5 billion of them). Get to THAT high, and now you are FULLY indoctrinated (in the modernized form of institutional American racism).

Updating the old industrial-security indoctrination program (based on generating fear and loathing of the Commies), folks have adapted. Now you have your trusted vendors "manipulate" their customers, with fear. These days, it may be fear of patents. Why not, in a fear-based society? FEAR, FEAR, FEAR (particularly behind closed doors, where one gets to amplify the FEAR).

Now, the internet having made well known the way this class of American thinks and the methods used to operate, things have to go beyond "mere fear" these days – otherwise what worked in 1931 doesn't work anymore by the time it's 1938. One needs "conformance" – undying conformance. "Die for the fatherland"-type compliance, from the general populace. One has to inculcate secrecy (and private meetings, with things said that "foreigners" cannot hear, not being "trustworthy"). One has to create a world of inner circles, in the *typical* firm. And for that one needs the easily manipulated stooge to act as one's proxy. There is no shortage of them, I've found.

What is interesting is the degree to which this is happening today. Can American spying effectiveness survive (with America spying on you, but not you on it) if the "methods" themselves are up for analysis? If the openness of America means you can see every lackey playing James Bond and M, can it work?

Surely it just looks more ridiculous every day?

Posted in rant