Cryptanalysis reflection

The world of cryptanalysis gets ever easier to see in intuitive terms. The trick is to see the world as Turing saw it: before codes and calculating devices mechanized codes and their cryptanalysis. Folks had to learn to exploit a particular language of mathematics conceived to model the entire space of discrete calculation. Of course, we now know this as the design language of the state machine – which, in its Cayley graph or wavefunction forms, served the early role of a software language for Turing machines.

One cannot separate coding, state machines, cryptanalysis, or Turing machines. They are each a manifestation of the same thing: walking the (quantum) random walk.

– Coding takes the 1920s notion of a probability space so clearly applied to mathematical physics in the form of the Heisenberg or Schrödinger theories. It requires the concept of a field to be represented in terms of group theory – in which the vertices, edges and invariants such as mean-sets of the Cayley “designer’s” graph represent the applied codeword security “rules” of the group, in a highly geometric manner.

– The notion of state requires that we recognize that group actions can be expressed in, say, the 2D complex plane, which characterizes the rules of possible state transitions better than the raw presentation of the underlying graph does.

– Cryptanalysis recognizes that advanced, exploitable geometric properties exist between nodes in the graph when rendered on a suitable plane. There are concepts such as distance, measure, coordinates and weights. More refined and analytic than phase diagrams that express raw possibility limits of state transitions in finite-dimensional worlds built upon the very idea of quantization, the weight notion captures the volume of probability space carved out of highly abstract functional-analysis spaces. This all builds on and up from the area of space implied by the edge-distance between some node and each of its neighboring nodes, and the probability depth capturing the likelihood that a random variable modelling discrete events will take a particular graph-value.

– And Turing machines are the meta-spaces, governed by meta-rules focused on the inner product of correlation relations and configurations for asymptotic limiting functions, that can take such things as the mean set induced by a sample function, tie together the rules of the group, the constraints of the probability field, and the nature of the measure or distribution of probabilities, and either output a codeword consistent with the rules or analyse a ciphertext codeword to distinguish it from a random event.

To a Turing in 1939, perhaps encountering the production side of the process of spying for the first time, what is not new is the theory of codes, ciphers, and cryptanalysis. He has been studying it in its theoretical manifestation for quite some time, quite evidently, in both the US and UK. All he has to do is apply the formal theory to the particular nature of the various Enigma machines he encounters – which introduces him yet more firmly to an aspect of the puzzle he has hardly encountered to that point: the notion of complexity of the search problem, which stresses the relationship between the notion of security and cryptanalysis.

Though Turing surely knows the theoretical relevance of such things as the conjugacy search problem, and has embraced the idea that certain coding groups engender reverse-search problems that so expand the search space that the time required delivers “security” (it takes longer to search out the key by brute-force methods, on average, than the useful lifetime of the ciphertext), he also knows that the nature of the search problem changes in the calculation-complexity sense – if only you move your calculator’s workings into the complex plane supporting phase or state space. The notion of the trapdoor, due to a change of scale or rotation of coordinates, is not unknown.

In state space it becomes apparent that one can approximate the “inner nature” of coding functions using frequency analysis to at least guess crypto keys. The 1960s LDPC sum/product decoding is clearly a minor variant of the cryptanalytic process, using centers and mean-set theory to reduce the search problem – based perhaps on exploiting collisions within the differentials in the functions underlying ciphers (such as suitably engineered Hagelin machines).

So the hunt has to be on for transforms that easily move a problem in discrete space into frequency space – in the days before the FFT, of course. However, let us not be overwhelmed by the FFT, for we know, from Tunny documentation, that even back in 1945 folks already had a discrete form of the FFT – what we now call the WHT. It merely exploits elementary generator sets (plus and minus signs on 1), that special Heisenberg relation between parity and the Fourier transform, and pairwise counting that reveals Kasiski-style bulges in distributions/measures, long used to crack indicator systems based on Vigenère squares.
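
For what it’s worth, a minimal modern sketch of that discrete transform (my own naming and layout; nothing from the Tunny documentation itself):

```csharp
// A minimal sketch of the fast Walsh–Hadamard transform: each stage is just
// pairwise +1/-1 "agree/disagree" counting.
static class Wht
{
    // In-place transform; the array length must be a power of two.
    public static void Transform(double[] a)
    {
        for (int len = 1; len < a.Length; len <<= 1)
            for (int block = 0; block < a.Length; block += len << 1)
                for (int i = block; i < block + len; i++)
                {
                    double x = a[i], y = a[i + len];
                    a[i] = x + y;        // agreements
                    a[i + len] = x - y;  // disagreements
                }
    }
}
```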

And so it falls to Turing and Newman. Their topology focuses on the classical case of the Cayley graph where the generators are the signs +1 and -1 (so usefully applied to Peano arithmetic). These elementary members of the generating set nicely enable one to approach more generally what the old-timers in Room 40 of the Admiralty had been doing since WWI when breaking the Vigenère square (formulate depth cages for possible codeword lengths, finding which auto-correlation measure is closest to the expected value). Having found the overall distance constraint for the unique space represented by the particular ciphertext – in what we would these days call either additive or multiplicative auto- and cross-correlation – then consider the pairwise possibilities within each column of depths. One measures them in terms of the possibilities that things agree or disagree, in terms of sign counts.

As with breaking the codeword of a Vigenère square by computing the index of coincidence and then performing a frequency analysis of the depths in a particular column of cipher, more generally folks learn to identify the center of a graph – as manifest in its frequency spectrum. This is that “configuration” – to use a Turingesque phrase – of vertices that allows particular nodes to be the center of the distribution of the space, and whose quantization oscillator now accounts for the variance found in ciphertexts that have depths/collisions.
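
Again purely as a modern illustration of the depth-column test (the method name and the 0.066/0.038 English/random reference values are the standard textbook ones, nothing Turing-specific):

```csharp
// Sketch: score a guessed Vigenère key length by the average index of coincidence of its
// depth columns; English-like columns score near 0.066, flat/random text near 0.038.
static class Depths
{
    public static double AverageIndexOfCoincidence(string ciphertext, int keyLength)
    {
        double total = 0;
        for (int col = 0; col < keyLength; col++)
        {
            var counts = new int[26];
            int n = 0;
            for (int i = col; i < ciphertext.Length; i += keyLength)
            {
                char c = char.ToUpperInvariant(ciphertext[i]);
                if (c >= 'A' && c <= 'Z') { counts[c - 'A']++; n++; }
            }
            double ioc = 0;
            foreach (int f in counts) ioc += (double)f * (f - 1);
            total += n > 1 ? ioc / ((double)n * (n - 1)) : 0;
        }
        return total / keyLength;  // try each plausible key length; the bulge marks the winner
    }
}
```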

In some sense the (tiny amount of) variance in these particular elements ultimately accounts for all the variance of the sets that can be generated. These foundational fluctuations are a basis for the graph’s measure, much as generators and non-commutativity properties ultimately account for the evolution and transitions in a particular Cayley graph making codebooks.

We tend to think of the Turing machine as having started with the intensely discrete mechanism that slowly evolves, post war, into the notion of the indeterministic machine – driven by the probabilistic oracle already outlined (and obviously classified) in Turing’s PhD thesis. But it’s not clear to me that this is the correct order. It seems more likely that a Turing, having studied measure theory, started with the notion that certain configurations can, in the limit, control variance – if only the mechanizing graph can be so constrained with the likes of cutsets. And use of free groups can deliver the desired levels of control over variance – giving one an algorithm for designing configurations of graphs (to be explored by Turing machine “runtimes”) leveraging underlying groups, such as high-dimension braid groups, that enable the cryptosystem designer to distinguish different types of security protocols, including indicator protocols of the type Turing spent so much time attacking: naval Enigma.

Posted in coding theory, crypto

Kindle Fire HD – UK o2 “inbound” email problem

In absolutely typical UK fashion (i.e. the corporations are contemptuous of consumers) it is hard to make a UK-edition Kindle Fire HD device talk to your o2 email servers. This is despite o2 publishing all the technical details! What the support sites omit is the remedy! From internet comments, the firm just cannot be bothered to tell you in what ORDER to do things, given some quirk in the Kindle software.

First make your incoming email work, ignoring any and all fields to do with outgoing email settings. That is the magic. This really means changing the word “pop3” in the suggested domain name for the incoming email server to the word “mail”. Now make it work, before proceeding.

Later on, once the incoming email is visible as expected, go change the suggested outgoing port to 25. Do not do anything else! Do not fiddle with names, logins, addresses, passwords….

Sigh.

O2 know this, but don’t let on. Keep the English peasants stupid and dependent, apparently. Make money by hiding information. Now O2 can deliver “service”.

Don’t forget, peasant, that when trying to make incoming email work you may already have configured your Outlook client to be downloading incoming emails from O2. And you MAY WELL have set Outlook “to delete the emails from the server” upon delivery (to Outlook). You need to reconfigure Outlook for your Kindle – so Outlook leaves the emails on the server upon receipt. That is, ensure Outlook only deletes the server-side copy when you delete a message using some device’s app’s delete button. This way, you will see your incoming test emails on your Kindle – since Outlook is no longer having the server delete messages before the Kindle has a chance to connect.

Oh O2! Real customer service is NOT that hard.

Posted in coding theory

UK internet – a deception-based public security policy

Travelling to the UK for the first time in many years I encountered public policy – as regards internet security.

Typical public wifi access points (where occasionally available) require identification – an email address or mobile phone number. That is, one probably commits a fraud by using anonymous identification. This is not disclosed and is one of those American-style hang-em-via-a-thousand-formal-violations pre-arrangements (for if and when there be a need or desire to exert pressure, during interrogation). In public policy terms, one notes the design style: load the dice.

On the last day, just before travel, suddenly corporate email ceases to work. The point is to isolate, of course: in prep for isolation and the planned delivery of emotional distress. One notes the design style: plan for isolation (so the loaded dice have more value).

Despite being a United partner, Lufthansa could not emit boarding passes for all legs (unlike the US outbound). This surprised the agent – used of course to normal practice (and having good intuition on what induces variance). One notes the design style: isolate on non-US soil (invoking Guantanamo logic) using compliant proxy agents (Germany).

Harassment takes many forms: so ensure that, concerning money, lack of money will imply lack of standing. Of course too much cash will mean something else (equally disadvantageous). Cash is not always cash, of course, as I found out (since my English notes turned out to be no longer “valid tender”). Note the public policy: force electronic money (subject to monitoring).

Now, to be fair (not that the sentiment can be expected in return), all the policy goals have sensible intent – in preparation for the 0.0001%: tax dodging, money laundering of cash for drugs, and, surprise, maximising interrogation advantage (i.e. find where the damn bomb is located, before it goes off). But what we note is the preparation – applying all the techniques to everyone. And that ain’t to be anti-discriminatory! It’s there to address subversion.

Also interesting is where deception-based attacks on Internet encryption were mounted. Phones (vs tablets with wifi-only capability) do not show the issuer of the cert and do not show the ciphersuite in effect. It could be a firewall-issued cert (spoofing the true website by https proxying) and a null ciphersuite for all you know. Remember, both enable cleartext reading of the packets (in real time). It was interesting to experiment in various wifi worlds on which https sites in the US worked, or which worked with what type of warnings. One notes the public policy: Internet encryption is there to make you feel good, not enforce any privacy rights (that you probably don’t have to start with, and that will be suspended in any case upon mere utterance of the magic spell: public safety!).

Using an American-issued device in Europe was also revealing. Some sites (including Google’s UK relay site) could detect which Kindle device I was using and refuse to complete login for “unrecognized” devices. This did not interfere with web browsing, note; but it did interfere with any and all subscriptions (that hinged on a Google websso assertion). Of course google.com was a front for google.co.uk, and I never found a means to bypass UK national controls (built into Google’s cloud). DNS, certs and redirects through https connect proxies had all been well massaged to present the illusion of a global web – that is not actually a web of national webs. Of course the public policy is to distinguish the web from nationally regulated sites (able to withstand web attacks better, having access to higher-assurance security credentials). But look at the elaborate deception, coordinated behind the scenes with other governments and the multinational clouds and the identity providers!

My US subscription to Netflix content worked – in the sense that I now had access to UK-licensed content. Presumably, for copyright and censorship reasons, not all US works can be made available to UK subscribers. Things like UK censor ratings might be missing, for example. But it did work. So where is the history of my viewing habits now stored, ready for analysis? Is it in the UK, the US, or half and half? Yes, I assume I have no expectation of privacy in either place, even should my packets be tagged with notices asserting my expectations. Behaviour-based analysis has to be the new public policy norm, one notes, much like the surveillance of one’s habits at the public library.

To be fair, nothing in the above was particularly hindering, annoying or even unnatural. When in England, I’d rather Netflix gave me a change of scenery. I’d be happier if netflix.com were a synonym for netflix.co.uk (when accessed from IP addresses in UK-registered ISP address allocations). But not always is it so! Of course, I have no choice.

Now I’m not going to bother bemoaning the phone roaming, the data plan policies or similar. Of course they are spying on me, and of course my location and video and mic are controllable remotely. It’s the public phone system, stupid (and nothing has changed, public-policy wise, in a hundred years).

UK updates of firmware onto the Kindles were interesting, too. Different packages were installed. UK bugging-prep capability was not installed onto the non-euro-distributed Kindles. This affected how the different installs of the Skype app worked, note, built to leverage the different (now bugged/buggable/buggered(?)) platforms. Of course, public policy protects the vendors who knowingly participate in the charade, allowing them to claim they know of no vulnerabilities in their software (as tested on an unreal platform).

So where does this leave the typical UK consumer?

Well, I struggle to deny I’d do much different – given the nature of the subversive-trained nutter on the bus trying to make a large explosion as it wanders through Trafalgar Square, right in front of a live broadcast. But I also see a UK failing: why not own up (rather than engage in multi-level deception of your own public)? This is a failing of public policy: Cromwell’s spies being preferred over policy modernization.

Do folks really believe that an anti-subversion orientation doesn’t include briefings on all the above?

Posted in coding theory

WCF service with DotNet4.5 Claims–with STS

To a solution made with Visual Studio 2012, add a WCF service for the .NET 4.5 framework (specifically). Then use the latest I&A tool to add the system.identityModel configuration, having identified your active STS service and its metadata endpoint – as built originally using the tooling from Visual Studio 2010 (with the WIF SDK for .NET 4.0) and running under .NET 4.0, augmented with the WIF libraries.

The net result is a file similar to the following, which successfully identifies the IDP and its certificate. However, you must change the binding’s metadata endpoint, as shown. (It defaults to an assumed ADFS path.)

image

Assuming the STS offers two endpoints, one for the Office365-friendly WS-Trust of Feb 2005 and one for version 1.3 of WS-Trust, we need the client built and configured using svcutil to use the 1.3 endpoint.

image

We had to change the client config:

image

We had to change the anonymous address from svcutil to the 1.3 endpoint of the STS, and add a binding – that we happened to have – identifying the client credential to be sent to the STS.

image
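
For reference, a hedged sketch of doing the same thing in code rather than via the svcutil config – the endpoint path, realm and credentials below are placeholders, not our real values:

```csharp
// Sketch: request a token from the WS-Trust 1.3 endpoint of the STS, sending a username
// credential. All URLs and credentials are placeholders.
using System.IdentityModel.Protocols.WSTrust;
using System.IdentityModel.Tokens;
using System.ServiceModel;
using System.ServiceModel.Security;

static class TrustClient
{
    public static SecurityToken GetToken()
    {
        var binding = new WS2007HttpBinding(SecurityMode.TransportWithMessageCredential);
        binding.Security.Message.ClientCredentialType = MessageCredentialType.UserName;
        binding.Security.Message.EstablishSecurityContext = false;

        var factory = new WSTrustChannelFactory(
            binding,
            new EndpointAddress("https://sts.example.com/wstrust/13/usernamemixed")); // assumed path
        factory.TrustVersion = TrustVersion.WSTrust13;
        factory.Credentials.UserName.UserName = "someuser";
        factory.Credentials.UserName.Password = "somepassword";

        var rst = new RequestSecurityToken(RequestTypes.Issue)
        {
            AppliesTo = new EndpointReference("https://service.example.com/") // the WCF service realm
        };
        return factory.CreateChannel().Issue(rst);
    }
}
```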

Finally: no CardSpace or its strange policy errors, and no weird and wonderful WIF-specific configuration of the service.

Posted in SSO

Forcing Ping Identity to Adopt WIF metadata from IDPs

It’s been a pain making our otherwise excellent Ping Federate servers (a version from a few years ago) talk to modern ws-fedp IDPs built using Microsoft’s developer-focused WIF toolkits. There was always “SOMETHING” that didn’t work.

Well, now that we understand how to make a passive STS from WIF (i) emit a Feb 2005-serialized response, (ii) include an authentication statement, and (iii) be signed with RSA/SHA1, we know from last week’s report that a WIF IDP built in 10 minutes with Visual Studio can now talk to Ping Federate’s ws-fedp SP.

But Ping Federate is a pain to set up, as it does not support importing the metadata emitted by such IDPs!

So we decided to play the game back. Our IDP now dynamically emits a true role descriptor and a second role descriptor intended for consumption by Ping Federate (which pretends to consume a SAML2 IDP in its excellent metadata-driven console).

Remember, we are just issuing something that Ping Federate can import (so there are no typing errors). So we take our ws-fedp descriptor and re-export it as a SAML2 IDP descriptor:

image

Then we use Ping Federate to import a SAML2 IDP, as normal. Then, once saved and ready, we change it to be a ws-fedp connection simply by editing (while the server is offline) the sourceid.saml2-metadata file. We give it the desired protocol and binding values:

image

So we get around a vendor’s biases (implicit or intended) against WIF IDPs – with only minimal fuss.

Posted in pingfederate

substituting the ACS OAUTH AS for the PingFederate OAUTH AS in the OAuth2Playground

image

As shown above, we took a working Ping Federate installation with the OAUTH feature and its working OAuth2Playground site – which showcases the protocol. Then we copied the expanded war directory for the playground so we can make a parallel UI use our own AS (rather than Ping Federate’s AS).

Within the copy, we amended the “case1-authorization.jsp” and “token-endpoint-proxy.jsp” – to use the URL paths of our own OAUTH AS – and our own https port.

Then we changed the SSL cert on the IIS Express instance that hosts our debuggable site to use our GoDaddy cert (using the Windows netsh tools, for the port in question). We also allowed IIS Express to bind to the certified domain name.

image

Now the PingFederate-hosted OAuth2Playground site will work, even proxying token requests (using Java chain building). Everything uses the GoDaddy cert. So we add it to the trust point list that Ping Federate manages – also used by the playground site, evidently.

image

We also now alter the settings of the substitute playground to point to our endpoints:

image

Next we register the Client (a vendor consuming the API) to have the right callback URI:

image

Thus we can now try the playground, first seeking to get an authorization code:

image

getting

image

image

The consent process returns control to Ping Federate hosted OAuth2Playground:

image

Since we have now enabled the proxy in the playground WAR app to trust our SSL cert, we can pass along the JWT issued by ACS:

image

Posted in oauth, pingfederate

acs signing a jwt ; wstrust token verification

The token we receive from the OAUTH endpoint of our Azure ACS namespace has a (decoded) header field given below.

{"typ":"JWT","alg":"RS256","x5t":"70W3nPRCCzSeXuqwsBVy2KMSMPk"}

Using Fiddler’s base64 decoder, we change - and _ back to + and /, and add the padding char(s):

image
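
The same fix-up in C#, for reference (the helper name is mine):

```csharp
// Sketch: convert a base64url string (as used for x5t) back to bytes.
static class Base64Url
{
    public static byte[] Decode(string s)
    {
        string padded = s.Replace('-', '+').Replace('_', '/');
        switch (padded.Length % 4)
        {
            case 2: padded += "=="; break;
            case 3: padded += "="; break;
        }
        return System.Convert.FromBase64String(padded);
    }
}
// Base64Url.Decode("70W3nPRCCzSeXuqwsBVy2KMSMPk").Length == 20, i.e. SHA1-thumbprint sized.
```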

It’s supposed to be a hash, and probably an SHA1 hash.

image

We see our GoDaddy cert is:

image

Now let’s say that the cert has a critical extension. And it’s a URL, say, that demands that the verifier contact a given OCSP responder.

If we now receive the JWT over a ws-trust channel, will the security token resolvers pick up the JWT’s reference, locate the cert AND verify the cert chain?

Posted in SSO

ACS-based OAUTH2 AS augmenting Google IDP (via ACS)

Let’s see how far we have got in emulating Ping Federate’s OAUTH engine (for the authorization grant).

We start out as any vendor/client will, making a request for the grant type:

image

https://localhost:44302/as/authorization.oauth2.aspx?client_id=FabrikamClient2&response_type=code&client_secret=FabrikamSecret
&redirect_uri=https://localhost:44302/as/authorization.oauth2.aspx&scope=&
state=123&idp=Google&pfidpadapterid=

At 1 we see our AS consuming our ACS namespace’s store of “service identifiers,” checking up on the redirect URI given on the query (query string, not FORM POST, for now).

Since there is no user session at the AS, the session creation mechanism kicks in and also shows a list of “IDP Connections” – including Azure AD. Remember, selecting Google here really means: send a request to ACS… which will chain/gateway the request to Google’s WebSSO cloud… which will chain the request to Ping One, which will… Anyways, at the end of the day ACS issues us a SAML assertion that we verify, pulling the metadata on the fly.

Our AS is in account-linking mode for this IDP connection, and thus we get a challenge – to log on at the local IDP that governs linking.

image

image

The net result of this one-time linking, for an inbound Google identity, is some UI (remember this is a prototype…!)

image

Using the Return option, we resume the OAUTH AS processing, which consumes the local (rapstaff) account whose profile is driven by the SAML assertions relayed by ACS from Google. We get the consent screen – the one that guards release of the authorization code and storage of the persistent grant in ACS.

image

This gets us back to our simulated vendor site, ready for the vendor to call the access token (STS) service to convert the “code” into a longer-lived access token.

image

OK, we see lots of the pieces coming together! When we now use the postback of the token-minting form to actually talk to the ACS access token minting service, we get as a result:

image

I suppose the thing to do next is the refresh token, swap from symmetric-signed SWT to RSA-signed JWT and then verify the signature as a “client”, pulling the public key from ACS, somehow.

But even now, if we put this site behind the Ping Identity OAuth2Playground UI for developers of client sites, I suspect it will work reasonably well as is.

To expose our old button-press-driven AS as a protocol engine for real clients, all we really added was a handler for the authorization and token endpoints:

image

authorization endpoint

image

token endpoint

The trace on the wire tells the whole story:

image

If we swap the token format for JWT and assign a signing key,

image

we now get a JWT:

image

Since the DateTimes are not in a friendly .NET syntax, one converts (following Microsoft sample code for SWT notbefore and expiry times):

image
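
The conversion itself is just the Unix epoch offset; a minimal sketch:

```csharp
// Sketch: SWT/JWT notbefore and expiry values are seconds since the Unix epoch (UTC).
static class EpochTime
{
    static readonly System.DateTime Epoch = new System.DateTime(1970, 1, 1, 0, 0, 0, System.DateTimeKind.Utc);

    public static System.DateTime ToDateTime(long seconds)
    {
        return Epoch.AddSeconds(seconds);
    }
}
// e.g. EpochTime.ToDateTime(1356998400) == 2013-01-01T00:00:00Z
```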

End.

Posted in oauth, pingfederate

virtual machines hosting DC and SAML assertion testers (showing time validation issues)

Remember, when the DC is hosted on a virtual machine using Hyper-V, the host of the VMs induces the DC to change its time to sync with the host. Being a DC, it then updates all its domain hosts – which get the same (wrong) time as the Hyper-V host.

image

You might think the DC host’s time as set by the domain admin was authoritative – but it’s NOT!

image

Nice military attack vector, here. Assume your DC is hosted in Azure VMs – and thus a “request” to Microsoft (Azure) to re-set the time on a given DC VM could induce all sorts of “nice” effects.

Posted in Computer and Internet

Building an Azure ACS OAUTH AS–emulating the AS of Ping Federate–part #1 getting to consent

The goal is to use the OAuth2Playground website of Ping Identity’s Ping Federate server (evaluation edition) to talk no longer to the built-in OAUTH2 AS feature but to an AS we build ourselves — in an ASP.NET website – supported by the Azure ACS. We then know we have largely emulated Ping Federate’s own AS behaviour using its own test site – and we have done this at a good enough level for getting a first OAUTH project off the ground (by learning from the best).

Using Visual Studio 2012, create a web forms project, internet type. Register Google as an IDP, leveraging the built-in OAUTH provider that comes with ASP.NET. Now add a class with our own OAUTH provider and register it alongside Google. This allows us to send OAUTH2 messages to an Azure ACS namespace’s token issuing endpoint. It also allows the provider to leverage other “support” interfaces that allow our AS to delegate to ACS much of the work of implementing an AS service.
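
For a flavour of what “a class with our own OAUTH provider” amounts to, here is a hedged skeleton against DotNetOpenAuth.AspNet’s IAuthenticationClient (the extension point the built-in ASP.NET 4.5 providers use); the class name, URLs and the registration call in the trailing comment are illustrative assumptions, not the production code:

```csharp
// Hedged skeleton of a custom provider; all names and URLs are placeholders.
using System;
using System.Web;
using DotNetOpenAuth.AspNet;

public class AcsOAuthClient : IAuthenticationClient
{
    public string ProviderName { get { return "ACS"; } }

    public void RequestAuthentication(HttpContextBase context, Uri returnUrl)
    {
        // Send the browser off to whatever authorization endpoint this provider fronts.
        context.Response.Redirect("https://idp.example.com/authorize?client_id=...&response_type=code" +
            "&redirect_uri=" + Uri.EscapeDataString(returnUrl.ToString()));
    }

    public AuthenticationResult VerifyAuthentication(HttpContextBase context)
    {
        // On the return leg, swap the code for a token; on success hand back the identity.
        string code = context.Request.QueryString["code"];
        if (string.IsNullOrEmpty(code))
            return AuthenticationResult.Failed;
        return new AuthenticationResult(true, ProviderName, "providerUserId", "userName", null);
    }
}

// Registered in App_Start/AuthConfig.cs alongside the stock providers, e.g. (assumed overload):
// OpenAuth.AuthenticationClients.Add("ACS", () => new AcsOAuthClient());
```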

The account-linking model of the default website thus built allows us to show that we have logged on to a locally-managed account (called rapstaff) having used a Google-asserted identity. In other words, we have a user session on our AS, induced by an inbound ws-fedp SAML1.1 assertion received from ACS – since this is normally how we talk to Google’s IDP. We can now press the CIS button, which emulates an OAUTH client (called Fabrikam2) invoking the AS and its token issuing endpoint. Obviously, we will make the URLs look and respond like the equivalents in PingFederate’s implementation.

image

What we have just done with button clicks is the equivalent of using this screen from the OAuth2Playground:

image

image

Now typically, when we want a URL to specify which IDP to use, we invoke our particular pattern (which sends off a ws-fedp request to ACS, requesting further gatewaying with Google’s OpenID endpoints).

https://localhost:44302/v2/wsfederation/Default.aspx/google

Now we can accomplish some but not all of what the Ping URL does from the start, by having a page class that redirects:

image

If no SAML session exists, this lands on an IDP chooser page. Upon the return of the assertion from Google, say, the CIS button is logically pressed … continuing onto the AS phase of the flow (invoked also by the CIS button). If a local SP-side session exists, the chooser and authentication process is omitted – passing straight to Go!

image

Once invoked, we see the (CIS) provider pass control to our implementation of the authorization_code-handling component of our AS (which delegates much of its implementation to Azure ACS). At this point, none of the parameters shown are configurable from the Ping-emulating URL.

So, now to get rid of button pressing and make the flow be driven from the formal interface of an OAUTH AS endpoint. We implement a form handler:

image

 

With an existing user session (for a local user “rapstaff” previously linked from Google websso), a user is directed to the consent page – which will authorize creation of an “authorization_code” persistent grant.

image

If there is no user session, we get a Ping Federate-like experience. The OAUTH AS site invokes an “IDP Connection”, as selected by the user:

image

Next, we see the storage of an authorization grant – using the (OAUTH-guarded) ACS service – which also mints the authorization code to be issued to user devices.

image

image

From the relying party NAME, the token issuing service will require that the party seeking to swap the code for a token must cite the realm of the named party – acting as a scope.

Posted in oauth, pingfederate

spying on the OAUTH interaction with ACS; limits to fiddler

The Azure ACS offers an OAUTH endpoint – for access tokens. One gives the parameters of the authorization grant and it gives back an access token. Or rather it does if you know what, specifically, to send.

To spy on a working user of the token endpoint (to learn the magical parameters that work), turn off https. Then install and use fiddler.

image

To make Fiddler capture a server-initiated HTTP request, we added the system proxy:

image
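
A hedged sketch of the code-side equivalent (the same thing can be done declaratively under the system.net defaultProxy element in web.config); 8888 is Fiddler’s default listening port:

```csharp
// Sketch: route server-initiated HttpWebRequest/WebClient traffic through Fiddler.
using System.Net;

static class FiddlerProxy
{
    // Call early (e.g. in Application_Start) so outbound requests show up in the capture.
    public static void Enable()
    {
        WebRequest.DefaultWebProxy = new WebProxy("http://127.0.0.1:8888", false);
    }
}
```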

Since the endpoint is working over https, we had expected to be able to leverage the https MITM of Fiddler. But the client code is designed to detect MITM https (being a token-endpoint-consuming service, after all). Thus it will not accept Fiddler’s spoofing certs; they are always invalid.

But we were lucky enough to be able to send things over http (just so we could learn).

image

We can now see the name/value pairs sent, with which encoding, etc.

image

When we use our own code, we can now see the handshake with Azure ACS when depositing the authorization grant – and getting the authorization_code. So we know that the issuing criteria are satisfied (using a setup from some Microsoft sample code).

image

On the code-consuming side, we see our request (which we then compare with the sample):

image

vs sample…(allowing for change of redirect value, when using our apparatus):

image

Posted in oauth, SSO

Building an ACS-based OAUTH2 Authorization Server instead of using Ping Federate OAUTH AS

This is a difficult post to write – if only because sales folks can be so ham-fisted at their negotiation art at times – not realizing THAT, perhaps because of poor support from their marketing department, competition changes with time. In particular, for Ping Identity, the Azure ACS – at its ZERO DOLLAR price point – makes it cost effective to program against that service to create an OAUTH2 authorization server yourself. Furthermore, with the general availability of Azure AD and its complementary nature with ACS namespaces (plural), yet another feature of Ping Federate has been commoditized: that of exposing a conforming SAML2 IDP endpoint (supported by simpler WIF/ws-fedp endpoints in your own software set). The focus of Azure AD on creating one’s own app-communities (those authorized to plug in to one’s own APIs) and enforcing these “communities of interest” should give even Ping Identity’s 7.0-series features of Ping Federate a run for their money (I’m guessing)!

Now, I can see Ping sales engineers arguing that the Azure AD/ACS commodity support is incomplete. But, having used Ping software for years as a customer with a perpetual license, I can also attest (and I know the firm concurs with me, in semi-private) that no one in the world uses the high-end features of SAML2. We are in the COMMODITY phase of the market (where lots of folks use the bottom 80% of a standard’s features, as designed 10 years ago… and when interworking is largely error- and hassle-free). This is clearly the case for SAML2 asserting parties, supported now by directory graph endpoints – with which to get additional claims about the user, including claims from the VAR chain that go beyond the scope of what information Azure AD itself manages.

Now, we have all actually been using OAUTH2 in production for a while – simply by using the management service API of ACS itself to provision IDP and RP records. We have libraries for the client that exchange a resource-owner’s credential set using the resource-owner grant authorization mode of the standard. That is, a username/password is exchanged for an SWT-formatted access token – which is used in the API call request header thereafter to drive the ODATA data interface exposed by ACS, with which one manages the rest of the management entities (including IDPs, RPs, claim transformation rules and even client credential sets).

image

This is how, for example, one normally populates clientids/passwords. It is also how one creates “delegation records” – a management entity which enables one to store the persistent grants – such as the “authorization_code” grant type. Apparently Windows Azure Marketplace and SharePoint Online are based on these capabilities – so I’m in good company by relying on it!

Now I feel sorry for Ping Identity in losing a sale – in that their education of ME in the last month was just superb, with their OAuth2Playground site doing a really good job of laying out the case for what OAUTH does, generally. It becomes really quite simple. Furthermore, on a sales eval license the firm exposed me to all the features of their engine (PingFederate). Learning to configure all the advanced modes at “certification grade” was itself educational – forcing one to realize how a complete authorization server really works when delivering the “decision” service BEHIND the protocol endpoints. What the server would allow and not allow (by design) in configuration terms was educational in and of itself – since the limits imposed reinforced certain architectural patterns. It would not let me “abuse” OAUTH – thus ensuring I’ll be reasonably compliant with others when interworking.

But it’s not to be. We are back to using the authorization-server support library from ACS, which allows us in ASP.NET to build our own authorization server logic. It has taken me about 8 hours to remember what I did last time (in pure prototyping mode), adding a consent handler to an ASP.NET web forms project already armed with the ability to do ws-fedp and OAUTH2 interactions with IDPs doing user authentication. The authorization-server support library from Microsoft provides the guts for building one’s own AS – enabling one to hand off to ACS, via the OAUTH2-powered API it offers to your/my “3rd-party authorization servers”, the hard work and security-critical elements of doing “persistent grant management” – and enforcing the so-called authorization_code rules. While I already miss the “polished” services of PingFederate that added SO MUCH VALUE over and above this raw capability, I got something working fast with Microsoft services and software. So well done Microsoft.

And, for OUR project, that’s all I need. And it’s all I can afford.

Now that Microsoft have distributed the official JWT securitytoken handler, an access_token endpoint handler will be able to mint signed JWTs, too.

Finally, we see that the WSDL for the ACS ws-trust version 1.3 (but not 1.5) interface enables us to issue and renew tokens, given a usernametoken – and we have seen how this can issue us JWTs. Now that we understand how a thick client using WCF should manage its own client-side token cache – so a JWT minted by one ws-trust interaction can be used across multiple client channel endpoints and contracts/interfaces (while still allowing the underlying WCF token provider to invoke the issue/renew token support delivered by ACS as required) – we even understand well what Ping was offering in the ws-trust area, when trying to support a WCF installation. Both Microsoft and Ping are lacking documentation here, note.

We also got to play with the Ping Federate endpoint that implemented the “SAML authorization grant” service – swapping a SAML token for an access token. Unlike the integration of SAML via websso with the user-centric grant mechanism, with the full SAML authorization grant mechanism we could map ANY claim from the SAML blob into the access token. I suppose this might help us in the ACS-mediated websso case, where we want to swap the SAML blob, with claims mapped from Azure AD, for a JWT that the directory API will accept, even if signed by third parties.

So thanks Ping Identity for a superb evaluation and a first class and superb training experience. I’m sorry I cannot even repay that – except by recommending others (customers in cash-richer markets) and contrasting what a “value-adding” authorization server really does over the barebones infrastructure of Microsoft Azure ACS AND NOW AZURE AD offers – to the likes of us living with government defense-subsidies …on cash-poor main street.

Posted in oauth

first debugging trial of PingFederate (modern) to Azure ACS SAML2P endpoints

image

image

https://www.rapmlsqa.info:9031/sp/startSSO.ping?PartnerIdpId=https://sts.windows.net/04fb5be2-29ac-48b8-ad54-327e4ff243a0/

Ping Federate (modern) imports the metadata, producing the local configuration as shown above.

image

https://login.windows.net/04fb5be2-29ac-48b8-ad54-327e4ff243a0/federationmetadata/2007-06/federationmetadata.xml

For Microsoft engineering (doing final debugging), this was the result:

image

Well, obviously I’m missing the step of importing the IDPSSO descriptor listed by the SP, into Azure AD.

So how do I do that?

Well, let’s configure the PingFederate SP to be named as an entity using an http URI in the validated namespace and also to listen on an endpoint in the same. We also logically add an SP record to the IDP, authorizing information flow via assertion:

image

image

The result is slightly better than before:

image

image

Let’s now simplify, and ensure we are using a managed account, first. With it, and our existing setup, we can get a SAMLP response:

image

So, let’s now change the name/address of the APP and make it the ACS endpoint of the SAML SP server awaiting the response:

image

We get at least the processing of the response, now!

image

Clearly, we can now process the result, but it violates an SP policy. We see a missing subject name in the authentication statement (which probably upsets the traditionalists):

image

 

image

We can extend the time period allowed by PingFederate (globally), thus:

image

This suggests Ping Identity needs to make the time allowance configurable per IDP CONNECTION – not that my opinion is probably of any consequence.

This allows a first end-end trial:

image

image

image

Some Request options cause the Azure Endpoints to crash (e.g. force authn)


Also, since Azure AD is using RSA with SHA256, I wonder if it will work with our 2-year-old SAML server (which may not support that ciphersuite)?

Posted in SAML, SSO

A US privacy fantasy–based on OpenID Connect

A certain gentleman who once worked at BBN on an NSA contract for key management system production had the unfortunate responsibility of refusing to answer simple technical questions – since he didn’t know whether the answer had been classified or not. Often, it’s the terminology that is classified (not the method) – meaning one violates laws even by using the codewords or even specialized technical terms with non-cleared folks. Finding this all rather ridiculous (but doing it anyway), he would listen to non-cleared folks’ “conjectures” about the meaning or operation of some technical widget – particularly if it was transitioning from military to civilian applications. They would often be quite wild and overly ambitious (not that he could deny or confirm this). But they were often humorous – as folks found “inner meaning” in glimmers of interpretation of this or that.

So let’s play the game.

Let’s imagine that the US is committed to privacy (finally), and this means enforcing it. (It also means everyone ELSE has to enforce it similarly, so the US doesn’t suffer harm through it taking on more responsibility than others; everyone must suffer the pain equally!)

To take such a political hit (since this means the last 20 years of “market-led doctrine” failed, with the final end of Thatcherism and the last 10 years of gunboat diplomacy and wars of humiliation), the US has to get something else it wants – more desperately than giving up its “free market” privacy dogma. Of course, it cannot ADMIT it wants what it wants… (since that’s classified! to prevent folks getting a negotiating advantage!)

And of course that is the centralized gathering and scanning of “cybersecurity logging” records (purportedly to measure attack patterns by having lots of sensor nodes out there…). But, in the space of consumer privacy, this means having a mostly centralized directory service – one that LIMITS who gets the directory record of users as they wander out to some half-baked SP app site (that may not really give a damn about how it handles your privacy). The directory operator becomes the “privacy-policy” enforcement point – limiting who gets what, of personal identity attributes.

One sees in the Azure AD rollout EXACTLY this element of policy control, though Microsoft are going to some pains to hide the bigger picture. (Remember the Palladium and Passport scandals!)

Now I cannot say that the bigger picture is exactly unwanted or socially undesirable; and it is clearly no longer technologically difficult using webby technology. Actually it never was particularly difficult, as we proved in the Allied and US-internal-services shared military Directory(s) world 25+ years ago – using earlier forms of the signed tokens now being contemplated in the world of OpenID Connect.

privacy – security – trust (by spying on the logs): the eternal braid group.

Posted in rant

Azure AD (IDP proxy) and ADFS/PingFederate IDP

To verify a domain in AAD, first remove it from Office 365! Sigh!

My advice is NOT to use the console – which has a particular verification procedure (based on adding a TXT record). In PowerShell, create a federated-class domain, get the (other particular style of) validation information done, and then and only then verify the site. At the same time, one gets to set up the endpoint that allows Azure AD to talk back to ADFS for passive and active reasons.

Using the managed account to log in to the AAD via the PowerShell tool, now add a user (and their unique immutableid/UPN).

Posted in SSO

Connecting Azure AD IDP to an ACS-mediated SP

Using an MSDN-Azure subscription, create an Azure AD instance – alongside an ACS instance/namespace. My AAD is called pingtest1 and the ACS is called homepw. Typically, one authenticates to the Azure portal using a live.com identity, which populates the first administrator.

Add a second administrator, whose identity claim is in the namespace of the AAD instance (Administrator@pingtest1…) in my case.

image

The goal is now to hook up ACS with AAD – by importing the IDP’s metadata into the ACS SP. First we add a throwaway “integrated app” (on “https://localhost/throwaway” URLs) into AAD, which exposes all the endpoints:

image

Then we do the usual ACS screens to add the IDP:

image

Then we use Visual Studio 2012, to which we have added the Identity and Access tool (from extensions). To the web forms project we add the passive STS support, using the wizard (all as conventional). We selected the ACS option (homepw), which showed us we could now bind AAD to our new RP (to be provisioned in ACS). A login at the App induces a flow to choose an IDP from the choices we made as provisioning administrator:

image

image

image

Then we use our temporary password in a screen flow that sets the permanent password. The assertion chain then starts back to our RP:

image

And this is as far as we can get … until we now use the PowerShell for AAD to make a service principal (with redirect) for ACS homepw. And that we try next – since the Administrator@pingtest1.onmicrosoft.com user is now evidently well provisioned and fully live.

image

This shows our RP… and no entry for ACS!

Now according to ACS, for our namespace, our RP endpoint expecting an assertion is

image

https://homepw.accesscontrol.windows.net/v2/wsfederation

So we change our make serviceprincipal script thus:

image

We run this in the AAD console (having installed it, yada yada):

image

image

We see the final record:

image

Having generated some claim mapping rules in ACS for this IDP previously, we try a run of the SP again.

It STILL doesn’t work .. as the name presented by ACS is https://…. whereas we were able to register only a name of form homepw/homepw.access…

So, let’s try to use the I&A tool… spoofing ACS – since this evidently invokes a different means of registering service principals. First we delete the principals we added above.

 

image

image

And this doesn’t work either…

OK, so let’s try something else – NOT using the formulaic name form. Let’s simply use the (https) ACS URL:

image

Hurray! We got from MVC SP to ACS, to Azure AD (to the MicrosoftOnline login authentication service), back to ACS and to our MVC App.

image

You just have to know to make your script behave a bit more like the I&A tool in Visual Studio (not that this worked for me, by itself).

It may or may NOT be relevant that I have added to the list of “integrated apps” using the console

image

Posted in AAD, SSO

faster webservice (Saml) authn to office365 endpoints

It takes just a second to access the web service endpoint of Exchange Online set up with federated security. From a Windows form:

image

The code from the sample takes 20s (as it goes about discovering a URL that is actually pretty static in Office365 land and only then bothers to hit the IP-STS):

image

Slow original sample

Obviously, we are supposed to discover at “connection time” and locally persist the URL – for use the second time!

Posted in office365, SSO

Client side WCF “token-providers”

From http://weblogs.asp.net/cibrax/archive/2006/03/27/441227.aspx we get to learn about the client-side security token managers – and using a SAML token intended for a logically-named entity across several ports:

image

As on the server-side, we see that the key to customizing the security model of the client is through subclassing the ClientCredentials class:

image

On the server side we saw that it was the token manager’s job to take a look at the URI identifying potential token types and hook up the authenticator classes for the token. On the client we see the opposite duty of the manager: to create a token provider class (for classes of token type). In this case, it’s the SAML token type (a member of the issued-token class). The custom class mostly borrows functionality from the standard apparatus for such tokens.

image

image 

But it does do a little specialization allowing us to control when we take the token from cache or when we talk to the STS to get a new one (that is then stored).

image
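
A hedged sketch of the shape of that client-side plumbing (the class names and the naive cache are mine; the real sample is richer):

```csharp
// Sketch: ClientCredentials subclass whose token manager wraps the standard issued-token
// provider with a simple cache, so we only go back to the STS when the token has expired.
using System;
using System.IdentityModel.Selectors;
using System.IdentityModel.Tokens;
using System.ServiceModel;
using System.ServiceModel.Description;
using System.ServiceModel.Security;

public class CachedTokenClientCredentials : ClientCredentials
{
    public CachedTokenClientCredentials() { }
    protected CachedTokenClientCredentials(CachedTokenClientCredentials other) : base(other) { }

    protected override ClientCredentials CloneCore() { return new CachedTokenClientCredentials(this); }

    public override SecurityTokenManager CreateSecurityTokenManager()
    {
        return new CachedTokenManager(this);
    }
}

public class CachedTokenManager : ClientCredentialsSecurityTokenManager
{
    public CachedTokenManager(CachedTokenClientCredentials credentials) : base(credentials) { }

    public override SecurityTokenProvider CreateSecurityTokenProvider(SecurityTokenRequirement requirement)
    {
        var standard = base.CreateSecurityTokenProvider(requirement);
        // Only wrap the issued (SAML) token requirement; everything else keeps the stock provider.
        bool issued = requirement.TokenType != null &&
                      requirement.TokenType.IndexOf("saml", StringComparison.OrdinalIgnoreCase) >= 0;
        return issued ? new CachingTokenProvider(standard) : standard;
    }
}

public class CachingTokenProvider : SecurityTokenProvider
{
    private readonly SecurityTokenProvider _inner;
    private SecurityToken _cached;

    public CachingTokenProvider(SecurityTokenProvider inner) { _inner = inner; }

    protected override SecurityToken GetTokenCore(TimeSpan timeout)
    {
        // Reuse the cached token while it is still within its validity window.
        if (_cached == null || _cached.ValidTo <= DateTime.UtcNow)
            _cached = _inner.GetToken(timeout);
        return _cached;
    }
}
```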

This all makes a lot of sense!

Posted in SSO

Understanding securitytokenmanager and bindings in WCF

I never had a good mental model of how the security apparatus works in WCF.

We see first how the main control point on the server for the developer is the service credential. First configure the cert on the default behavior and then substitute one’s own behavior. It has the same cert assignment, but now one gets to set one’s own securitytokenmanager class.

image

image

The token manager is a recognizer of tokentypes – presented at host opening time. The tokentypes presented depend on the binding assigned to the endpoints of the service (as read from the service definition in web.config). For each token type, as part of the opening process, essentially one attaches an authenticator class for that token type.

image

image

Because we added the establishSecurityContext=false property to the message requirements declaration, the host opening protocol no longer asks the manager to bind an authenticator to the “sessiontoken” type of token. Normally, a library authenticator would look after it, in any case. Of course, we still get asked to recognise “http://schemas.microsoft.com/ws/2006/05/servicemodel/tokens/AnonymousSslnego” because we have yet to turn off the “service cert negotiation” feature, which tries to make things as simple as possible!

image

OK, so we see how, via the servicecredentials behaviour, we get to control the configuration of tokentype authenticators. Now, the URI for the username token used in the WCF client is NOT the same as the URI found in the WSSE namespace for a usernametoken to be presented in a ws-trust RST, normally. Small gotcha, note: “http://schemas.microsoft.com/ws/2006/05/identitymodel/tokens/UserName”.

To react to WSSE namespace username tokens we would have to recognize

image

But we are learning! (Or rather, we are learning what folks who did their learning in the Microsoft WSE-era work already know!)

Ignoring stuff about authorization policies (which we can think of as just a container for passing claim sets between the token handlers/generators and our service implementation), at the heart of the matter lies the ability to register a class that confirms whether the username/password combo is good (and lists some claims as a result). Most of the work is inherited from the Windows account checker.

image
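
To make that last step concrete, a hedged sketch of the server-side pieces (names are illustrative; a real validator would check a real credential store and emit real claims):

```csharp
// Sketch: ServiceCredentials subclass whose token manager attaches a username/password
// authenticator backed by our own validator class for the WCF username token type.
using System.IdentityModel.Selectors;
using System.IdentityModel.Tokens;
using System.ServiceModel.Description;
using System.ServiceModel.Security;

public class MyServiceCredentials : ServiceCredentials
{
    public MyServiceCredentials() { }
    protected MyServiceCredentials(MyServiceCredentials other) : base(other) { }

    protected override ServiceCredentials CloneCore() { return new MyServiceCredentials(this); }

    public override SecurityTokenManager CreateSecurityTokenManager()
    {
        return new MyServiceTokenManager(this);
    }
}

public class MyServiceTokenManager : ServiceCredentialsSecurityTokenManager
{
    public MyServiceTokenManager(MyServiceCredentials credentials) : base(credentials) { }

    public override SecurityTokenAuthenticator CreateSecurityTokenAuthenticator(
        SecurityTokenRequirement requirement, out SecurityTokenResolver outOfBandTokenResolver)
    {
        // SecurityTokenTypes.UserName is the WCF-internal URI quoted in the post above.
        if (requirement.TokenType == SecurityTokenTypes.UserName)
        {
            outOfBandTokenResolver = null;
            return new CustomUserNameSecurityTokenAuthenticator(new MyPasswordValidator());
        }
        return base.CreateSecurityTokenAuthenticator(requirement, out outOfBandTokenResolver);
    }
}

public class MyPasswordValidator : UserNamePasswordValidator
{
    public override void Validate(string userName, string password)
    {
        // A real implementation checks a credential store; throw to reject the token.
        if (string.IsNullOrEmpty(userName) || password != "expected-password")
            throw new SecurityTokenException("Unknown username or bad password.");
    }
}
```

The subclassed behaviour is then substituted on the host (remove the default ServiceCredentials behavior, add the custom one), exactly as described at the top of the post.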

Posted in SAML, SSO

Trulioo and UK IDAP

image

Wonderful. The company doesn’t even feature SSO itself…

Yet another company that wants to sell SSO “services” to others who rely on IDPs – but it itself will not rely. There is no “case” for it, evidently. If there is no case here, why elsewhere?

So apart from the mythical logon to government websites by contractor employees (which is just as easily solved with smartcards, client certs and, err, what the US calls CAC cards), what is the use of all this?

It seems everyone wants to be Equifax – providing the “evidence” upon which YOU make trust judgements. It’s just fascinating to see how little even then-industry-insiders understand today what VeriSign ADDED back then to its Equifax data feed – and how this directly impacted its valuation as an “owner of a **new**-generation of telco-class infrastructure”.

Posted in SSO

Making the RST-R and Assertion from Active STS to Office365

Using our knowhow in WIF generated from the passive STS effort, it’s now (mostly) trivial to make the kind of assertion and response message required by Office 365 (to authenticate Outlook or other thick clients).

image

Assuming one has a raw username token processor class and a token generator from the CodePlex best-practices samples, make the following changes to get closer to Office 365 compatibility!

To the username token processor we add the authentication statement (and some claims that perhaps arguably ought to be better added later, in the GetOutputClaims method!)

image

And then, the usual RSA/SHA1 signing method is required:

image

And to ensure the authentication statement actually gets minted (and the claims associated with authentication of the username token get populated in the authorization statement) we pass through the authenticated status (from red 1, above) via the trick shown as blue.

image

End.

Posted in office365, SSO

looking at chained openid connect, given oauth2.0 authorization code grant

I now understand enough about OAUTH2.0 in its modern incarnations to have a look at OpenID Connect. What does the latter do?

The answer is… I don’t know (but I have every reason not to trust the folks involved, if only because of how the phone companies lied and spied on folks – setting the precedent for how OpenID Connect is likely to work, socially, too).

When we look at http://openid.net/specs/openid-connect-basic-1_0.html, let’s imagine we are now working with the 25+ year-old architecture of the secure X.500 directory. Assume the old concept of the universal directory and OpenID Connect are one and the same – and the core features are common. What MIGHT they be? Assume that technology changes (as bits, bytes and blobs shift between binary, XML, text and back to binary…); and that such changes are essentially irrelevant.

Well! We know from the write-up that an SP might receive a web visit from some consumer with a purported name claim (e.g. peter@rapmlsqa.info) – perhaps in the Google namespace rather than my namespace, that of Windows Identity land or Yahoo land. The visit is to the likes of one’s Office365 SharePoint site – rather than a simple website.

Of course, let’s say we are registered with the Windows world (being marginally less evil than Google). Assume the politics evolves (quite quickly) such that one MUST use a TTP to mediate one’s web presence with consumers (rather than use the web concept from TBL). Thus we can perform the OAUTH2.0 authorization_code handshake – either for consumer consent, or for tenant admins to consent to the presence and value-add of data-consuming APPs (from third-party vendors).

Through our Azure ACS instance, we have access to Google, Microsoft and Yahoo IDPs of course – who we assume (under USG “prompting”) have decided to allow reciprocal access to each other’s directory access points, for any tenants of each _other’s_ cloud. To authorize the other cloud’s SP to access the IDP’s directory graph API endpoints of course requires an OAUTH-2.0 dance – in which the authorization server serving the SP (this being a Windows-world AS, when helping out ACS supporting a Windows SP website and web services) issues a signed JWT – one that is “viable” in a cross-vendor world because the certificate/signature on the JWT can be evaluated at the directory graph endpoint(s) of the two other IDPs. After all, it’s just a signed blob, supported by a cert – giving it mobility.

So is this what OpenID Connect REALLY is – that mere OAUTH 2.0 is not?

I’ll guess it’s the fact that certain OAUTH AS/STSs issuing the JWTs as “identity tokens” (subtly distinct from the authorization token within the same access token response) carry political privileges such that the SP web app “governed by” one cloud vendor can make web service calls with it to any of the “national infrastructure” directory endpoints.

If I think like X.500, folk in the Windows SP world will likely STILL not make direct access to a Yahoo directory endpoint. Rather, it’s more likely that a chained directory operation will be performed, with the chaining being orchestrated by the “cooperating clouds”. (And you can be assured that NSA/DHS/CIA/FBI are interested in JUST such a collection/collation point, since it’s now EASY to get the “consumer-centric” trap/trace records for the “cybersecurity” mission.)

If this is what’s going on, this is JUST AS IT WAS IN the secure X.500 concept of operations (as practiced in the Allied military Directory, for one). One issued (as a “governed” DUA/SP/App) a signed request to a local DSA (e.g. Windows SP/SharePointApp to Windows DSA/ACS), and Windows Cloud will presumably now go chain off the (JWT-bearer-signed) request to Yahoo’s graph API, say (and proxy the response from Yahoo back). It would also naturally re-sign the response – as required for the local security domain, but only with symmetric keys that authorize and LIMIT the use of the (Yahoo-managed) directory result at the particular SP’s _governed_ webservices (the APPs on particular registered webservice endpoints, supporting such things as SharePoint Online at participating tenants).

Now, this is 100% conjecture. But it’s what I would do (with 25-year-old directory technology, rebuilt with webby blobs). As with the Directory of 30 years ago, it’s the POLITICAL power that drives it – and of course it’s the politics that MAY WELL UNDERMINE it (once folks see it). What I can say, given it authorizes apps as well as access to the directory naming context, is that it’s MORE than X.500 – which may induce greater social acceptance… since “there is more” value.

If this concept were to be classified (so no one has much say, with this being the true point of the classification…), it would have merely a NATO Confidential label. In the US, it would be (stupidly) classified at top secret (to give it an allure of importance within the million government and contractor employees entitled to see it…).

Posted in oauth, OpenID

Making a Realty Active STS for Office365 visible to the world…

I’ll document this only because it took 16 hours of head banging to do it; it should have taken an hour, and DID take an hour with PingFederate. It’s quite fascinating to see the gotchas I overcame (once one makes “non-developer” assumptions, unlike those found in sample code).

image

So now we have proved that we can expose an IIS-hosted STS in our data center concept – though truth be told, so far it’s sample code running as pages/services co-resident with our main ASP.NET webapp. But something works, end to end.

Obviously, the next fix is to apply our programming knowhow to make a response that Office 365 would want to see (with Authentication Statement, etc.), and test that against the Office SP. Then we figure out how to apply our own username token validation class (rather than the sample’s class – which actually checks nothing right now).

Wow. Raw persistence counts for everything.

Some notes:

If you are going to host such a sample in a subdirectory of a webapp (the webapp having its own web.config, distinct from the sample’s in the subdirectory), ensure the Application settings are in the parent directory’s config. Similarly, replace the securitytokenhandlers (used by the subordinate path’s pages/services) in the parent too, and register the token handler assemblies in the bin/ directory of the parent (though they are logically used by the subordinate path’s pages).

image

At the same time, the WCF settings have to be in the web.config within the subdirectory, being tied to the path denoting the address of the service.

Posted in SSO

Changing the default Visual Studio WCF STS to be Office365-compatible

What few changes do we have to make to our working ws-trust client/server to make it use the older profile of ws-trust (as used by PingFederate and Outlook/Office365, evidently)?

Let’s state the desired output, as shown by client-side traces:

image

request

image

and response

To make the client, we simply made a few obvious changes of constants:

image
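For reference, the client-side change seems to come down to picking the older binding and trust version; a sketch, assuming a WSTrustChannelFactory-based client (endpoint address and credential settings are illustrative):

using System.ServiceModel;
using System.ServiceModel.Security;

static class Feb2005ClientSketch
{
    // The "constants" in question, I believe, amount to selecting the older profile:
    // WSHttpBinding (2005-era security versions) rather than WS2007HttpBinding, and
    // TrustVersion.WSTrustFeb2005 rather than TrustVersion.WSTrust13.
    public static WSTrustChannelFactory CreateFactory()
    {
        var binding = new WSHttpBinding(SecurityMode.TransportWithMessageCredential);
        binding.Security.Message.ClientCredentialType = MessageCredentialType.UserName;
        binding.Security.Message.EstablishSecurityContext = false;

        var factory = new WSTrustChannelFactory(binding,
            new EndpointAddress("https://sts.example.com/Issue.svc"));   // illustrative address
        factory.TrustVersion = TrustVersion.WSTrustFeb2005;              // was WSTrust13
        return factory;
    }
}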

And the server config file was not much harder:

image

Based on these trials, it’s obviously trivial to alter the config of our pseudo-production STS to offer the right version of ws-trust for Office 365 purposes:

image


PS

Note how, with the custom binding, we started to play with the idea that the SSL load balancer might continue to terminate the SSL session (leaving the hop between the LB and the resource server not secured by https). Does this affect the binding config in IIS6? Do we still expose the https binding, with cert etc.?

Posted in SSO

Deploying to IIS6 an IIS-hosted STS with usernametoken processing capabilities

So let’s go through the (hard-to-recall IIS6-era) basics using interactive tools (vs a prepped script). The goal is to host, in this production-like environment, the STS site already working in a developer IIS Express 8.5 server (based on source code from Microsoft’s Claims Aware sample code for active clients). The exercise is to figure out what we need in code/config so things work in production (not just on a developer workstation setup).

We create a website (not a virtual directory of the default website), to which we assign 2 bindings. Back in the IIS6 era, we assign all the Ethernet adaptors responsibility for port 80 and assign the host header used for the site’s identity exposed at the NAT’ing load balancer. We also see that this site exposes port 8443 (so we can expose a direct SSL (non-LB-terminated) path through the firewall/load balancer).

image

We assign the default application of the website to a non-default, active application pool, with a domain identity assigned to the worker process (and this account has all the NT ACL rights and privileges on the source directory (and files) in the file system supporting the website).

 

image

image

We clearly have my testing (but production-real) SSL cert assigned, which establishes authority for the name www.rapmlsqa.info (which is the host-header we assigned).

image

The process identity also has access to the private key (but we cannot show this, back in the IIS6 era).

And we see, from a browser on the same host (where Fiddler is mapping www.rapmlsqa.info to localhost in its HOSTS file):

image

and from a browser OUTSIDE the loadbalancer, with a single resource server in its pool and a flow policy set for SSL pass through:

image

from outside the loadbalancer (the internet…)

And, in the STS webapp’s web.config we set the following behaviours so WSDL is exposed with external names in the addressing:

image

and for which cert 1, I GUESS, is to enable the metadata services to work, whereas cert 2 in WIF is there for use when encrypting the RST-R TO the STS. Another configuration of a cert, in the application properties, is used by the STS code. Getting these right (and pointing, via a CN=XYZ reference, to a cert in the MY/Personal store for the local machine), we can read the WSDL – which has the right internal address names:

image
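The cert plumbing referred to above reduces to a store lookup of roughly this shape (a sketch; the subject-name approach mirrors the CN=XYZ style of reference, and the exception message is mine):

using System;
using System.Security.Cryptography.X509Certificates;

static class StsCertLookup
{
    // Resolve the cert the same way the config's CN=... reference does:
    // by subject name, from the LocalMachine "Personal" (MY) store.
    public static X509Certificate2 FindBySubject(string subject)   // e.g. "www.rapmlsqa.info"
    {
        var store = new X509Store(StoreName.My, StoreLocation.LocalMachine);
        store.Open(OpenFlags.ReadOnly);
        try
        {
            var matches = store.Certificates.Find(
                X509FindType.FindBySubjectName, subject, false);
            if (matches.Count == 0)
                throw new InvalidOperationException("Certificate not found in LocalMachine\\My.");

            // The app-pool identity must also be able to reach the private key,
            // or signing/SSL will fail at runtime even though this lookup succeeds.
            return matches[0];
        }
        finally
        {
            store.Close();
        }
    }
}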

So clearly, our STS has been activated – able to produce metadata about the ws-trust13 port and its message capabilities. Will it now cooperate with a WS-trust client?

It seems so, given the client logging view of the request and response:

image

image

Clearly we see a SAML 1.1 response, in a ws-trust “13”-era ResponseCollection, sourced to our resource server (visible to the outside world, for the Americans/British/Chinese to spy on!!). Now that we have no firewall protecting us (so that SSL works end to end), we will need to be more careful about the (PRODUCTION) web server config…

So there is little about the (almost) production environment itself that is interfering with our production STS built into a production webapp (with real username token processors).

Posted in SSO

Starter simple active STS for Office365

On Windows 2012, install the http://claimsid.codeplex.com/releases/view/67606 samples and do your best to configure them. Configure the certificates, even though they will appear not to configure properly… they do.

Our simplest project for a ws-trust client and server (hosted in IIS Express) was formed by taking directly from the various samples: the ACS sample set for the username token (we stole the client) and the best-practice sample set for the username server (we stole the Litware STS).

There is now a minimum of fuss – over SSL an RST is sent, and an RST-R returned. The request contains a username token with, guess what… a username and password. A SAML blob comes back in the RST-R.
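For orientation, a sketch of the client-side moving parts, written against the .NET 4.5 System.IdentityModel / WSTrustChannelFactory API rather than the WIF-SDK-era sample’s exact classes (the STS address, credentials and relying-party URI are placeholders):

using System.IdentityModel.Protocols.WSTrust;
using System.IdentityModel.Tokens;
using System.ServiceModel;
using System.ServiceModel.Security;

static class UsernameRstSketch
{
    public static SecurityToken RequestSamlToken()
    {
        // Username/password travel inside the message; SSL protects the transport.
        var binding = new WS2007HttpBinding(SecurityMode.TransportWithMessageCredential);
        binding.Security.Message.ClientCredentialType = MessageCredentialType.UserName;
        binding.Security.Message.EstablishSecurityContext = false;

        var factory = new WSTrustChannelFactory(binding,
            new EndpointAddress("https://localhost:8443/Issue.svc"));    // placeholder STS endpoint
        factory.TrustVersion = TrustVersion.WSTrust13;
        factory.Credentials.UserName.UserName = "alice";                 // placeholder credentials
        factory.Credentials.UserName.Password = "password";

        // The RST: "issue me a bearer token scoped to this relying party".
        var rst = new RequestSecurityToken
        {
            RequestType = RequestTypes.Issue,
            KeyType = KeyTypes.Bearer,
            AppliesTo = new EndpointReference("https://relyingparty.example.com/")  // placeholder RP
        };

        // The RST-R comes back carrying the SAML blob.
        return factory.CreateChannel().Issue(rst);
    }
}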

From here, we can look at what we need to do (probably going back in time) to make this compatible with Outlook when working with Office365 endpoints.

We see (from Ping Federate logs) that Outlook sends:

image

and the (PF) STS sends back, for Office365 usage:

 

image

image

We can obviously compare these with the logs of the project’s client/server:

image

RST

image

RST-R

As with the passive STS work, we can assume that Office365 wants to use the older 2005 profile of ws-trust.

If you use IIS Express, don’t forget to set the SSL Enabled property to true (and find the port). If you use IIS, don’t forget to add Everyone to the ACL of the (localhost) private keys!

Posted in SAML, SSO

Playing with Realty STS–processing usernametokens

Using Visual Studio 2010 (with the WIF SDK installed), create a web site using the Claims Aware SERVICE (WCF) template. Running with Fiddler, browse to the metadata at the likes of https://ssoportal.rapmlsqa.com/SpInitiatedSsoHandler.aspx/VCRD/11

I used Fiddler’s raw view to see the response in Notepad (and I edited away the HTTP response headers).

image

We then correct an error (removing an extra slash after com/), saving the result to a file.

image

In fact we amend even this corrected copy (to use a test SSL-enabled IIS binding with end-to-end SSL through the load balancer, listening on port 8443).

image

We then use the STS wizard for the claims aware service project:

image

Since the endpoints are actually hosted on 8443 (something the static metadata and the mex service are failing to publish), we make a manual adjustment in the (generated) binding:

image

Then we add an ASP.NET web forms website to our solution and to it add a service reference (pointing to our WCF service, discovered in the solution project set):

image

…and we see a client config being generated from looking at the /mex endpoint of the service (which reports on the /mex endpoint of our STS, with the right port, even).

image

Once we compile the solution, our service reference class is generated. We can thus create an instance of the client proxy and see what happens!

image
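If the generated proxy and config are healthy, the “see what happens” step looks roughly like this (a sketch; the service reference and operation names are placeholders for whatever the wizard generated):

// Placeholder names throughout; use whatever the Add Service Reference wizard generated.
static class ProxySmokeTest
{
    public static void Run()
    {
        var client = new ClaimsServiceReference.ServiceClient();

        // Assuming the issuer binding requires a username token (as the post title suggests),
        // these credentials are used against the STS first; the issued token is then
        // presented to the WCF service on the actual call.
        client.ClientCredentials.UserName.UserName = "alice";
        client.ClientCredentials.UserName.Password = "password";

        string result = client.GetClaims();   // placeholder operation name
        client.Close();
    }
}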

Posted in SSO

Augmenting Realty RETS protocol with vendor extensions – OAUTH AS STS issued RSA-signed JWTs

https://datatracker.ietf.org/doc/draft-ietf-oauth-saml2-bearer/?include_text=1 shows an IETF internet-draft document on the topic of what in realty we would call a RETS client talking to a RETS data server login endpoint in order to retrieve such as an MLS member record (or listings). Traditionally, data consumers using RETS client software are issued device/vendor credentials, including passwords and/or RSA signing keys. The vendor record stores the “scopes” that limit what their RETS client can do, having used the login endpoint to get a session-token. The login response message turns the scopes into a list of “capability” URLs on which various data-providing endpoints are activated. Once the vendor uses the login transaction to get the session-token and the list of scope/URLs for that session-token, the RETS client software will thereafter present the session-token to the data-service endpoint along with a data request – to retrieve some specified set of entity instances (in XML or some other data format). Typically a browsing user goes to a VAR site for the MLS – a site built using a webapp which builds in the data-service-consuming features of the vendor’s RETS client (as above). Typically, the webapp merges the data with other data sources and presents an enhanced view of the realty data. One incarnation of the webapp would be an Office365 SharePoint website into which the vendor’s “SharePoint App” has been registered, and whose implementing website has embedded RETS client software and credentials.

OK. All the above is ancient history, deployed globally, and old-hat technologically. So let’s overlay all the new terminology of OAUTH 2.0 so RETS gets a new coat of IETF-colored paint.

The resource “owner” is an MLS member (with a member record…and listing records).

The resource “server” is the set of post-login endpoints exposed by the RETS server, per MLS tenant.

The OAUTH “client” is the vendor using RETS client software to pull data, server to server. The OAUTH client has a vendorid/vendorpassword and an optional RSA signing key.

The STS component of the OAUTH AS (authorization server) is the RETS Login transaction (a realty-specific interface for issuing access-tokens known as lists of capability URLs).

Assume PingFederate is the OAUTH AS, configured to perform the authorization_code grant use cases. The vendor’s webapp invokes the OAUTH-AS-managed process when the Realtor first visits the webapp (with embedded RETS client). A “persistent grant” of authority is stored by the OAUTH AS, recording the individual Realtor’s “consent” for the vendor of the webapp to use the server-to-server RETS client data channel’s “capabilities” (once obtained). If the grant is revoked, or expires, or is otherwise terminated, the vendor’s webapp would normally perform the authorization_code use case again. As the first time, the Realtor will authenticate (using websso) and issue consent. In the SharePoint 2013 incarnation, having obtained access to SharePoint list data as a consequence of being launched as a “SharePoint App”, the vendor’s webapp will then use the process described to get authority to access the RETS data, allowing the app to present an enhanced view of both data sets.
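To make that concrete, the front-channel request the vendor webapp redirects the Realtor’s browser to is just a standard OAUTH 2.0 authorization request; a sketch with entirely illustrative endpoint, client_id, redirect_uri and scope values:

using System;

static class OAuthRedirectSketch
{
    // Standard OAUTH 2.0 authorization_code request; every value here is illustrative.
    public static string BuildAuthorizeUrl()
    {
        return "https://as.example-mls.com/as/authorization.oauth2" +               // placeholder AS endpoint
               "?response_type=code" +
               "&client_id=" + Uri.EscapeDataString("vendor-webapp") +
               "&redirect_uri=" + Uri.EscapeDataString("https://vendor.example.com/oauth/callback") +
               "&scope=" + Uri.EscapeDataString("rets-search rets-getobject") +     // placeholder scopes
               "&state=" + Uri.EscapeDataString(Guid.NewGuid().ToString("N"));
    }
    // The webapp 302-redirects the Realtor's browser to this URL; after websso and consent,
    // the AS redirects back to redirect_uri with ?code=...&state=..., and the persistent
    // grant is recorded at the AS.
}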

The RETS protocol already features both channel-specific and URI-resource-specific security mechanisms – making sophisticated use of the digest authentication standard. It also leverages a session-token (whose integrity and legitimate use on particular channel instances is secured by means of specific digest authentication countermeasures).

Logically, once the vendor webapp has received the authorization_code it SHOULD, within 60s, perform a RETS login transaction, citing the code in a RETS HTTP request extension header along with the vendorid/password and RSA signature (in a second RETS HTTP request extension header). The RETS login response will contain an additional vendor-specific extension field – bearing the RSA-signed JWT issued by the PingFederate OAUTH AS STS component in response to a request made by the RETS login service, which response is consumed by the login service and inserted into a RETS login response extension header.
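A sketch of what that login exchange might look like from the vendor’s RETS client; the extension-header names are invented purely for illustration, since RETS vendor extensions are by definition vendor-specific:

using System.Collections.Generic;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;

static class RetsLoginSketch
{
    // Header names here are invented; a real deployment would use whatever
    // extension-header names the vendor's RETS extension framework defines.
    public static async Task<string> LoginWithGrantAsync(
        string loginUrl, string vendorId, string password,
        string authorizationCode, string rsaSignature)
    {
        using (var http = new HttpClient())
        {
            var request = new HttpRequestMessage(HttpMethod.Get, loginUrl);
            request.Headers.Add("RETS-Version", "RETS/1.8");
            request.Headers.Add("X-Vendor-Credentials", vendorId + ":" + password);  // illustrative
            request.Headers.Add("X-Vendor-OAuth-Code", authorizationCode);           // the <60s-old code
            request.Headers.Add("X-Vendor-Signature", rsaSignature);                 // RSA signature extension

            HttpResponseMessage response = await http.SendAsync(request);
            response.EnsureSuccessStatusCode();

            // The AS-issued, RSA-signed JWT rides back in a response extension header.
            // The vendor stores it for audit but never presents it on later data requests.
            IEnumerable<string> values;
            return response.Headers.TryGetValues("X-Vendor-Grant-JWT", out values)
                ? values.FirstOrDefault()
                : null;
        }
    }
}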

No presentation of the signed JWT will be made by the RETS client. However, vendors are required to store the tokens, for audit purposes. Being signed with an asymmetric key, the token cannot be forged by the vendor during its intended lifetime. Should any dispute be initiated against the vendor for failing to respect the limits of the MLS data consumer contract on behalf of the data owner (MLS) or the individual Realtor, the vendor has evidence to show compliance and authority to consume data on particular URIs. Failure to present a JWT that shows authority to use endpoints and data for the particular facts of the dispute can be assumed to be evidence of non-compliance.

The scheme needs no inter-vendor agreement, being within the scope of existing per-vendor extension frameworks known to work in the field. The scheme can easily be extended to a multiple-vendor environment, with suitable cooperation.

Now, this description does not yet envision the client replacing the RSA signature on the RETS login request with a SAML assertion. Since the RSA signature is already a vendor-extension field, one can similarly extend the client’s RETS login request with a SAML assertion.

Concerning implementation, we see from the IETF document:

image

Using PingFederate’s existing capabilities, we could require RETS vendors to themselves sign up or renew their authority by (i) completing conventional ws-fedp websso from an MLS IDP to the PingFederate OAUTH AS, configured NOT to present a consent screen, (ii) presenting the authorization_code obtained from Realtor enrollment (which implies that a Realtor has been through a websso and consent process, too), and (iii) citing the access_token in the RETS login header (rather than the assertion or the RSA signature).

As we already saw featured in Azure AD demos requiring the tenant administrator to “admit” a particular third-party webapp (SP) into the site users’ experience, we see here the OAUTH version of the infrastructure for vendor apps to be so authorized. We see this process then being extended to link up with the individual user’s authorization grant (communicated by the authorization_code mechanism). The web app thus has authority from the site of which it is now a component part and authority to present particular user data from a realty data service.

Posted in oauth, RETS

rsa-sha1 signed simple ws-fedp response–for max interoperability (e.g. WIF/Visual Studio STS to PingFederate)

Steve Syfuhs answered my call for information on how to make a simpler kind of federation response, including the usual signed assertion.

The project I built can be downloaded from here and the desired output is shown below (using the Fiddler tool and a Federation ‘inspector’, all from community members):-

image

To repeat the experiment using schoolboy computing apparatus, install Visual Studio 2010 and the WIF SDK.

Then add the website, using the Claims Aware website template.

On that website project, run the STS wizard and add a (passive) STS project. The former is the relying party or service provider (SP), relying on the signed assertion produced by the latter IDP – the asserting party releasing an identity claim about someone or something. Note the default.aspx.cs file.

image

To change the output message formatting from its defaults, perform the following modifications to the pre-generated code in default.aspx.cs:

image

At 1, declare the serializer classes (for the older message formatting regime that we seek).

At 2, use the serializer in a replacement method call that processes the signin message received from the SP, requesting a response with an assertion. You will need to resolve types, which will add package references:-

image
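The shape of the change, roughly, shown with the .NET 4.5 System.IdentityModel.Services class names (the WIF 3.5 SDK spelling differs, and this is a hedged reconstruction rather than the exact code behind the screenshots):

using System;
using System.IdentityModel.Protocols.WSTrust;
using System.IdentityModel.Services;
using System.Web;

static class Feb2005ResponseHelper
{
    // 'rstr' is the RequestSecurityTokenResponse your SecurityTokenService issued;
    // 'signInRequest' is the SignInRequestMessage parsed from the incoming wsignin1.0 query.
    public static void WriteOlderStyleResponse(
        SignInRequestMessage signInRequest,
        RequestSecurityTokenResponse rstr,
        HttpResponse httpResponse)
    {
        // 1) the older (Feb 2005) serializers, wrapped for ws-fedp use
        var federationSerializer = new WSFederationSerializer(
            new WSTrustFeb2005RequestSerializer(),
            new WSTrustFeb2005ResponseSerializer());

        // 2) build and emit the sign-in response with that serializer, bypassing
        //    the default (ws-trust 1.3, ResponseCollection) formatting
        var responseMessage = new SignInResponseMessage(
            new Uri(signInRequest.Reply), rstr, federationSerializer,
            new WSTrustSerializationContext());

        FederatedPassiveSecurityTokenServiceOperations.ProcessSignInResponse(responseMessage, httpResponse);
    }
}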

While you are at it, you MAY wish to maximize interoperability (at the cost of introducing “relatively” lower strength in your crypto). By using SHA1 (or MD5), it will now be 10 years rather than 20 before anyone attacking you that you actually care about can spoof the signature checksum – on your message whose noted expiry is 5m from now anyway… Certain sites, upon receiving assertions, verify signatures only when signed with the RSA/SHA1 crypto combination and 1024-bit public keys.

So that the assertion be signed with RSA/SHA1, alter the constructor of the STS configuration class thus:

image

“http://www.w3.org/2000/09/xmldsig#rsa-sha1”

“http://www.w3.org/2000/09/xmldsig#sha1”

image
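Those two URIs end up as the signature-method and digest-method arguments of the signing credentials set up in the STS configuration class constructor; a sketch (issuer name and class name are illustrative):

using System.IdentityModel.Configuration;
using System.IdentityModel.Tokens;
using System.Security.Cryptography.X509Certificates;

public class RsaSha1StsConfiguration : SecurityTokenServiceConfiguration
{
    public RsaSha1StsConfiguration(X509Certificate2 signingCertificate)
        : base("https://sts.example.com")   // issuer name; illustrative
    {
        // Sign with RSA/SHA1 (and SHA1 digests) rather than the default RSA/SHA-256,
        // for relying parties that only verify the rsa-sha1 combination.
        SigningCredentials = new X509SigningCredentials(
            signingCertificate,
            "http://www.w3.org/2000/09/xmldsig#rsa-sha1",
            "http://www.w3.org/2000/09/xmldsig#sha1");
    }
}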

We can go further into the maximum-interoperability argument and ensure the assertion has an Authentication Statement indicating an authentication instant, an authentication method, a subject name, and a couple of authorization attributes required by Azure AD (in its Office 365 incoming Federation Gateway incarnation).

This requires two steps:

image

Step 1: make the claimset (note particularly the Prip UPN claim, http://schemas.xmlsoap.org/claims/UPN) and assign the claimtype of the name(identifier) claim

and in the STS class’s GetOutputClaimsIdentity() callback method, apply our trick to make Authentication Statements appear in the (SAML 1.1) output assertion:

image

Step 2: ensure the output ClaimsIdentity has its IsAuthenticated property set to true
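A hedged sketch of the two steps together, using the .NET 4.5 method signature (the WIF 3.5 SDK uses IClaimsIdentity/IClaimsPrincipal instead); the UPN value and authentication method shown are illustrative:

using System;
using System.IdentityModel;
using System.IdentityModel.Protocols.WSTrust;
using System.Security.Claims;
using System.Xml;

// Inside the SecurityTokenService-derived class (which also overrides GetScope):
protected override ClaimsIdentity GetOutputClaimsIdentity(
    ClaimsPrincipal principal, RequestSecurityToken request, Scope scope)
{
    // Step 2: giving the identity an authentication type makes IsAuthenticated true.
    var identity = new ClaimsIdentity("Federation");

    // Step 1: name(identifier), UPN, and the claims that drive the SAML 1.1
    // AuthenticationStatement (method + instant).
    identity.AddClaim(new Claim(ClaimTypes.Name, "peter@rapmlsqa.info"));
    identity.AddClaim(new Claim("http://schemas.xmlsoap.org/claims/UPN", "peter@rapmlsqa.info"));
    identity.AddClaim(new Claim(ClaimTypes.AuthenticationMethod,
        "urn:oasis:names:tc:SAML:1.0:am:password"));
    identity.AddClaim(new Claim(ClaimTypes.AuthenticationInstant,
        XmlConvert.ToString(DateTime.UtcNow, "yyyy-MM-ddTHH:mm:ss.fffZ"),
        ClaimValueTypes.DateTime));

    return identity;
}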

With these additional changes we get as output on the wire something I’ve sought for 2 years!

image

Now the open question is: is this response compatible with (i) Ping Federate’s ws-fedp SP, (ii) the Shibboleth ws-fedp SP, and (iii) Office 365?

Yes, I’ve gone back a decade and laid out the non-optimal steps in schoolboy programming style – but that’s what maximum interoperability and maximum tutorial value often entail (since highly politicized vendors/academics ensure modern profiles don’t actually interwork in the wild, using some semantic trick or other to help “structure the market”).

Help on getting this far due to: Ping Identity (lots and lots of what-to and how-to), Steve Syfuhs (how to go back in time in WIF), and Dominick Baier (for the Fiddler ws-fedp inspector and patience beyond measure, given the likes of me).


So, to testing with Ping Federate! Let’s create a connection from its SP agent to this STS – intending the websso assertion to play the role of an OAUTH grant:

image

image

Concerning the mapping from SAML assertions to OAUTH persistent grants (one of the advantages of paying for commercial-grade servers!):

image

Concerning the mapping from assertion to access token we play a little:

 

image

And of course we import the STS’s certificate into the Ping Federate IDP connection, exporting it from Windows thus:

image

making a complete Ping Federate OAUTH-IDP connection of:

image

When we try it in the OAUTH context, acting as the phone app doing a redirect via its web browser to get the first authorization_code after a user authentication (via a ws-fedp websso interaction):

image

We get an interworking success between a (non-SAML2P) WIF IDP and Ping Federate (for my first time)!

 

image

image

and an RSA-signed JWT of:

 

“eyJhbGciOiJSUzI1NiIsIng1dCI6IjcwVzNuUFJDQ3pTZVh1cXdzQlZ5MktNU01QayJ9.eyJVc2VybmFtZSI6InBldGVyQHJhcG1sc3FhLmluZm8iLCJPcmdOYW1lIjoiQWNtZSwgSW5jLCIsImV4cCI6MTM2NTgwNDAyOCwic2NvcGUiOltdLCJjbGllbnRfaWQiOiJhY19jbGllbnQifQ.TwYaXEZG21No4CeocPK7O_CCzx4PjUFIr6KfcD6d1zI-496vSl1xzS5UCqhjTltNuibUWvRntK77lbwQMfxbSeadvs5HPq0DdAdMKWHxbJkY93aMbutVj-GzuaFzXxzHGyRn-U2rwWhRzAcJcrHz9Mb242YctiJadf30z9rDZ_7CQNjrFJNRPEgH6SFy9659Cof-ZvkarhqGKRRskv6ZqWBXLbRIO2GZrfhA6UesK_AT60QOin-eARRRv5bJDCwE6yO3OCcd35RiR0jaY4mGFVJmU2Nqy6_FIqPXCqbrx46M-07TMunA9qCUvgAG1CbhYsPyZhkl49uxZxCvy5u4og”

Just one gotcha to note in the original Ping Federate server IDP connection config: one has to use the SP’s wrealm to give what is normally indicated in the wreply:

image

Perhaps repeat the steps – and let me know what factoid I’ve omitted from the schoolboy’s science project writeup.

P.S.

To make the assertion from the STS indicate an authentication subject name with a particular format,

image

vs

image

one makes a slight modification of the code:

image

Thanks to Steve Syfuhs

 

End.

Posted in pingfederate, SAML, SSO

from WIF ResponseCollection to just Response

By default, a WIF passive STS built from ASP.NET template samples produces a response of the following form:

image

Assume that is a later ws-trust and ws-fedp profile than the one we want.

Now, what we want – in practice – is a response of this form:-

image

image

So what do we do, in WIF programming terms – within the STS callback class or configuration class – to make the latter?

Posted in SAML, SSO