Cypherpunks' existential tizzy

Cypherpunks have got themselves into a UK Communist Party-style tizzy – destined to create the usual "final schism" that befalls all false religions founded on inconsistent (and self-serving) doctrine.

For at least the last 5 years (if not since 9/11), cypherpunks have lived in self-denial: the desire to perpetuate the myth that anarchy won, in full knowledge that it didn't, actually.

Today we know that SSL, IPsec and Skype were/are compromised not by the (figuratively evil) US federal government – that easy-to-target amorphous evil (whose evilness magnitude is a function of size) – but by corporate vendors out to make a buck… while employing and thus feeding one or two folk and their dependents. Of course, the very nature of governance means that corporate profit is a function of general compliance with public or secret policy – that stated explicitly, "or otherwise". Such is the nature of democratic politics (hardly ideal, even in its academic form).

Is cypherpunks dead… like the corpse of the (academic relic of the) UK Communist Party I happened to encounter in my first job (at QMC-CS)?

It certainly seems so – from the dynamics of the end game.

Good riddance (though I'll miss it – not that I read the endless drivel beyond, as with Marx's first pamphlet, the founding prophecy).

Posted in coding theory

a little civil servant joke

image

http://www.theregister.co.uk/2013/05/24/geeks_guide_gchq/page3.html

originally cited by Cryptome.

Posted in early computing

RESO, OData, OAuth2 and NSTIC

US realty is a political place – and rightly so. And so it was with a non-decision by the so-called RESO group not to endorse the OData approach to data APIs. Of course, things had been set up ahead of time so that this non-decision will evolve over time into an endorsement. Quite who outside is driving this will be interesting to identify. I already have some ideas. For example, the Azure AD Graph API comes nicely with OAuth and OData, already!

To be honest, the direction being taken is folly; but the least folly of several choices. Competing with the legacy protocol – which is itself a distant ancestor of OData – is going to be hard, however. This will be particularly the case if the legacy protocol, called RETS, modernizes its session token and login handshake to use (quite easily) the OAuth2 handshake and its bearer access token. If it also offers a new content type, extending those negotiated today to also offer JSON, OAuth/OData is going to have a run for its money – since 80% of what OData advances beyond today's legacy will become available within the RETS framework, for a minor refit cost. While others struggle making OData scale, those using an updated RETS+OAuth2 might capture the 80% of the interactive, RESTful uses that really matter. At that point, it's game over – once one factors in the re-engineering costs of a native OData query engine doing huge searches.

Of course, the simplest OAuth server is a gateway, from new to old, for simple, fixed, read-only search queries – referenced using the OData mechanisms. That toy should take 10m (sketched below).
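A minimal sketch of that toy, under loudly stated assumptions: the listener prefix and the legacy query URL are hypothetical placeholders, and a real gateway would validate the bearer token against the AS (by signature check or introspection) rather than merely checking its presence.

```csharp
// Toy OAuth2-guarded gateway: front a fixed, read-only legacy search
// with a bearer-token check, returning the backend's JSON as-is.
using System;
using System.Net;

class OAuthRetsGatewayToy
{
    static void Main()
    {
        var listener = new HttpListener();
        listener.Prefixes.Add("http://localhost:8080/listings/"); // hypothetical
        listener.Start();
        Console.WriteLine("Gateway listening...");
        while (true)
        {
            var ctx = listener.GetContext();
            var auth = ctx.Request.Headers["Authorization"];
            if (auth == null || !auth.StartsWith("Bearer "))
            {
                // No bearer token: refuse, per RFC 6750 conventions.
                ctx.Response.StatusCode = 401;
                ctx.Response.AddHeader("WWW-Authenticate", "Bearer");
                ctx.Response.Close();
                continue;
            }
            // A real gateway validates the token here; this sketch just
            // relays one fixed legacy query and returns the result.
            using (var web = new WebClient())
            {
                web.Headers["Accept"] = "application/json";
                string legacy = web.DownloadString(
                    "http://legacy.example.com/rets/search?Class=Residential"); // hypothetical
                byte[] bytes = System.Text.Encoding.UTF8.GetBytes(legacy);
                ctx.Response.ContentType = "application/json";
                ctx.Response.OutputStream.Write(bytes, 0, bytes.Length);
            }
            ctx.Response.Close();
        }
    }
}
```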

Posted in coding theory

Expanders – Lecture 3 – Part 2 – YouTube

http://m.youtube.com/?reload=9&rdm=mn4dxd2xg#/watch?v=cbYcn8o0Jfg&feature=player_embedded&desktop_uri=%2Fwatch%3Fv%3DcbYcn8o0Jfg%26feature%3Dplayer_embedded

Luca explains why the power method works, as an algorithm. In the course he explains the concepts behind the steps – which are really quite intuitive (once you recognize what certain symbols are modelling). In particular, one sees reasoning about orthogonal eigenspaces, preserved by the numerical method.

While it's fun to see this explained in matrix-centric theory (taught to every 14-year-old since 1950), it's even more fun to see the same argument made in pure group theory. Orthogonality in 1920s relativity math is more about permutation cycle lengths, letting a cycle (term) act as a generator (in a Cayley graph), cycle closure, and self-conjugate subgroups of the symmetric group. Eigenvalues are addressed more geometrically than in special linear groups. A series of 2k+1 matrix multiplies may look more like conjugation of a cycle by a 2k+1 (or 2mk+1, rather) term.
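For reference, the standard one-line reason the power method works (textbook material, stated here rather than quoted from the lecture):

```latex
% Power iteration on a symmetric matrix A with eigenpairs (\lambda_i, v_i),
% ordered |\lambda_1| > |\lambda_2| \ge \dots, start vector
% x_0 = \sum_i c_i v_i with c_1 \neq 0:
\[
  A^t x_0 = \sum_i c_i \lambda_i^t v_i
          = \lambda_1^t \Big( c_1 v_1 + \sum_{i \ge 2} c_i \big( \lambda_i / \lambda_1 \big)^t v_i \Big).
\]
% After normalization the iterate converges to v_1 at rate |\lambda_2/\lambda_1|^t.
% The structural point: A maps each eigenspace to itself, so the orthogonal
% components never mix; they only shrink relative to the leading one.
```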

Posted in coding theory

Authentication with Windows Azure Mobile Services

http://chrisrisner.com/Authentication-with-Windows-Azure-Mobile-Services

Adding a custom (OAuth) provider to an iOS app, supported by Azure Mobile Services.

Posted in coding theory

Building WebSSO sample projects from Visual Studio for Android

On my Kindle's browser, an Azure-hosted web forms project offering a Google (WebSSO) login didn't work. It quickly became apparent that the forms auth cookie presented back to the site was invalid in some way (or perhaps not present!), meaning that the SP site would loop back to the IDP. If the IDP was set for auto-logon (perhaps the second time one visits…), one gets an endless loop of WebSSO requests and responses.

Of course, I just happen to be using ASP.NET 4.0 on DotNet 4.0 (so one can use WIF libraries easily, with WebSSO). This turns out to be a nasty combo – ASP.NET's browser sniffing can treat such a browser as a down-level, cookieless client, so the forms auth cookie never round-trips – till you fix it (forcing cookie mode, as Hanselman describes) thus:

image

http://www.hanselman.com/blog/FormsAuthenticationOnASPNETSitesWithTheGoogleChromeBrowserOnIOS.aspx

Posted in SSO

Lecture Notes | in theory

http://lucatrevisan.wordpress.com/lecture-notes/

Hardly a useless word.

Posted in coding theory

CS359G Lecture 18: Properties of Expanders | in theory

http://lucatrevisan.wordpress.com/2011/03/16/cs359g-lecture-18-properties-of-expanders/#more-2218

Finally! We see a modern presentation of the topics addressed by Turing in his unpublished manuscript on permutations. Perhaps the editor, noting the missing title, should have called it On Mixing.

Looking at the very specific GC&CS terminology Turing uses, and given the extra context that this Stanford professor adds concerning the design of rotor wiring schemes (and pseudo-wiring, by extension, in the SPN of the DES) based on expander graph ideas, we can see what both were/are not allowed to say openly… concerning cryptographic design method, per se.

It's nice to see this prof use non-pretentious language.

Posted in coding theory

Cryptanalytical Quantum computing circa 1980; building on 1950s theory

One of the interesting things about the modern (2013-era) quantum computing mechanism is that it is NOT so different in its conceptual basis from the concepts used experimentally from the 1950s. But shush! … lest folks figure that quantum cryptanalysis has been around in "practical" form rather longer than folks might realize!

Of course, just as Colossus was a cryptanalytical processor rather than a general-purpose computer following up the theoretical Turing computing of a decade earlier, assume that the quantum cryptanalytical computers of the 1980 era were not general-purpose (and fault-tolerant) computing devices. Rather, they did just one thing – just like Colossus, in the electronic era: they proved a theoretical possibility, at high expense.

Back in 1950s crypto circles, folks were very much still thinking about 1930s length functions in topological spaces – searching out those special groups that allow secret or hard-to-compute "trapdoor" graphs, whose difficulty of solution by computation denies an attacker the ability to compute the length function – necessary for solving hard puzzles.

Looking at recent theory, we can look back at what was probably considered secret cryptanalytical theory – back then.

From the notions associated with the analysis of Cayley graphs we get to the heart of the security property known as graph expansion. In the quantum computing worlds of 1950 and 1980 alike, folks were interested in a related property – the minimum energy state of a quantum superposition. The two concepts are related, when considering optimization functions that link combinatoric properties to algebraic properties.

image

http://www.win.tue.nl/diamant/symposium05/abstracts/quisquater.pdf

When computing mean-sets of graphs, under the random-walk presumption that underlies multi-particle modeling (such as the bits in a plaintext), we are used to calculating the volume of probability space occupied by the average clique of neighbors. Then, we are interested in the expansion property – that only for certain cliques of relative size less than αN can we be certain that the expansion ratio from mean-set members to adjacent neighbors is greater than βk. Of course, k is related to the minimum weight – the latter being an upper bound on αN.
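To pin down the property being gestured at, here is one standard way to write vertex expansion, using the alpha and beta of the paragraph above (my rendering, not Quisquater's):

```latex
% Vertex expansion for a graph G = (V, E) with |V| = N: G is an
% (\alpha, \beta)-expander when every sufficiently small vertex set
% has proportionally many neighbours outside itself,
\[
  \forall S \subseteq V, \ |S| \le \alpha N : \qquad |\partial S| \ge \beta \, |S| ,
\]
% where \partial S is the outer boundary of S (neighbours of S not in S).
% The cap |S| \le \alpha N is essential: a set holding most of V has
% nowhere left to expand into.
```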

image

http://www.win.tue.nl/diamant/symposium05/abstracts/quisquater.pdf

If we now see things in terms of the two generator terms A and B (above), we are interested in "that series of basis changes" modeled by a particular sequence of generators applied to a start state. Of course, we remember seeing this back in 1954 (or earlier) from Turing, who modeled the cyclic powers in terms of a sequence of enigma rotors, each of whose wirings represented a generator. A sequence of rotors modeled h(M) – such that the sum of the powers would be zero (and thus model the 7-point geometry, or Hamming code on the hypercube).

image

http://www.win.tue.nl/diamant/symposium05/abstracts/quisquater.pdf

We have to remember that, at the end of the day, all cryptanalysis is a search problem. Thus one is always interested in computing or alternatively "finding" the minimum energy state of a superposition system – realizing that if one JUST has the right basis sequence one can compute more effectively. This clique search for mean-sets is the "trap door" for certain groups, of course – likely to apply to DES (if only one thinks different). One tends to think of braid groups as the foundational groups producing expressions hard to calculate in a standard computational model, but easy to calculate (if one can figure out just the right basis [sequence] in a reasonable time).

It is interesting to see what Turing understood that Quisquater evidently does not – that for a non-Abelian group, a given sequence of generators (each term being C_i, where i runs from 1 to n distances/edges) can be re-modeled in terms of density evolution. Turing understood that finding the mean-set was equivalent to finding the minimum energy state of a superposition – or the average sampling function for sufficient trials to induce a Cauchy sequence.

Looking at the Stanford group (which has long been an NSA math front for pure theoretical topics):

image

http://lucatrevisan.wordpress.com/2011/01/16/cs359g-lecture-1-overview/

The material at http://theory.stanford.edu/~trevisan/cs359g/lecture05.pdf is essentially the same material as Turing discussed in his On Permutations manuscript. What's more, the well-written summary clearly lays out the thinking steps. Turing's example helps too, being so tied up with early mixers – which also relate to quantum mechanics and the "chemistry" of uranium fission, etc.

Posted in crypto

Revising the dotnetopenauth oauth2 plugin for the Realty OAUTH AS

A very long time ago (like a few months), we learned about OAUTH as a protocol to talk to authentication IDPs. We wrote dotnetopenauth plugins to talk to WordPress (as an IDP) and then PingFederate (as a proxying IDP to WebSSO partners). Since then, we have emulated PingFederate's optional Authorization Server feature, using an Azure ACS namespace as the backend service. Today, we updated our OAUTH2 provider for ASP.NET – enabling a standard dotnetopenauth-aware web site built from the Visual Studio wizard to consume the various assertions, user credentials, and the "directory graph access points". Of course, this all really just means we obtained an access token having logged onto Google, which gave us access to the token verification API, which duly unpacked the JWT issued by ACS… delivering "attributes" to the SP site – which then implemented an account-linking experience.

image

So as to do NOTHING but add the plugin to the assertion-consuming web project built by the wizard, we altered our AS to be maximally flexible when interworking. (It is thus somewhat insecure, relatively, on the topic of redirect URIs, state management, and the like.)

Of course, we should now change the name of our plugin – to be “PingFederate-emulation (via Azure ACS)”.

So we are now using OAUTH for authentication, which is not what it's for. But it's what everyone does with OAUTH. So… so do we.

Posted in oauth, pingfederate, SSO

Adding Multiple Scope support to an Azure ACS based OAUTH authorization server

image

 

image

The pictures above show an ACS namespace and its relying parties for a particular “rule group”, the claim issuer attached to the rule group (and thus all the relying parties associated with it), and the list of associated identity providers similarly (who are also issuers). One notes that the issuer for the rule group is not an IDP (though it is an “issuer”). This distinction is critical.

The first relying party, whose name is in the rule group name, can be considered the default scope. The relationship with the issuer is established by our code, at application startup. When one adds other relying parties that associate with this same rule group (and the issuer, therefore), one can consider these to be additional scopes – to be cited, possibly, in messages to/from our AS's OAUTH endpoint – and to which scope (more vitally) authorization codes can be minted.

So let's see if we can now make ACS issue a JWT bearing this limited scope. We can! But note that first we have to extend the PingFederate token-issuing UI to allow us to specify the scope (since it is treated as the audience, in the Azure ACS model):

image

Posted in oauth

Windows Azure Cloud Emulator port 80/443

When creating a standard setup for the local Azure emulator, one typically wants to host the services on ports 80 and 443 – aping reality. In this way, we can create in IIS Express a "development environment" on production domain names (and ports), in the Azure local cloud emulator a "QA" environment (that imposes more realistic firewalls, load balancers, system management, etc.), a staging push (in the Azure host cloud), and a production service (when one uses the Azure console to flip over the VIPs of the staging and production deployments).

But it doesn't work unless no one else on the dev/QA machine is listening on port 80, etc. Otherwise, the Azure cloud emulator remaps the ports (making a mess). We use netstat to find out if anyone is listening (note the absence of 80 and 443, when no one is listening).

The obvious culprit is IIS (which you need to stop). But you also need to stop ADFS (if installed). Netstat does a poor job of identifying who owns the processes (though netstat -ano at least yields the PID to chase). Folks also advise killing other processes, such as Web Deploy (and a SQL Server reporting service, also listening).

image

Posted in SSO

Is Google Identity Management spying on me?

Who has the records? Who gets them? Can I opt out?

How much is already provided to the NSA? Having prevented a (perfectly legal) signin, why is it asking about it a week later?

What is the Google kickback for making spying a "normal business event" (the FBI's stated intent for public policy)? Is a large federal procurement of Google Apps on the line, or does it just see it as normal (under the "American" value system)?

image

 

 

image

Posted in rant

ACS OAUTH2 behavior regarding refresh tokens

In building my emulator of the Ping Federate OAUTH feature, I encountered the same behavior concerning OAUTH2 and Azure ACS as discussed below:

image

http://stackoverflow.com/questions/11265454/errors-using-wif-oauth-extensions-to-get-acs-access-tokens-using-refresh-tokens

Now, I don't happen to use the library cited by the author of the memo when formulating the requests; but the bytes on the wire will be the same.

In the context of the SharePoint 2013 app model, it makes some sense that refresh tokens are intended for a particular way of applying OAUTH (authorizing iframe plugins to SharePoint apps).

image

Note that this is hardly a standard flow! It involves a non-standard service element called a context token (an early signed JWT, in fact). The flow looks a LITTLE like the authorization-code grant in overall structure, note. However, it clearly has proprietary security enforcement semantics – which are not particularly our concern. In particular, one sees the role of the refresh token in the concept. It comes from the JWT rather than from the authorization response message. It's essentially a bearer cookie (from someone who could decrypt the JWT) – something to be delivered to enable the server to then get the more classical bearer access token for API calls.

So, how does my stuff do its thing?

Well, first, as we have reported before, we use the Ping Federate-hosted OAuth2Playground app to play the role of the client. We have it all working, as reported below – except for the mentioned multiple refresh token issue.


First, we invoke the authorization_code grant, providing parameters as shown

image

And we get back a rendering of the response message issued by ACS. What is not shown is that the authorization endpoint is guarded by a forms authentication cookie, to be issued by the dotnetopenauth framework once it successfully interacts with an IDP using a plugin. In our case, we are leveraging our ws-fedp plugin to dotnetopenauth that talks to ACS (as a WebSSO FP). ACS in turn is talking to Google (via OpenID). All in all, verified assertions are delivered to the WebSSO guard, in much the same way that IDP connections open the OAUTH authorization access point guard in PingFederate.

We first see how a properly handled ACS exception from a failed WebSSO interaction (with ACS/OpenID) can be projected back to the client in a conforming manner:

image

PROVIDED THAT we insert some 2+ seconds of timing delay before forwarding the issued authorization code to the client, the client can RELIABLY use it at the token issuing endpoint:

image

This gives us a response to the token request – a standard message format.

image

We then implement a verification action, using the Ping custom grant type – which simply returns a custom access token (a JSON object that was the JWT's payload).

image

So, perhaps upon receiving a refresh_token we simply replay it back to the client… along with the new access token? This works nicely (and allows repeat use of the refresh token, as issued originally upon exchange of the authorization_code).
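Sketched, that replay policy is just a field copy in the token-endpoint response (the helper name and serializer choice are mine, not ACS's):

```csharp
// Token-endpoint response that replays the original refresh token
// alongside each newly minted access token, keeping the grant reusable.
using System.Web.Script.Serialization;

class TokenResponseSketch
{
    public static string BuildResponse(string newAccessToken, string originalRefreshToken)
    {
        var payload = new
        {
            access_token = newAccessToken,
            token_type = "bearer",
            expires_in = 3600,                    // we must enforce expiry ourselves
            refresh_token = originalRefreshToken  // replayed, so it stays reusable
        };
        return new JavaScriptSerializer().Serialize(payload);
    }
}
```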

Now, what we do not get from ACS – presumably as we are responsible for enforcing it – is the Ping Federate-style expiry periods on grants.

End.

Posted in oauth

Cryptanalysis reflection

The world of cryptanalysis gets ever easier to see in intuitive terms. The trick is to see the world as Turing saw it: before codes and calculating devices mechanized codes and their cryptanalysis. Folks had to learn to exploit a particular language of mathematics conceived to model the entire space of discrete calculation. Of course, we know this as the design language of the state machine – which, in its Cayley graph or wavefunction forms, served the early role of a software language for Turing machines.

One cannot separate coding, state machines, cryptanalysis, or Turing machines. They are each a manifestation of the same thing: walking the (quantum) random walk.

– Coding takes the 1920s notion of a probability space so clearly applied to mathematical physics in the form of Heisenberg or Schrödinger theories. It requires the concept of a field to be represented in terms of group theory – in which the vertices, edges and invariants (such as mean-sets) of the Cayley "designer's" graph represent the applied codeword security "rules" of the group, in a highly geometric manner.

– The notion of state requires that we recognize that group actions can be expressed in, say, the 2d complex plane, which characterizes the rules of possible state transitions better than does the raw presentation of the underlying graph.

– Cryptanalysis recognizes that advanced, exploitable geometric properties exist between nodes in the graph when rendered on a suitable plane. There are concepts such as distance, measure, coordinates and weights. More refined and analytic than phase diagrams that express raw possibility limits of state transitions in finite-dimensional worlds building upon the very idea of quantization, the weight notion captures the volume of probability space carved out of highly abstract functional analysis spaces. This all builds on and up from the area of space implied by the edge-distance between some node and each of its neighboring nodes, and the probability depth capturing the likelihood that a random variable modelling discrete events will have a particular graph-value.

– And Turing machines are the meta-spaces, governed by meta-rules focused on the inner product of correlation relations and configurations for asymptotic limiting functions, that can take such as the mean-set induced by a sample function and tie together the rules of the group, the constraints of the probability field, and the nature of the measure or distribution of probabilities – and either output a codeword consistent with the rules, or analyse a ciphertext codeword to distinguish it from a random event.

To a Turing in 1939, perhaps encountering the production side of the process of spying for the first time, what is not new is the theory of codes, ciphers, and cryptanalysis. For he has been studying it in its theoretical manifestation for quite some time, quite evidently, in both the US and UK. All he has to do is apply the formalism theory to the particular nature of the various enigma machines he encounters – which introduces him yet more firmly to an aspect of the puzzle he has hardly encountered to that point: the notion of complexity of the search problem, which stresses the relationship between the notion of security and cryptanalysis.

Though Turing surely knows the theoretical relevancy of such as the conjugacy search problem, and has embraced the idea that certain coding groups engender reverse-searching problems that so expand the search space that the time required delivers "security" (that it takes longer, on average, to search out the key by brute-force methods than the useful lifetime of the ciphertext), he also knows that the nature of the search problem changes in the calculation complexity sense – if only you move your calculator's workings into the complex plane supporting phase or state space. The notion of the trapdoor, due to a change of scale or rotation of coordinates, is not unknown.

In state space it becomes apparent that one can approximate the "inner nature" of coding functions using frequency analysis, to at least guess crypto keys. The 1960s LDPC sum/product decoding is clearly a minor variant of a cryptanalytical process using centers and mean-set theory to reduce the search problem – based perhaps on exploiting collisions within the differentials in the functions underlying ciphers (such as suitably engineered Hagelin machines).

So the hunt has to be on for transforms that easily move a problem in discrete space into frequency space – in the days before the FFT, of course. However, let us not be overwhelmed by the FFT, for we know, from Tunny documentation, that even back in 1945 folks already had a discrete form of the FFT – what we now call the WHT. It merely exploits elementary generator sets (plus and minus signs on 1), that special Heisenberg relation between parity and the Fourier transform, and pairwise counting that reveals Kasiski-style bulges in distributions/measures, long used to crack indicator systems based on Vigenère squares.
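The WHT is simple enough to exhibit; a textbook in-place butterfly over plus/minus counts (my illustration, not anything from the Tunny reports):

```csharp
// Fast Walsh-Hadamard transform: O(n log n) butterflies over +/- signs.
// Input length must be a power of two; the transform is its own inverse
// up to a factor of n.
static void Wht(double[] a)
{
    int n = a.Length;
    for (int len = 1; len < n; len *= 2)
        for (int i = 0; i < n; i += 2 * len)
            for (int j = i; j < i + len; j++)
            {
                double x = a[j], y = a[j + len];
                a[j] = x + y;       // "agree" count
                a[j + len] = x - y; // "disagree" count
            }
}
```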

And so it falls to Turing and Newman. Their topology focuses on the classical case of the Cayley graph where the generators are the signs +1 and -1 (so usefully applied to Peano arithmetic). These elementary members of the generating set nicely enable one to approach, more generally, what the old-timers in Room 40 of the Admiralty had been doing since WWI when breaking the Vigenère square (formulate depth cages for possible codeword lengths, finding which auto-correlation measure is closest to the expected value). Having found the overall distance constraint for the unique space represented by the particular ciphertext – in what we would these days call either additive or multiplicative auto- and cross-correlation – then consider the pairwise possibilities within each column of depths. One measures them in terms of the possibilities that things agree or disagree, in terms of sign counts.

As with breaking the codeword of a Vigenère square through computing the index of coincidence and then performing a frequency analysis of the depths in a particular column of cipher, more generally folks learn to identify the center of a graph – as manifest by its frequency spectrum. This is that "configuration" – to use a Turingesque phrase – of vertices that allows particular nodes to be the center of the distribution of the space – and whose quantization oscillator now accounts for the variance found in ciphertexts that have depths/collisions.
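The index of coincidence itself is only a few lines; the classical computation, sketched (standard formula, nothing Turing-specific):

```csharp
// Index of coincidence for a ciphertext column: the probability that two
// letters drawn without replacement agree. English plaintext gives ~0.066;
// flat random text gives ~0.038 (1/26). The "bulge" reveals the period.
static double IndexOfCoincidence(string text)
{
    var counts = new int[26];
    int n = 0;
    foreach (char c in text.ToUpperInvariant())
        if (c >= 'A' && c <= 'Z') { counts[c - 'A']++; n++; }

    double sum = 0;
    foreach (int f in counts) sum += (double)f * (f - 1);
    return n > 1 ? sum / ((double)n * (n - 1)) : 0.0;
}
```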

In some sense, the (tiny amount of) variance in these particular elements ultimately accounts for all the variance of the sets that can be generated. These foundational fluctuations are a basis for the graph's measure, much as generators and the non-commutativity properties ultimately account for the evolution and transitions in a particular Cayley graph making codebooks.

We tend to think of the Turing machine as having started with the intensely discrete mechanism that slowly evolves, post-war, into the notion of the indeterministic machine – driven by the probabilistic oracle already outlined (and obviously classified) in Turing's PhD thesis. But it's not clear to me that this is the correct order. It seems more likely that a Turing, having studied measure theory, started with the notion that certain configurations can, in the limit, control variance – if only the mechanizing graph can be so constrained, with the likes of cutsets. And the use of free groups can deliver the desired levels of control over variance – giving one an algorithm for designing configurations of graphs (to be explored by Turing machine "runtimes") leveraging underlying groups, such as high-dimension braid groups, that enable the cryptosystem designer to distinguish different types of security protocols – including indicator protocols of the type Turing spent so much time attacking: naval enigma.

Posted in coding theory, crypto

Kindle Fire HD – UK o2 “inbound” email problem

In absolutely typical UK fashion (i.e. the corporations are contemptuous of consumers) it is hard to make a UK-edition Kindle Fire HD device talk to your O2 email servers. This is despite O2 publishing all the technical details! What the support sites omit is the remedy! From internet comments, the firm just cannot be bothered to tell you what ORDER to do things in, given some quirk in the Kindle software.

First make your incoming email work (ignoring any and all fields to do with outgoing email settings). That is the magic. This really means changing the word "pop3" in the suggested domain name for the incoming email server to the word "mail". Now make it work, before proceeding.

Later on, once the incoming email is visible as expected, go change the suggested outgoing port to 25. Do not do anything else! Do not fiddle with names, logins, addresses, passwords….

Sigh.

O2 know this, but don’t let on. Keep the English peasants stupid and dependent, apparently. Make money by hiding information. Now O2 can deliver “service”.

Don't forget, peasant, that when trying to make incoming email work you may be in a world in which you have already configured your Outlook client to also download incoming emails from O2. And you MAY WELL have set Outlook "to delete the emails from the server" upon delivery (to Outlook). You need to reconfigure Outlook for your Kindle – so Outlook leaves the emails on the server upon receipt. That is, ensure Outlook only deletes the server-side copy when you delete a message using some device app's delete button. This way, you will see your incoming test emails on your Kindle – since Outlook will no longer be having the server delete messages (before the Kindle has a chance to connect).

Oh O2! Real customer service is NOT that hard.

Posted in coding theory

UK internet – a deception based public security policy

Travelling to the UK for the first time in many years, I encountered public policy – as regards internet security.

Typical public wifi access points (where occasionally available) require identification – an email address or mobile phone number. That is, one probably commits a fraud by using anonymous identification. This is not disclosed, and is one of those American-style hang-'em-via-a-thousand-formal-violations pre-arrangements (for whenever there be a need or desire to exert pressure during interrogation). In public policy terms, one notes the design style: load the dice.

On the last day, just before travel, corporate email suddenly ceases to work. The point is to isolate, of course: in prep for isolation and the planned delivery of emotional distress. One notes the design style: plan for isolation (so the loaded dice have more value).

Despite being a United partner, Lufthansa could not emit boarding passes for all legs (unlike the US outbound). This surprised the agent – used of course to normal practice (and having good intuition on what induces variance). One notes the design style: isolate on non-US soil (invoking Guantanamo logic) using compliant proxy agents (Germany).

Harassment takes many forms: so ensure that, concerning money, lack of money will imply lack of standing. Of course too much cash will mean something else (equally disadvantageous). Cash is not always cash, of course, as I found out (since my English notes turned out to be no longer "valid tender"). Note the public policy: force electronic money (subject to monitoring).

Now, to be fair (not that the sentiment can be expected in return), all the policy goals have sensible intent – in preparation for the 0.0001%: tax dodging, money laundering of cash for drugs, and, surprise: to maximise interrogation advantage (i.e. find where the damn bomb is located, before it goes off). But what we note is the preparation – applying all the techniques to everyone. And that ain't to be anti-discriminatory! It's there to address subversion.

Also interesting is where deception-based attacks on internet encryption were mounted. Phones (vs tablets with wifi-only capability) do not show the issuer of the cert, and do not show the ciphersuite in effect. It could be a firewall-issued cert (spoofing the true website by https proxying) and a null ciphersuite, for all you know. Remember, both enable cleartext reading of the packets (in real time). It was interesting to experiment, in various wifi worlds, with which https sites in the US worked, or which worked with what type of warnings. One notes the public policy: internet encryption is there to make you feel good, not to enforce any privacy rights (which you probably don't have to start with, and which will be suspended in any case upon mere utterance of a certain magic spell: public safety!).

Using an American-issued device in Europe was also revealing. Some sites (including Google's UK relay site) could detect which Kindle device I was using and refuse to complete login for "unrecognized" devices. This did not interfere with web browsing, note; but it did interfere with any and all subscriptions (that hinged on a Google WebSSO assertion). Of course google.com was a front for google.co.uk, and I never found a means to bypass UK national controls (built into Google's cloud). DNS, certs and redirects through https connect proxies had all been well massaged to present the illusion of a global web – when it is actually a web of national webs. Of course, the public policy is to distinguish the web from nationally regulated sites (better able to withstand web attacks, having access to higher-assurance security credentials). But look at the elaborate deception, coordinated behind the scenes with other governments and the multinational clouds and the identity providers!

My US subscription to Netflix content worked – in the sense that I now had access to UK-licensed content. Presumably, for copyright and censorship reasons, not all US works can be made available to UK subscribers. Things like UK censor ratings might be missing, for example. But it did work. So where is the history of my viewing habits now stored, ready for analysis? Is it in the UK, the US, or half and half? Yes, I assume I have no expectation of privacy in either place, even should my packets be tagged with notices asserting my expectations. Behaviour-based analysis has to be the new public policy norm, one notes – much like the surveillance of one's habits at the public library.

To be fair, nothing in the above was particularly hindering, annoying or even unnatural. When in England, I'd rather Netflix be giving me a change of scenery. I'd be happier were netflix.com a synonym for netflix.co.uk (when accessed from IP addresses in UK-registered ISP address allocations). But not always is it so! Of course, I have no choice.

Now, I'm not going to bother bemoaning the phone roaming, the data plan policies or similar. Of course they are spying on me, and of course my location and video and mic are controllable remotely. It's the public phone system, stupid (and nothing has changed, public-policy-wise, in a hundred years).

UK updates of firmware onto the Kindles were interesting, too. Different packages were installed. UK bugging-prep capability was not installed onto the non-Euro-distributed Kindles. This affected how the different installs of the Skype app worked, note – built to leverage the different (now bugged/buggable/buggered(?)) platforms. Of course, public policy protects the vendors who knowingly participate in the charade, allowing them to claim they know of no vulnerabilities in their software (as tested on an unreal platform).

So where does this leave the typical UK consumer?

Well, I struggle to deny I'd do much different – given the nature of the subversive-trained nutter on the bus trying to make a large explosion as it wanders through Trafalgar Square, right in front of a live broadcast. But I also see a UK failing: why not own up (rather than engage in multi-level deception of your own public)? This is a failing of public policy: Cromwell's spies being preferred over policy modernization.

Do folks really believe that an anti-subversion orientation doesn't include briefings on all the above!

Posted in coding theory

WCF service with DotNet4.5 Claims – with STS

To a solution made with Visual Studio 2012, add a WCF service for the dotNet 4.5 framework (specifically). Then use the latest I&A tool to add the system.identityModel configuration, having identified your active STS service and its metadata endpoint – as built originally using the tooling from Visual Studio 2010 (with the WIF SDK for dotNet 4.0), running under dotNet 4.0 augmented with the WIF libraries.

The net result is a file similar to the following, which successfully identifies the IDP and its certificate. However, you must change the binding's metadata endpoint, as shown. (It defaults to an assumed ADFS path.)

image

Assuming the STS offers two endpoints – one for the Office365-friendly WS-Trust of Feb 2005 and one for version 1.3 of WS-Trust – we need the client built and configured using svcutil to use the 1.3 endpoint.

image

We had to change the client config:

image

We had to change the anonymous address from svcutil to the 1.3 endpoint of the STS, and add a binding – that we happened to have – identifying the client credential to be sent to the STS.

image

Finally: no CardSpace or its strange policy errors; and no weird and wonderful WIF-specific configuration of the service.

Posted in SSO

Forcing Ping Identity to Adopt WIF metadata from IDPs

It's been a pain making our otherwise excellent Ping Federate servers (a version from a few years ago) talk to modern ws-fedp IDPs built using Microsoft's developer-focused WIF toolkits. There was always "SOMETHING" that didn't work.

Well, now that we understand how to make a passive STS from WIF (i) emit a Feb 2005-serialized response, (ii) include an authentication statement, and (iii) be signed with RSA/SHA1, we know from last week's report that a WIF IDP built in 10m with Visual Studio can now talk to Ping Federate's ws-fedp SP.

But Ping Federate is a pain to set up, as it does not support importing the metadata emitted by such IDPs!

So we decided to play the game back. Our IDP now dynamically emits a true role descriptor, and now a second role descriptor intended for consumption by Ping Federate (which pretends to be consuming a SAML2 IDP in its excellent metadata-driven console).

Remember, we are just issuing something that Ping Federate can import (so there are no typing errors). So we take our ws-fedp descriptor and re-export it as a SAML2 IDP descriptor:

image

Then we use Ping Federate to import a SAML2 IDP, as normal. Then, once saved and ready, we change it to be a ws-fedp connection simply by editing (while the server is offline) the sourceid.saml2-metadata file. We give it the desired protocol and binding values:

image

So we get around a vendor’s biases (implicit or intended) against WIF IDPs – with only minimal fuss.

Posted in pingfederate

Substituting ACS OAUTH for the PingFederate OAUTH AS in the OAuth2Playground

image

As shown above, we took a working Ping Federate installation with the OAUTH feature and its working OAuth2Playground site – which showcases the protocol. Then we copied the expanded WAR directory for the playground so we could make a parallel UI use our own AS (rather than Ping Federate's AS).

Within the copy, we amended the “case1-authorization.jsp” and “token-endpoint-proxy.jsp” – to use the URL paths of our own OAUTH AS – and our own https port.

Then we changed the SSL cert on the IIS Express instance that hosts our debuggable site to use our GoDaddy cert (using the Windows netsh tools, for the port in question). We also allowed IIS Express to bind to the certified domain name.

image

Now, the PingFederate-hosted OAuth2Playground site will work, even proxying token requests (using Java chain building). Everything uses the GoDaddy cert. So we add it to the trust point list that Ping Federate manages – also used by the playground site, evidently.

image

We also now alter the settings of the substitute playground to point to our endpoints

image

Next we register the Client (a vendor consuming the API) to have the right callback URI:

image

Thus we can now try the playground, first seeking to get an authorization code:

image

getting

image

image

The consent process returns control to Ping Federate hosted OAuth2Playground:

image

Since we have now enabled the proxy in the playground WAR app to trust our SSL cert, we can pass along the JWT issued by ACS:

image

Posted in oauth, pingfederate

acs signing a jwt; wstrust token verification

The token we receive from the OAUTH endpoint of our Azure ACS namespace has a (decoded) header field given below:

{"typ":"JWT","alg":"RS256","x5t":"70W3nPRCCzSeXuqwsBVy2KMSMPk"}

Using the Fiddler tools' base64 decoder, we change - and _ back to + and /, and add the padding char(s):

image

It’s supposed to be a hash, and probably an SHA1 hash.

image
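One can check the padding step in code rather than in Fiddler; a minimal sketch, assuming only the base64url rules (the sample value is the x5t from the header above):

```csharp
// Undo the base64url encoding of the x5t header: '-' -> '+', '_' -> '/',
// then restore the stripped '=' padding and decode. A 20-byte result
// confirms an SHA1 value, rendered here as a hex cert thumbprint.
using System;

class X5tDecoder
{
    static string DecodeX5t(string x5t)
    {
        string s = x5t.Replace('-', '+').Replace('_', '/');
        switch (s.Length % 4)   // restore stripped padding
        {
            case 2: s += "=="; break;
            case 3: s += "="; break;
        }
        byte[] hash = Convert.FromBase64String(s);
        return BitConverter.ToString(hash).Replace("-", "");
    }

    static void Main()
    {
        Console.WriteLine(DecodeX5t("70W3nPRCCzSeXuqwsBVy2KMSMPk"));
    }
}
```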

We see our GoDaddy cert is:

image

Now let's say that the cert has a critical extension. And it's a URL, say, that demands that the verifier contact a given OCSP responder.

If we now receive the JWT over a ws-trust channel, will the security token resolvers pick up the JWT's reference, locate the cert AND verify the cert chain?

Posted in SSO

ACS-based OAUTH2 AS augmenting Google IDP (via ACS)

Let’s see how far we have got in emulating Ping Federate’s OAUTH engine (for the authorization grant).

We start out as any vendor/client will, making a request for the grant type:

image

https://localhost:44302/as/authorization.oauth2.aspx?client_id=FabrikamClient2&response_type=code&client_secret=FabrikamSecret&redirect_uri=https://localhost:44302/as/authorization.oauth2.aspx&scope=&state=123&idp=Google&pfidpadapterid=

At 1 we see our AS consuming our ACS namespace's store of "service identifiers," checking up on the redirect URI given on the query (query string, not FORM POST, for now) – essentially the exact-match check sketched below.
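That redirect check is worth pinning down; a minimal sketch, with a plain dictionary standing in for the ACS-backed registration store:

```csharp
// Guarding the authorization endpoint: the presented redirect_uri must
// exactly match the URI registered for the client_id in our store.
using System;
using System.Collections.Generic;

static class RedirectUriCheck
{
    public static bool IsValid(string clientId, Uri presented,
                               IDictionary<string, Uri> registered)
    {
        Uri expected;
        if (!registered.TryGetValue(clientId, out expected))
            return false;              // unknown client_id: refuse outright
        return expected == presented;  // exact match only - no prefix games
    }
}
```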

Since there is no user session at the AS, the session creation mechanism kicks in and shows a list of "IDP Connections" – including Azure AD. Remember, selecting Google here really means sending a request to ACS… which will chain/gateway the request to Google's WebSSO cloud… which will chain the request to Ping One, which will… Anyways, at the end of the day ACS issues us a SAML assertion that we verify, pulling the metadata on the fly.

Our AS is in account-linking mode for this IDP connection, and thus we get a challenge – to log on at the local IDP that governs linking.

image

image

The net result of this one-time linking, for an inbound Google identity, is some UI (remember, this is a prototype…!):

image

Using the Return option, we resume the OAUTH AS processing, which consumes the local (rapstaff) account whose profile is driven by the SAML assertions relayed by ACS from Google. We get the consent screen – which guards release of the authorization code and storage of the persistent grant in ACS.

image

This gets us back to our simulated vendor site, ready for the vendor to call the access token (STS) service to convert the "code" into a longer-lived access token.

image

OK, we see lots of the pieces coming together! When we now use the postback of the token-minting form to actually talk to the ACS access token minting service, we get as a result:

image

I suppose the thing to do next is the refresh token, swap from symmetric-signed SWT to RSA-signed JWT and then verify the signature as a “client”, pulling the public key from ACS, somehow.

But even now, if we put this site behind the Ping Identity OAuth2Playground UI for developers of client sites, I suspect it will work reasonably well as is.

To expose our old button-pressing AS as a protocol engine for real clients, all we really added was a handler for each of the authorization and token endpoints:

image

authorization endpoint

image

token endpoint

The trace on the wire tells the whole story:

image

If we swap the token format for JWT and assign a signing key:

image

we now get a JWT:

image

Since the DateTimes are not in a friendly dotNet syntax, one converts (following Microsoft sample code for SWT notbefore and expiry times):

image
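The conversion amounts to adding seconds to the Unix epoch; a minimal sketch of what the Microsoft SWT samples do (the example exp value is my own illustration):

```csharp
// SWT/JWT "nbf" and "exp" values are seconds since the Unix epoch;
// convert them into DateTimes for friendly dotNet comparisons.
using System;

class EpochTime
{
    static readonly DateTime Epoch = new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc);

    public static DateTime FromEpochSeconds(long seconds)
    {
        return Epoch.AddSeconds(seconds);
    }

    static void Main()
    {
        // e.g. an exp of 1370000000 is 2013-05-31 11:33:20 UTC
        Console.WriteLine(FromEpochSeconds(1370000000));
    }
}
```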

End.

Posted in oauth, pingfederate

virtual machines hosting DC and SAML assertion testers (showing time validation issues)

Remember, when the DC is hosted on a virtual machine using Hyper-V, the host of the VMs induces the DC to change its time to sync with the host (unless the VM's Time Synchronization integration service is disabled). Being a DC, it then updates all its domain hosts – which get the same (wrong) time as the Hyper-V host.

image

You might think the DC host's time, as set by the domain admin, was authoritative – but it's NOT!

image

Nice military attack vector here. Assume your DC is hosted in Azure VMs – and thus a "request" to Microsoft (Azure) to re-set the time on a given DC VM could induce all sorts of "nice" effects.

Posted in Computer and Internet

Building an Azure ACS OAUTH AS–emulating the AS of Ping Federate–part #1 getting to consent

The goal is to use the OAuth2Playground website of Ping Identity's Ping Federate server (evaluation edition) to talk no longer to the built-in OAUTH2 AS feature, but to an AS we build ourselves – in an ASP.NET website, supported by the Azure ACS. We then know we have largely emulated Ping Federate's own AS behaviour using its own test site – and we have done this at a good enough level for getting a first OAUTH project off the ground (by learning from the best).

Using Visual Studio 2012, create a web forms project, internet type. Register Google as an IDP, leveraging the built-in OAUTH provider that comes with ASP.NET. Now add a class with our own OAUTH provider, and register it alongside Google. This allows us to send OAUTH2 messages to an Azure ACS namespace's token-issuing endpoint. It also allows the provider to leverage other "support" interfaces that allow our AS to delegate to ACS much of the work of implementing an AS service.

The account-linking model of the default website thus built allows us to show that we have logged on to a locally managed account (called rapstaff) having used a Google-asserted identity. In other words, we have a user session on our AS, induced by an inbound ws-fedp SAML1.1 assertion received from ACS – since this is normally how we talk to Google's IDP. We can now press the CIS button, which emulates an OAUTH client (called Fabrikam2) invoking the AS and its token-issuing endpoint. Obviously, we will make the URLs look and respond like the equivalents in PingFederate's implementation.

image

What we have just done with button clicks is the equivalent of using this screen from the OAuth2Playground:

image

image

Now typically, when we want a URL to specify which IDP to use, we invoke our particular pattern (which sends off a ws-fedp request to ACS, requesting further gatewaying with Google's OpenID endpoints):

https://localhost:44302/v2/wsfederation/Default.aspx/google

Now we can accomplish some, but not all, of what the Ping URL does from the start, by having a page class that redirects:

image

If no SAML session exists, this lands on an IDP chooser page. Upon the return of the assertion from Google, say, the CIS button is logically pressed… continuing on to the AS phase of the flow (invoked also by the CIS button). If a local SP-side session exists, the chooser and authentication process is omitted – passing straight to Go!

image

Once invoked, we see the (CIS) provider pass control to our implementation of the authorization_code handling component of our AS (which delegates much of its implementation to Azure ACS). At this point, none of the parameters shown are configurable from the Ping-emulating URL.

So, now to get rid of button pressing and make the flow be driven from the formal interface of an OAUTH AS endpoint. We implement a form handler:

image

 

With an existing user session (for a local user "rapstaff", previously linked from Google WebSSO), a user is directed to the consent page – which will authorize creation of an "authorization_code" persistent grant.

image

If there is no user session, we get a Ping Federate-like experience. The OAUTH AS site invokes an "IDP Connection", as selected by the user:

image

Next, we see the storage of an authorization grant – using the (OAUTH-guarded) ACS service – which also mints the authorization code to be issued to user devices.

image

image

From the relying party NAME, the token-issuing service will require that the party seeking to swap the code for a token must cite the realm of the named party – acting as a scope.

Posted in oauth, pingfederate

spying on the OAUTH interaction with ACS; limits to fiddler

The Azure ACS offers an OAUTH endpoint – for access tokens. One gives the parameters of the authorization grant, and it gives back an access token. Or rather, it does if you know what to send, specifically.

To spy on a working user of the token endpoint (to learn the magical parameters that work), turn off https. Then install and use Fiddler.

image

To make Fiddler capture a server-initiated HTTP request, we added the system proxy:

image

Since the endpoint is working over https, we had expected to be able to leverage the https MITM of Fiddler. But the client code is designed to detect MITM'd https (being a token-endpoint-consuming service, after all). Thus, it will not accept Fiddler's spoofing certs; they are always invalid.

But we were lucky enough to be able to send things over http (just so we could learn).

image

We can now see the name/value pairs sent, with which encoding, etc.

image
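For reference, the same request can be reconstructed as code: a sketch assuming the standard OAuth2 parameter names and the namespace's OAuth2-13 endpoint, with placeholder code, secret and redirect values.

```csharp
// Replay the authorization_code exchange by hand: a form-urlencoded POST
// of the standard OAuth2 parameters to the namespace's token endpoint.
using System;
using System.Collections.Specialized;
using System.Net;
using System.Text;

class TokenRequestSketch
{
    static void Main()
    {
        using (var web = new WebClient())
        {
            var form = new NameValueCollection
            {
                { "grant_type", "authorization_code" },
                { "code", "THE_CODE_FROM_THE_REDIRECT" },              // placeholder
                { "client_id", "FabrikamClient2" },
                { "client_secret", "FabrikamSecret" },                 // placeholder
                { "redirect_uri", "https://localhost:44302/callback" } // placeholder
            };
            byte[] reply = web.UploadValues(
                "https://homepw.accesscontrol.windows.net/v2/OAuth2-13", form);
            Console.WriteLine(Encoding.UTF8.GetString(reply)); // JSON with access_token
        }
    }
}
```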

When we use our own code, we can now see the handshake with Azure ACS when depositing the authorization grant – and getting the authorization_code. So we know that the issuing criteria are satisfied (using a setup from some Microsoft sample code).

image

On the code consuming side, we see our request (which then we compare with the sample)

image

vs sample…(allowing for change of redirect value, when using our apparatus):

image

Posted in oauth, SSO

Building an ACS-based OAUTH2 Authorization Server instead of using Ping Federate OAUTH AS

This is a difficult post to write – if only because sales folks can be so ham-fisted at their negotiation art at times – not realizing THAT (perhaps because of poor support from their marketing department) competition changes with time. In particular, for Ping Identity, the Azure ACS – at its ZERO DOLLAR price point – makes it cost-effective to program against that service to create an OAUTH2 authorization server yourself. Furthermore, with the general availability of Azure AD and its complementary nature with ACS namespaces (plural), yet another feature of Ping Federate has been commoditized: that of exposing a conforming SAML2 IDP endpoint (supported by simpler WIF/ws-fedp endpoints in your own software set). The focus of Azure AD on creating one's own app-communities (those authorized to plug in to one's own APIs) and enforcing these "communities of interest" should give even the 7.0-series features of Ping Federate a run for their money (I'm guessing)!

Now, I can see Ping sales engineers arguing that the Azure AD/ACS commodity support is incomplete. But, having used Ping software for years as a customer with a perpetual license, I can also attest (and I know the firm concurs with me, in semi-private) that no one in the world uses the high-end features of SAML2. We are in the COMMODITY phase of the market (where lots of folks use the bottom 80% of a standard's features, as designed 10 years ago… and when interworking is largely error- and hassle-free). This is clearly the case for SAML2 asserting parties, supported now by directory graph endpoints – with which to get additional claims about the user, including claims from the VAR chain that go beyond the scope of what information Azure AD itself manages.

Now, we have all actually been using OAUTH2 in production for a while – simply when using the management service API of ACS itself to provision IDP and RP records. We have libraries for the client that exchange a resource owner's credential set using the resource-owner grant authorization mode of the standard. That is, a username/password is exchanged for an SWT-formatted access token – which is used in the API call request header thereafter to drive the ODATA data interface exposed by ACS, with which to manage the rest of the management entities (including IDPs, RPs, claim transformation rules and even client credential sets).

image

This is how, for example, one normally populates clientids/passwords. It is also how one creates "delegation records" – a management entity which enables one to store the persistent grants, such as the "authorization_code" grant type. Apparently Windows Azure Marketplace and SharePoint Online are based on these capabilities – so I'm in good company by relying on it!

Now, I feel sorry for Ping Identity in losing a sale – in that their education of ME in the last month was just superb, with their OAuth2Playground site doing a really good job of laying out the case for what OAUTH does, generally. It becomes really quite simple. Furthermore, on a sales eval license the firm exposed me to all the features of their engine (PingFederate). Learning to configure all the advanced modes at "certification grade" was itself educational – forcing one to realize how a complete authorization server really works – when delivering the "decision" service BEHIND the protocol endpoints. What the server would and would not allow (by design) in configuration terms was educational in and of itself – since the limits imposed reinforced certain architectural patterns. It would not let me "abuse" OAUTH – thus ensuring I'll be reasonably compliant with others when interworking.

But it's not to be. We are back to using the authorization-server support library from ACS, which allows us in ASP.NET to build our own authorization server logic. It has taken me about 8 hours to remember what I did last time (in pure prototyping mode), adding a consent handler to an ASP.NET web forms project already armed with the ability to do ws-fedp and OAUTH2 interactions with IDPs doing user authentication. The authorization-server support library from Microsoft provides the guts for building one's own AS – enabling one to hand off to ACS, via the OAUTH2-powered API service offered to your/my "3rd-party authorization servers", the hard work and security-critical elements of doing "persistent grant management" – and enforcing the so-called authorization_code rules. While I already miss the "polished" services of PingFederate, which added SO MUCH VALUE over and above this raw capability, I got something working fast with Microsoft services and software. So well done Microsoft.

And, for OUR project, that’s all I need. And its all I can afford.

Now that Microsoft have distributed the official JWT security token handler, an access_token endpoint handler will be able to mint signed JWTs, too.

Finally, we see that the WSDL for the ACS ws-trust version 1.3 (but not 1.5) interface enables us to issue and renew tokens, given a usernametoken – and we have seen how this can issue us JWTs. Now that we understand how a thick client using WCF should manage its own client-side token cache – so a JWT minted by one ws-trust interaction can be used across multiple client channel endpoints and contracts/interfaces (while still allowing the underlying WCF token provider to invoke the issue/renew token trust delivered by ACS, as required) – we even understand well what Ping was offering in the ws-trust area when trying to support a WCF installation. Both Microsoft and Ping are lacking documentation here, note.

We also got to play with the Ping Federate endpoint that implemented the "SAML authorization grant" service – swapping a SAML token for an access token. Unlike the integration of SAML via WebSSO with the user-centric grant mechanism, with the full SAML authorization grant mechanism we could map ANY claim from the SAML blob into the access token. I suppose this might help us in the ACS-mediated WebSSO case, where we want to swap the SAML blob, with claims mapped from Azure AD, for a JWT that the directory API will accept, even if signed by third parties.

So thanks, Ping Identity, for a superb evaluation and a first-class training experience. I'm sorry I cannot repay that – except by recommending others (customers in cash-richer markets) and contrasting what a "value-adding" authorization server really does over the barebones infrastructure that Microsoft Azure ACS (and now Azure AD) offers – to the likes of us living with government defense subsidies… on cash-poor main street.

Posted in oauth

first debugging trial of PingFederate (modern) to Azure ACS SAML2P endpoints

image

image

https://www.rapmlsqa.info:9031/sp/startSSO.ping?PartnerIdpId=https://sts.windows.net/04fb5be2-29ac-48b8-ad54-327e4ff243a0/

Ping Federate (modern) imports the metadata, producing the local configuration as shown above.

image

https://login.windows.net/04fb5be2-29ac-48b8-ad54-327e4ff243a0/federationmetadata/2007-06/federationmetadata.xml

For Microsoft engineering (doing final debugging), this was the result:

image

Well, obviously I’m missing the step of importing the IDPSSO descriptor listed by the SP, into Azure AD.

So how do I do that?

Well, let's configure the PingFederate SP both to be named as an entity using an http URI in the validated namespace and also to listen on an endpoint in the same. Also, we logically add an SP record to the IDP, authorizing information flow via assertion:

image

image

The result is slightly better than before:

image

image

Let's now simplify, and ensure we are using a managed account first. With it, and our existing setup, we can get a SAMLP response:

image

So let's now change the name/address of the APP and make it the ACS endpoint of the SAML SP server awaiting the response:

image

We get at least the processing of the response, now!

image

Clearly, we can now process the result, but it violates an SP policy. We see a missing subject name in the authentication statement (which probably upsets the traditionalists):

image

 

image

We can extend the time period allowed by PingFederate (globally), thus:

image

This suggests Ping Identity needs to make the time allowance configurable per IDP CONNECTION – not that my opinion is probably of any consequence.

This allows a first end-end trial:

image

image

image

Some request options cause the Azure endpoints to crash (e.g. force authn).


Also, since Azure AD is using RSA with SHA256, I wonder if it will work with our 2-year-old SAML server (which may not support that ciphersuite)?

Posted in SAML, SSO

A US privacy fantasy–based on OpenID Connect

A certain gentleman who once worked at BBN on an NSA contract for key management system production had the unfortunate responsibility of refusing to answer simple technical questions – since he didn't know whether the answer had been classified or not. Often, it's the terminology that is classified (not the method) – meaning one violates laws even by using the codewords, or even specialized technical terms, with non-cleared folks. Finding this all rather ridiculous (but doing it anyway), he would listen to non-cleared folks' "conjectures" about the meaning or operation of some technical widget – particularly if it was transitioning from military to civilian applications. They would often be quite wild and overly ambitious (not that he could deny or confirm this). But they were often humorous – as folks found "inner meaning" in glimmers of interpretation of this or that.

So let’s play the game.

Let’s imagine that the US is committed to privacy (finally), and this means enforcing it. (It also means everyone ELSE has to enforce it similarly, so the US doesn’t suffer harm through it taking on more responsibility than others; everyone must suffer the pain equally!)

To take such a political hit (since this means the last 20 years of "market-led doctrine" failed, with the final end of Thatcherism and the last 10 years of gunboat diplomacy and wars of humiliation), the US has to get something else it wants – more desperately than giving up its "free market" privacy dogma. Of course, it cannot ADMIT what it wants… (since that is classified! to prevent folks getting a negotiating advantage!)

And of course that is the centralized gathering and scanning of "cybersecurity logging" records (purportedly to measure attack patterns by having lots of sensor nodes out there…). But, in the space of consumer privacy, this means having a mostly centralized directory service – one that LIMITS who gets the directory record of users as they wander out to some half-baked SP app site (that may not really give a damn about how it handles your privacy). The directory operator becomes the "privacy-policy" enforcement point – limiting who gets what, of personal identity attributes.

One sees in the Azure AD rollout EXACTLY this element of policy control, though Microsoft are going to some pains to hide the bigger picture. (Remember the Palladium and Passport scandals!)

Now, I cannot say that the bigger picture is exactly unwanted or socially undesirable; and it is clearly no longer technologically difficult using webby technology. Actually, it never was particularly difficult, as we proved in the Allied and US-internal-services shared military Directory(s) world 25+ years ago – using earlier forms of the signed tokens now being contemplated in the world of OpenID Connect.

privacy – security – trust (by spying on the logs): the eternal braid group.

Posted in rant

Azure AD (IDP proxy) and ADFS/PingFederate IDP

To verify a domain in AAD, first remove it from Office 365! Sigh!

My advice is NOT to use the console – which has a particular verification procedure (based on adding a TXT record). In PowerShell, create a federated-class domain, get the (other particular style of) validation information done, and then – and only then – verify the site. At the same time, one gets to set up the endpoint that allows Azure AD to talk back to ADFS, for passive and active reasons.

Using the managed account to log in to AAD via the PowerShell tool, now add a user (and their unique immutableid/UPN).

Posted in SSO

Connecting Azure AD IDP to an ACS-mediated SP

Using an MSDN Azure subscription, create an Azure AD instance – alongside an ACS instance/namespace. My AAD is called pingtest1 and the ACS is called homepw. Typically, one authenticates to the Azure portal using a live.com identity, which populates the first administrator.

Add a second administrator, whose identity claim is in the namespace of the AAD instance (Administrator@pingtest1…) in my case.

image

The goal is now to hook up ACS with AAD – by importing the IDP's metadata into the ACS SP. First we add a throwaway "integrated app" (on https://localhost/throwaway URLs) into AAD, which exposes all the endpoints:

image

Then we do the usual ACS screen to add the IDP:

image

Then we use Visual Studio 2012, to which we have added the Identity and Access tool (from extensions). To the web forms project we add the passive STS, using the wizard (all as conventional). We selected the ACS option (homepw), which showed us we could now bind AAD to our new RP (to be provisioned in ACS). A login at the app induces a flow to choose an IDP from the choices we made as provisioning administrator:

image

image

image

Then we use our temporary password in a screen flow that sets the permanent password. The assertion chain then starts back to our RP:

image

And this is as far as we can get… until we now use the PowerShell for AAD to make a service principal (with redirect) for the ACS homepw. And that we try next – since the Administrator@pingtest1.onmicrosoft.com user is now evidently well provisioned and fully live.

image

This shows our RP… and no entry for ACS!

Now, according to ACS, for our namespace, our RP endpoint expecting an assertion is:

image

https://homepw.accesscontrol.windows.net/v2/wsfederation

So we change our make serviceprincipal script thus:

image

We run this in the AAD console (having installed it, yada yada):

image

image

We see the final record:

image

Having generated some claim mapping rules in ACS for this IDP previously, we try a run of the SP again.

It STILL doesn't work… as the name presented by ACS is https://…, whereas we were able to register only a name of the form homepw/homepw.access…

So, let's try to use the I&A tool… spoofing ACS – since this evidently invokes a different means of registering service principals. First we delete the principals we added above.

 

image

image

And this doesn’t work either…

OK, so let's try something else – NOT using the formulaic name form. Let's simply use the (https) form of the ACS URL:

image

Hurray! We got from the MVC SP to ACS, to Azure AD (to the MicrosoftOnline login authentication service), back to ACS, and to our MVC app.

image

You just have to know to make your script behave a bit more like the I&A tool in Visual Studio (not that this worked for me, by itself).

It may or may NOT be relevant that I have added to the list of "integrated apps" using the console:

image

Posted in AAD, SSO