This link was shared by the Windows 8 WordPress app, having “connected” to wordpress.com using the OAuth2 authorization mechanism.
It’s been my assumption for a while that the main mission of the cloud – in public policy terms – is to change how the internet is governed. First, the policy would be to nationalize it; and second, to ensure that a few TTPs could seize control (when needed). Only by this means can the internet – as the core of a service economy – allow post-industrial powers like the UK (and US) to grow (and thus keep up). Otherwise, as manufacturing jobs transfer from high-cost economies to low-cost economies (with 300 million recent middle-class joiners to embrace), one loses.
And it’s in that context that we write this blog – mostly to play with all the tools: since tool making is actually a service!
Now, blogging is still evolving (as shown by the evolution of the toolware). And slowly, I’m easing off the PC-era blogging tools and onto the newer paradigms – built into Windows 8 and related devices (my Windows phone, and Android Kindle). And so we get to play with “jetpack” – to understand where wordpress.com (that evil cloud player, now) is going with its OAuth2 strategy for a post-imperial US role in world domination.
Hmm. Starting to sound like Cryptome, now. Best get off that shared joint, before I catch the American disease of gushing continual depressed rantings as a (middle-aged – actually old-aged) “geezer”. Oh well. Let’s not rant but merely bloviate a bit (and forgive me if it has insufficient bloviation; I’m still learning how to be a depressive operating at the end of the American empire – one of the shortest-lived, history will note. But fortunately, having caught the tail end of the end of the British empire, I have *something* to guide me…).
So, on installing the WordPress app onto my Windows 8 PC, I used what I learned from my Android tablet: the share button, available after a suitable gesture. Up pops the WordPress “addon” to the browser. And I’m invited to “connect” to the wordpress.com services – i.e. my (hosted) blog site. I get to use either my (old and boring) wordpress.com account at the (hosted) login screen or use “my jetpack” – whatever that is.
Well, it turns out that Jetpack is a hookup between my self-hosted blog (that I will now create) and wordpress.com – the evil “cloud vendor”, recall – who will support my “self-hosting”. Presumably, it will exercise “continuing governance”, so that “self-hosting” won’t delegate too much power (to me, the evil, non-exceptional foreigner playing the role of the Visigoth at the fall of Rome). Of course, since all this “connection” information is “metadata” – one assumes wordpress.com will give it straight to the NSA!
But let’s get back to computers (since they are just as much fun as annoying folk by proxy).
I have a paid-up MSDN subscription in Azure bound to my home_pw@Msn.com login account. But I also have a 3-month (free) subscription bound to email@example.com (to which I made firstname.lastname@example.org a co-administrator). Thus, from email@example.com’s console, I can provision an Azure website with WordPress installed into “my” free subscription. (Doesn’t this sound like a loophole!?) This is the first step before I get to use Jetpack – I assume.
So, from the scaffolding, we initialize WordPress itself:
Once I get a site (and log on classically to the self-hosted site) I access the site admin area, which allows me to install Jetpack:
Note that what I’m going to get (in addition to the things highlighted above) is the ability in Windows 8 to use the “share button” between IE and my blogging tool (an app loaded onto the PC), which in turn will use my hosted site’s (jetpack.azurewebsites.com) membership system and associated login processes. If I understand the billing properly, this will connect up to wordpress.com’s OAuth-for-cloud-subscriptions endpoints – which will enable my hosted site to expose APIs (guarded by tokens minted by wordpress.com).
That is… this is wordpress.com’s equivalent to the Google IDPs and Azure ACS namespaces! Anyway, enough theory. Let’s see what works when we “activate” the plugin:
Not surprisingly, we have to bind our site immediately to the equivalent of a new Azure ACS namespace… and hopefully learn what wordpress.com calls such OAuth2 tenants.
So let’s get past the first hurdle (SSL certs):
Now, the whole reason I selected Azure websites was so that the SSL provisioning problem would be solved, as in:
So, what’s up (and what induces wordpress.com to object to the chain above)? Well, my conclusion is that the host environment of Azure websites cannot process the wordpress.com trust chain (and never will, being a shared hosting platform). So we turn off HTTPS:
getting us to a “tenant authorization” screen:
Using some SSL magic I’d get shot if I told you about, we get past a blockage and onto:
…after a classical SSO handshake and the request for, delivery, and then use of an authorization_code:
We see that the JSON API component (of our self-hosted site) is now active, thanks to Jetpack, and that wordpress.com offers a client-id/password registration site (for third-party apps wanting to connect up to my new hosted site, courtesy of the offloading between Jetpack and the wordpress.com cloud):
Let’s play more, once I’ve stopped laughing about an NSA contractor’s “pillow countermeasure” – the all-American grandmothers’ defense of the right to think only about apple pie.
OK. Back to something important (like making an authenticated internet work right):
Let’s create an OAuth client record (intending to authorize some webapp client (foo) trying to access the new WordPress server at jetpack.azurewebsites.net):-
giving an authorize URI of:
Using this authorize URI at a browser gives:
after selecting “jetpack” (from my choice of this “connected” site and my hosted wordpress site “yorkporc”):
we get a challenge (to log on at the server’s IDP – which is the server itself):-
delivering a code (ready for a token client to use at the wordpress cloud’s token issuing endpoint).
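For the record, the code-for-token swap just described looks roughly like this. A sketch only: the client id, secret, and redirect URI are made-up placeholders (yours come from the client registration screen above), though the public-api.wordpress.com authorize/token endpoints are the documented ones.

```python
from urllib.parse import urlencode

# Placeholder registration values -- substitute your own.
CLIENT_ID = "12345"
CLIENT_SECRET = "not-a-real-secret"
REDIRECT_URI = "https://jetpack.azurewebsites.net/callback"

AUTHORIZE_ENDPOINT = "https://public-api.wordpress.com/oauth2/authorize"
TOKEN_ENDPOINT = "https://public-api.wordpress.com/oauth2/token"

def authorize_url(state):
    """Build the browser redirect that starts the code grant."""
    return AUTHORIZE_ENDPOINT + "?" + urlencode({
        "client_id": CLIENT_ID,
        "redirect_uri": REDIRECT_URI,
        "response_type": "code",
        "state": state,
    })

def token_request_body(code):
    """Form body the client POSTs (to TOKEN_ENDPOINT) to swap the code
    for an access token."""
    return urlencode({
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
        "redirect_uri": REDIRECT_URI,
        "grant_type": "authorization_code",
        "code": code,
    })
```

The state value is the client’s own anti-CSRF correlation handle; it comes back on the redirect and should be checked before the code is ever used.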
OK. I think we see how things work. What we need to do is try again later, now with a decent client (perhaps built into a second install of WordPress). It can consume this API.
Anyway, where we started was merely using a share button… that invokes the WordPress IDP (which we just proved works).
giving us a sharing link:
Americana is what it is – and it’s right that one finds Americana where else but in the American (internet) heartland. These days, it’s vital that the terms spying, surveillance, interception and stored-data access be applied to the Americana seen and now felt by Americans themselves – and not just those on the end of the big stick wielded by some US-backed dictator.
Note the terms of art: interception, distinct from surveillance, distinct from stored data.
The pop (point of presence) in Facebook and Google (and any other firm with a broadband ISP license) is there to facilitate surveillance (not interception). Surveillance is scanning (for defined terms to be found in SANs and LANs). Interception is blanket capture, usually by tapping a cable or a relay switch operated by a carrier (vs a “service provider”).
Finally, note that surveillance does not need access to the SANs and LANs depositing your email bytes onto a remote disk drive, say. It may more simply survey the (LAN carrying the) audit logs generated by the process, which trace the steps. For within is the juicy stuff (being even more juicy than the message and its “public” metadata). One scans or samples the audit data… dummy! A cleared employee (authorized to lie to the CEO) operates the “capability”.
Spying is compartmentalized by its nature – so spies can spy on each other (including your own side: ex-presidents (to be) with an apparently fatuous comprehension of what little the US constitution is actually worth, lawyerly presidents who didn’t have “sex” with “that woman”, Camelot presidents who sleep with movie stars, and presidents who attempt to maintain the compartments by rationalizing about what the meaning of ‘“is” is’). The ridiculousness is a function of the need to lie.
Assume the Silicon Valley CEO hires folks with clearances in order that s/he can disclaim knowledge of “secret ongoings” in the firm – since the CEO has no clearance. This is a typical corporate trick (that the typical board should be looking for). Why not ask about it? Or don’t, if you want to keep your board perks (the bribe).
Assume that it’s a secret that what is “against policy” for the US to do in the US when targeting US persons is not illegal for the UK to do (and share back with the US). The secret is the bilateral agreement to mutually skirt public policy, with the UK receiving reciprocal benefits, including getting the good story line that appears to do other than what is the intent of the (secret protocols of the) policy. Why not ask about the history of such practices (and the intent to mislead parliament)? Forget your knighthood if you do.
Follow the money on the funding of the “semantic web”, and note where most of it came from. Why? Who in the standards setting knew the ulterior motives, or should have known? How deeply involved are engineering groups in “facilitating” spying?
Ask why Intel is on the list of firms to receive special attention from the FBI. It goes to the heart of the PC (technical) trust model! Crypto is only as good as the machine doing the computation, remember.
Just reading between the lines.
NSA indoctrination takes years, and the OpenID Connect folk seem to have fallen for it hook, line, and sinker.
OpenID Connect was born in secret, and retains secret components. Vote against until folk apologise for that legacy.
Sample code for dotNet 4.5 shows how to insert a delegating handler for WebAPI controllers into WebAPI projects. Furthermore, the handler then validates the token, inducing appropriate HTTP errors when things go wrong at the guard.
Because our project uses lots of WIF, we cannot move to dotNet 4.5 – and thus cannot use the sample code (or the JWT SecurityTokenHandler). So, we have to approximate the right way of doing things (for now). Our delegating handler thus far looks like this – and at least it compiles, and guards just the WebAPI calls (in our web forms project):
Obviously, one now adds a roll-your-own token validation routine…
But at least we have a basic guard, without having to insert an IIS handler.
Note, I’ve absolutely no idea what I’m really doing with parallel tasks, the APIs, etc. So there may be lots of better ways of doing what is shown.
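For readers without the C# project to hand, the guard logic amounts to this little state machine (sketched in Python for brevity; the function names are mine, not the sample’s): pull the Bearer token off the Authorization header, and fail with an HTTP status before the controller ever runs.

```python
def extract_bearer_token(headers):
    """Return the token from 'Authorization: Bearer <token>', or None."""
    auth = headers.get("Authorization", "")
    scheme, _, token = auth.partition(" ")
    if scheme.lower() != "bearer" or not token:
        return None
    return token

def guard(headers, validate):
    """Run the validator; map failures to an HTTP status at the guard."""
    token = extract_bearer_token(headers)
    if token is None:
        return 401          # no credentials presented at all
    if not validate(token):
        return 403          # credentials presented, but rejected
    return 200              # let the controller run
```

The `validate` callable is the roll-your-own token validation routine mentioned above – the guard itself stays oblivious to how tokens are checked.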
We are now taking three sample apps from Microsoft and using them to tell our local OAuth AS story.
That is, FIRST, we have managed to repurpose the XAML client that previously showcased popping up a web browser in order to invoke websso and thus get hold of the redirect URI issued by the ACS namespace serving the Windows data marketplace; upon which event the code shows a background task converting the code to a first access token (using the token issuing endpoint of ACS).
Second, we altered the XAML code’s settings so that the client and its web browser login process now point away from the marketplace endpoints and at our own OAuth AS endpoints. (Recall that our own AS wraps an ACS namespace whose management service and token issuing endpoint do most of the real work.)
Third, we added the todoList WebAPI controller to our dotNet 4.0 Windows forms project – necessarily dotNet 4.0, since we make heavy use of the WIF extension libraries. The point is that this just happens to be the API that a certain demo Windows Phone app wants to talk to, having talked to an OAuth AS. While the original sample wanted to showcase the app handling the JWT from the OAuth STS using the JWT SecurityTokenHandler (available for dotNet 4.5 only), in our case we will be content to simply manually parse the JWT – without verifying its RSA signatures etc.
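Manually parsing a compact JWT is just base64url plumbing. A sketch (deliberately skipping signature verification, exactly as admitted above – fine for poking at a demo, unacceptable as a real guard):

```python
import base64
import json

def parse_jwt_unverified(jwt):
    """Split a compact JWT into its three dot-separated segments and
    decode the header and claims JSON. No signature check is made."""
    header_b64, claims_b64, _sig_b64 = jwt.split(".")

    def b64url_decode(seg):
        seg += "=" * (-len(seg) % 4)   # restore the stripped '=' padding
        return json.loads(base64.urlsafe_b64decode(seg))

    return b64url_decode(header_b64), b64url_decode(claims_b64)
```

The padding fix-up matters: JWT segments drop trailing `=` characters, which the standard base64 decoder insists on.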
So there seem to be two steps:
1. Finish up the XAML client so it uses the todoApp API (rather than a much more complex Zillow API). We want to see it pass across the JSON token that our own OAuth AS/STS has minted, and the app handle the token. It should also use the userid from the token to help select a subset of the todo items.
2. Replace the role of the XAML client with the Windows Phone app, doing essentially the same thing. This project showcases the Windows Phone app using an embedded browser.
Now, we have seen some design points occur already. First, the browser pop-up in the XAML case shows a kind of address bar (so there is webby feedback about trust). We need to see how the model is continued in the phone app case. Second, the embedded app may not send a user-agent header (as in the XAML case), triggering bugs… in code that failed to allow that it could be a null string.
We have moved on from using the dotnetopenauth framework for OAuth2 providers (which has to date allowed us to emulate a Ping Federate AS, using the Microsoft ACS OAuth endpoint and management services). Our plugin is still compatible with the dotnetopenauth ASP.NET pattern, but we now use objects from the “dallas” sample code to talk to the token issuing endpoint of ACS.
The reason is simple – we needed our own token issuing endpoints (delegating to the token issuing endpoint of ACS, remember) to fully handle expiry dates of refresh tokens – or access tokens, when no refresh token is signaled.
Having done that, two benefits accrue: (i) the Ping Federate demo client correctly shows expiry fields, and (ii) the dallas API sample works to get authorization headers. The latter is the demo of a thick XAML client firing up a web-browser window so as to complete websso and interact with (our) OAuth2 AS and token issuing endpoints, then accessing a data API in the Azure data marketplace (showing Zillow mortgage data, as it happens).
Using a better client to talk to the ACS token issuing endpoint
The Ping Federate page showing the authorization response now handles delivering the (renewal) expiry issued by ACS:
Validation by our own endpoint then shows the remaining time to expiry of the access token – which can appear longer than the renewal token’s expiry time (when the clock skew between ACS and the token issuing computer works against us!).
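The skew arithmetic is trivial but worth pinning down. A toy sketch (the 300-second allowance is an arbitrary choice of mine, not anything ACS mandates): subtracting a skew allowance from the naive remaining lifetime keeps the report honest when the issuer’s clock runs ahead of the validator’s.

```python
import time

SKEW_ALLOWANCE = 300  # seconds of tolerated clock skew (illustrative figure)

def remaining_lifetime(expires_at, now=None, skew=SKEW_ALLOWANCE):
    """Seconds of token life left, as a validating endpoint would compute it.

    If the issuer's clock runs ahead of ours, the naive (expires_at - now)
    overstates the real lifetime; deducting the skew allowance bounds the
    error, and the max() clamp stops us reporting negative lifetimes.
    """
    now = time.time() if now is None else now
    return max(0, int(expires_at - now) - skew)
```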
Building on Seberry’s paper on bent functions and the math background of unitary processes, permutation matrices, spectral gaps, trace eigenbases etc., we get to glimpse material and application topics withheld from any such paper: cryptographic method for wheel wiring and mixer analysis. The overlap between maximal quantum mixing states, supercomputers with hamming-weight sieving capabilities augmented by 1980s-era probabilistic search capabilities (using actual nuclear chain reactions to cover the combinatorics), and more modern attempts to define a reliable multi-qbit processor array for indeterministic processes with 1950s-era wheel wiring is fascinating.
One sees why even the topic of rotor wiring and analysis (of what constitutes a good set) is even today so highly controlled, with the usual academic deceptions and participation.
A cryptographic design method for SIGABA index rotors?
Well within Rowlett’s capabilities.
So, once I had 19 login accounts, to each of which was bound some authorization information. It was a pain, and SSO was born. Now I have one login account and life is good.
Or it would be, if I did not now still have 19 authorization licenses bound to the one login.
The point is that the mess didn’t go down with NSTIC-inspired nationally-coordinated login – it went UP. Folks just moved the complexity point along the chain a little. Now there are just lots MORE places where control is exercised – any one of which can go wrong.
So, now I have an MSDN account, bound to firstname.lastname@example.org – which issues various codes that can pay for authorization licenses – such as being a developer/publisher of store apps and/or phone apps (which are different!). I have Azure subscriptions tied to email@example.com and others tied to firstname.lastname@example.org. Logging on to the Azure portal can also happen via the integration with Office 365 (as an IDP), which can delegate to a corporate ADFS – so logon happens as that ADFS issues the Office 365 account’s name. When I “deploy” to the Azure hosting world, however, anyone who owns a .publisher credential can do so. When I want to RDP to the resulting site, I have to use a username/password (that is not tied in any way to Microsoft accounts, Office 365, ADFS, or anything else).
To make Visual Studio 2012 local debugging work properly, I have to run it as Administrator – except when running store apps (which don’t allow the process to run in a group tied to the local Administrator). The Windows accounts are actually Microsoft Accounts (formerly known as LiveIDs), in which Administrator@domainAdmin@localAdmin –> email@example.com and store@domainAdmin –> firstname.lastname@example.org. While store apps running on the Windows server box MAY leverage the SSO that comes from logging onto the PC with a Microsoft account, this is not true for webapps – which constantly prompt me for Live credentials, such as when doing all the above.
Of course, I have absolutely no interest in the above – except that I have to do it to get anything done. What I’m really interested in is having my own SSO network projected through ACS which is NOT “governed” by any of the above. It’s offensive, it’s in the way, and it’s very American (a giant mess). All that happened was that there is now even greater granularity of governance control – for absolutely no benefit to me. Whatever TRUST the above may be intending to confer upon me in the eyes of consumers is wasted (since their trust in me has nothing to do with Microsoft’s proxy-NSTIC governance story). The number of third-party apps that need to connect up to the data service that mine connects up to is zero. I am not Facebook, and not attempting to be a Facebook. I just want to present an app on a device… so it goes klunk – just like on the PC.
The problem with the OpenID Connect style “device” story is that it seems to be MOSTLY about governance – attempting to have a few TTPs (acting as proxies for regulators) run stores – that project national social policies onto the “trusted space” of devices – vs those evil PCs that are an open platform.
Until the devices behave more openly, I think we avoid their fancier “integration” features. Something about free lunches not being free…
The picture above shows the AD client recently added to the ASP.NET “component” in the dotnetopenauth open source project. One sees, by default, that this provider is set up to protect the Graph API endpoint (the “resource”).
We see that the endpoint supports the websso-friendly authorization code grant type:
We see that the token endpoint has a few quirks:
and we see that the provider client retains the tenant ID and the user OID in instance variables.
One notes how the cert handling fails to conform to PKIX…and one notes the imposition of SHA256.
We see no handling of the refresh token (assuming there is one). We also see no evidence of “id tokens” either.
Now, what we do NOT see is how the provider and SID variables would be handled on the redirect from the authorization endpoint (since proper use of the OAuth2 state mechanism is not in evidence). Evidently, the service principal’s redirect is not configured to go back to the ASP.NET template’s ExternalLogin page but to the AbsoluteUri of the page hosting the requestAuthorization handler (normally login.aspx). This will be interesting to track, to see how it was done PROPERLY. Login must maintain state somehow (and presumably it will be via viewstate).
We also see the article at http://www.cloudidentity.com/blog/2013/04/29/fun-with-windows-azure-ad-calling-rest-services-from-a-windows-phone-8-app/ which consumes the (Azure AD OAuth2) endpoints directly, too. The latter looks like the next thing to emulate, if only because I can now so trivially swap out the native use of Azure AD endpoints and substitute my own (which happen to delegate to Azure ACS’s OAuth2 endpoint “support”).
The job code article series nicely lays out how to think about nodeJS scripts in Azure Mobile Services and their interaction with the OAuth2 protocol. In one case, shown below, we see that perhaps our own JWT could be used, as already minted by our Ping Federate OAuth AS emulator site (itself hosted as an Azure cloud service supported by the Azure ACS OAuth2 and management service endpoints).
So the steps to get here would seem to be:-
- Take our working smarteragent iOS application and build our own equivalent working on an iPhone – using the Azure Mobile Services starter project for iOS apps. Of course, this means we need a web site delivering JSON services to the app, too – a role that can be played by the mobile service site.
- Do we really want the app then to use the PingFederate-AS-emulator site to convert the code into a JWT…
- …which is then used as given in the article above?
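If the mobile-service script is to accept a JWT we minted, something has to check the signature against a shared secret. A minimal HS256 mint/verify pair, sketched in Python for brevity (the claim set and secret are placeholders, and our AS may well sign with a different algorithm or key arrangement):

```python
import base64
import hashlib
import hmac
import json

def b64url(data):
    """Base64url-encode bytes, stripping the '=' padding as JWT requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(claims, secret):
    """Mint a compact HS256 JWT (toy version; placeholder claim set)."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(claims).encode())
    signing_input = (header + "." + body).encode()
    sig = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return header + "." + body + "." + sig

def verify_jwt(token, secret):
    """Recompute the HMAC over header.body and compare in constant time."""
    header, body, sig = token.split(".")
    signing_input = (header + "." + body).encode()
    expected = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return hmac.compare_digest(sig, expected)
```

The constant-time compare matters: a naive `==` on the signature string leaks timing information to a token-forging client.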
This time around I’m understanding most of it.
Different in tone to the Stanford prof. Here, it’s a mix of textbook and commentary (on writing textbooks). It’s quite an effective style – focusing on drilling the connectives of the brain to align with those found in the material.
Cypherpunks have got themselves into a UK communist party style tizzy – destined to create the usual “final schism” that befalls all false religions founded on inconsistent (and self-serving) doctrine.
For at least the last 5 years (if not since 9/11), cypherpunks have lived in self-denial: the desire to perpetuate the myth that anarchy won, in full knowledge that it didn’t, actually.
Today we know that SSL, IPsec and Skype were/are compromised not by the (figuratively evil) US federal govt – that easy-to-target amorphous evil (whose evilness magnitude is a function of size) – but by corporate vendors – out to make a buck… while employing and thus feeding one or two folk and their dependents. Of course, the very nature of governance means that corporate profit is a function of general compliance with public or secret policy – that stated explicitly, “or otherwise”. Such is the nature of democratic politics (hardly ideal, even in its academic form).
Is cypherpunks dead… like the corpse of the (academic relic of the) UK communist party I happened to encounter in my first job (at QMC-CS)?
It certainly seems so – from the dynamics of the end game.
Good riddance (though I’ll miss it; not that I read the endless drivel beyond, as with Marx’s first pamphlet, the founding prophecy).
Originally cited by Cryptome.
US realty is a political place – and rightly so. And so it was with a non-decision by the so-called RESO group not to endorse the OData approach to data APIs. Of course, things had been set up ahead of time so that this non-decision will evolve over time into an endorsement. Quite who outside is driving this will be interesting to identify. I already have some ideas. For example, the Azure AD Graph API comes nicely with OAuth and OData, already!
To be honest, the direction being taken is folly; but the least folly of several choices. Competing with the legacy protocol – which is itself a distant ancestor of OData – is going to be hard, however. This will be particularly the case if the legacy protocol, called RETS, modernizes its session token and login handshake to use (quite easily) the OAuth2 handshake and its bearer access token. If it also offers a new content type, extending those negotiated today to also offer JSON, OAuth/OData is going to have a run for its money – since 80% of what OData advances beyond today’s legacy will become available within the RETS framework, for a minor refit cost. While others struggle making OData scale, those using an updated RETS+OAuth2 might capture the 80% of the interactive, RESTful uses that really matter. At that point, it’s game over – once one factors in the re-engineering costs of a native OData query engine doing huge searches.
Of course, the simplest OAuth server is a gateway, from new to old, for simple, fixed, read-only search queries – referenced using the OData mechanisms. That toy should take 10m.
Luca explains why the power method works, as an algorithm. In the course he explains the concepts behind the steps – which are really quite intuitive (once you recognize what certain symbols are modelling). In particular, one sees reasoning about orthogonal eigenspaces, preserved by the numerical method.
While it’s fun to see this explained in matrix-centric theory (taught to every 14-year-old since 1950), it’s even more fun to see the same argument made in pure group theory. Orthogonality in 1920s relativity math is more about permutation cycle lengths, letting a cycle (term) act as a generator (in a Cayley graph), cycle closure, and self-conjugate subgroups of the symmetric group. Eigenvalues are addressed more geometrically than in special linear groups. A series of 2k+1 matrix multiples may look more like conjugation of a cycle by a 2k+1 (or 2mk+1, rather) term.
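The matrix-centric version of the argument is easy to see run. A sketch of power iteration (plain Python, no numpy): repeatedly apply the matrix and renormalize, and the components along the non-dominant orthogonal eigenspaces decay geometrically in the ratio of eigenvalues, leaving the dominant eigenvector.

```python
import math

def power_method(matrix, iterations=200):
    """Power iteration on a square matrix (list of row lists).

    Returns an estimate of the dominant eigenvalue (via the Rayleigh
    quotient) and the corresponding unit eigenvector.
    """
    n = len(matrix)
    v = [1.0] * n                       # arbitrary nonzero start vector
    for _ in range(iterations):
        # apply the matrix: w = M v
        w = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]       # renormalize each step
    # Rayleigh quotient v . (M v) estimates the dominant eigenvalue
    mv = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
    eigenvalue = sum(v[i] * mv[i] for i in range(n))
    return eigenvalue, v
```

For the symmetric matrix [[2,1],[1,2]] (eigenvalues 3 and 1) the iterate settles on the [1,1] direction, exactly the eigenspace-preservation argument made in the lecture.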
Adding a custom (OAuth) provider to an iOS app, supported by Azure Mobile Services.
On my Kindle’s browser, an Azure-hosted web forms project offering a Google (websso) login didn’t work. It became quickly apparent that the forms auth cookie presented back to the site was invalid in some way (or perhaps not present!), meaning that the SP site would loop back to the IDP. If the IDP is set for auto-logon (perhaps the second time one visits…), one gets an endless loop of websso requests and responses.
Of course, I just happen to be using ASP.NET 4.0 on DotNet 4.0 (so one can use WIF libraries easily, with websso). This turns out to be a nasty combo… till you fix it thus:
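For reference, the usual repair is along these lines (a hedged sketch – the attribute that matters is `cookieless`; your login URL and timeout will differ). Pinning forms auth to real cookies stops ASP.NET’s browser-capability guesswork from downgrading odd user agents, like the Kindle’s, to cookieless tickets:

```xml
<system.web>
  <!-- force cookie-based tickets; never fall back to cookieless URIs -->
  <authentication mode="Forms">
    <forms loginUrl="~/Account/Login.aspx" timeout="2880"
           cookieless="UseCookies" />
  </authentication>
</system.web>
```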
Finally! We see a modern presentation of the topics addressed by Turing in his unpublished manuscript on permutations. Perhaps the editor, noting the missing title, should have called it On Mixing.
Looking at the very specific GCCS terminology Turing uses, and given the extra context that this Stanford professor adds concerning the design of rotor wiring schemes (and pseudo-wiring, by extension, in the SPN of the DES) based on expander-graph ideas, we can see what both were/are not allowed to say openly… concerning cryptographic design method, per se.
It’s nice to see this prof use non-pretentious language.
One of the interesting things about modern (2013-era) quantum computing mechanisms is that they are NOT so different in their conceptual basis from the concepts used experimentally from the 1950s. But shush!… lest folks figure that quantum cryptanalysis has been around in “practical” form rather longer than folks might realize!
Of course, just as Colossus was a cryptanalytical processor rather than a general purpose computer following up the theoretical Turing computing of a decade earlier, assume that the quantum cryptanalytical computers of the 1980s era were not general purpose (and fault-tolerant) computing devices. Rather, they did just one thing – just like Colossus, in the electronic era: they proved a theoretical possibility, at high expense.
Back in 1950s crypto circles, folks were very much still thinking about 1930s length functions in topological spaces – searching out those special groups that allow secret or hard-to-compute “trapdoor” graphs whose difficulty of solution by computation denies an attacker the ability to compute the length function – necessary for solving hard puzzles.
Looking at recent theory, we can look back at what was probably considered secret cryptanalytical theory – back then.
From the notions associated with the analysis of cayley graphs we get to the heart of the security property known as graph expansion. In the quantum computing world of 1950 and 1980 alike, folks were interested in a related property – the minimum energy state for a quantum superposition. The two concepts are related, when considering optimization functions that link combinatoric properties to algebraic properties.
When computing mean-sets of graphs, under the random walk presumption that underlies multi-particle modeling (such as the bits in a plaintext), we are used to calculating the volume of probability space occupied by the average clique of neighbors. Then, we are interested in the expansion property – that only for certain cliques of relative size less than alpha.N can we be certain that the expansion ratio from mean-set members to adjacent neighbors is greater than beta.K. Of course, k is related to the minimum weight – the latter being an upper bound on alpha.N.
If we now see things in terms of the 2 generator terms of A and B (above), we are interested in “that series of basis changes” modeled by a particular sequence of generators applied to a start state. Of course, we remember seeing this back in 1954 (or earlier) from Turing, who modeled the cyclic powers in terms of a sequence of enigma rotors, each of whose wirings represented a generator. A sequence of rotors modeled h(M) – such that the sum of the powers would be zero (and thus model the 7-point geometry or Hamming code on the hypercube).
We have to remember that, at the end of the day, all cryptanalysis is a search problem. Thus one is always interested in computing or alternatively “finding” the minimum energy state of a superposition system – realizing that if one JUST has the right basis sequence one can compute more effectively. This clique search for mean-sets is the “trap door” for certain groups, of course – likely to apply to DES (if only one thinks different). One tends to think of braid groups as the foundational groups able to produce expressions hard to calculate in a standard computational model, but easy to calculate (if one can figure out just the right basis [sequence] in a reasonable time).
It is interesting to see what Turing understood that Quisquater evidently does not – that for a non-Abelian group, a given sequence of generators (each term being Ci, where i runs from 1 to n distances/edges) can be re-modeled in terms of density evolution. Turing understood that finding the mean-set was equivalent to finding the minimum energy state of a superposition – or the average sampling function for sufficient trials that induce a Cauchy sequence.
Looking at the Stanford group (which has long been an NSA math front for pure theoretical topics):
The material at http://theory.stanford.edu/~trevisan/cs359g/lecture05.pdf is essentially the same material as Turing discussed in his On Permutations manuscript. What’s more, the well-written summary clearly lays out the thinking steps. Turing’s example helps too, being so tied up with early mixers – which also relate to quantum mechanics and the “chemistry” of uranium fission, etc.
A very long time ago (like a few months), we learned about OAuth as a protocol to talk to authentication IDPs. We wrote dotnetopenauth plugins to talk to WordPress (as an IDP) and then PingFederate (as a proxying IDP to websso partners). Since then, we have emulated PingFederate’s optional Authorization Server feature, using an Azure ACS namespace as the backend service. Today, we updated our OAuth2 provider for ASP.NET – enabling a standard dotnetopenauth-aware web site built from the Visual Studio wizard to consume the various assertions, user credentials, and the “directory graph access points”. Of course, this all really just means we obtained an access token having logged onto Google, which gave us access to the token verification API, which duly unpacked the JWT issued by ACS… delivering “attributes” to the SP site – which then implemented an account linking experience.
So as to do NOTHING but add the plugin to the assertion-consuming web project built by the wizard, we altered our AS to be maximally flexible when interworking. (It is thus somewhat insecure, relatively, on the topic of redirect URIs, state management, and the like.)
Of course, we should now change the name of our plugin – to be “PingFederate-emulation (via Azure ACS)”.
So we are now using OAuth for authentication, which is not what it’s for. But it’s what everyone does with OAuth. So… so do we.
The pictures above show an ACS namespace and its relying parties for a particular “rule group”, the claim issuer attached to the rule group (and thus all the relying parties associated with it), and the list of associated identity providers similarly (who are also issuers). One notes that the issuer for the rule group is not an IDP (though it is an “issuer”). This distinction is critical.
The first relying party, whose name is in the rule group name, can be considered to be the default scope. The relationship with the issuer is established by our code, at application startup. When one adds other relying parties that associate with this same rule group (and the issuer, therefore), one can consider these to be additional scopes – possibly cited in messages to/from our AS’s OAuth endpoint – and to which scopes (more vitally) authorization codes can be minted.
So let’s see if now we can make ACS issue a JWT bearing this limited scope. We can! But note that first we have to extend the Ping Federate token issuing UI to allow us to specify the scope (since it is treated as the audience in the Azure ACS model):