first debugging trial of PingFederate (modern) to Azure ACS SAML2P endpoints

image

image

https://www.rapmlsqa.info:9031/sp/startSSO.ping?PartnerIdpId=https://sts.windows.net/04fb5be2-29ac-48b8-ad54-327e4ff243a0/

PingFederate (modern) imports the metadata, producing the local configuration shown above.

image

https://login.windows.net/04fb5be2-29ac-48b8-ad54-327e4ff243a0/federationmetadata/2007-06/federationmetadata.xml

For Microsoft engineering (doing final debugging), this was the result:

image

Well, obviously I’m missing the step of importing the IDPSSO descriptor listed by the SP, into Azure AD.

So how do I do that?

Well, let's configure the PingFederate SP both to be named as an entity using an http URI in the validated namespace and to listen on an endpoint in the same namespace. We also logically add an SP record to the IDP, authorizing information flow via assertion:

image

image

The result is slightly better than before:

image

image

Let's now simplify and first ensure we are using a managed account. With it, our existing setup gets us a SAMLP response:

image

So, let's now change the name/address of the app and make it the ACS endpoint of the SAML SP server awaiting the response:

image

At least we get processing of the response now!

image

Clearly, we can now process the result, but it violates an SP policy: the subject name is missing from the authentication statement (which probably upsets the traditionalists):

image

 

image

We can extend the time period allowed by PingFederate (globally), thus:

image

This suggests Ping Identity needs to make the time allowance configurable per IDP CONNECTION – not that my opinion is of any consequence.
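That global allowance is essentially a clock-skew tolerance applied to the assertion's validity window. A per-connection version would amount to logic like this sketch (the function names and the 300-second default are my own, not PingFederate's):

```python
from datetime import datetime, timedelta

def assertion_time_valid(not_before, not_on_or_after, now=None, skew_seconds=300):
    """Accept an assertion whose [NotBefore, NotOnOrAfter) window,
    widened by the allowed clock skew, contains the current time."""
    now = now or datetime.utcnow()
    skew = timedelta(seconds=skew_seconds)
    return (not_before - skew) <= now < (not_on_or_after + skew)

# Example: an assertion minted 2 minutes "in the future" by a fast IDP clock
issued = datetime(2013, 5, 1, 12, 2, 0)
ok = assertion_time_valid(issued, issued + timedelta(minutes=5),
                          now=datetime(2013, 5, 1, 12, 0, 0))
```

With a per-connection `skew_seconds`, only the sloppy IDP gets the wide window.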

This allows a first end-end trial:

image

image

image

Some request options (e.g. ForceAuthn) cause the Azure endpoints to crash.


Also, since Azure AD is signing with RSA/SHA256, I wonder whether it will work with our 2-year-old SAML server (which may not support that algorithm)?
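One way to check ahead of time is to pull the SignatureMethod algorithm URI out of the response's SignedInfo and compare it against what the old server supports. A sketch (the sample fragment and the "supported" set are my assumptions, not anything Azure publishes):

```python
import xml.etree.ElementTree as ET

DSIG = "http://www.w3.org/2000/09/xmldsig#"
SUPPORTED = {
    "http://www.w3.org/2000/09/xmldsig#rsa-sha1",          # older SAML stacks
    "http://www.w3.org/2001/04/xmldsig-more#rsa-sha256",   # what Azure AD signs with
}

def signature_algorithm(signed_info_xml):
    """Pull the Algorithm URI off ds:SignatureMethod in a SignedInfo fragment."""
    root = ET.fromstring(signed_info_xml)
    method = root.find(f".//{{{DSIG}}}SignatureMethod")
    return method.get("Algorithm") if method is not None else None

sample = (
    f'<SignedInfo xmlns="{DSIG}">'
    '<SignatureMethod Algorithm="http://www.w3.org/2001/04/xmldsig-more#rsa-sha256"/>'
    '</SignedInfo>'
)
alg = signature_algorithm(sample)
```

If `alg` is not in the server's supported set, the assertion will fail signature validation regardless of everything else being right.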

Posted in SAML, SSO

A US privacy fantasy–based on OpenID Connect

A certain gentleman who once worked at BBN on an NSA contract for key management system production had the unfortunate responsibility to refuse to answer simple technical questions – since he didn't know whether the answer had been classified or not. Often, it's the terminology that is classified (not the method) – meaning one violates laws even by using the codewords, or even specialized technical terms, with non-cleared folks. Finding this all rather ridiculous (but doing it anyway), he would listen to non-cleared folks' "conjectures" about the meaning or operation of some technical widget – particularly if it was transitioning from military to civilian applications. These were often quite wild and overly ambitious (not that he could deny or confirm this). But they were often humorous – as folks found "inner meaning" in glimmers of interpretation of this or that.

So let’s play the game.

Let’s imagine that the US is committed to privacy (finally), and this means enforcing it. (It also means everyone ELSE has to enforce it similarly, so the US doesn’t suffer harm through it taking on more responsibility than others; everyone must suffer the pain equally!)

To take such a political hit (since this means the last 20 years of "market-led doctrine" failed, with the final end of Thatcherism and the last 10 years of gunboat diplomacy and wars of humiliation), the US has to get something else it wants – more desperately than giving up its "free market" privacy dogma. Of course, it cannot ADMIT it wants what it wants… (since it's classified! – to prevent folks getting a negotiating advantage!)

And of course that is the centralized gathering and scanning of "cybersecurity logging" records (purportedly to measure attack patterns by having lots of sensor nodes out there…). But in the space of consumer privacy, this means having a mostly centralized directory service – one that LIMITS who gets the directory record of users as they wander out to some half-baked SP app site (which may not really give a damn about how it handles your privacy). The directory operator becomes the "privacy-policy" enforcement point – limiting who gets what, of personal identity attributes.

One sees in the Azure AD rollout EXACTLY this element of policy control, though Microsoft are going to some pains to hide the bigger picture. (Remember the palladium and Passport scandals!)

Now I cannot say that the bigger picture is exactly unwanted or socially undesirable; and it is clearly no longer technologically difficult using webby technology. Actually it never was particularly difficult, as we proved in the Allied and US-internal-services shared military Directory(s) world 25+ years ago – using earlier forms of the signed tokens now being contemplated in the world of OpenID Connect.

privacy – security – trust (by spying on the logs): the eternal braid group.

Posted in rant

Azure AD (IDP proxy) and ADFS/PingFederate IDP

To verify a domain in AAD, first remove it from office 365! Sigh!

My advice is NOT to use the console – which has a particular verification procedure (based on adding a TXT record). In PowerShell, create a federated-class domain, get the (other particular style of) validation information done, and then – and only then – verify the site. At the same time, one gets to set up the endpoint that allows Azure AD to talk back to ADFS for passive and active flows.

Using the managed account to log in to the AAD via the PowerShell tool, now add a user (and their unique ImmutableId/UPN).
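For reference, the ImmutableId is conventionally the base64 of the account's objectGUID bytes. A sketch of the mapping in both directions (assuming your sync rules use objectGUID as the source anchor; the GUID below is just the tenant id from earlier, reused as sample input):

```python
import base64, uuid

def immutable_id(object_guid: str) -> str:
    """Base64 of the objectGUID's little-endian byte form: the usual
    recipe for an Azure AD ImmutableId (assumption: objectGUID anchor)."""
    return base64.b64encode(uuid.UUID(object_guid).bytes_le).decode("ascii")

def guid_from_immutable_id(immutable: str) -> str:
    """Invert the mapping, e.g. when debugging a mismatched UPN/ImmutableId pair."""
    return str(uuid.UUID(bytes_le=base64.b64decode(immutable)))

guid = "04fb5be2-29ac-48b8-ad54-327e4ff243a0"
iid = immutable_id(guid)
```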

Posted in SSO

Connecting Azure AD IDP to an ACS-mediated SP

Using an MSDN-Azure subscription, create an Azure AD instance – alongside an ACS instance/namespace. My AAD is called pingtest1 and the ACS is called homepw. Typically, one authenticates to the Azure portal using a live.com identity, which populates the first administrator.

Add a second administrator, whose identity claim is in the namespace of the AAD instance (Administrator@pingtest1…) in my case.

image

The goal is now to hook up ACS with AD – by importing the IDP's metadata into the ACS SP. First we add a throwaway "integrated app" (on "https://localhost/throwaway" URLs) into AAD, which exposes all the endpoints:

image

Then we do the usual ACS screen to add the IDP:

image

Then we use Visual Studio 2012, to which we have added the Identity and Access Tool (from extensions). To the web form project we add the passive STS, using the wizard (all as conventional). We selected the ACS option (homepw), which showed we could now bind AAD to our new RP (to be provisioned in ACS). A login at the app induces a flow to choose an IDP from the choices we made as provisioning administrator:

image

image

image

Then we use our temporary password in a screen flow that sets the permanent password. The assertion chain then starts back to our RP:

image

And this is as far as we can get… until we now use the AAD PowerShell to make a service principal (with redirect) for the ACS homepw. That we try next – since the Administrator@pingtest1.onmicrosoft.com user is now evidently well provisioned and fully live.

image

This shows our RP… and no entry for ACS!

Now according to ACS, for our namespace, our RP endpoint expecting an assertion is

image

https://homepw.accesscontrol.windows.net/v2/wsfederation

So we change our make serviceprincipal script thus:

image

We run this in the AAD console (having installed it, yada yada):

image

image

We see the final record:

image

Having generated some claim mapping rules in ACS for this IDP previously, we try a run of the SP again.

It STILL doesn't work… as the name presented by ACS is https://…, whereas we were able to register only a name of the form homepw/homepw.access…

So, let's try to use the I&A tool… spoofing ACS – since this evidently invokes a different means of registering service principals. First we delete the principals we added above.

 

image

image

And this doesn’t work either…

OK, so let's try something else – NOT using the formulaic name form. Let's simply use the (https) ACS URL:

image

Hurray! We got from MVC SP to ACS, to Azure AD (to the Microsoft Online login authentication service), back to ACS, and to our MVC app.

image

You just have to know to make your script behave a bit more like the I&A tool in Visual Studio (not that this worked for me, by itself).

It may or may NOT be relevant that I have added to the list of “integrated apps” using the console

image

Posted in AAD, SSO

faster webservice (Saml) authn to office365 endpoints

It takes just a second to access the web service endpoint of Exchange Online set up with federated security. From a Windows form:

image

The code from the sample takes 20s (as it goes about discovering a URL that is actually pretty static in Office365 land and only then bothers to hit the IP-STS):

image

Slow original sample

Obviously, we are supposed to discover at “connection time” and locally persist the URL – for use the second time!
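The discover-once-and-persist idea is simple enough to sketch (the discovery function and URL here are stand-ins, with a dict standing in for a local persistence file):

```python
def make_discoverer(discover, cache=None):
    """Wrap a slow endpoint-discovery call so it runs once per service
    and the URL is persisted for every later connection."""
    cache = cache if cache is not None else {}
    def endpoint_for(service):
        if service not in cache:
            cache[service] = discover(service)   # the 20s hit, paid once
        return cache[service]
    return endpoint_for

calls = []
def slow_discover(service):
    calls.append(service)
    return "https://login.microsoftonline.com/extSTS.srf"   # effectively static

lookup = make_discoverer(slow_discover)
first = lookup("exchange-online")
second = lookup("exchange-online")
```

The second call never touches the discovery service, which is the whole 20-seconds-to-1-second win.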

Posted in office365, SSO

Client side WCF “token-providers”

From http://weblogs.asp.net/cibrax/archive/2006/03/27/441227.aspx we get to learn about the client-side security token managers – and about using a SAML token intended for a logically-named entity across several ports:

image

As on the server-side, we see that the key to customizing the security model of the client is through subclassing the ClientCredentials class:

image

On the server side we saw that it was the the token manager to take a look at the URI identifying potential token types and hook up the authenticator classes for the token. On the client we see the opposite duty of  the manger: to create a token provider class (for classes of token type). In this case, it’s the saml token type (a member of the issuedtoken class). The custom mostly borrows functionality from the standard apparatus for such tokens.

image

image 

But it does do a little specialization allowing us to control when we take the token from cache or when we talk to the STS to get a new one (that is then stored).

image
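The caching specialization boils down to logic like this sketch (class name, injected issue function and the refresh margin are mine, not WCF's):

```python
import time

class CachingTokenProvider:
    """Client-side analogue of the token provider specialization:
    serve the SAML token from cache until it nears expiry, then hit the STS."""
    def __init__(self, issue, lifetime_seconds, refresh_margin=60):
        self._issue = issue          # callable standing in for the RST/RSTR exchange
        self._lifetime = lifetime_seconds
        self._margin = refresh_margin
        self._token = None
        self._expires = 0.0

    def get_token(self, now=None):
        now = time.time() if now is None else now
        if self._token is None or now >= self._expires - self._margin:
            self._token = self._issue()          # talk to the STS, then store
            self._expires = now + self._lifetime
        return self._token

issued = []
provider = CachingTokenProvider(
    lambda: issued.append("t") or f"token-{len(issued)}",
    lifetime_seconds=600)
a = provider.get_token(now=0.0)
b = provider.get_token(now=100.0)   # still fresh: cache hit
c = provider.get_token(now=550.0)   # inside the refresh margin: new token
```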

This all makes a lot of sense!

Posted in SSO

Understanding securitytokenmanager and bindings in WCF

I never had a good mental model of how the security apparatus works in WCF.

We see first how the main control point on the server for the developer is the service credential. First configure the cert on the default behavior, then substitute one's own behavior. It has the same cert assignment, but now one gets to set one's own SecurityTokenManager class.

image

image

The token manager is a recognizer of token types – presented at host opening time. The token types presented depend on the binding assigned to the endpoints of the service (as read from the service definition in web.config). For each token type, as part of the opening process, one essentially attaches an authenticator class for that token type:

image

image

Because we added the  establishSecurityContext=false property to the message requirements declaration, the host opening protocol no longer asks the manager to bind an authenticator to the “sessiontoken” type of token. Normally, a library authenticator would look after it, in any case. Of course, we still get asked to recognise “http://schemas.microsoft.com/ws/2006/05/servicemodel/tokens/AnonymousSslnego” because we have yet to turn off the “service cert negotiation feature” trying to make things as simple as possible!

image

OK, so we see how, via the ServiceCredentials behaviour, we get to control the configuration of token-type authenticators. Now, the URI for the username token used by the WCF client is NOT the same as the URI found in the WSSE namespace for a usernametoken normally presented in a ws-trust RST. Small gotcha – note: "http://schemas.microsoft.com/ws/2006/05/identitymodel/tokens/UserName".

To react to WSSE namespace username tokens we would have to recognize

image
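The gotcha reduces to a dispatch table keyed by URI. A sketch of the recognize-and-route step (the WSSE URI quoted is the 2004 username-token-profile one; the authenticator names are placeholders):

```python
# The WCF-internal username token URI is not the WSSE one; a custom
# token manager has to recognize both and route to the right authenticator.
WCF_USERNAME = "http://schemas.microsoft.com/ws/2006/05/identitymodel/tokens/UserName"
WSSE_USERNAME = ("http://docs.oasis-open.org/wss/2004/01/"
                 "oasis-200401-wss-username-token-profile-1.0#UsernameToken")

AUTHENTICATORS = {
    WCF_USERNAME: "CustomUserNameAuthenticator",
    WSSE_USERNAME: "CustomUserNameAuthenticator",   # same checker, different URI
}

def authenticator_for(token_type_uri):
    """Mimic the dispatch-by-URI step of CreateSecurityTokenAuthenticator."""
    try:
        return AUTHENTICATORS[token_type_uri]
    except KeyError:
        raise ValueError(f"unrecognized token type: {token_type_uri}")

handler = authenticator_for(WCF_USERNAME)
```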

But we are learning! (Or rather, we are learning what folks from the Microsoft WSE era already know!)

Ignoring stuff about authorization policies (which we can think of as just a container for passing claim sets between the token handlers/generators and our service implementation), at the heart of the matter lies the ability to register a class that confirms whether the username/password combo is good (and lists some claims as a result). Most of the work is inherited from the Windows account checker.

image
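The registered checker's contract can be sketched as follows (an in-memory dict stands in for the inherited Windows account checker; names are illustrative):

```python
class UserNamePasswordValidator:
    """Sketch of the registered checker: confirm the combo, emit claims.
    The claims then flow onward inside an authorization policy."""
    def __init__(self, accounts):
        self._accounts = accounts   # username -> (password, extra claims)

    def validate(self, username, password):
        stored = self._accounts.get(username)
        if stored is None or stored[0] != password:
            raise PermissionError("bad username/password combo")
        return [("name", username)] + stored[1]

validator = UserNamePasswordValidator(
    {"alice": ("s3cret", [("role", "member")])})
claims = validator.validate("alice", "s3cret")
```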

Posted in SAML, SSO

Trulioo and UK IDAP

image

Wonderful. The company doesn’t even feature SSO itself…

Yet another company that wants to sell SSO "services" to others who rely on IDPs – but will not itself rely on one. There is no "case" for it, evidently. If there is no case here, why elsewhere?

So apart from the mythical logon to government websites by contractor employees (which is just as easily solved with smartcards, client certs and, er, what the US calls CAC cards), what is the use of all this?

It seems everyone wants to be Equifax – providing the "evidence" upon which YOU make trust judgements. It's just fascinating to see how little even then-industry insiders understand today what VeriSign ADDED back then to its Equifax data feed – and how this directly impacted its valuation as an "owner of a **new** generation of telco-class infrastructure".

Posted in SSO

Making the RST-R and Assertion from Active STS to Office365

Using our WIF knowhow generated from the passive STS effort, it's now (mostly) trivial to make the kind of assertion and response message required by Office 365 (to authenticate Outlook or other thick clients).

image

Assuming one has a raw username token processor class and a token generator from the codeplex best practices samples, make the following changes to get closer to Office 365 compatibility!

To the username token processor we add the authentication statement (and some claims that arguably ought to be added later, in the GetOutputClaims method!)

image

And then, the usual RSA/SHA1 signing method is required:

image

And to ensure the authentication statement actually gets minted (and the claims associated with authentication of the username token get populated in the authorization statement) we pass through the authenticated status (from red 1, above) via the trick shown as blue.

image

End.

Posted in office365, SSO

looking at chained openid connect, given oauth2.0 authorization code grant

I now understand enough about OAUTH2.0 in its modern incarnations to have a look at openid connect. What does the latter do?

The answer is… I don't know (but I have every reason not to trust the folks involved, if only because of how the phone companies lied and spied on folks – setting the precedent for how OpenID Connect will likely work, socially, too.)

When we look at http://openid.net/specs/openid-connect-basic-1_0.html, let's imagine we are now working with the 25+ year-old architecture of the secure X.500 directory. Assume the old concept of the universal directory and OpenID Connect are one and the same – and the core features are common. What MIGHT they be? Assume that technology changes (as bits, bytes and blobs shift between binary, XML, text and back to binary…); and that such changes are essentially irrelevant.

Well! We know from the write-up that an SP might receive a web visit from some consumer with a purported name claim (e.g. peter@rapmlsqa.info) – perhaps in the Google namespace rather than my namespace, that of the Windows Identity land or Yahoo land. The visit is to the likes of one’s Office365 share point site – rather than a simple website.

Of course, let's say we are registered with the Windows world (being marginally less evil than Google). Assume the politics evolves (quite quickly) such that one MUST use a TTP to mediate one's web presence with consumers (rather than use the web concept from TBL). Thus we can perform the OAUTH2.0 authorization_code handshake – either for consumer consent, or for tenant admins to consent to the presence and value-add of data-consuming apps (from third-party vendors).

Through our Azure ACS instance, we have access to Google, Microsoft and Yahoo IDPs of course – who we assume (under USG "prompting") have decided to allow reciprocal access to each other's directory access points, for any tenants of each _other's_ cloud. To authorize the other cloud's SP to access the IDP's directory graph API endpoints of course requires an OAUTH2.0 dance – in which the authorization server serving the SP (a Windows-world AS, when helping ACS support a Windows SP website and web services) issues a signed JWT – "viable" in a cross-vendor world because the certificate/signature on the JWT can be evaluated at the directory graph endpoint(s) of the two other IDPs. After all, it's just a signed blob, supported by a cert – giving it mobility.
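The "mobile signed blob" structure is just base64url(header).base64url(payload).base64url(signature). A sketch of minting and verifying one, using HS256 with a shared key for brevity, whereas the cross-cloud story above depends on RS256 and a certificate:

```python
import base64, hashlib, hmac, json

def b64url(data: bytes) -> bytes:
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def b64url_decode(data: str) -> bytes:
    return base64.urlsafe_b64decode(data + "=" * (-len(data) % 4))

def make_jwt(claims: dict, key: bytes) -> str:
    """Mint a compact JWT (sketch only: symmetric HS256)."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = header + b"." + payload
    sig = b64url(hmac.new(key, signing_input, hashlib.sha256).digest())
    return (signing_input + b"." + sig).decode()

def verify_jwt(token: str, key: bytes) -> dict:
    header, payload, sig = token.split(".")
    signing_input = (header + "." + payload).encode()
    expected = b64url(hmac.new(key, signing_input, hashlib.sha256).digest()).decode()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    return json.loads(b64url_decode(payload))

# issuer/audience values are illustrative placeholders
token = make_jwt({"iss": "windows-as", "aud": "yahoo-graph"}, b"shared")
claims = verify_jwt(token, b"shared")
```

With an RSA signature in place of the HMAC, any relying endpoint holding the cert can do the verification step: exactly the portability the paragraph above is describing.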

So is this what OpenID Connect REALLY is – that mere OAUTH 2.0 is not?

I'll guess it's the fact that certain OAUTH AS/STSs issuing the JWTs as "identity tokens" (subtly distinct from the authorization token within the same access-token response) carry political privileges, such that the SP web app "governed by" one cloud vendor can make web service calls with it to any of the "national infrastructure" directory endpoints.

If I think like X.500, folk in the Windows SP world will likely STILL not make direct access to a Yahoo directory endpoint. Rather, it's more likely that a chained directory operation will be performed, with the chaining orchestrated by the "cooperating clouds". (And you can be assured that NSA/DHS/CIA/FBI are interested in JUST such a collection/collation point, since it's now EASY to get the "consumer-centric" trap/trace records for the "cybersecurity" mission.)

If this is what's going on, this is JUST AS IT WAS IN the secure X.500 concept of operations (as practiced in the Allied military Directory, for one). One issued (as a "governed" DUA/SP/App) a signed request to a local DSA (e.g. Windows SP/SharePoint App to Windows DSA/ACS), and Windows Cloud will presumably now chain off the (JWT-bearer-signed) request to Yahoo's graph API, say (and proxy the response from Yahoo back). It would also naturally re-sign the response – as required for the local security domain, but only with symmetric keys that authorize and LIMIT the use of the (Yahoo-managed) directory result at the particular SP's _governed_ web services (the apps on particular registered webservice endpoints, supporting such as SharePoint Online at participating tenants).

Now, this is 100% conjecture. But it's what I would do (with 25-year-old directory technology, rebuilt with webby blobs). As with the Directory of 30 years ago, it's the POLITICAL power that drives it – and of course it's the politics that MAY WELL UNDERMINE it (once folks see it). What I can say, given it authorizes apps as well as access to the directory naming context, is that it's MORE than X.500 – which may induce greater social acceptance… since "there is more" value.

If this concept were to be classified (so no one has much say, this being the true point of the classification…), it would merit merely a NATO Confidential label. In the US, it would be (stupidly) classified top secret (to give it an allure of importance among the million government and contractor employees entitled to see it…).

Posted in oauth, OpenID

Making a Realty Active STS for Office365 be visible to the world…

I'll document this only because it took 16 hours of head-banging to do what should have taken an hour – and DID take an hour with PingFederate. It's quite fascinating to see the gotchas I overcame (once one makes "non-developer" assumptions, as found in sample code).

image

So now we have proved that we can expose an IIS-hosted STS in our data center concept – though truth be told, so far it's sample code running as pages/services co-resident with our main ASP.NET webapp. But something works, end to end.

Obviously, the next fix is to apply our programming knowhow to make a response that Office 365 would want to see (with authentication statement, etc.) and test that against the Office SP. Then we figure out how to apply our own username token validation class (rather than the sample's class – which actually checks nothing right now).

Wow. Raw persistence counts for everything.

Some notes:

If you are going to host such a sample in a subdirectory of a webapp (with its own web.config distinct from that of the sample in the subdir), ensure application settings are in the parent directory's config. Similarly, replace security token handlers (used by the subordinate-path pages/services) in the parent too, and place the registered token handlers in the bin/ directory of the parent (though they are logically used by the subordinate-path pages).

image

At the same time, the WCF settings have to be in the web.config within the subdirectory, being tied to the path denoting the address of the service.

Posted in SSO

Changing the default Visual Studio WCF STS to be Office365-compatible

What few changes do we have to make to our working ws-trust client/server to make it use the older profile of ws-trust (as used by PingFederate and Outlook/Office365, evidently)?

Let’s state the desired output, as shown by client-side traces:

image

request

image

and response

To make the client, we simply made a few obvious changes of constants:

image

And the server config file was not much harder:

image

Based on these trials, it's obviously trivial to alter the config of our pseudo-production STS to offer the right version of ws-trust for Office 365 purposes:

image


PS

Note how with the custom binding we started to play with the idea that the SSL load balancer might continue to terminate the SSL session (leaving the hop between LB and resource server not secured by https). Does this affect binding config in IIS6? Do we still expose the https binding, with cert etc.?

Posted in SSO

Deploying to IIS6 an IIS-hosted STS with usernametoken token processing capabilities

So let's go through the (hard-to-recall IIS6-era) basics using interactive tools (vs a prepped script). The goal is to host in this production-like environment the STS site already working in a developer IIS Express 8.5 server (based on source code from Microsoft's claims-aware sample code for active clients). The task is to figure out what we need in code/config so things work in production (not on a silly developer workstation setup).

We create a website (not a virtual directory of the default website), to which we assign two bindings. Back in the IIS6 era, we assign all the Ethernet adaptors responsibility for port 80 and assign the host header used for the site's identity exposed at the NAT'ing load balancer. We also have this site expose port 8443 (so we can expose a direct SSL (non-LB-terminated) path through the firewall/load balancer).

image

We assign the default application of the website to a non-default active application pool, with a domain identity assigned to the worker process (this account has all the NT ACL rights and privileges on the source directory and files in the file system supporting the website).

 

image

image

We clearly have my testing (but production-real) SSL cert assigned, which establishes authority for the name www.rapmlsqa.info (which is the host-header we assigned).

image

The process identity also has access to the private key (but we cannot show this, back in IIS6 era)

And we see from a browser on the same host (where Fiddler is mapping www.rapmlsqa.info to localhost in its HOSTS file).

image

and from a browser OUTSIDE the loadbalancer, with a single resource server in its pool and a flow policy set for SSL pass through:

image

from outside the loadbalancer (the internet…)

And, in the STS webapp’s web.config we set the following behaviours so WSDL is exposed with external names in the addressing:

image

Cert 1, I GUESS, is to enable the metadata services to work, whereas cert 2 in WIF is there for use when encrypting the RST-R TO the STS. Another configuration of a cert in the application properties sets up the STS code. Getting these right (and pointing via a CN=XYZ reference to a cert in the MY/Personal store for the local machine), we can read the WSDL – which has the right internal address names:

image

So clearly, our STS has been activated – able to produce metadata about the ws-trust13 port and its message capabilities. Will it now cooperate with a WS-trust client?

It seems so, given the client logging view of the request and response:

imageimage

Clearly we see a SAML 1.1 response, in a ws-trust "13"-era response collection, sourced to our resource server (visible to the outside world, for the Americans/British/Chinese to spy on!!). Now that we have no firewall protecting us (so SSL works end to end), we will need to be more careful with the (PRODUCTION) web server config…

So there is little about the (almost) production environment itself that interferes with our production STS built into a production webapp (with real username token processors).

Posted in SSO

Starter simple active STS for Office365

On Windows 2012, install the http://claimsid.codeplex.com/releases/view/67606 samples and do your best to configure them. Configure the certificates, even though they will appear not to configure properly… they do.

Our simplest project for a ws-trust client and server (hosted in IIS Express) was formed by taking directly from the various samples: the client from the ACS sample set for the username token, and the username server (the Litware STS) from the best-practices sample set.

There is now a minimum of fuss – over SSL an RST is sent, and an RST-R returned. The request contains a username token with, guess what… a username and password. A SAML blob comes back in the RST-R.

From here, we can look at what we need to do (probably going back in time) to make this compatible with Outlook when working with Office365 endpoints.

We see (from Ping Federate logs) that Outlook sends:

image

and the (PF) STS sends back, for Office365 usage:

 

image

image

We can compare these obviously with the logs of the project’s client/server:

image

RST

image

RST-R

As with the passive STS work, we can assume that the Office365 wants to use the older 2005 profile of ws-trust.
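For reference, the two ws-trust profiles in play differ mainly in their namespace and Issue-action URIs, so switching a client or config between them is largely a matter of these constants:

```python
# Namespace and Issue-action URIs for the two ws-trust profiles discussed.
WS_TRUST = {
    "2005": {   # "February 2005" profile, the one Office365/Outlook expect
        "ns": "http://schemas.xmlsoap.org/ws/2005/02/trust",
        "issue_action": "http://schemas.xmlsoap.org/ws/2005/02/trust/RST/Issue",
    },
    "1.3": {    # ws-trust 1.3, the default in later WCF bindings
        "ns": "http://docs.oasis-open.org/ws-sx/ws-trust/200512",
        "issue_action": "http://docs.oasis-open.org/ws-sx/ws-trust/200512/RST/Issue",
    },
}

def issue_action(profile: str) -> str:
    """Pick the RST Issue action URI for the requested profile."""
    return WS_TRUST[profile]["issue_action"]
```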

If you use IIS Express, don't forget to set the SSL-enabled property to true (and find the port). If you use IIS, don't forget to add Everyone to the ACL of the (localhost) private keys!

Posted in SAML, SSO

Playing with Realty STS–processing usernametokens

Using Visual Studio 2010 (with the WIF SDK installed), create a web site using the Claims Aware SERVICE (WCF) template. Running with Fiddler, browse to the metadata at the likes of https://ssoportal.rapmlsqa.com/SpInitiatedSsoHandler.aspx/VCRD/11

I used Fiddler's raw view to see the response in Notepad (and edited away the HTTP response headers).

image

We then correct an error (removing an extra slash after com/), saving the result to a file.

image

In fact we amend even this corrected copy (to use a test SSL-enabled IIS binding with end-end SSL through the load balancer listening on port 8443)

image

We then use the STS wizard for the claims aware service project:

image

Since the endpoints are actually hosted on 8443 (something the static metadata and the mex service fail to publish), we make a manual adjustment in the (generated) binding:

image

Then we add an ASP.NET web forms website to our solution and add to it a service reference (pointing to our WCF service, discovered in the solution project set):

image

…and we see a client config get generated from looking at the /mex endpoint of the service (which reports on the /mex endpoint of our STS – with the right port, even).

image

Once we compile the solution, our service reference class is generated. We can thus create an instance of the client proxy and see what happens!

image

Posted in SSO

Augmenting Realty RETS protocol with vendor extensions – OAUTH AS STS issued RSA-signed JWTs

https://datatracker.ietf.org/doc/draft-ietf-oauth-saml2-bearer/?include_text=1 shows an IETF internet-draft on the topic of what in realty we would call a RETS client talking to a RETS data server login endpoint, in order to retrieve such as an MLS member record (or listings). Traditionally, data consumers using RETS client software are issued device/vendor credentials, including passwords and/or RSA signing keys. The vendor record stores the "scopes" that limit what their RETS client can do, having used the login endpoint to get a session token. The login response message turns the scopes into a list of "capability" URLs on which various data-providing endpoints are activated. Once the vendor uses the login transaction to get the session token and the list of scope/URLs for it, the RETS client software will thereafter present the session token to the data-service endpoint along with a data request – to retrieve some specified set of entity instances (in XML or some other data format). Typically a browsing user goes to a VAR site for the MLS – a site built using a webapp which builds in the data-service-consuming features of the vendor's RETS client (as above). Typically, the webapp merges the data with other data sources and presents an enhanced view of the realty data. One incarnation of the webapp would be an Office365 SharePoint website into which the vendor's "SharePoint App" has been registered, and whose implementing website embeds RETS client software and credentials.
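The capability-URL mechanics can be sketched as follows (a much-simplified parser: real RETS login responses wrap the key=value body in RETS XML and carry more fields; the base URL is a placeholder):

```python
def parse_capability_urls(login_body: str, base="https://rets.example.com"):
    """Turn a RETS-login-style body of key=value lines into absolute
    capability URLs keyed by transaction name."""
    caps = {}
    for line in login_body.strip().splitlines():
        if "=" not in line:
            continue
        key, value = (part.strip() for part in line.split("=", 1))
        if value.startswith("/"):          # relative capability path
            caps[key] = base + value
        elif value.startswith("http"):     # already absolute
            caps[key] = value
    return caps

body = """MemberName=Jane Agent
Search=/rets/search
GetMetadata=/rets/getmetadata
Logout=/rets/logout"""
caps = parse_capability_urls(body)
```

The session token then rides along on requests to whichever of these URLs the vendor's scopes activated.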

OK. All the above is ancient history, deployed globally, and old-hat technologically. So let’s overlay all the new terminology of OAUTH 2.0 so RETS gets a new coat of IETF-colored paint.

The resource “owner” is an MLS member (with a member record…and listing records).

The resource “server” is the set of post-login endpoints exposed by the RETS server, by MLS tenant

The OAUTH “client” is the vendor using RETS client software to pull data, server to server. The OAUTH client has vendorid/vendorpassword and optional RSA signing key.

The STS component of the OAUTH AS (authorization server) is the RETS Login transaction (a realty-specific interface for issuing access-tokens known as lists of capability URLs).

Assume PingFederate is the OAUTH AS, configured to perform the authorization_code grant use cases. The vendor's webapp invokes the OAUTH-AS-managed process when the Realtor first visits the webapp (with embedded RETS client). A "persistent grant" of authority is stored by the OAUTH AS, recording the individual Realtor's consent for the vendor of the webapp to use the server-to-server RETS client data channel's "capabilities" (once obtained). If the grant is revoked, expires, or is otherwise terminated, the vendor's webapp would normally perform the authorization_code use case again. As the first time, the Realtor will authenticate (using websso) and issue consent. In the SharePoint 2013 incarnation, having obtained access to SharePoint list data as a consequence of being launched as a "SharePoint App", the vendor's webapp will then use the process described to get authority to access the RETS data, allowing the app to present an enhanced view of both data sets.
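The front-channel leg the webapp kicks off can be sketched as follows (endpoint, client id, scope and state are placeholders, not PingFederate defaults):

```python
from urllib.parse import urlencode

def authorization_request_url(as_endpoint, client_id, redirect_uri, scope, state):
    """Build the browser-redirect URL for the authorization_code grant's
    first leg; the AS later returns ?code=... to the redirect_uri."""
    params = {
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": scope,
        "state": state,
    }
    return as_endpoint + "?" + urlencode(params)

url = authorization_request_url(
    "https://as.example.com/as/authorization.oauth2",
    client_id="vendor-webapp",
    redirect_uri="https://webapp.example.com/cb",
    scope="rets-data",
    state="xyz")
```

The code that comes back is what the vendor then cites in the RETS login extension header, per the scheme below.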

The RETS protocol already features both channel-specific and URI-resource-specific security mechanisms – making sophisticated use of standard digest authentication. It also leverages a session token (whose integrity and legitimate use on particular channel instances is secured by means of specific digest authentication countermeasures).

Logically, once the vendor webapp has received the authorization_code, it SHOULD within 60s perform a RETS login transaction, citing the code in a RETS HTTP request extension header along with vendorid/password and RSA signature (in a second RETS HTTP request extension header). The RETS login response will contain an additional vendor-specific extension field – bearing the RSA-signed JWT issued by the PingFederate OAUTH AS STS component in response to a request made by the RETS login service, which response is consumed by the login service and inserted in the RETS login response extension header.

No presentation of the signed JWT will be made by the RETS client. However, vendors are required to store the tokens for audit purposes. Being signed with an asymmetric key, the vendor will be unable to forge the token during its intended lifetime. Should any dispute be initiated against the vendor – for failing to respect the limits of the MLS data-consumer contract – on behalf of the data owner (MLS) or the individual Realtor, the vendor has evidence to show compliance and authority to consume data on particular URIs. Failure to present a JWT that shows authority to use the endpoints and data at issue in the dispute can be taken as evidence of non-compliance.

The scheme needs no inter-vendor agreement, being within the scope of existing per-vendor extension frameworks known to work in the field. The scheme can easily be extended to a multiple-vendor environment, with suitable cooperation.

Now, this description does not yet envision the client replacing the RSA signature on the RETS login request with a SAML assertion. Since the RSA signature is already a vendor-extension field, one can similarly extend the client’s RETS login request with a SAML assertion.

Concerning implementation, we see from the IETF document

image

Using PingFederate’s existing capabilities, we could require RETS vendors to themselves sign up or renew their authority by (i) completing conventional ws-fedp websso from an MLS IDP to the PingFederate OAUTH AS, configured NOT to present a consent screen, (ii) presenting the authorization_code obtained from Realtor enrollment (which implies that a Realtor has been through a websso and consent process, too), and (iii) citing the access_token in the RETS login header (rather than the assertion or the RSA signature).

As we already saw featured in Azure AD demos requiring the tenant administrator to “admit” a particular third-party webapp (SP) into the site-users’ experience, we see here the OAUTH version of the infrastructure for vendor apps to be so authorized. We see this process then being extended to link up with the individual user’s authorization grant (communicated via the authorization_code mechanism). The webapp thus has authority from the site into which it is now a component part, and authority to present particular user data from a realty data service.

Posted in oauth, RETS

rsa-sha1 signed simple ws-fedp response–for max interoperability (e.g. WIF/Visual Studio STS to PingFederate)

Steve Syfuhs answered my call for information on how to make a simpler kind of federation response, including the usual signed assertion.

The project I built can be downloaded from here, and the desired output is shown below (using the Fiddler tool and a Federation ‘inspector’, all from community members):-

image

To repeat the experiment using school-boy computing apparatus, install visual studio 2010 and the WIF SDK.

Then add the website, using the Claims Aware website template.

On that website project, run the STS wizard and add a (passive) STS project. The former is the relying party or service provider (SP), relying on the signed assertion produced by the latter IDP – the asserting party releasing an identity claim about someone or something. Note the default.aspx.cs file.

image

To change the output message formatting from its defaults, perform the following modifications to the pre-generated code in default.aspx.cs:

image

At 1 declare the serializer classes (for the older message formatting regime that we seek).

At 2, use the serializer in a replacement method call that processes the signin message received from the SP, requesting a response with assertion. You will need to resolve types, which will add package references:-

image

While you are at it, you MAY wish to maximize interoperability (at the cost of introducing “relatively” lower strength in your crypto). By using SHA1 (or MD5), it will now be 10 years rather than 20 before anyone attacking you that you actually care about can spoof the signature checksum – on a message whose noted expiry is 5m from now anyway. Certain sites, upon receiving assertions, verify signatures only when signed with the RSA/SHA1 crypto combination and 1024-bit public keys.

So that the assertion be signed with RSA/SHA1, alter the constructor of the STS configuration class thus:

image

“http://www.w3.org/2000/09/xmldsig#rsa-sha1”

“http://www.w3.org/2000/09/xmldsig#sha1”

image

We can go further into the maximum interoperability argument and ensure the assertion has an Authentication Statement indicating an authentication instant, authentication method, subject name, and a couple of authorization attributes required by Azure AD (in its Office 365 incoming Federation Gateway incarnation).
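As a sketch of the target shape (not the WIF code itself), the SAML 1.1 authentication statement we are aiming for can be mocked up like this – the authentication method URI and the UPN value are illustrative:

```python
import datetime
import xml.etree.ElementTree as ET

SAML_NS = "urn:oasis:names:tc:SAML:1.0:assertion"

def make_authentication_statement(upn):
    """Sketch of the SAML 1.1 AuthenticationStatement shape described above:
    an authentication instant, a method, and a subject name."""
    stmt = ET.Element(f"{{{SAML_NS}}}AuthenticationStatement", {
        "AuthenticationMethod": "urn:oasis:names:tc:SAML:1.0:am:password",
        "AuthenticationInstant":
            datetime.datetime.utcnow().strftime("%Y-%m-%dT%H:%M:%SZ"),
    })
    subject = ET.SubElement(stmt, f"{{{SAML_NS}}}Subject")
    name = ET.SubElement(subject, f"{{{SAML_NS}}}NameIdentifier")
    name.text = upn  # the subject name the traditionalists expect
    return stmt
```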

This requires two steps:

image

Step 1: make the claim set (note the Prip UPN (http://schemas.xmlsoap.org/claims/UPN), particularly) and assign the claimtype of the name(identifier) claim

and in the STS class’ call back method for GetOutputClaimsIdentity() method apply our trick to make Authentication Statements appear in the (SAML 1.1) output assertion:

image

Step 2: ensure the output ClaimsIdentity has isAuthenticated property set true

With these additional changes we get as output on the wire something I’ve sought for 2 years!

image

Now the open question is: is this response compatible with (i) Ping Federate’s ws-fedp SP, (ii) the Shibboleth ws-fedp SP, and (iii) Office 365?

Yes, I’ve gone back a decade and laid out the non-optimal steps in schoolboy programming style – but that’s what maximum interoperability and maximum tutorial value often entails (since highly politicized vendors/academics ensure modern profiles don’t actually interwork in the wild, using some semantic trick or other to help “structure the market”)

Help on getting this far is due to: Ping Identity (lots and lots of what-to and how-to), Steve Syfuhs (how to go back in time in WIF), and Dominick Baier (for the Fiddler ws-fedp inspector and patience beyond measure, given the likes of me).


So, to testing with Ping Federate! Let’s create a connection from its SP agent to this STS – intending the websso assertion to play the role of an OAUTH grant:

image

image

Concerning the mapping from SAML assertions to OAUTH persistent grants (one of the advantages of paying for commercial grade servers!)

image

Concerning the mapping from assertion to access token we play a little:

 

image

And of course we import the STS’ certificate into the Ping Federate IDP connection, exporting it from windows thus:

image

making a complete Ping Federate IDP OAUTH-IDP Connection of

image

When we try it in the OAUTH context, we act as the phone app doing a redirect via its web browser to get the first authorization_code after a user authentication (via a ws-fedp websso interaction):

image

We get an interworking success between a (non-SAML2P) WIF IDP and Ping Federate (for my first time)!

 

image

image

and RSA-signed JWT of

 

“eyJhbGciOiJSUzI1NiIsIng1dCI6IjcwVzNuUFJDQ3pTZVh1cXdzQlZ5MktNU01QayJ9.
eyJVc2VybmFtZSI6InBldGVyQHJhcG1sc3FhLmluZm8iLCJPcmdOYW1lIjoiQWNtZS
wgSW5jLCIsImV4cCI6MTM2NTgwNDAyOCwic2NvcGUiOltdLCJjbGllbnRfaWQiOiJh
Y19jbGllbnQifQ.TwYaXEZG21No4CeocPK7O_CCzx4PjUFIr6KfcD6d1zI-496vSl1xzS5UCqhjTltNuibUWvRntK77lbwQMfxbSeadvs5HPq0DdAdMKWHxbJkY93
aMbutVj-GzuaFzXxzHGyRn-U2rwWhRzAcJcrHz9Mb242YctiJadf30z9rDZ_7CQNjrFJNRPEgH6SFy9659Cof-ZvkarhqGKRRskv6ZqWBXLbRIO2GZrfhA6UesK_AT60QOin-eARRRv5bJDCwE6yO3OCcd35RiR0jaY4mGFVJmU2Nqy6_FIqPXCqbrx46M-07TMunA9qCUvgAG1CbhYsPyZhkl49uxZxCvy5u4og”
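The first dot-separated segment of that JWT is its base64url-encoded header; a short sketch decodes it to confirm the RS256 algorithm and the x5t certificate thumbprint:

```python
import base64
import json

def b64url_json(segment):
    """Decode one base64url JWT segment to a JSON object, restoring the
    padding that the JWT encoding strips."""
    padded = segment + "=" * (-len(segment) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

# The header segment of the token shown above
header = b64url_json(
    "eyJhbGciOiJSUzI1NiIsIng1dCI6IjcwVzNuUFJDQ3pTZVh1cXdzQlZ5MktNU01QayJ9")
# header["alg"] -> "RS256"; header["x5t"] is the signing cert's thumbprint
```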

Just one gotcha to note in the original Ping Federate server IDP connection config: one has to use the SP’s wrealm to give what is normally indicated in the wreply:

image
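To illustrate the shape of the ws-fedp signin request in which that realm value travels, a small sketch (the endpoint and parameter values are placeholders, not this deployment's actual ones):

```python
from urllib.parse import urlencode

def wsfed_signin_url(idp_endpoint, wtrealm, wctx=None):
    """Build a ws-fedp sign-in request URL. Per the gotcha above, here the
    wtrealm parameter carries the value one would normally expect in wreply."""
    params = {"wa": "wsignin1.0", "wtrealm": wtrealm}
    if wctx:
        params["wctx"] = wctx  # opaque context echoed back by the IDP
    return idp_endpoint + "?" + urlencode(params)
```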

Perhaps repeat the steps – and let me know what factoid I’ve omitted from the schoolboy’s science project writeup.

P.S.

To make the assertion from the STS indicate an authentication subject name with a particular format,

image

vs

image

one makes a slight modification of the code:

image

Thanks to Steve Syfuhs

 

End.

Posted in pingfederate, SAML, SSO

from WIF ResponseCollection to just Response

By default, a WIF passive STS built from ASP.NET template samples produces a response of the following form:

image

Assume that this is a later ws-trust and ws-fedp profile than the one we want.

Now, what we want – in practice – is a response of this form:-

image

image

so what do we do in WIF programming terms – within the STS callback class or configuration class to make the latter?

Posted in SAML, SSO

1954 zero knowledge

image

image

image

A couple of things occur to me, thinking back to 1954.

First, one should think of a rotor as a means of producing distance (or difference). One thinks of the input set as a relative frequency. The inner product of the two is a particular space, where one deals with the mean-set – based on random walks. A sequence of rotors produces the weight function when the “domain of definition” covers every vertex, with its atomic probability measure. With this condition, we enter a world of “total definition”, which allows the notion of minimum distance to be the same as minimum weight, which is a measure of the mean set in the asymptotic limit.

Second, the Polish breakthrough in early Enigma cryptanalysis was based on cataloging the conjugacy classes for particular substitutions of the boxed alphabet (to use GCCS terms). The very notion of public key becomes apparent, in 1954 terms, when the secret value s – as in w conjugated by s giving t – operates in a world where t is only one member of the conjugacy class. This of course was the case with Enigma/cyclometer cryptanalysis.

Posted in crypto

realty netmagic demo–handoff to Office365

Of the 6 billion humans on the planet, this will be interesting to about 3.

http://sdrv.ms/Z6btdj is a source project – with a trivial windows form – launching outlook, lync, browser. The point is that the thick clients be configured to talk to enterprise-class endpoints of Office365 – supported by a federated configuration that causes the clients/office to talk ws-trust back to an instance of an IP-STS running locally. In our case, it was a trial of Ping Federate’s ws-trust STS feature, with a username token processor addin.

Source may change to showcase using Office 365 APIs, etc.

Posted in SAML, SSO

Ping Federate active STS with Lync Online

When the logs say the following

image

It means that the relevant audience fields for legitimate token processor endpoints REALLY does not have http:// (or https://) scheme at the front – even if your expert in Ping Federate tells you otherwise.

image

note, you only really seem to hit this issue when you invoke the lync client component of your Office 365 solution.

Posted in pingfederate, SSO

From Realty IDP to Google Apps for Business (via Ping Federate and PingOne)

Let’s create a modern Google Apps for business account – similar in concept to our recent Office 365 tenants.

image

So, just as we have Administrator@pingtest2.onmicrosoft.com (accessible via https://office.microsoft.com launch point) in the Office365 world, we also now have Administrator@pingtest2.mygbiz.com (accessible via http://www.google.com/enterprise/apps/business/) in the Google world.

In parallel, set up an even-longer frog-hopping IDP chain. We configured the forward hops from our Realty IDP to a local PingFederate gateway to a PingOne cloud instance.

One authenticates (via websso of course) to one’s personal cloud desktop (currently provisioned with no SP places to go). But, the Google Apps domain noted above will shortly be added!

image

access via https://desktop.connect.pingidentity.com/clouddesktop/rapattoni.com/

image

One notes just how much “Google analytics” (the nice word for spying) PingOne is doing on the transactions – from where to where, by whom – which may upset quite a few Realty boards, who don’t want such tracking properties given to commercial tracking partners. For Realtors, working their social network is what gives them an edge. Offering free SSO or reporting is not going to fly – if the expectation is that the tracking/reporting can be resold to others (as is usual in the US). Sigh! (When will American corporations get it!! Spying is NOT ok, no matter how you pitch it!)

Anyways, to add Google as a place to go in this “Federation Gateway”, we go back to PingOne Administrator mode (which is itself not websso-powered, which is annoying).

image

Via the apps catalog (of SSO-enabled “SAAS” vendors), we select Google, for which above we pre-prepared our tenant:

image

In a rather frightening page we see the hookup instructions and the double-ended invocation URL with which to talk sp-initiated websso upstream to the IDP – to mint a local session in the “desktop app” before doing a trial landing on the Google App, which will duly do sp-initiated websso against the session in the app. (We do lots of this in Realty, too!)

image

(SSO) URL is  https://sso.connect.pingidentity.com/sso/sp/initsso?saasid=ce63276c-6cf6-4454-aae0-7c62097302fb&idpid=9038ea7f-9b46-4d81-8f1a-010a6dbc6826&appurl=https%3A%2F%2Fdocs.google.com%2Fa%2F%24%7BGoogle+Apps+Domain%7D
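Pulling the pieces out of that double-ended URL with a few lines of parsing shows the saasid, the idpid, and the (URL-encoded) appurl template that PingOne will land on after the upstream websso:

```python
from urllib.parse import urlparse, parse_qs

url = ("https://sso.connect.pingidentity.com/sso/sp/initsso"
       "?saasid=ce63276c-6cf6-4454-aae0-7c62097302fb"
       "&idpid=9038ea7f-9b46-4d81-8f1a-010a6dbc6826"
       "&appurl=https%3A%2F%2Fdocs.google.com%2Fa%2F%24%7BGoogle+Apps+Domain%7D")

# parse_qs returns lists; flatten to single values for readability
params = {k: v[0] for k, v in parse_qs(urlparse(url).query).items()}
# params["appurl"] decodes to https://docs.google.com/a/${Google Apps Domain}
```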

We save the pingone “signing” certificate for use later (and to be honest, I don’t actually know what this is for, yet). But, obviously it’s important for something!

image

Now, having JUST created a (working) Google Apps domain for this test, I’m faced with some horrifying text (since, as a mere dumb administrator, I don’t know which of its conditions are true). All I know is I just finished signing up with Google Apps!

In order to allow your users to Single Sign-on to Google Apps, you must enable your Google Apps domain to support Single Sign-on. To do so, log into your Google Apps domain and then follow the instructions below.
Please note that if you are using Google Apps for Business as your Identity Provider (and NOT using AD Connect, PingFederate, or a SAML solution), then you should not add the Google Docs application from the PingOne Application Catalog. Instead, to add Google Docs to your PingOne account, you should go to the “My Applications” page, click “Add New Application”, and select “Manually Add New Application”. You can then create a link to Google Docs, using the following URL format: https://docs.google.com/a/${YourGoogleAppsDomain}

So what do I do!!?

Going back to Google, the company forces me to reveal my phone number as an administrator (for some security pretext). I feel lied to (what’s new, given Google… and its USG relationship).

image

Using my intuition on how this all used to work a couple of years ago (when I last fiddled with anything Google’ish), we can presume that we need to focus on SSO setup:

image

we fill this in thus (eventually figuring that the “idpid” value comes from a parameter buried deep in the pingone URL for our instance of the federation Gateway)

image

Then upload the PingOne federation-signing cert (whose function I can now understand). In some absolutely crappy design, this act destroys all the other form inputs (so carefully input). Google is not the tech-firm it once was! So, we do it all again. This is the last time, before I abandon this.

image

After I consent to use SSO screen, we add a non-administrator user (much as we might license a user in Office365 land):

image

we get

image

image

http://www.google.com/a/pingtest2.mygbiz.com

So, expecting websso (to my IDP, via the PingOne Gateway), I strangely get a Google signin screen. So I enter the temp name/password, and get:

image

which leads to the usual google apps applications. But no websso! So lets try the double-initiator URL from PingOne:

 

https://sso.connect.pingidentity.com/sso/sp/initsso?saasid=ce63276c-6cf6-4454-aae0-7c62097302fb&idpid=9038ea7f-9b46-4d81-8f1a-010a6dbc6826&appurl=https%3A%2F%2Fdocs.google.com%2Fa%2F%24%7BGoogle+Apps+Domain%7D

image

Now, we have evidently lost the PingOne configuration! (Probably because we failed to resume config back at PingOne after doing all the Google SSO config.)

Do I have the heart to try all this again, using values from (SSO) URL ? https://sso.connect.pingidentity.com/sso/sp/initsso?saasid=ce63276c-6cf6-4454-aae0-7c62097302fb&idpid=759a92dd-fa1c-4085-8a2c-bcbd38334d5d&appurl=https%3A%2F%2Fdocs.google.com%2Fa%2F%24%7BGoogle+Apps+Domain%7D

well yes, since there is stuff to do (that makes sense):

image

image

giving us something that MAY be important (logically, for pure SAML2 SPs that could do all this manual work rather better than Google does it)

image

image

we now see more of what we would expect: the sp-init flow back to the Realty IDP creating a federation gateway session, and from there the attempted handoff to the Google ACS:

image

What is sent to Google’s ACS is this:

image

So if we now ensure that we release an emailName in the namespace Google expects,

image

we get….at the policy middleman (sigh, at the desire to exercise control concept)

image

if we arm the “group” (and this took multiple tries)

image

we eventually get to SP behavior (which is still yet more of a pain):

image

if we log out of the persistent Admin session, we eventually get the experience we should have had 2 hours ago:

image

It works. But I don’t know how many people would be THIS persistent!

Posted in pingfederate

WIF SDK and dotNet 4.0 SAML authenticationStatement (and cert)

We can make an Office365-like assertion using nothing but the default template for adding a new website using the ASP.NET passive STS template…

image

We managed to make a trivial change to the auto-generated starter code so all the work of the previous memo was output:

image

The original code…

image

…becomes (in principle)

image

Yes, that little dance of 3 lines causes the outputIdentity to have its isAuthenticated property set..and for the nameidentifier to become assigned as the authenticationStatement’s subject. The critical thing to do is have the “custom” (or any other named object) present in the construct. It sets “IsAuthenticated=true”  – which triggers the output formatting (as you now would expect, no!)

The idea came from reading http://leastprivilege.com/2012/09/24/claimsidentity-isauthenticated-and-authenticationtype-in-net-4-5/ – which had the hint worth following up….

Note that if you omit the first two lines, you can set the authorization statement’s nameidentifier (but no authenticationStatement – and thus no subject value equal to the nameidentifier – is produced). Similarly, you can add claims of authnInstant and authnMethod by hand to the output bag. These will create an authenticationStatement (but no subject field within it)!

All makes perfect sense (now I know!).

image

Now, what do we do so that the formatting of the ws-fedp “wrapper” tags around the assertion be the simple one-element response (vs a collection of 1 responses) – just like that produced by ADFS 1.0?

 


Gawd, I feel so stupid. But not as stupid as the three American doctors who recently told my wife she was constipated (when it was a horrendous gall stone, requiring organ removal). Or the other American doctor who told her the rash was nothing (when it was Shingles)… But, thankfully, MRIs and CAT scans (using nice noise-reducing algorithms that are (nice) side-effects of the cryptowars) save the day for internal medicine, since now you can actually see what’s going on… if only you have the skill to get past the money-saving insurance company trying to prevent you getting access to them…

But having 5 advanced imaging scans available in properly-funded/profitable American hospitals doesn’t always help (as my daughter found out, yesterday). Assumed to be major kidney pain issues (logical, given the pain source and the surgery history), no one – including me – looked for 4 weeks at the strange shadows near her spine on the CT scan, just where certain back muscles attach… and which can be *so* stressed they spasm wildly… at the merest touch.

Posted in SAML, SSO

making a simple assertion with authenticationStatement

We download older Windows SAML code from packtpub.com, trying to go back in time and think how Microsoft thought about these topics BEFORE the WSE2/3 and WIF era. The assumption is we will be closer to Win32 and the TCB. We look at a simple tester program, Chapter one, unit 2, which we amend so the form is more like what Office 365’s Federation Gateway will expect and demand.

image

First, make a signing-capable cert – but amend the instructions a little (to update the maximum expiry date!)

image

one amends the descriptor structure for the assertion to add the desired attributes and the authentication statement:

image

We get the assertion as desired, though it’s signed (in this program). Of course, Office probably doesn’t want a signed assertion. It wants a signed ws-fedp result message (bearing the signed/unsigned SAML assertion).

But, we have got closer to the “raw machine”.

image

If we compare this to the output of the Ping Federate IDP (acting as a proxy IDP to our Realty IDP), where Ping Federate is itself a proxy IDP to the MicrosoftOnline IDP… invoked by the Office Federation Gateway (itself invoked by the Exchange Online SP!)…

image

we see that we are missing the audience controls and fail to carry the certificate. And obviously, we are as yet missing the outer ws-fedp tags.

So let’s fix the obvious stuff.

To add the audience conditions and set the nameformat property on the UPN as nameidentifier:

image

thanks to http://stackoverflow.com/questions/1348947/audiencerestriction-in-saml-assertion we added:

image

To add the certificate to the signing block:

image

thanks to http://social.msdn.microsoft.com/Forums/en-US/wcf/thread/9a2c881c-5094-40be-a650-1fe27fe465fc we changed the keyIdentifier to the X509Raw type:

image

from image

Posted in SAML

Realty IDP to PingFederate IDP to Office Federation Gateway to SP

some notes on several sp-initiated chains, mixed with idp proxying and relaying.

To configure, ensure the realty IDP uses the desired ping federate server (with the capabilities to talk to Office with both passive and active sso protocols):

image

1: override the default stem of the ping federate server

2. mandatory attribute, when asserting to the Ping Federate SP connection

3. note the absence of the trailing ‘/’ (since this value is concatenated with the path given by PF on the callback, with its dynamic path components – presumably for security)

we can see the chain and assertion path here:

image

Posted in office365, pingfederate, SSO

dotnet validating Ping Federate’s RSA-signed JWT

Let’s update our old ASP.NET SP project – so that the OAUTH2 provider for PingFederate now verifies the access token in JSON format (rather than following a reference).

First, we set the path and loading parameters for the certificate for verifying the RSA signature (as the only parameter that gets controlled in this “liberal” security concept)

image

The Get User callback has to change similarly:

image

Posted in pingfederate

PingFederate’s JSON string array; using SAML token to fulfill access token contract

In the advanced settings section of the access token management page for JWT tokentypes, note the option (at red 1 below) to serialize scopes as a space-separated string (rather than as a javascript string array). Obviously, this issue hit us yesterday – where the decoder happened to use very simple assumptions about the array: being a name-value pair set representable as a dotNet dictionary<string, string>. We fixed it by amending the decoder. We realize now we could have fixed it by reconfiguring ping to simplify its token formatting for simpler (almost trivial) decoders.
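A decoder that tolerates both serializations – the space-separated string and the JSON array – takes only a few lines. This sketch shows the normalization we could have done instead of reconfiguring Ping:

```python
def normalize_scopes(scope_claim):
    """Accept a scope claim serialized either as a space-separated string
    (the PingFederate option at red 1) or as a JSON/JavaScript array,
    returning a plain list either way."""
    if isinstance(scope_claim, str):
        return scope_claim.split()
    return list(scope_claim)
```

With this in the decoder, the trivial `Dictionary<string, string>` assumption that bit us yesterday never arises: both token formats collapse to the same list.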

image

At red (2) above we see our attempts to expose some “contextual” parameters, usable in mapping.

Below, we also note the contract we formally disclose about the token’s content (one day in OAUTH metadata, presumably, once the “new” standard catches up with “old” SAML). We add petername.

image

Now, when fulfilling that contract, why can we not fulfill petername from any of the inbound SAML assertion’s attributes/claims? Why are things restricted to the SAML subject? This makes no sense (though I can see lots of dogma reasons for it).

image

I think the answer is to first extend the attribute contract – thinking now of the “OAuth SAML Grant Attribute Mapping” much as one thinks of an SP adaptor contract definition.

image

Posted in oauth, pingfederate, SAML, SSO

Validating the RSA signature on a signed JWT from Ping Federate

If we upgrade the JSON token project in this really old (by one!) memo to dotnet 4.5 and compile things using visual studio 2012 (upgrade2!) can we verify the JWT from Ping Identity’s Ping Federate server? Recall that a signed JSON object is subtly distinct from a signed JWT (intended to be used in token passing protocols).

http://www.cloudidentity.com/blog/2012/11/20/introducing-the-developer-preview-of-the-json-web-token-handler-for-the-microsoft-net-framework-4-5-2/ describes the validation procedure

image

Assume we forget all the Azure AD type concepts, and just do the bare minimum to verify a token, with a preconfigured cert.

So, let’s install visual studio 2012 on our host (with the Ping Federate based OAUTH AS and STS) so we can compile our very trivial token decoder project against dotnet 4.5. This also allows us to reference the JWT security token handler assembly, having installed the capability via nuget (searching on JWT).

image

Obviously, now, we have a reader for the base64(url) string encoding and a token reader:

image

Once we get the right (one-level verifying) cert loaded and more correctly identify the audience and issuer (i.e. swap the above), we get a verified principal:

image
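After the signature verifies, the remaining checks amount to something like this sketch (claim names per the JWT spec; the issuer/audience values are illustrative, not this deployment's actual ones):

```python
import time

def check_claims(claims, expected_issuer, expected_audience, now=None):
    """Minimal post-signature JWT claim checks (exp, iss, aud).

    This is the 'swap the above' step: the token is rejected unless the
    audience and issuer match what this SP expects, and it has not expired.
    """
    now = now if now is not None else time.time()
    if claims.get("exp", 0) <= now:
        raise ValueError("token expired")
    if claims.get("iss") != expected_issuer:
        raise ValueError("wrong issuer")
    aud = claims.get("aud")
    if expected_audience not in (aud if isinstance(aud, list) else [aud]):
        raise ValueError("wrong audience")
    return True
```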

at this point, we presumably go off and create a cert chain and check the cert is valid, re its chain, CRLDPs, OCSP pointers, etc etc.

Posted in pingfederate

decoding an OAUTH signed JSON token from PingFederate

Assume the token is signed with RSA and all we want to do as an OAUTH client (SP) is decode the attributes within (using a project such as this):

image

Let’s assume that this token is still subject to interoperability testing, within the community of OAUTH vendors. So, what we see in this email is “for future interest” – to get a feel for where products will be, shortly.

Within the Microsoft world, and the office 365 world in particular with its recent Exchange Online support for OAUTH in the API, we see the project:

image

http://code.msdn.microsoft.com/officeapps/Mail-apps-for-Outlook-10c039dd

we can add a test page that calls the decoder class, using the access token minted by the OAUTH STS above. We have to modify the project above, which is too “microsofty” (and makes assumptions about mandatory JWT properties).

image

we show some very basic compatibility by making the following alterations to the original code:

image

We first just gut the project to get to a basic blob decoder – while waiting for the dotNet4.0 release of the formal JWT security token handler (which will do “local” validation of token signatures).

Changing Dictionary<string,string> to Dictionary<String,object> so that the Javascript deserializer is happy to parse the Ping-supplied array of scope strings, we have a basic information object:

image

A trivial project to decode the string supplied by Ping Federate then holds together as

image

The supporting (variant) base64 decoder is already given in the project

image
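For reference, that variant decoder amounts to restoring the padding that the JWT encoding strips before handing off to a URL-safe base64 decode – a sketch:

```python
import base64

def b64url_decode(s):
    """The 'variant' base64 used by JWT segments: URL-safe alphabet
    ('-' and '_' instead of '+' and '/'), with trailing '=' padding stripped.
    Restore the padding, then decode normally."""
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))
```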

giving us

image

and

image

with an array (with one element) of scopes: edit

Remember back here for the JWT Security Token handler project work. The above is just enough complexity to get one going, particularly if there is an SSL “bearer” channel.

Posted in pingfederate

cert URL references in the OAUTH community

image

so https://localhost:9031/ext/oauth/x509/kid?v=1

image

Note how the response doesn’t use the internet/PKI mime types for (user) certs. Hmm.

Presumably, we can create a hierarchy of keyids, such as 1, 11, 111, 1111, where 1111 is the user cert. One can walk the ever-shorter keyids (111 truncated from 1111) and the constructed URL references based on keyid to “discover” the cert chain.
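A sketch of that speculative walk – generating the ever-shorter keyids and their URL references, using the endpoint stem shown above (the truncation convention itself is the speculation, not anything Ping documents):

```python
def keyid_chain(leaf_keyid, base_url="https://localhost:9031/ext/oauth/x509/"):
    """Walk ever-shorter keyids ('1111' -> '111' -> '11' -> '1') and build
    the cert-reference URLs, leaf cert first, as the memo speculates."""
    return [f"{base_url}{leaf_keyid[:i]}?v=1"
            for i in range(len(leaf_keyid), 0, -1)]
```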

Can I now have an American patent (based on all this “original” thinking)?

Posted in oauth