Shows the materials used in the TEMPEST shielding around these Data General 16-bit 64K minicomputers (and associated I/O units), doing late-1970s/early-1980s message switching of 9600-baud channels.
In an earlier post we documented how we learned about Azure ACS in the OAuth context. Here we look at WS-Trust, again assuming that the Office 365 Federation Gateway wishes to authenticate the user of Outlook (a thick client).
In the OAuth case we learned how our local web app code could create service principals in ACS using the ACS API, representing client vendors entitled to use the authorization and delegation-record management services of ACS. In the Outlook-to-Office 365 case, the same API mechanism will presumably enable us to populate user records as service principals with names and passwords (rather than OAuth client vendors with vendor names and vendor passwords).
If I have the protocol model right for Office tenants using custom domains with federated authentication, the Office 365 Federation Gateway or outlook.com or Exchange Online (or something! in the weird and ultra-complex Office 365 hosting world) will issue a server-to-server WS-Trust RST message to our ACS tenant's endpoint. This request, over HTTPS, will bear a simple username token. Within it are the username and password collected earlier by Outlook and supplied to the Office 365 world, which turns around and issues the WS-Trust request mentioned above, seeking the help of ACS to "verify" the values.
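As a sketch of what that server-to-server request would look like on the wire (the credentials are placeholders; the namespaces are the standard WS-Security / WS-Trust 1.3 ones, which is an assumption about what the gateway actually emits):

```python
# Sketch of the WS-Trust RST envelope with a simple UsernameToken.
# Credentials are placeholders; namespaces are the standard
# WS-Security / WS-Trust 1.3 URIs (assumed, not confirmed from a trace).
import xml.etree.ElementTree as ET

SOAP = "http://www.w3.org/2003/05/soap-envelope"
WSSE = "http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"
WST = "http://docs.oasis-open.org/ws-sx/ws-trust/200512"

def build_rst(username, password):
    env = ET.Element(f"{{{SOAP}}}Envelope")
    header = ET.SubElement(env, f"{{{SOAP}}}Header")
    sec = ET.SubElement(header, f"{{{WSSE}}}Security")
    tok = ET.SubElement(sec, f"{{{WSSE}}}UsernameToken")
    ET.SubElement(tok, f"{{{WSSE}}}Username").text = username
    ET.SubElement(tok, f"{{{WSSE}}}Password").text = password
    body = ET.SubElement(env, f"{{{SOAP}}}Body")
    rst = ET.SubElement(body, f"{{{WST}}}RequestSecurityToken")
    ET.SubElement(rst, f"{{{WST}}}RequestType").text = WST + "/Issue"
    return ET.tostring(env, encoding="unicode")

print(build_rst("alice@example.com", "secret"))
```

The real exchange would of course ride over HTTPS to the ACS tenant's WS-Trust endpoint; this only shows the shape of the body the gateway asks ACS to "verify".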
http://msdn.microsoft.com/en-us/library/windowsazure/gg185954.aspx seems to show us what to do.
Presumably, the token issued would be SAML1, for compatibility with Office 365. One would NOT bother with the ACS configuration of the Relying Party that sets up encryption of the token. The audience name (RP name) for Office would presumably be urn:federation:MicrosoftOnline. For demo purposes, three constant values for the three required attributes (UPN, ImmutableID, SAML_NAME=UPN) could be supplied. Obviously, one would also run the Set-MsolDomainFederationSettings command on the Federation Gateway (Office 365 tenant), pointing to the ACS endpoints and MEX metadata endpoint (rather than ADFS or PingFederate).
The biggest hurdle with doing this kind of thing is usually the secure-conversation stuff. I wonder if ACS is willing to expose a binding without insisting that the Federation Gateway (MicrosoftOnline) perform secure-conversation protocols (around delivering the WS-Trust RST). Alternatively, if the ACS metadata exposes the right secure-conversation bindings, perhaps the Federation Gateway will just perform secure conversation, too.
It seems worth a try, if only to get our brains back into gear on writing a modern, dotnet4.5-era username-token-capable WS-Trust endpoint hosted by IIS 8.
Now, let's go back in time and use the tools that were assumed by the evangelism team. (This is important, since IIS versions, IIS security models, and the IIS scripts from the WIF team don't tend to be forward compatible; so use the environment they assumed, if you want to keep some hair.)
The previous memo plus the above enables us to see what Turing was saying (in a more sensitive way):
H/H1 is a cyclic group, giving the desired trace function.
“following a long gap”… I wonder why!
gives us several examples:-
The material nicely characterizes how folks thought, in the 1930s.
Consider now the defense against “cryptanalysis of the cryptologic” (to use NSA’isms):
One sees the motivation for the (optional) non-linearity of the s-boxes – applicable when one faces the “probabilistic version” of the attack.
generalizing the theory led to:
Of course, we recall:
Given the probabilistic nature of quantum mechanics and the "probabilistic" (i.e. sum/product decoding) attack on s-boxes, it makes some sense that Turing considered quaternion algebras when:
Things are getting worrying. Whereas once the above text would have seemed like nonsense, I can now almost hear the writer (and his passion for the topic). What's more, I can start to hear the message – despite all the formal terminology about rather abstract "constructive" computational models. Math is fun (when delivered in essay form).
For context, let's quote
We can build an intuitive interpretative model – that just uses simple computer programming analogies.
If a Lie Group G acts (smoothly) … means G is a program, like the one running in the page you are probably looking at.
“its Lie algebra acts by differential operators” means the program has certain subroutines named A, B, C, … that can each compute a particular formula, expressed using some horrendous expression probably. Perhaps, the expression is a stats function…getting to a (differential) density.
The thing that these are "acting" on … is the data set – an input file of numbers, in all probability. Of course the file has a format, the numbers have types, there are ranges, there is a theory of the "file format". And the file may even have its own algebra (e.g. the "sql tuple-centric" relational algebra). But, at the end of the day, it's just a stream of numbers, whose inner format the program has to know!
The process of acting on "the space" just expresses the "file format" and the model behind the numbers within. M is the underlying type of the numbers and the "inner format" governing the stream, and C is the complex form of stream-readable terms that folks have decided to use (because, contrary to the intuitions of 14-year-olds doing school math, complex analytical forms are actually easier to work with, as are polar numbers and the forms of exponential trig!) (What does the C notation refer to, by the way? It's probably not the complex number type, given the font.)
There is, then, an infinite length stream of such readable-terms…so don’t bother programming using finite size buffers! Think more in terms of “convolution” programming in which you may well maintain state(s), from which you generate your own stream(s), that then interact term by term with terms from the common input stream… If it helps to think in terms of convolutional encoders and turbo code stream generators… built from finite state machines and state spaces themselves represented using shift registers and feedback loops (and the shared meta-state of the loops as a group) …then do so!
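If it helps to make the shift-register picture concrete, here is a toy Fibonacci LFSR (a sketch, not anything from Turing's text): a 4-bit register with feedback taps at positions 4 and 3 (a maximal-length configuration), whose nonzero states form a single cycle of order 2^4 - 1 = 15, generating a periodic keystream term by term.

```python
# Toy Fibonacci LFSR: 4-bit shift register, feedback taps at positions
# 4 and 3 (a maximal-length configuration). The nonzero states form one
# cycle of length 2^4 - 1 = 15, generating a periodic output stream.
def lfsr_stream(state, taps, n):
    """Emit n output bits; state is a list of bits and is mutated in place."""
    out = []
    for _ in range(n):
        out.append(state[-1])            # the output bit
        fb = 0
        for t in taps:                   # xor the tapped bits together
            fb ^= state[t - 1]
        state[:] = [fb] + state[:-1]     # shift, feeding the xor back in
    return out

def period(start, taps):
    """Count steps until the register state returns to the start value."""
    state = list(start)
    lfsr_stream(state, taps, 1)
    steps = 1
    while state != list(start):
        lfsr_stream(state, taps, 1)
        steps += 1
    return steps

print(period([1, 0, 0, 0], [4, 3]))      # 15: all nonzero states visited
```

The cyclic state sequence is exactly the "particular cyclic finite group" induced by a particular choice of feedbacks, and xor-combining several such computed streams is the convolutional-encoder picture described above.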
That ‘operators are the “infinitesimal generators”’ that can “give conserved quantities” concerning ”the evolution of a quantum system on M” just means that your set of subroutines (each feedback loop producing one computed-stream due to a particular set of feedbacks of the shift register inducing a particular cyclic finite group, perhaps) work cooperatively. That is, the (perhaps) xor’ed output of the collected substream outputs, for example, might represent, in fact, a (computed) higher-level computational-algorithm result. Perhaps the result is a “representation” (my term) of a matrix that can itself transform some (other) input bit-vector into an output, that the inverted matrix (in its “computed form”) can transform back into a value identical with the original input bit-vector! We are used to thinking of a matrix as a kind of program too! We have a program computing a program…or, rather, a codebook whose generated codewords define a high-level codebook (which is assuredly how Turing thought about all this, though keeping such “simple” thinking secret)!
So, we have a manifold (data set!) and complex representation (type of datums in the data set) and algebras (subroutines) of a group (program) computing a set of intermediate streams that convolve with the input stream to give an interim combination (a representation of a matrix "program") that is itself a "data-program" that converts inputs to outputs (or vice versa).
In the case he is interested in, the math is an embodiment of the computational model only when one picks certain groups (so that the symmetries in nature's way of handling energy define the special-case rules governing diffusion in random or quantum walks).
That the operator algebras (subroutines) can exist outside the group (the program) is known as… a "library" of interesting/useful stuff. In our terms, there are certain finite groups more useful than others! In Turing's (GCCS-era) thinking, there is a set of cyclic "base" groups that it's worth keeping in mind, as one builds layered code-word/code-generating systems.
We have to remember that von Neumann and Turing – while doing pure math applied to cryptanalysis – were very much focused on using the only tools they had to define what we now call programming/computer science. It's tempting to assume they were engaged in some "higher plane" of pure Platonic research known only to those with Math brains. But… it's more likely they were just using (higher) maths "apparatus", and known special cases of "constructive proofs", because those were all the theoretical tools they had to define what we now call "making software".
One has to realize that Turing's study of the Riemann zeta function was more about studying random walks (for 1930s theory-of-cryptanalysis purposes) than finding this or that algorithm. Similarly, he studied different manifolds because of their properties as self-defining structures (having intrinsic coordinate systems, yada yada). He is interested in quantum algorithms (1920s physics, 1950s "held secret" cryptanalysis method, rediscovered publicly 1988) and in particular the extension of the central limit theorem to the quantum data model as a "computational method" for asymptotic computation – working on streams, all along. The study of enigma is a side-line (to this intellectual work) – but along with Colossus (leveraging Newman's higher understanding of topology-driven algorithms, as applied to the Tunny break and the design of the Colossus cryptanalysis computer) this helps "show" the groundedness of all the theoretical modeling.
Now the question we are left with is this: was the Phi (Q?) function Turing uses all along merely a reference to the Phi map being discussed in this topic area? That abstract map that, for any algebra, allows one to define some or other instance of an "infinitesimal symmetry" (and thus capture very generally the nature of Hamiltonian dynamics, including those found in DES… and permutation-based crypto generally).
While we tend to read what Turing was writing in the sense of markov chains (and the way in which 1940s cryptanalysis teased out the marginals from conditional probabilities in communication channels to control the search space of keys), we have to remember JUST HOW FASCINATED Turing was with mathematical physics. That is, he'd be perfectly happy taking from the physics domain a pure math argument about Hamiltonians and time-independent motion and casting the result into the specialized domain of cryptanalysis and coding.
If 1930s-era Wittgenstein school was famous for the doctrine and the dogma of language games, 1940s–era Nazism was famous for using technology to turn it into an art form: to control how you think.
Can 2010-era Americans do better? Or is the power lust so great that they will not be able to help themselves from imposing how they think onto others – with a knife man's CIA dagger in the back for those who might expose the truth of the Roswell cover-up of Nazi-era flying saucers? Will the constant surveillance state of the 1930s be reborn now as flag-draped drones spying and hovering outside your privacy (not) shielding opaque bathroom window so as to monitor your bathroom singing from cradle to grade and to grave and thence to grace – in order to ascertain, evaluate and categorize that inner intent and so develop the FBI secret police profile – that infamous secret, Stasi-era, now all-American "dossier"?
One understands that Americans have an “illegal” immigrant issue set; and that American writers – like Nazi writers before them — probably fail to see how their very language game projects imperialism and contempt for those others who might have unNazi or unAmerican thinking.
But why do England’s immigrants need an amnesty? Is it wrong, in American eyes, to be an immigrant per se? Was the whole bring me your (not too) sick, your weary, your …to New York just a way of getting cheap labor, all along? Was it a way of stealing the intellectual property from other nations via a brain drain? Were folks entitled to receive from immigrants, legal or otherwise, the knowhow of their lords in Europe?
Is all American intellectual property truly American, or is it derived from theft?
It’s pretty clear that the language really is all American; as is the malicious side of the content. The property rights are less obviously well established.
Rich Clients, ws-trust, local client services, and Office 365 Identity Platform with Enterprise edition
From Identity Service description file
Our implementation/configuration of the IP STS seems to upset this particular tool. But, at least we see that most expectations are being satisfied (domains, SSL, names, endpoints).
So what does it mean!? Should we add the Kerberos plugin to the STS too, which might expose an additional endpoint?
Go to the site http://office.microsoft.com/en-us/ and log on (using the organizational option). You land on the "portal" site associated with the associated Office 365 tenant's "Federation Gateway". One can now pick webapps within that local federation of trusted parties and go there. Watch carefully and you see a websso handshake between your target and the gateway. In short, you log on to the interstitial site using the trust you established between Microsoft Online and your organizational IDP; and from the URLs presented there you can induce your choice of target to initiate a websso exchange between itself and your "now-effective" Microsoft Online session (with Microsoft Online acting as the asserting party).
The UI is
followed by a redirect to…
If you prefer, you can also just go to that SP site at https://portal.microsoftonline.com, whose UI is
Completing IDP challenge and websso lands you where you’d expect – on a “menu page”.
Now, that user experience may not be acceptable. So, we can also simply say: go to my webmail, and do the necessary logons (passing by the interstitial menu page).
Nicely, for exchange/web-outlook we can do that easily, typing (for my site)
https://pod51038.outlook.com/owa/?realm=rapmlsqa.com , where one notes the identifier of the realm/IDP. Note that the use of realm avoids the need to type the same thing at the login/discovery prompt.
For sharepoint as the initial landing point, it seems to be the likes of
Now, when we do a bit of anglo-american spying, we see that the sharepoint site emits:
So, if we add whr=rapmlsqa.com to that (appropriately), what would be the result?
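Appending the home-realm hint can be sketched as follows (the base URL is a placeholder, since the actual SharePoint URL isn't reproduced here; the realm value is the one used elsewhere in this post):

```python
# Sketch: append the home-realm hint (whr) to a landing URL, preserving
# any query string already present. The base URL is a placeholder.
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def add_whr(url, realm):
    parts = urlsplit(url)
    query = parse_qsl(parts.query, keep_blank_values=True)
    query.append(("whr", realm))              # the home-realm hint
    return urlunsplit(parts._replace(query=urlencode(query)))

print(add_whr("https://example.sharepoint.com/SitePages/Home.aspx?x=1",
              "rapmlsqa.com"))
```

As with the owa realm parameter above, the point of the hint is to skip the login/discovery prompt at the gateway.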
While that flow does what you'd expect (and we use it A LOT in the realty world), it doesn't land properly on the My Site in SharePoint. A variant works, though, able to land on the public website:- with a session at all of the federation gateway (essentially identical to the realty portal, as a master session manager for SPs), the IDP, and the particular SP site (SharePoint public).
From that SP site, one can do inter-SP site hopping over to the OWA (outlook) webapp, as you’d expect – given an SP affiliation session exists in the “federation gateway”.
Fascinating to see all this happen finally – almost identically to how we did the same thing – since you don't want to go back to the IDP merely to hop between SPs.
Good backgrounder at http://www.syfuhs.net/category/Office-365.aspx.
US national identity infrastructure – a review in early 2013 of NSTIC; cheating collectively institutionalized
The US national identity infrastructure looks like it's coming along just fine, with the help of lots of vendors. In a moment, we will characterize what it's like for our "test" case – working as you would expect with websso and ws-trust with Microsoft Office 365 (the cloud-hosted multi-tenant LAN-in-the-cloud offering). This test is important – representing "commodity" and "normality".
At the political level, one sees that this is ALL coordinated with the cybersecurity push – with the very same vendors (who will be SPs and tenants of said clouds) being “induced” to share the logging data with national authorities. This seems to be some mix of NSA and DHS (with covert participation of FBI, without a doubt).
What's more, one sees how the cybersecurity events are all pre-planned and coordinated with at least the UK – which has a parallel and largely identical program of events that sync REMARKABLY in timing with the US events. One should assume PTK and his committee are part of that world that is syncing up the national responses. It's interesting to note that foreigners in the UK probably know more about what the US government is up to, and what it plans to do, than the typical US person knows (or CAN know). The falsity of this position (that the last folks to know what the US govt is up to are Americans) is the end product – the state of where we are. It's the new norm – the last folks to know are the folks who are citizens (since they are to be "guppied" into compliance).
We can expect "programmed" and date-planned cyberscares to be all part of the news now, concocted to "educate" the public and to make the "big shift" to a government-regulated internet perceived as a norm. We can see the history of this over the last 3 months, and can project the planning back 3 years (synced with Schmidt and his covert planning activities).
One also sees the attempt to have China indirectly adopt the very same posture as UK/US – and build a parallel set of capabilities – trying to create a world of mutual assurance (by formulating a consistent spying practice on each of the local citizenry groups – via their vendors sharing logging and tracing records on the pretext of scanning for "national level" threats). One also sees Commerce starting to engage, for its part, in the usual set of dirty tricks to induce "the laggards" into compliance (bullying with incentives, insurance obligations, biased legal presumptions, liability scares, and all the other things in the black bag that were shelved from the last time this kind of program was rolled out – the UK/US PKI "initiative" in 2000).
To be honest, I could not care less about much of the above and whether the US is spying on me (for good or evil intent). Of course it is. What is interesting is the social change that is coming about, as folks in negotiations with American companies realize that their negotiations were "spiked" for the last 40 years – with the other side having an unfair "cheating" position, having learned the other side's positions in order to WIN, WIN, WIN (by means fair or not, who cares). That is, there was never a good faith attempt to reach a fair agreement – the coins were ALWAYS biased.
Nowadays, WHENEVER you engage in govt or large company negotiations, you can ASSUME that the US negotiating party is actively lying (by omission) to you – winning the chess game by having an unfair advantage. You know this… because history is teaching that this was going on for the last 40 years. Only now is it becoming generally known that the affairs and the "negotiations" were a cheat, all along. The mighty American "institutional" exceptionalism was, all along, a sports-like cheating program – leveraging an unfair and undisclosed practice. See the US flag, now assume cheating – at a national and institutional level. The evidence is history.
Now, to be fair to the US, with its covert cheating practices, it's hard to say it was fundamentally evil in how it applied its advantages. It still feeds more people than anyone else – the bottom line. And I still believe in the whole American opportunity thing, creating wealth, etc – even through activities such as the above (due to the sheer size of it, and its side-effects of renewal). It's just that now you have to look someone in the face and KNOW that they WOULD cheat you, if you let them.
The new face of institutional spying/cheating seems to be, given that everyone now knows the above, a collective, shared spying/cheating.
The web project, with embedded Ping Federate config (but no license), is here. It assumes you have an already working "OAUTH2 Playground" installation for Ping Federate – from which baseline we make one config change. This adds a second redirection URI to the existing "client" (aka SP) showcasing the authorization_code grant.
Note, adding this redirection URI to the so-called "ac_client" breaks the Ping-supplied authorization_code demo. To make it work in a config with multiple redirect URIs, ensure the consuming client supplies the requested redirect URI. By default, it supplies none – thereby citing the default (and assumed singular) redirection URI. Since there are now multiple choices for the server, there is currently no server-side mechanism to designate which of multiple URIs is the default.
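To make that concrete, here is a sketch of the authorization_code request with the redirect_uri supplied explicitly (the endpoint and URIs are placeholders; ac_client is the client name from the playground config):

```python
# Sketch: build the authorization_code request with an explicit
# redirect_uri, so the server knows which of the registered URIs to use.
# Endpoint and URIs are placeholders.
from urllib.parse import urlencode

def auth_request(endpoint, client_id, redirect_uri, state):
    params = {
        "response_type": "code",       # the authorization_code grant
        "client_id": client_id,
        "redirect_uri": redirect_uri,  # must exactly match a registered URI
        "state": state,
    }
    return endpoint + "?" + urlencode(params)

print(auth_request("https://pingfed.example.com:9031/as/authz",
                   "ac_client",
                   "https://sp.example.com/callback",
                   "xyz123"))
```

Omitting the redirect_uri parameter is what makes the stock demo break once a second URI is registered, since the server can no longer infer a single default.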
In short, it's just a standard ASP.NET web forms project built by Visual Studio, showcasing the ASP.NET integration with the dotnetOpenAuth libraries and the new simple membership providers, etc. The project adds an OAUTH2 provider, allowing the project to talk to Ping Federate as an OAUTH authorization server (a glorified name for an IDP) as well as Google, Live, Yahoo, etc.
To build on Windows Server 2012, use the web installer to install Visual Studio 2012 (with the Azure SDK). Also use the web installer to install the Windows Identity Foundation (WIF) component. (The optional SDK installation is not needed.)
The best experience happens when you first add the Ping Federate self-signed SSL certificate to the root store of the host; and second use IE10 and configure its https settings to NOT warn on certificate/DNS name mismatch. Alternatively, install fiddlertool.com and configure it to intercept and decrypt https connections, without warning about server certificate errors.
By fiddling around with the working settings in Office 365 talking websso to the Ping Federate IDP, and setting some of them to URLs of our own IDP, we managed to replace Ping with our own implementation based on entirely typical pages built from the WIF toolkits.
We note that going to the office portal "site" redirects to the federation gateway at microsoftonline.com, which IDP-proxies to the realty IDP.
For unknown reasons with IE10, the posted XML (with an action verb beginning with urn:, since that's the form MicrosoftOnline uses) induces the browser to try and select an app (rather than post the form).
But we are making SOME progress, and seeing how to land on distinct RP sites (e.g. outlook webapp) sitting behind the Federation Gateway. Whether we use this or Ping Federate, it's all useful knowhow.
Set-MsolDomainFederationSettings -DomainName rapmlsqa.com
Earlier, we used the WSE3-era library suggested by Ping Identity to talk to the STS, presenting a username token in return for a SAML2 token.
We replaced that code with WIF code running in dotnet4.5, much as Ping themselves suggested for the KerberosWsTrustBinding. Obviously, we used the UserNameWsTrustBinding instead – which still comes from the "legacy" microsoft.identity DLL (as do some string constants). The rest of the functionality comes from native dotNet 4.5 DLLs, now.
where xmlToken is decoded (pretty abstractedly) as
given Ping Federate issued the following:-
Perhaps I'll get in trouble from the old men keeping the faith; but I get the impression that upon joining 1939-era GCCS, Turing quickly learned how to play the game. It was an old Churchillian, mostly incompetent WWI-era game – perhaps best represented by the (failed) defense of Norway. He learned to join the world as it was (since it denied mathematics and probably had no end of put-downs for any possible accomplishment potential); and seemed to learn to fit in. Let's assume that it was something about the mixed military/civilian world that enabled an outfit of misfits to come together.
I have no doubt that the abstract problem of random and quantum walks on graphs was quite "in the mind" of Turing and other math types of the era. One has to remember, unlike the teaching that the average 1 year old gets about (old) physics, quantum mechanics and relativity were, in 1930, brand new – a concept for intellectuals that one should assume was as earth-shattering as Henry Tudor dumping the Pope. Finally, freedom (from the old science constraints of dour, depressed Newtonian thinking)!
Now, while he may have had such theoretical constructs in mind (which provide a reasoning framework), there is no suggestion so far that any "underlying theory" of crypto was driving the work on enigma. Of course, we still don't know quite WHAT Cambridge was up to, pre-war – or what the real connections were (as they were in the 1990s) on code making and breaking. Knowing England, it was all class based.
Since PingFederate also has an (entirely standard, Java) .jks-based trust store for the certs used by its SSL endpoints (actually hosted by jboss), everything we see here could work there. Indeed, the Ping Federate console does a better job, nicely hiding the need to use the java tools. If the SSL is using a server-side SSL offloader card (built into the NIC), then the console can be doing key loading directly into its trusted boundary.
Assume it does.
Note how that OCSP root is not imported into the trust store of the SSL engine – meaning the SSL engine is not enforcing the (out of band) OCSP process for the layer 4 client entity certs, used for peer entity authentication.
And note that the CSP that talks to the card crypto boundary to pull the attribute fields (cert values) is not mapping the intermediate certs into the right windows store. Should make you rather suspicious of the quality level of all this.
But, as it says clearly: this is some half-assed SIPRnet-not-NIPRnet thing. This is feel-good stuff.
http://qip2011.quantumlah.org/scientificprogramme/movie.php?id=1002.2419 (slide show with context…)
http://arxiv.org/abs/1002.2419 (paper version)
Does a great job of putting into simple geometric diagrams notions of eigenspectrums and the change of “representation” when one uses the quantum apparatus of analysis.
This is actually a return to the 1930s – when folks were far more comfortable "modeling" on the complex (unit) circle.
Perhaps it's inappropriate to project this material back onto interpreting "what's going on" in Turing's On Permutations manuscript, but using this (excellent) minimal set of concepts we can do so. The parallels are quite amazing.
First, I'll assume it's undoubted that Turing dominated graph theory and the theory of halting; and that he also dominated the central limit theorem. In the topic set, rather than define a Turing machine that so interprets graphs and halting on "marked states" as a number whose binary fraction recurs forever (once one has reached the stopping/halting state), we just work with the graphs themselves. The random walk, using normal cbits, allows us to model a stationary distribution – and capture homogeneity. Whereas Turing said K is the constant density, folks here say: call it 1; it's the largest eigenvalue in the spectrum. Much as in linear cryptanalysis, every other component-density making up the overall stationary distribution is (on the "eigenvalue scale") some distance from K (or 1). In cbit space, one is projecting onto the constant vector.
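A tiny cbit illustration of that stationary-distribution picture (my sketch, not from the paper): power-iterating a lazy random walk on a 3-cycle drives any starting density to the constant vector, the eigenvalue-1 eigenvector.

```python
# Tiny cbit illustration: a lazy random walk on a 3-cycle is doubly
# stochastic, so its stationary distribution is the constant density
# (the eigenvalue-1 eigenvector is the constant vector).
def step(dist, P):
    n = len(dist)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

# Lazy walk: stay put with prob 1/2, move to each neighbour with prob 1/4.
P = [[0.50, 0.25, 0.25],
     [0.25, 0.50, 0.25],
     [0.25, 0.25, 0.50]]

dist = [1.0, 0.0, 0.0]     # start concentrated on one node
for _ in range(100):
    dist = step(dist, P)

print(dist)                # ≈ [1/3, 1/3, 1/3], the constant density "K"
```

The other eigenvalues of this P are 1/4, so the distance of any component-density from the constant one shrinks geometrically, which is the "eigenvalue scale" reading above.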
When one now uses a quantum walk (instead of a random walk), one is thinking in terms of qbits (rather than classical bits, or cbits). Now we need unitary processes (i.e. reversible processes). Of course, Turing built such a model out of a sequence of enigma-style rotors, considering the sequence of several inputs and outputs in a chain, to give him his model of a unitary process.
Now what is interesting about the modern material is that the author constructs a "channel" – an "interpolation" under the weight function of probabilities that either the marked density is in operation (P') or the unmarked density is in operation (P). The joint density (of the channel) can then be modeled (as Pi(s)), and one can project onto this "state vector".
Now, it's highly intuitive that the density of U and that of M are a bit like, in the applied math we did at 16, the sin() and cos() contributions – allowing projection distances to be calculated.
Now, "thinking in phase space", we can leave behind the cbit analogy and just THINK in terms of phase angles – since we just MOVE TO a measure of the set (of marked/unmarked vectors) which is in the eigenvalue basis. Now the angular rotation from the stationary distribution of the marked covering graph is the "distance" of the projection onto the current state of the joint density, of this channel.
Then we see what we saw in Turing, considering the quadratic relationship. Turing was actually focused on making a code (rather than the corresponding cryptanalysis problem). So we see him using particular properties of normal subspaces to create a particular set of marked nodes, and he wants to show that indeed, in superposition space, there is a uniform probability of any state evolving, at some limiting distribution of his linear functional.
Ok, That essay gets 10 for content, 2 for style!
OAUTH2 and openid have become a US government contractor fiesta de caja – or a nice little earner, to use a London expression. And the US taxpayer pays. Above, I show the true face of OAUTH2. It's just what you had before (but now it no longer interworks). Of course it "WILL" in the interim future (when filthy lucre flows more liberally).
Why would the US be complicit in this charade? Because it doesn’t want the old stuff to interwork. It wants only the new stuff to interwork – and only when it is subject to “new” rules on interworking that have no technical requirements. (They do have political requirements, concerning how the internet is to be indirectly used for US warfare planning processes.) Of course, the trusted vendor maintains this series of parenthetical charades.
The picture comes originally from http://self-issued.info/docs/draft-ietf-oauth-v2-28.html#anchor7. Its annotations are mine.
Note how in my "rendition" the authorization server is now a classical guard, governing access to resources. It's precious little different to a Unix kernel's ACLs guarding the attributes in the password file! Note how the only OAUTH-style resource anyone is interested in today is… ahem, the user record whose properties might enable some website to customize its behaviour (Hi Peter!) without having to ask me to fill out an account-signup form. In other words, the combination of an Authorization Server and a Resource Server is an "IDP" – the party that does user challenges by one means or another, and then issues one or more assertions containing a list of name/value pairs. Just as SAML allowed 19 parties to divvy up the formal roles on the formal casting list, so does OAUTH2. Of course, only 2 matter: the Audience and the Company.
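The guard reading can be sketched in a toy form (all names hypothetical; this is the architectural point, not OAUTH2 wire protocol): the authorization server issues an opaque bearer token, and the resource server uses it to release the guarded user record, so together they behave as an IDP issuing name/value assertions.

```python
# Toy sketch (all names hypothetical): an authorization server issuing
# opaque bearer tokens, and a resource server guarding the only resource
# anyone asks for, the user record. Together: an "IDP" issuing
# name/value assertions.
import secrets

USER_RECORDS = {"alice": {"upn": "alice@example.com", "display": "Alice"}}
TOKENS = {}

def authorization_server_issue(user):
    token = secrets.token_hex(8)   # opaque bearer token
    TOKENS[token] = user
    return token

def resource_server_get(token):
    user = TOKENS.get(token)       # the "guard": no token, no attributes
    if user is None:
        raise PermissionError("invalid token")
    return USER_RECORDS[user]

t = authorization_server_issue("alice")
print(resource_server_get(t))      # the name/value "assertion"
```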
The website seeking to borrow the user record from the IDP is just the SP, dressed in complicated new clothing. Yes, it's an old joke; it's all sold by the OAUTH2 tailor (in the eyes of the Emperor, anyway); and yes, the public can see through the joke.
Now, OAUTH does have some element of modernization in how this or that party at the protocol bash interacts. It even gives them all a new sixties name (groovy "authorization grants"). Arguably, the new names are better than the old ones (the 20-year-old Liberty "name federations"). But a spade is a fork is a digger – a muck-raking tool at the end of the day.
Now, given the nature of the folks who I saw running the shebang, it really doesn't surprise me that folks reinvented the wheel, playing the global politics like Philip (father of Greek song-and-dance act: Alexander). When you conquer Persia you might as well start wearing Persian robes, like the local boys; dispose of the annoying Eunuchs in the old political class with an impromptu head-on-pole party; and generally take on the mantle of the former ruler.
Quite how this OAUTH-metaphor-as-history ends, getting to the Ptolemies ruling Egypt and getting it all on with Caesar, Mark Antony, Augustus and Cleopatra in the great “.asp ending” to the play… I don’t know. But let’s wait and see how the OAUTH2 saga gets to tell the same old story. The ending is assured.
Those who designed the early internet we still live with were businessmen – perfectly happy to rig the national and military telco infrastructure to suit the interests of their (US-centric) business models. This meant folks would use “R&D” to “make the case” for that which they wanted anyway: revenue-generating and long-term contracting practices that would underwrite huge capital investments, which in turn gained them access to huge loans (for other business-sector ventures applying the same kind of thing now to, say, the intelligence, academic, or commercial internet).
One has to look at IPSO and STU-III as two outliers in this space.
Let’s assume the above simply talks about end-user certs bearing security labels, and device-certs loaded in a trusted store on the device, so that device “capabilities” limit what user-certs might seek to have the device be used for.
This is rather a different world from the “secure IP phone” of the 1970s, in which the phone’s DCE/DTE interface could label the outgoing IP packet with an IPSO marking (much as today’s internet apps mark ethernet frames with priority markings, which the intelligent switch then acts upon, or not, depending on whether the marking device is trusted).
http://tau.ac.il/~tromer/acoustic/ – valicert root keys. A little-known fact is that I was MC at the generation of three CA/SSL root keys still used widely. The CPUs were spied on by a truck/trailer placed in the car park of the adjacent building. My assumption at the time was that what folks wanted was the primality-testing evidence. The ceremony was observed by Price Waterhouse. I always assumed that one of the individuals, not acting for the firm but for others to whom the firm owed a ‘higher duty,’ participated in facilitating recording activities, at the required fidelity. It was interesting to watch the charade (on video playback). Not citable. No permission granted for linking or downloading content. This is marked ‘peter fouo’. You may not precis, paraphrase or quote even 1 word.
We extended our ASP.NET OAUTH2 provider so that its GetUserData method, called as part of ValidateAuthentication, talks to the Ping Federate authorization server’s STS endpoint, asking it to validate the access token received earlier (presumably via a web-service call from the resource “client”). In response, it presents a new token type (Ping Federate proprietary), which contains a field called (happenstance) “access token”. Parsed as a JSON object, it’s an array of name/value pairs, sourced ultimately from the Ping Federate access-token attribute-mapping screen.
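To make the shape of that response concrete, here is a minimal Python sketch (not the actual C# GetUserData code; the nested-JSON layout and the sample field values are assumptions based on the description above) of unpacking the proprietary validation response into name/value pairs:

```python
import json

def extract_user_attributes(validation_response_text):
    """Unpack the (assumed) Ping Federate validation response: the outer
    object carries a field that happens to be called "access_token"; its
    value is itself a set of name/value pairs, sourced from the
    access-token attribute-mapping screen."""
    outer = json.loads(validation_response_text)
    inner = outer["access_token"]
    if isinstance(inner, str):  # it may arrive as an embedded JSON string
        inner = json.loads(inner)
    return dict(inner)

# Hypothetical response, for illustration only:
sample = json.dumps({
    "token_type": "validated_token",
    "access_token": {"subject": "alice", "role": "user"},
})
print(extract_user_attributes(sample))
```

The resulting dictionary is what the provider then hands to the account-linking code.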
This allows us, finally, to say we have built an ASP.NET OAUTH2 provider that talks to Ping Federate, since we see the final attempt to do some account linking based on the message exchange.
Well, that was not too bad. About 5h, all in all?
Well done, Microsoft (ASP.NET team). Well done, andrew@dotnetOpenAuth. And well done, Ping Identity (for showing what to do). And well done me, for being a second-class type who won’t give in to the American putdown labeling attached to me (and 5.5 billion others, who are not “exceptional”).
In the post “ASP.NET OAUTH2 provider to Ping Federate’s Authorization Server – part 3” we made an ASP.NET website, acting as SP, initiate a request against an OAUTH authorization service. Our provider class got back the authorization code – a one-time code with which to go get the very first access token. Getting the latter is now our mission:
There is not really a lot to say… except do the above. All I changed from the ACS implementation was the endpoint (to /as/token.oauth2); and I added the “ignore SSL cert validation errors” setting (because Ping Federate is still operating with a self-signed SSL server cert).
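For reference, the code-for-token exchange can be sketched in Python (the host, port, and vendor credentials below are hypothetical; the real work happens inside our C# provider class):

```python
import json
import ssl
from urllib.parse import urlencode
from urllib.request import Request, urlopen

# Hypothetical host; substitute your own Ping Federate server.
TOKEN_ENDPOINT = "https://pingfed.example.com:9031/as/token.oauth2"

def build_token_request_body(code, client_id, client_secret, redirect_uri):
    """Form-encode the authorization_code grant exchange."""
    return urlencode({
        "grant_type": "authorization_code",
        "code": code,
        "client_id": client_id,
        "client_secret": client_secret,
        "redirect_uri": redirect_uri,
    }).encode("ascii")

def exchange_code_for_token(code, client_id, client_secret, redirect_uri):
    # Ping Federate is still running a self-signed SSL cert, so for the
    # demo ONLY we skip certificate validation. Never do this in production.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    body = build_token_request_body(code, client_id, client_secret, redirect_uri)
    with urlopen(Request(TOKEN_ENDPOINT, data=body), context=ctx) as resp:
        return json.loads(resp.read())["access_token"]
```

The “ignore cert errors” context mirrors the SSL-validation override mentioned above; it is demo scaffolding, not advice.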
Obviously, the next stage is to do the same again, this time using Ping Identity’s proprietary validate_token access grant type. In the callback framework that means calling
In the post “ASP.NET OAUTH2 provider to Ping Federate’s Authorization Server – part 2” we made an ASP.NET website, acting as SP, initiate a request against an OAUTH authorization service.
By suitable selection of the port in the code, we get a round trip in which the response is delivered to our handler, as desired. It has all that it needs to be handled by the waiting provider (instance). Obviously, it also has the authorization code, as shown.
Now, the return handler expected to see an /oauth2 component in what ASP.NET programming calls the pathinfo property. In this build, for .NET 4.5 and perhaps a change of ASP.NET version, by default the “page” identified by the returnURI supplied by the dotnetOpenAuth/ASP.NET framework I’m using on this host does not carry an .aspx suffix. Thus the pathinfo (the path fragment after the “page”) is not determined by the pipeline upon response processing (since the paradigm has changed). To fix this (in a hacky manner), we simply amend the redirectURI in our class to add .aspx to what we assume to be the (old-paradigm) page name, and update the corresponding Ping Federate vendor record to include the .aspx component. Obviously, you would do better in production code!
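The hack amounts to this string surgery (a Python sketch; the page name and pathinfo are illustrative, and the real change lives in our C# provider’s redirect-URI handling):

```python
def add_aspx_suffix(return_uri, pathinfo="/oauth2"):
    """Hacky demo fix: give the "page" part of the return URI an .aspx
    suffix, so the ASP.NET pipeline will compute a pathinfo when the
    OAUTH response comes back. Production code should fix the routing
    configuration instead of patching strings."""
    if pathinfo in return_uri:
        page, sep, rest = return_uri.partition(pathinfo)
    else:
        page, sep, rest = return_uri, "", ""
    if not page.endswith(".aspx"):
        page += ".aspx"
    return page + sep + rest

print(add_aspx_suffix("https://sp.example.com/RegisterExternalLogin/oauth2"))
```

Remember the matching change: the Ping Federate vendor record must list the .aspx form of the returnURI too.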
Now we can start to debug our response processing! What our (bug-fixed) initial response handling does is simply unpack the OAUTH-style state into the form in which dotNetOpenAuth/ASP.NET wants it handled.
Finally, the method in our own provider class for verifying the authentication (response) is invoked. It duly invokes its base-class method, which in turn calls our subclass’s appropriately named QueryAccessToken method. Its job will be to exchange the authorization code for a token, minted by the STS feature of Ping Federate!
This isn’t hard, is it!!
Let’s do token minting (and then attribute collection) later!
In the post “ASP.NET OAUTH2 provider to Ping Federate’s Authorization Server – part 1” we got ourselves a basic implementation of a provider class for OAUTH2. It cooperates with a set of ASP.NET page handlers that build upon the events and messages of the OAUTH2 authorization_code procedure. In short, those add value-added “request formation” and “response handling” procedures on top of the OAUTH protocol, where the latter involves lots of post-protocol account-linking and database work.
We invoke the OAUTH process by clicking the button that chooses our “Ping Federate” server as the authorizing/authenticating partner of this SP website.
The page handlers invoke the dotnetopenauth framework, which finally calls the first method in our provider class. The output of this method is the returnURI address, with suitable local parameters. We see two local parameters (at 1) added to tie the incoming response back to the correct outstanding protocol state block (waiting for responses from PingFederate): they identify the class of provider and the unique instance/session identifier (or “SID”). The handling at 2 allows this site to be a protocol bridge, where the returnURL parameter stored in a cookie (storing this address “even lower” in the stack of return values) will allow the indicated OAUTH response-message handler in turn to invoke the next outstanding layer of protocol activity retrieved from the (cookie) stack (i.e. perhaps generate a ws-fedp response).
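The decoration at (1) can be sketched like so (Python rather than the framework’s C#; `__provider__` and `__sid__` are the parameter names the ASP.NET/dotnetOpenAuth plumbing appears to use, and the cookie-stack bridging at (2) is only hinted at in a comment):

```python
from urllib.parse import urlencode

def build_return_uri(base_uri, provider_name, session_id):
    """Decorate the return address so an incoming response can be matched
    back to the correct outstanding protocol state block: the provider
    class name plus the unique instance/session identifier ("SID").
    A protocol bridge would additionally push a lower-layer returnURL
    onto a cookie-held stack before redirecting (not shown here)."""
    return base_uri + "?" + urlencode({
        "__provider__": provider_name,
        "__sid__": session_id,
    })

print(build_return_uri(
    "https://sp.example.com/RegisterExternalLogin.aspx/oauth2",  # hypothetical
    "PingFederate", "8f3a2b"))
```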
Rant Alert: For privacy reasons, this information is hidden from the nosy American IDP and its (probable) policy of sharing information on who is associating with whom with NSA or DHS or some contractor proxy in the UK or US (to skirt local laws). Sigh (at the duplicity… of IDP vendors).
Note that the enforcement of port 80, above, may need to be another port value, depending on how you debug or deploy. You also may not DESIRE to downgrade from an https indication…
The next step is to learn the address of the OAUTH authorization server (i.e. a Ping Federate endpoint). This happens in the GetServiceLoginURL method. In short, it formulates the request message to the OAUTH authorization service, moving pseudo-state information FROM the returnURI to its correct position in the request. This is important – especially when working with conforming OAUTH protocol servers (since they handle the latter form of the state information correctly, and typically ignore forms that attempt to leverage parameters on the redirectURI).
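That request formation can be sketched as follows (Python; the endpoint, client id, and the “provider:sid” state encoding are all assumptions for illustration). The point is that the provider/session hints travel in the standard `state` parameter, not as decorations on the redirectURI:

```python
from urllib.parse import urlencode

def build_authorization_request(authz_endpoint, client_id, redirect_uri,
                                provider_name, session_id):
    """Formulate the authorization_code request. The pseudo-state
    (provider name + session id) travels in the standard 'state'
    parameter; a conforming server echoes 'state' back verbatim, and
    typically ignores extra parameters smuggled onto the redirectURI."""
    state = provider_name + ":" + session_id  # assumed encoding
    return authz_endpoint + "?" + urlencode({
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "state": state,
    })

url = build_authorization_request(
    "https://pingfed.example.com:9031/as/authorization.oauth2",  # hypothetical
    "ac_client",
    "https://sp.example.com/RegisterExternalLogin.aspx/oauth2",
    "PingFederate", "8f3a2b")
print(url)
```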
Since Ping Federate is programmed to return messages only to registered returnURI addresses, we must update the vendor’s list (of potential return endpoints):
Note how in the GetServiceLoginURL call we actually altered the return URI prepared by the earlier method, adding /oauth2 as a pathinfo element. And note how we register this in PingFederate, too. Between the registration, the indication on the request, and the additional signal represented by the “oauth2” pathinfo, state can be handled upon return. We see how this happens later.
We now see request handling go all the way through to the various checks, so that an IDP challenge occurs. Clearly we are doing SOMETHING right.
We will address the second half of “response processing” in the third part of the series.
Presumably, IBM had to consider the same argument – given the amount of silicon taken up.
Perhaps we see that there were criteria other than cryptographic strength. There are also cryptographic CONTROL-plane aspects one wants to enforce. And one may want to make a stacked die (even back in 1975), assuming that the same silicon is also going to be driving a crypto-search machine (not an encipher/decipher function machine). Remember, IBM’s largest customer was NSA – for the language/sorting function of the 1970s-era agency. It’s only by the 1990s that Sun gets a hold.
It also occurs to me that one really doesn’t want to use DES ECB with a source that has any language characteristics (i.e. is other than IID). The original (1977-era) model of (triple) DES ECB for 57-bit (yes, 57) key wrapping is one use; as a cipher one has a different use, given that initial permutation.
Whether a 1950s-era Hagelin cipher is implemented on a lug/rotor contraption or a hand-held calculator makes no difference. It’s not chaotic, and does not seek to base its strength on the notion of the distinctiveness of two evolutions of probability densities. Rather, it’s very linear, and therefore susceptible to linear-programming attacks engaged in “approximation”.
And so, when we see the NSA (Cryptolog) phrase “manual cipher”, read Hagelin machine (circa 1970/1980, not 1945). Believe the company when it says its protections were fulfilled as designed – which is not to say that the tweaks made were not sent to NSA, or that the micro was not so irradiated by Motorola as to induce emissions that a sensitive detector could pick up (later).
To be fanciful, imagine that the Swiss calculator (with chip dies made in the “custom” Siemens ICC fab) is now in one’s super-secret Iranian military-intelligence facility… full of well-indoctrinated coding clerks doing best practice. But they also go home, taking “information” with them. Of course it decays pretty fast, so one needs a sensitive detector to be scanning the human “carrier”, who is a damn sight more predictable and recurring in his habits than the algorithm. Use the unsuspected “easy vector” to compromise the hard vector (pun).
More practically, assume that the micro is induced to make errors in its fetch-execute cycle which subtly alter the path of the “software” finding “lugs”, so as to leave a poker-style tell (one that reduces the search, if you know the tell).
Now, at the same time, one has to be careful, since all those 20-year-old calculators still exist (and our friends in Iran can go test them TODAY, with our collective and different level of appreciation and compute power). Our Iranian friends (and there are some, amongst the religious nuts of that and related regions of the planet) can even today GO BACK in time. They can find out now how hard it would be to perform the search, assuming there are tells. So the first thing to do is FIND the tell(s), once you assume they exist.
There is nothing that the highly indoctrinated subgroup within the US hates more than to be bested in the secrecy world of double-think (even 20 years later!). If you want to “annoy” the old men, then “showcase” the spying (of 20/30/40 years ago), leading to increased distrust (today). Now you are playing Kayla-style (fictional) politics, even with formally-irrelevant crypto history. But it still has power; and it’s entirely legitimate POLITICAL power.
Let’s assume that the block on releasing the likes of the 1945-era Tunny report came to an end by 2000 simply because the era of reading “manual ciphers” was just over by that point. Those who were going to be duped were duped; and new techniques are what is relevant. So consider that perhaps we have the Iranian people’s revolution against the local dictator to thank, for putting people before empire. We can also assume that continuing blocks on Testery reports from 1945-ish are still in place because they hint at “the human skills” needed when puzzling over even modern cryptanalytical solutions.
Well, let’s take all that sci-fi and fantasy now, and go find some reality:
giving us, with some anti-clockwise rotates:
Now, I wish I could remember where I was reading, just the other day, something on normed spaces, in which one wanted to know which of the linear vectors was “nearest”. This contrasted with picking the nearest point. Perhaps it was in the PCA eigenspectrum-decomposition material I was reading from UCL.
Hardly! I have good reason to believe that the reference is to Motorola GSTG (as it was, and is no longer). And I worked for it, alongside certain persons who, 10 years before, were “fully indoctrinated”.
The fun times in that job were two visits. On one, we went and looked at lots of old military boxes (presumably full of circuits) from the Caneware projects. The other was a closet holding the root key (phone unit) for the worldwide civilian secure-phone system (the civilian version of the STU-III, with different electronics and ciphering). Next door (in a non-compartmented area) were the folks selling the Type I LAN encryptors (with their Red Book distributed-TCB concept) and the “secure” satellite-phone system.
What was MOST interesting was the engineering method in use.
I remember meeting the general manager. It was quite fascinating to meet someone who had spent a lifetime doing military contracts (in a classified facility). I don’t think he knew that another world existed. All they really knew was that NSA had collapsed from within, and the old world had ended.
Boy does the internet not make a hash of things.
First, realize that in 1990 a Sun 3 workstation just happens to have the ability to support custom boards, with lots of custom-programmed (in the VLSI sense) LUTs. Your job as a cryptanalyst, having been briefed by the cryptologicians, is to “intuitionistically” break this or that variant of a Hagelin cipher. If you have the information about the tweaks made to the general mechanism for country X, so much the better. Your job is to sit there, in a remarkably similar manner to the 1945 Colossus cryptographers (in the UK nomenclature), and let your human brain do what the computer cannot. But DON’T underestimate the combination of human-driven (computer) search! There was a time when I could tell you WHICH (concert) pianist was playing (on a major label’s recording)! I could hear the intonations of his/her favorite piano, and the way his/her particular muscles were tuned to its touch and the response of its particular mechanism, for the particular way the hand/arm would have to move to play certain note sequences.
So assume the cryptographer can do the same thing with his/her favorite cipher “music”, as it intones its way on an FPGA “instrument”, with display on the Sun 3 framebuffer! It’s an art, similar in some ways to reading x-rays or MRIs.
Now I think about it, I had great fun also re-learning to play on a (high-end) electronic piano, with no traditional mechanism, no soundboard, no vibrating strings buzzing at your eardrum. Playing with 16-voice polyphony was great fun too (and seemed more than the brain could deal with, back then), as was using a PC to design one’s own PCM-encoded waveforms that could be uploaded. It was also fun re-learning to “touch” a keyboard, thinking in terms of inducing the responsive computer to change the attack and decay filter for the particular waveform – quite a different musical instrument from the true mechanical piano (even though it looked like one).
In an earlier post we set the stage for integration of an ASP.NET webforms application with the Ping Federate OAUTH2 authorization server. We showed that the application could talk to some other OAUTH-like provider (Google). Let’s get to the next stage and try to develop a “provider” class that plugs into the ASP.NET/dotnetopenauth framework for those OAUTH2 websites seeking to talk to Authorization Servers, IDPs, and graph endpoints (web services supplying user records!).
Remember! Don’t get frightened. The language of OAUTH2 is all designed to intimidate you, making it seem ultra complex such that only “professionals” and “experts” have much of a say. It’s VERY simple, in fact. Anyone can do this (even me, I hope)!
First, to our host, let’s add the Windows Identity Foundation (and SDK).
Then, to our source code project we can add a reference to the “microsoft” identity DLL, enabling us to work with “claims”:
Since we are starting our “OAUTH2 provider” from the class we already built to talk to the Azure ACS-enabled Authorization Server, we imported that prototype class into our project. Now we can fix up the associated packages: first we resolve the identity class (above), and then we resolve the reference to the Newtonsoft JSON DLL.
Since access tokens from Ping apparently come in JSON form, we will add a NuGet package for handling the parsing of JSON objects:
We also added references to two standard .NET DLLs: system.runtime.serialization and system.identity. These round out the JSON and claims object support.
We now have a compiled provider class (not that it works yet). It basically exposes the right interface, enabling vendor-specific behavior to specialize the dotnetOpenAuth framework integrated with ASP.NET.
Before we can register this provider, we need to collect some data from the Ping Federate configuration: the endpoint addresses for (1) the STS that mints access tokens and (2) the authorization server that administers the consent and issuing of “persistent” grants, plus the application-client’s “vendor” credentials:
This allows us to specify a registration (again noting that the data is neither perfect nor working at this point). But we are making SOME progress. We have a view of the landscape on which we can now paint the actual players.
The last piece we need, recalling what we learned from a similar exercise in talking to the Azure ACS OAUTH2 procedures, is some code added to the suggested ASP.NET page handlers. This handles the syntax of the OAUTH2 “state” information (containing the ASP.NET provider name AND an anti-CSRF value) and the authorization code. For this we add to the handler a recognizer routine that detects the authorization_code coming back from Ping Federate. It basically intercepts the response and simply prepares the fields so that they can be handled either by the provider class or by the ASP.NET provider framework, ensuring the message goes to the correct provider-class instance.
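A Python sketch of such a recognizer (the “provider:sid” state encoding and the URLs are assumptions for illustration; the real routine lives in the ASP.NET page handler):

```python
from urllib.parse import parse_qs, urlparse

def recognize_authorization_response(callback_url):
    """Detect an authorization_code response and pre-digest its fields:
    the code itself, plus the provider name and anti-CSRF/session value
    carried in the OAUTH2 'state' parameter, so the framework can route
    the message to the correct provider-class instance."""
    query = parse_qs(urlparse(callback_url).query)
    if "code" not in query or "state" not in query:
        return None  # not an authorization_code response; let others handle it
    provider, _, sid = query["state"][0].partition(":")  # assumed encoding
    return {"code": query["code"][0], "provider": provider, "sid": sid}

print(recognize_authorization_response(
    "https://sp.example.com/RegisterExternalLogin.aspx/oauth2"
    "?code=xyz123&state=PingFederate:8f3a2b"))
```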
Note: the Convert.ToInt16 may want to be more general; for example, Convert.ToInt32() will allow for a wider range of valid port numbers.
Since this all compiles, we can tomorrow start to debug it all – and make it all fit with the Ping Federate way of doing things. Let’s cross our fingers and hope we can make something work that plays the role of the “ac_client” in the OAuth2Playground web app – which we have outgrown.