[0804.0011] Classical and Quantum Tensor Product Expanders


Building on Seberry's paper on bent functions and the math background of unitary processes, permutation matrices, spectral gaps, trace eigenbases etc, we get to glimpse material and application topics withheld from any such paper: cryptographic method for wheel wiring and mixer analysis. The overlap is fascinating: between maximally mixing quantum states; supercomputers with Hamming-weight sieving capabilities, augmented by 1980-era probabilistic search capabilities (using actual nuclear chain reactions to cover the combinatorics); and more modern attempts to now define a reliable multi-qubit processor array for indeterministic processes with 1950-era wheel wiring.

One sees why even the topic of rotor wiring and analysis (of what constitutes a good set) is even today so highly controlled, with the usual academic deceptions and participation.

Posted in coding theory

254B, Notes 2: Cayley graphs and Kazhdan’s property (T) | What’s new


Cryptographic design method for sigaba index rotors?

Well within Rowlett's capabilities.

Posted in coding theory

NSTIC the reality–wot a mess (typically); the free lunch to be avoided

So once I had 19 login accounts, to each of which was bound some authorization information. It was a pain, and SSO was born. Now I have one login account and life is good.

Or it would be, were it not that I now still have 19 authorization licenses bound to the one login.

The point is that the mess didn’t go down with NSTIC-inspired nationally-coordinated login – it went UP. Folks just moved the complexity point along the chain a little. Now there are just lots MORE places where control is exercised – any one of which can go wrong.

So, now I have an MSDN account, bound to home_pw@msn.com – which issues various codes that can pay for authorization licenses – such as being a developer/publisher of store apps and/or phone apps (which are different!). I have Azure subscriptions tied to home_pw@msn.com and others tied to home_pw@outlook.com. Logging on to the Azure portal can also happen via the integration with Office 365 (as an IDP), which can delegate to a corporate ADFS – so logon via that ADFS issues the Office 365 account’s name. When I “deploy” to the Azure hosting world, however, anyone who owns a .publisher credential can do so. When I want to RDP to the resulting site, I have to use a username/password (that is not tied in any way to Microsoft accounts, Office 365, ADFS, or anything else).

To make Visual Studio 2012 local debugging work properly, I have to run it as Administrator – except when running store apps (which don’t allow the process to run in a group tied to the local Administrator). The Windows accounts are actually Microsoft Accounts (formerly known as Live IDs), in which Administrator@domainAdmin@localAdmin –> home_pw@msn.com and store@domainAdmin –> home_pw@outlook.com. While store apps running on the Windows Server box MAY leverage the SSO that comes from logging onto the PC with a Microsoft account, this is not true for webapps – which constantly prompt me for Live credentials while doing all the above.


Of course, I have absolutely no interest in the above – except that I have to do it to get anything done. What I’m really interested in is having my own SSO network projected through ACS which is NOT “governed” by any of the above. It’s offensive, it’s in the way, and it’s very American (a giant mess). All that happened was that there is now even greater granularity of governance control – for absolutely no benefit to me. Whatever TRUST the above may be intending to confer upon me in the eyes of consumers is wasted (since their trust in me has nothing to do with Microsoft’s proxy-NSTIC governance story). The number of third-party apps that need to connect up to the data service that mine connects up to is zero. I am not Facebook, and not attempting to be a Facebook. I just want to present an app on a device… so it goes klunk – just like on the PC.

The problem with the OpenID Connect style “device” story is that it seems to be MOSTLY about governance – attempting to have a few TTPs (acting as proxies for regulators) run stores – that project national social policies on the “trusted space” of devices – vs those evil PCs that are an open platform.

Until the devices behave more openly, I think we avoid their fancier “integration” features. Something about free lunches not being free…

Posted in openid connect

Azure AD OAUTH (vs ACS OAUTH) from Windows Phone



The picture above shows the AD client recently added to the ASP.NET “component” in the dotnetopenauth open source project. One sees, by default, that this provider is set up to protect the graph API endpoint (the “resource”).

We see that the endpoint supports the websso-friendly authorization code grant type:


We see that the token endpoint has a few quirks:


and we see that the provider client retains the tenant ID and the user OID in instance variables.

One notes how the cert handling fails to conform to PKIX… and one notes the imposition of SHA-256.


We see no handling of the refresh token (assuming there is one). We also see no evidence of “id tokens” either.

Now, what we do NOT see is how the provider and SID variables would be handled on the redirect from the authorization endpoint (since proper use of the OAUTH2 state mechanism is not in evidence). Evidently, the service principal’s redirect is not configured to go back to the ASP.NET template’s ExternalLogin page but to the AbsoluteUri of the page hosting the requestAuthorization handler (normally login.aspx). This will be interesting to track, to see how it was done PROPERLY. Login must maintain state somehow (and presumably it will be via viewstate).
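For reference, proper OAUTH2 state handling – the bit not in evidence – amounts to a mint/verify pair. A minimal sketch in Python (purely illustrative; the function names and the HMAC construction are mine, not dotnetopenauth’s):

```python
import base64
import hashlib
import hmac
import os

def new_state(secret: bytes) -> str:
    """Mint an unguessable state value: a random nonce plus an HMAC tag.
    The client stores nothing; the tag lets it recognize its own nonce
    when it comes back on the redirect."""
    nonce = os.urandom(16)
    tag = hmac.new(secret, nonce, hashlib.sha256).digest()[:16]
    return base64.urlsafe_b64encode(nonce + tag).decode().rstrip("=")

def check_state(secret: bytes, state: str) -> bool:
    """Verify the state echoed back from the authorization endpoint."""
    raw = base64.urlsafe_b64decode(state + "=" * (-len(state) % 4))
    nonce, tag = raw[:16], raw[16:]
    expected = hmac.new(secret, nonce, hashlib.sha256).digest()[:16]
    return hmac.compare_digest(tag, expected)
```

A client would send `new_state(...)` on the authorization request and call `check_state(...)` on the redirect, rejecting anything that fails – which is the CSRF protection the spec intends.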

We also see the article at http://www.cloudidentity.com/blog/2013/04/29/fun-with-windows-azure-ad-calling-rest-services-from-a-windows-phone-8-app/ that consumes the (Azure AD OAUTH2) endpoints directly, too. The latter looks like the next thing to emulate, if only because I can now so trivially swap out the native use of Azure AD endpoints and substitute my own (which happen to delegate to Azure ACS’s OAUTH2 endpoint “support”).

Posted in Azure AD, oauth

Using PingFederate-style OAUTH2 in Azure Mobile Services apps

The job code article series nicely lays out how to think about nodeJS scripts in Azure Mobile Services and their interaction with the OAUTH2 protocol. In one case, shown below, we see that perhaps our own JWT could be used, as already being minted by our PingFederate OAUTH AS emulator site (itself hosted as an Azure cloud service supported by the Azure ACS OAUTH2 and management service endpoints).



So the steps to get here would seem to be:

  • Take our working smarteragent iOS application and build our own equivalent working on an iPhone – using the Azure Mobile Services starter project for iOS apps. Of course, this means we need a web site delivering JSON services to the app, too – a role that can be played by the mobile service site.
  • We need to change the app’s client-side code so that the embedded browser goes to our PingFederate-AS-emulator /authorization/as.oauth2 endpoint, looking for a redirect bearing the authorization_code. Perhaps this redirect should target a page on our authorization server – one that induces a suitable (JavaScript) push notification to the various apps built with the Microsoft mobile client library. We can study how ACS’s HRD support and then Azure mobile scripts do this, for example. The net result is that the app’s native code gets control back from the embedded browser.
  • Do we really want the app to then use the PingFederate-AS-emulator site to convert the code into a JWT…
  • …which is then used as given in the article above?
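The third step – converting the code into a JWT at the emulator’s token endpoint – is just a form-POST. A rough Python sketch (the endpoint URL, client credentials, and field values are placeholders, not the emulator’s real configuration):

```python
import json
from urllib.parse import urlencode
from urllib.request import Request, urlopen

def build_code_grant(code, client_id, client_secret, redirect_uri):
    """Form-encoded body for the authorization_code grant."""
    return urlencode({
        "grant_type": "authorization_code",
        "code": code,
        "client_id": client_id,
        "client_secret": client_secret,
        "redirect_uri": redirect_uri,
    })

def code_to_jwt(token_endpoint, **grant_args):
    """POST the grant to the token endpoint; the access_token field of
    the JSON response would be our JWT, ready for the mobile script."""
    req = Request(
        token_endpoint,
        data=build_code_grant(**grant_args).encode(),
        headers={"Content-Type": "application/x-www-form-urlencoded"},
    )
    with urlopen(req) as resp:
        return json.load(resp)["access_token"]
```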
Posted in oauth, pingfederate

254B, Notes 1: Basic theory of expander graphs | What’s new


This time around I’m understanding most of it.

Different in tone to the Stanford prof. Here, it’s a mix of textbook and commentary (on writing textbooks). It’s quite an effective style – focusing on drilling the connectives of the brain to align with those found in the material.

Posted in coding theory

Cypherpunk sexistential tizzy

Cypherpunks have got themselves into a UK-communist-party-style tizzy – destined to create the usual “final schism” that befalls all false religions founded on inconsistent (and self-serving) doctrine.

For at least the last 5 years (if not since 9/11), cypherpunks have lived with self-denial: the desire to perpetuate the myth that anarchy won, in full knowledge that it didn’t, actually.

Today we know that SSL, IPsec and Skype were/are compromised not by the (figuratively evil) US federal govt – that easy-to-target amorphous evil (whose magnitude of evilness is a function of size) – but by corporate vendors out to make a buck… while employing and thus feeding one or two folk and their dependents. Of course, the very nature of governance means that corporate profit is a function of general compliance with public or secret policy – that stated explicitly, “or otherwise”. Such is the nature of democratic politics (hardly ideal, even in its academic form).

Is cypherpunks dead… like the corpse of the (academic relic of the) UK communist party I happened to encounter in my first job (at QMC-CS)?

It certainly seems so – from the dynamics of the end game.

Good riddance (though I’ll miss it; not that I read the endless drivel beyond, as with Marx’s first pamphlet, the founding prophecy).

Posted in coding theory

a little civil servant joke



originally cited by Cryptome.

Posted in early computing

Reso, odata, oauth2 and nstic

US realty is a political place – and rightly so. And so it was with a non-decision by the so-called RESO group not to endorse the OData approach to data APIs. Of course, things had been set up ahead of time so that this non-decision will evolve over time into an endorsement. Quite who outside is driving this will be interesting to identify. I already have some ideas. For example, the Azure AD graph API comes nicely with OAuth and OData, already!

To be honest, the direction being taken is folly; but the least folly of several choices. Competing with the legacy protocol – which is itself a distant ancestor of OData – is going to be hard, however. This will be particularly the case if the legacy protocol, called RETS, modernizes its session token and login handshake to use (quite easily) the OAuth2 handshake and its bearer access token. If it also offers a new content type, extending those negotiated today to also offer JSON, OAuth/OData is going to have a run for its money – since 80% of what OData advances beyond today’s legacy will become available within the RETS framework, for a minor refit cost. While others struggle making OData scale, those using an updated RETS+OAuth2 might capture the 80% of the interactive, RESTful uses that really matter. At that point, it’s game over – once one factors in the re-engineering costs of a native OData query engine doing huge searches.

Of course, the simplest OAuth server is a gateway, from new to old, for simple, fixed, read-only search queries – referenced using the OData mechanisms. That toy should take 10m.
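The gateway toy, sketched in Python (illustrative only: the whitelist table, resource name, and query string are made up, not a real OData-to-RETS mapping):

```python
# A toy read-only gateway: a fixed table of OData-style resource URLs,
# each mapped to a canned legacy (RETS-style) resource/query pair.
FIXED_QUERIES = {
    "/odata/Listings?$top=10": ("Property", "(ListPrice=100000+)"),
}

def translate(odata_url):
    """Translate a whitelisted OData URL into a (resource, query) pair
    for the legacy backend; anything off the fixed list is rejected,
    which is what keeps the toy simple and safe."""
    try:
        return FIXED_QUERIES[odata_url]
    except KeyError:
        raise ValueError("query not in the fixed read-only set")
```

An OAuth layer in front would simply check the bearer token before calling `translate` and forwarding the canned query to the legacy server.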

Posted in coding theory

Expanders – Lecture 3 – Part 2 – YouTube


Luca explains why the power method works, as an algorithm. In the course he explains the concepts behind the steps – which are really quite intuitive (once you recognize what certain symbols are modelling). In particular one sees reasoning about orthogonal eigenspaces, preserved by the numerical method.

While it’s fun to see this explained in matrix-centric theory (taught to every 14-year-old since 1950), it’s even more fun to see the same argument made in pure group theory. Orthogonality in 1920s relativity math is more about permutation cycle lengths, letting a cycle (term) act as a generator (in a Cayley graph), cycle closure, and self-conjugate subgroups of the symmetric group. Eigenvalues are addressed more geometrically than in special linear groups. A series of 2k+1 matrix multiplies may look more like conjugation of a cycle by a 2k+1 (or 2mk+1, rather) term.
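A minimal sketch of the power method itself (illustrative Python, not from the lecture): repeated multiply-and-normalize kills the components lying in the non-dominant orthogonal eigenspaces, and the Rayleigh quotient reads off the eigenvalue.

```python
import random

def power_method(mat_vec, n, iters=200, seed=0):
    """Estimate the dominant eigenvalue/eigenvector of a symmetric
    operator given as a matrix-vector product. Each step shrinks the
    components in the orthogonal (non-dominant) eigenspaces."""
    rng = random.Random(seed)
    v = [rng.gauss(0, 1) for _ in range(n)]
    for _ in range(iters):
        w = mat_vec(v)
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    # Rayleigh quotient <Av, v> gives the eigenvalue estimate.
    w = mat_vec(v)
    return sum(a * b for a, b in zip(w, v)), v

# Example: adjacency matrix of a triangle (3-cycle). For a d-regular
# graph the top eigenvalue is d; here d = 2.
A = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]

def Av(v):
    return [sum(A[i][j] * v[j] for j in range(3)) for i in range(3)]
```

(The triangle is chosen because it is non-bipartite; on a bipartite graph the ±d eigenvalue pair would stop the iteration from settling.)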

Posted in coding theory

Authentication with Windows Azure Mobile Services


Adding a custom (OAuth) provider to an iOS app, supported by Azure Mobile Services.

Posted in coding theory

Building WebSSO sample projects from Visual Studio for Android

On my Kindle’s browser, an Azure-hosted web forms project offering a Google (websso) login didn’t work. It quickly became apparent that the forms auth cookie presented back to the site was invalid in some way (or perhaps not present!), meaning that the SP site would loop back to the IDP. If the IDP is set for auto-logon (perhaps the second time one visits…), one gets an endless loop of websso requests and responses.

Of course, I just happen to be using ASP.NET 4.0 on DotNet 4.0 (so one can use WIF libraries easily, with websso). This turns out to be a nasty combo… till you fix it thus:



Posted in SSO

Lecture Notes | in theory


Hardly a useless word.

Posted in coding theory

CS359G Lecture 18: Properties of Expanders | in theory


Finally! We see a modern presentation of the topics addressed by Turing in his unpublished manuscript on permutations. Perhaps the editor, noting the missing title, should have called it “On Mixing”.

Looking at the very specific GC&CS terminology Turing uses, and given the extra context that this Stanford professor adds concerning the design of rotor wiring schemes (and pseudo-wiring, by extension, in the SPN of the DES) based on expander graph ideas, we can see what both were/are not allowed to say openly… concerning cryptographic design method, per se.

It’s nice to see this prof use non-pretentious language.

Posted in coding theory

Cryptanalytical Quantum computing circa 1980; building on 1950s theory

One of the interesting things about the modern (2013-era) quantum computing mechanism is that it is NOT so different in its conceptual basis from the concepts used experimentally from the 1950s. But shush!… lest folks figure that quantum cryptanalysis has been around in “practical” form rather longer than folks might realize!

Of course, just as Colossus was a cryptanalytical processor rather than a general-purpose computer following up the theoretical Turing machine of a decade earlier, assume that the quantum cryptanalytical computers of the 1980 era were not general-purpose (and fault-tolerant) computing devices. Rather, they did just one thing – just like Colossus, in the electronic era: they proved a theoretical possibility at high expense.

Back in 1950s crypto circles, folks were very much still thinking about 1930s length functions in topological spaces – searching out those special groups that allow secret or hard to compute “trapdoor” graphs whose difficulty in solution by computation denies an attacker the ability to compute the length function – necessary for solving hard puzzles.

Looking at recent theory, we can look back at what was probably considered secret cryptanalytical theory – back then.

From the notions associated with the analysis of cayley graphs we get to the heart of the security property known as graph expansion. In the quantum computing world of 1950 and 1980 alike, folks were interested in a related property – the minimum energy state for a quantum superposition. The two concepts are related, when considering optimization functions that link combinatoric properties to algebraic properties.



When computing mean-sets of graphs, under the random-walk presumption that underlies multi-particle modeling (such as the bits in a plaintext), we are used to calculating the volume of probability space occupied by the average clique of neighbors. Then we are interested in the expansion property – that only for certain cliques of relative size less than alpha·N can we be certain that the expansion ratio from mean-set members to adjacent neighbors is greater than beta·K. Of course, K is related to the minimum weight – the latter being an upper bound on alpha·N.
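A toy computation of the expansion ratio just described (illustrative Python; the 6-cycle graph and the chosen subsets are mine, picked only to show how expansion collapses for large sets):

```python
def expansion(adj, S):
    """Vertex expansion |N(S) \\ S| / |S| for a subset S of the graph's
    vertices; adj maps each vertex to its set of neighbours."""
    S = set(S)
    boundary = set()
    for v in S:
        boundary |= adj[v] - S
    return len(boundary) / len(S)

# Toy graph: a 6-cycle. A single vertex expands by 2 (both neighbours
# are new), but a half-size arc of 3 consecutive vertices only gains
# its two endpoints' outside neighbours: ratio 2/3.
cycle6 = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
```

This is the alpha·N threshold in miniature: once the clique grows past a fraction of the graph, the guaranteed expansion ratio drops.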



If we now see things in terms of the two generator terms A and B (above), we are interested in “that series of basis changes” modeled by a particular sequence of generators applied to a start state. Of course, we remember seeing this back in 1954 (or earlier) from Turing, who modeled the cyclic powers in terms of a sequence of enigma rotors, each of whose wirings represented a generator. A sequence of rotors modeled h(M) – such that the sum of the powers would be zero (and thus model the 7-point geometry or Hamming code on the hypercube).



We have to remember that, at the end of the day, all cryptanalysis is a search problem. Thus one is always interested in computing, or alternatively “finding”, the minimum energy state of a superposition system – realizing that if one JUST has the right basis sequence one can compute more effectively. This clique search for mean-sets is the “trap door” for certain groups, of course – likely to apply to DES (if only one thinks different). One tends to think of braid groups as the foundational groups able to produce expressions hard to calculate in a standard computational model, but easy to calculate (if one can figure out just the right basis [sequence] in a reasonable time).

It is interesting to see what Turing understood that Quisquater evidently does not – that for a non-Abelian group, a given sequence of generators (each term being Ci, where i runs from 1 to n distances/edges) can be re-modeled in terms of density evolution. Turing understood that finding the mean-set was equivalent to finding the minimum energy state of a superposition – or the average sampling function for sufficient trials that induce a Cauchy sequence.

Looking at the Stanford group (which has long been an NSA math front for pure theoretical topics):



The material at http://theory.stanford.edu/~trevisan/cs359g/lecture05.pdf is essentially the same material as Turing discussed in his On Permutations manuscript. What’s more, the well-written summary clearly lays out the thinking steps. Turing’s example helps too, being so tied up with early mixers – which also relate to quantum mechanics and the “chemistry” of uranium fission, etc.

Posted in crypto

Revising the dotnetopenauth oauth2 plugin for the Realty OAUTH AS

A very long time ago (like a few months), we learned about OAUTH as a protocol to talk to authentication IDPs. We wrote dotnetopenauth plugins to talk to wordpress (as an IDP) and then PingFederate (as a proxying IDP to websso partners). Since then, we have emulated PingFederate’s optional Authorization Server feature, using an Azure ACS namespace as the backend service. Today, we updated our OAUTH2 provider for ASP.NET – enabling a standard dotnetopenauth-aware web site built from the Visual Studio wizard to consume the various assertions, user credentials, and the “directory graph access points”. Of course, this all really just means we obtained an access token having logged onto Google, which gave us access to the token verification API, which duly unpacked the JWT issued by ACS… delivering “attributes” to the SP site – which then implemented an account-linking experience.
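Unpacking a JWT’s payload – the “attributes” half of what the verification API does – is just base64url decoding of the middle segment. A sketch (signature verification deliberately omitted; a real consumer must check the signature first):

```python
import base64
import json

def jwt_payload(jwt):
    """Decode the (unverified!) payload segment of a compact JWT -
    i.e. the claims/attributes handed on to the SP site."""
    seg = jwt.split(".")[1]
    seg += "=" * (-len(seg) % 4)  # restore stripped base64url padding
    return json.loads(base64.urlsafe_b64decode(seg))
```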


So as to do NOTHING but add the plugin to the assertion-consuming web project built by the wizard, we altered our AS to be maximally flexible when interworking. (It is thus somewhat insecure, relatively, on the topic of redirect URIs, state management, and the like.)

Of course, we should now change the name of our plugin – to be “PingFederate-emulation (via Azure ACS)”.

So we are now using OAUTH for authentication, which is not what it’s for. But it’s what everyone does with OAUTH. So… so do we.

Posted in oauth, pingfederate, SSO

Adding Multiple Scope support to an Azure ACS based OAUTH authorization server




The pictures above show an ACS namespace and its relying parties for a particular “rule group”, the claim issuer attached to the rule group (and thus to all the relying parties associated with it), and, similarly, the list of associated identity providers (who are also issuers). One notes that the issuer for the rule group is not an IDP (though it is an “issuer”). This distinction is critical.

The first relying party, whose name is in the rule group name, can be considered the default scope. The relationship with the issuer is established by our code, at application startup. When one adds other relying parties that associate with this same rule group (and the issuer, therefore), one can consider these to be additional scopes – to be cited, possibly, in messages to/from our AS’s OAUTH endpoint – and to which scopes (more vitally) authorization codes can be minted.

So let’s see if we can now make ACS issue a JWT bearing this limited scope. We can! But note that first we have to extend the PingFederate token-issuing UI to allow us to specify the scope (since it is treated as the audience, in the Azure ACS model):
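The scope-as-audience equivalence can be captured in a few lines (illustrative; the `aud` claim name is standard JWT, but the realm URIs and the fallback-to-default-scope rule are my own framing of the ACS behavior):

```python
def audience_matches_scope(jwt_claims, requested_scope, default_scope):
    """In the ACS model, the OAUTH2 'scope' ends up as the JWT's
    audience ('aud') claim: the relying-party realm the token was
    minted for. An AS built on ACS can therefore enforce scope by
    enforcing audience; an absent scope falls back to the default
    relying party."""
    scope = requested_scope or default_scope
    return jwt_claims.get("aud") == scope
```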


Posted in oauth

Windows Azure Cloud Emulator port 80/443

When creating a standard setup for the local Azure emulator, one typically wants to host the services on ports 80 and 443 – aping reality. In this way, we can create in IIS Express a “development environment” on production domain names (and ports); in the Azure local cloud emulator a “QA” environment (that imposes more realistic firewalls, load balancers, system management etc); a staging push (in the Azure host cloud); and a production service (when one uses the Azure console to flip over the VIPs of the staging and production deployments).

But it doesn’t work unless no one else on the dev/QA machine is listening on port 80, etc. Otherwise, the Azure cloud emulator maps ports (making a mess). We use netstat to find out whether anyone is listening (note the absence of 80 and 443, when no one is listening).

The obvious culprit is IIS (which you need to stop). But you also need to stop ADFS (if installed). Netstat does a poor job of identifying who owns the processes. Folks also advise killing other processes, such as web deploy (and some SQL Server reporting service, also listening).
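A small script can do the netstat eyeballing for us, probing whether anything is listening on the ports the emulator wants (illustrative Python; run it before starting the emulator):

```python
import socket

def port_in_use(port, host="127.0.0.1"):
    """Return True if some process is already listening on host:port -
    the same check we otherwise do by scanning netstat for :80/:443."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        return s.connect_ex((host, port)) == 0

if __name__ == "__main__":
    for p in (80, 443):
        if port_in_use(p):
            print(f"port {p} busy - stop IIS/ADFS/web deploy first")
```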


Posted in SSO

Is Google Identity Management spying on me?

Who has the records? Who gets them? Can I opt out?

How much is already provided to the NSA? Having blocked a perfectly legal sign-in, why is it asking about it a week later?

What is the Google kickback for making spying a “normal business event” (the FBI’s stated intent for public policy)? Is it a large federal procurement of Google Apps on the line, or does it just see it as normal (under the “American” value system)?





Posted in rant

ACS OAUTH2 behavior regarding refresh tokens

In building my emulator of the PingFederate OAUTH feature, I encountered the same behavior concerning OAUTH2 and Azure ACS as discussed below:



Now, I don’t happen to use the library cited by the author of the memo when formulating the requests; but the bytes on the wire will be the same.

In the context of the SharePoint 2013 app model, it makes some sense that refresh tokens are intended for a particular way of applying OAUTH (authorizing iframe plugins to SharePoint apps).


Note that this is hardly a standard flow! It involves a non-standard service element called a context token (an early signed JWT, in fact). The flow looks a LITTLE like the authorization-code grant in overall structure, note. However, it clearly has proprietary security enforcement semantics – which are not our concern particularly. In particular, one sees the role of the refresh token in the concept. It comes from the JWT rather than from the authorization response message. It’s essentially a bearer cookie (from someone who could decrypt the JWT) – something to be delivered to then enable the server to get the more classical bearer access token for API calls.

So, how does my stuff do its thing?

Well, first, as we have reported before, we use the PingFederate-hosted OAuth2Playground app to play the role of the client. We have it all working, as reported below – except for the mentioned multiple refresh token issue.

First, we invoke the authorization_code grant, providing parameters as shown:


And we get back a rendering of the response message issued by ACS. What is not shown is that the authorization endpoint is guarded by a forms authentication cookie, issued by the dotnetopenauth framework once it successfully interacts with an IDP using a plugin. In our case, we are leveraging our ws-fedp plugin to dotnetopenauth that talks to ACS (as a websso FP). ACS in turn is talking to Google (via OpenID). All in all, verified assertions are delivered to the websso guard, in much the same way that IDP connections open the OAUTH authorization access point guard in PingFederate.

We first see how a properly handled ACS exception from a failed websso interaction (with ACS/OpenID) can be projected back to the client in a conforming manner:


PROVIDED THAT we insert some 2+ seconds of timing delay before forwarding the issued authorization code to the client, the client can RELIABLY use it at the token issuing endpoint:


This gives us a response to the token request – a standard message format.


We then implement a verification action, using the Ping custom grant type – which simply returns a custom access token (a JSON object that was the JWT’s payload):


So, perhaps upon receiving a refresh_token we simply replay it back to the client… along with the new access token? This works nicely (and allows repeat use of the refresh token, as issued originally upon exchange of the authorization_code).
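The replay logic is tiny. A sketch of the AS’s response construction (field names are standard OAUTH2; the function itself is hypothetical, not the emulator’s actual code):

```python
def token_response(new_access_token, incoming_refresh_token=None,
                   expires_in=3600):
    """Build the token-endpoint response. Since ACS does not rotate
    refresh tokens for us, we replay the refresh token that arrived
    with the request (the one minted on the original code exchange),
    allowing the client to reuse it indefinitely."""
    resp = {
        "access_token": new_access_token,
        "token_type": "bearer",
        "expires_in": expires_in,
    }
    if incoming_refresh_token is not None:
        resp["refresh_token"] = incoming_refresh_token
    return resp
```

Note this is exactly where grant-expiry enforcement would have to be bolted on, since ACS leaves it to us.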

Now, what we do not get from ACS – presumably as we are responsible for enforcing it – is the PingFederate-style expiry periods on grants.


Posted in oauth

Cryptanalysis reflection

The world of cryptanalysis gets ever easier to see in intuitive terms. The trick is to see the world as Turing saw it: before codes and calculating devices mechanized codes and their cryptanalysis. Folks had to learn to exploit a particular language of mathematics conceived to model the entire space of discrete calculation. Of course, we know this particular language as the design language of the state machine – which, in its Cayley graph or wavefunction forms, served the early role of a software language for Turing machines.

One cannot separate coding, state machines, cryptanalysis, or Turing machines. They are each a manifestation of the same thing: walking the (quantum) random walk.

– Coding takes the 1920s notion of a probability space so clearly applied to mathematical physics in the form of Heisenberg or Schrödinger theories. It requires the concept of a field to be represented in terms of group theory – in which the vertices, edges and invariants (such as mean-sets) of the Cayley “designer’s” graph represent the applied codeword security “rules” of the group, in a highly geometric manner.

– The notion of state requires that we recognize that group actions can be expressed in, say, the 2d complex plane, which characterizes the rules of possible state transitions better than does the raw presentation of the underlying graph.

– Cryptanalysis recognizes that advanced, exploitable geometric properties exist between nodes in the graph when rendered on a suitable plane. There are concepts such as distance, measure, coordinates and weights. More refined and analytic than the phase diagrams that express raw possibility limits of state transitions in finite-dimensional worlds built upon the very idea of quantization, the weight notion captures the notion of volume of probability space carved out of highly abstract functional-analysis spaces. This all builds on and up from the area of space implied by the edge-distance between some node and each of its neighboring nodes, and the probability depth capturing the likelihood that a random variable modelling discrete events will have a particular graph-value.

– And Turing machines are the meta-spaces governed by meta-rules focused on the inner product of correlation relations and configurations for asymptotic limiting functions – rules that can take, say, the mean-set induced by a sample function and tie together the rules of the group, the constraints of the probability field, and the nature of the measure or distribution of probabilities, and either output a codeword consistent with the rules or analyse a ciphertext codeword to distinguish it from a random event.

To a Turing in 1939, perhaps encountering the production side of the process of spying for the first time, what is not new is the theory of codes, ciphers, and cryptanalysis. For he has been studying it in its theoretical manifestation for quite some time, quite evidently, in both the US and UK. All he has to do is apply the formal theory to the particular nature of the various enigma machines he encounters – which introduces him yet more firmly to an aspect of the puzzle he has hardly encountered to that point: the notion of complexity of the search problem, which stresses the relationship between the notions of security and cryptanalysis.

Though Turing surely knows the theoretical relevance of such things as the conjugacy search problem, and has embraced the idea that certain coding groups engender reverse-searching problems that so expand the search space that the time required delivers “security” (it takes longer to search out the key by brute-force methods, on average, than the useful lifetime of the ciphertext), he also knows that the nature of the search problem changes in the calculation-complexity sense – if only you move your calculator’s workings into the complex plane supporting phase or state space. The notion of the trapdoor, due to a change of scale or a rotation of coordinates, is not unknown.

In state space it becomes apparent that one can approximate the “inner nature” of coding functions using frequency analysis to at least guess crypto keys. The 1960s LDPC sum/product decoding is clearly a minor variant of a cryptanalytic process using centers and mean-set theory to reduce the search problem – based perhaps on exploiting collisions within the differentials in the functions underlying ciphers (such as suitably engineered Hagelin machines).

So the hunt has to be on for transforms that easily move a problem in discrete space into frequency space – in the days before the FFT, of course. However, let us not be overwhelmed by the FFT, for we know, from Tunny documentation, that even back in 1945 folks already had a discrete form of the FFT – what we now call the WHT. It merely exploits elementary generator sets (plus and minus signs on 1), that special Heisenberg relation between parity and the Fourier transform, and pairwise counting that reveals Kasiski-style bulges in distributions/measures, long used to crack indicator systems based on Vigenère squares.
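The WHT itself is only a few lines – the standard butterfly-style fast transform, sketched here in Python (nothing Tunny-specific; just the plus/minus generator structure made explicit):

```python
def wht(v):
    """Fast Walsh-Hadamard transform of a sequence whose length is a
    power of two. Each pass replaces pairs (x, y) with (x + y, x - y):
    the +1/-1 'generators' doing all the work."""
    v = list(v)
    h = 1
    while h < len(v):
        for i in range(0, len(v), h * 2):
            for j in range(i, i + h):
                x, y = v[j], v[j + h]
                v[j], v[j + h] = x + y, x - y
        h *= 2
    return v
```

Applied to a count vector of pairwise agreements/disagreements, the bulges in the transformed output are exactly the distribution biases the text describes.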

And so it falls to Turing and Newman. Their topology focuses on the classical case of the Cayley graph where the generators are the signs +1 and -1 (so usefully applied to Peano arithmetic). These elementary members of the generating set nicely enable one to approach, more generally, what the old-timers in Room 40 of the Admiralty had been doing since WWI – when breaking the Vigenère square (formulate depth cages for possible codeword lengths, finding which auto-correlation measure is closest to the expected value). Having found the overall distance constraint for the unique space represented by the particular ciphertext – in what we would these days call either additive or multiplicative auto- and cross-correlation – then consider the pairwise possibilities within each column of depths. One measures them in terms of the possibilities that things agree or disagree, in terms of sign counts.

As with breaking the codeword of a Vigenère square through such means as computing the index of coincidence and then performing a frequency analysis of the depths in a particular column of cipher, more generally folks learn to identify the center of a graph – as manifest in its frequency spectrum. This is that “configuration” – to use a Turingesque phrase – of vertices that allows particular nodes to be the center of the distribution of the space – and whose quantization oscillator now accounts for the variance found in ciphertexts that have depths/collisions.
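The index-of-coincidence depth statistic is equally simple (illustrative Python; the period scan is my own framing of the depth-cage procedure, not a historical reconstruction):

```python
from collections import Counter

def index_of_coincidence(text):
    """Probability that two letters drawn from the text agree - the
    depth statistic used to pick out the right codeword length."""
    counts = Counter(c for c in text.upper() if c.isalpha())
    n = sum(counts.values())
    if n < 2:
        return 0.0
    return sum(f * (f - 1) for f in counts.values()) / (n * (n - 1))

def best_period(cipher, max_period=10):
    """Try each candidate codeword length: columns of true depths show
    a high IC (English sits near 0.066), flat randomness near 0.038."""
    def avg_ic(p):
        cols = [cipher[i::p] for i in range(p)]
        return sum(index_of_coincidence(c) for c in cols) / p
    return max(range(1, max_period + 1), key=avg_ic)
```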

In some sense the (tiny amount of) variance in these particular elements ultimately accounts for all the variance of the sets that can be generated. These foundational fluctuations are a basis for the graph’s measure, much as the generators and their non-commutativity properties ultimately account for the evolution and transitions in a particular Cayley graph making codebooks.

We tend to think of the Turing machine as having started with the intensely discrete mechanism that slowly evolves, post-war, into the notion of the indeterministic machine – driven by the probabilistic oracle already outlined (and obviously classified) in Turing’s PhD thesis. But it’s not clear to me that this is the correct order. It seems more likely that Turing, having studied measure theory, started with the notion that certain configurations can, in the limit, control variance – if only the mechanizing graph can be so constrained with the likes of cutsets. The use of free groups can then deliver the desired level of control over variance, giving one an algorithm for designing configurations of graphs (to be explored by Turing-machine “runtimes”) that leverage underlying groups – such as high-dimensional braid groups – and enable the cryptosystem designer to distinguish different types of security protocols, including indicator protocols of the type Turing spent so much time attacking: naval Enigma.

Posted in coding theory, crypto

Kindle Fire HD – UK o2 “inbound” email problem

In absolutely typical UK fashion (i.e. the corporations are contemptuous of consumers) it is hard to make a UK-edition Kindle Fire HD device talk to your o2 email servers. This is despite o2 publishing all the technical details! What the support sites omit is the remedy! From internet comments, the firm just cannot be bothered to tell you in what ORDER to do things, given some quirk in the Kindle software.

First make your incoming email work, ignoring any and all fields to do with outgoing email settings. That is the magic. In practice this means changing the word “pop3” in the suggested domain name for the incoming email server to the word “mail”. Make that work before proceeding.

Later on, once the incoming email is visible as expected, go change the suggested outgoing port to 25. Do not do anything else! Do not fiddle with names, logins, addresses, or passwords.
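To summarise the order of operations as a sketch (the hostnames here are placeholders, not o2’s real ones – take those from o2’s published settings page):

```python
# Step 1 - incoming FIRST: swap the suggested "pop3" hostname prefix
# for "mail", and leave every outgoing field at its default for now.
step_1 = {
    "incoming_server": "mail.example-o2-domain",  # not "pop3.example-o2-domain"
}

# Step 2 - only after incoming mail is visible: change just this one
# outgoing field, and nothing else.
step_2 = {
    "outgoing_port": 25,
}
```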


O2 know this, but don’t let on. Keep the English peasants stupid and dependent, apparently. Make money by hiding information. Now O2 can deliver “service”.

Don’t forget, peasant, that when trying to make incoming email work you may already have configured your Outlook client to download incoming emails from O2. And you MAY WELL have set Outlook “to delete the emails from the server” upon delivery (to Outlook). You need to reconfigure Outlook for your Kindle, so that Outlook leaves the emails on the server upon receipt. That is, ensure Outlook only deletes the server-side copy when you delete a message using some device app’s delete button. This way you will see your incoming test emails on your Kindle – since Outlook is no longer having the server delete them (before the Kindle has a chance to connect).

Oh O2! Real customer service is NOT that hard.

Posted in coding theory

UK internet – a deception based public security policy

Travelling to the UK for the first time in many years, I encountered public policy – as regards internet security.

Typical public wifi access points (where occasionally available) require identification – an email address or mobile phone number. That is, one probably commits a fraud by using an anonymous identifier. This is not disclosed, and is one of those American-style hang-’em-via-a-1000-formal-violations pre-arrangements (for when and if there is ever a need or desire to exert pressure during interrogation). In public policy terms, one notes the design style: load the dice.

On the last day, just before travel, suddenly corporate email ceases to work. The point is to isolate, of course: prep for isolation and the planned delivery of emotional distress. One notes the design style: plan for isolation (so the loaded dice have more value).

Despite being a United partner, Lufthansa could not emit boarding passes for all legs (unlike the US outbound). This surprised the agent – used of course to normal practice (and having good intuition about what induces variance). One notes the design style: isolate on non-US soil (invoking Guantanamo logic) using compliant proxy agents (Germany).

Harassment takes many forms: so ensure that, concerning money, lack of money will imply lack of standing. Of course too much cash will mean something else (equally disadvantageous). Cash is not always cash, of course, as I found out (since my English notes turned out to be no longer “valid tender”). Note the public policy: force electronic money (subject to monitoring).

Now to be fair (not that the sentiment can be expected in return), all the policy goals have sensible intent – in preparation for the 0.0001%: tax dodging, money laundering of cash for drugs, and, surprise, maximising interrogation advantage (i.e. find where the damn bomb is located before it goes off). But what we note is the preparation – applying all the techniques to everyone. And that ain’t there to be anti-discriminatory! It’s there to address subversion.

Also interesting was where deception-based attacks on internet encryption were mounted. Phones (vs tablets with wifi-only capability) do not show the issuer of the cert and do not show the ciphersuite in effect. It could be a firewall-issued cert (spoofing the true website by https proxying) and a null ciphersuite, for all you know. Remember, both enable cleartext reading of the packets (in real time). It was interesting to experiment, in various wifi worlds, with which https sites in the US worked, or which worked with what type of warnings. One notes the public policy: internet encryption is there to make you feel good, not to enforce any privacy rights (which you probably don’t have to start with, and which will be suspended in any case upon mere utterance of a certain magic spell: public safety!).

Using an American-issued device in Europe was also revealing. Some sites (including Google’s UK relay site) could detect which Kindle device I was using and refuse to complete login for “unrecognized” devices. Note this did not interfere with web browsing; but it did interfere with any and all subscriptions that hinged on a Google websso assertion. Of course google.com was a front for google.co.uk, and I never found a means to bypass UK national controls (built into Google’s cloud). DNS, certs and redirects through https connect proxies had all been well massaged to present the illusion of a global web – which is actually a web of national webs. Of course the public policy is to distinguish the web from nationally regulated sites (able to better withstand web attacks, having access to higher-assurance security credentials). But look at the elaborate deception, coordinated behind the scenes with other governments, the multinational clouds and the identity providers!

My US subscription to Netflix content worked – in the sense that I now had access to UK-licensed content. Presumably, for copyright and censorship reasons, not all US works can be made available to UK subscribers; things like UK censor ratings might be missing, for example. But it did work. So where is the history of my viewing habits now stored, ready for analysis? In the UK, the US, or half and half? Yes, I assume I have no expectation of privacy in either place, even should my packets be tagged with notices asserting my expectations. Behaviour-based analysis has to be the new public-policy norm, one notes – much like the surveillance of one’s habits at the public library.

To be fair, nothing in the above was particularly hindering, annoying or even unnatural. When in England, I’d rather Netflix be giving me a change of scenery. I’d be happier if netflix.com were a synonym for netflix.co.uk (when accessed from IP addresses in UK-registered ISP address allocations). But not always is it so! Of course, I have no choice.

Now I’m not going to bother bemoaning the phone roaming, the data-plan policies or similar. Of course they are spying on me, and of course my location, video and mic are controllable remotely. It’s the public phone system, stupid (and nothing has changed, public-policy wise, in a hundred years).

UK updates of firmware onto the Kindles were interesting, too. Different packages were installed. UK bugging-prep capability was not installed onto the non-euro-distributed Kindles. This affected how the different installs of the Skype app worked, note – built to leverage the different (now bugged/buggable/buggered(?)) platforms. Of course, public policy protects the vendors who knowingly participate in the charade, allowing them to claim they know of no vulnerabilities in their software (as tested on an unreal platform).

So where does this leave the typical uk consumer?

Well, I struggle to deny I’d do much different – given the nature of the subversion-trained nutter on the bus trying to make a large explosion as it wanders through Trafalgar Square, right in front of a live broadcast. But I also see a UK failing: why not own up (rather than engage in multi-level deception of your own public)? This is a failing of public policy: Cromwell’s spies being preferred over policy modernization.

Do folks really believe that an anti-subversion orientation doesn’t include briefings on all the above?!

Posted in coding theory

WCF service with DotNet4.5 Claims–with STS

To a solution made with Visual Studio 2012, add a WCF service targeting the .NET 4.5 framework (specifically). Then use the latest Identity and Access tool to add the system.identityModel configuration, having identified your active STS service and its metadata endpoint – as built originally using the tooling from Visual Studio 2010 (with the WIF SDK for .NET 4.0) and running under .NET 4.0, augmented with the WIF libraries.

The net result is a file similar to the following, which successfully identifies the IDP and its certificate. However, you must change the binding’s metadata endpoint, as shown. (It defaults to an assumed ADFS path.)


Assuming the STS offers two endpoints – one for the Office365-friendly WS-Trust of Feb 2005 and one for version 1.3 of WS-Trust – we need the client, built and configured using svcutil, to use the 1.3 endpoint.


We had to change the client config:


We had to change the anonymous address from svcutil to the 1.3 endpoint of the STS, and add a binding – that we happened to have – identifying the client credential to be sent to the STS.


Finally: no CardSpace or its strange policy errors, and no weird and wonderful WIF-specific configuration of the service.

Posted in SSO

Forcing Ping Identity to Adopt WIF metadata from IDPs

It’s been a pain making our otherwise excellent Ping Federate servers (a version from a few years ago) talk to modern ws-fedp IDPs built using Microsoft’s developer-focussed WIF toolkits. There was always SOMETHING that didn’t work.

Well, now that we understand how to make a passive STS from WIF (i) emit a Feb 2005-serialized response, (ii) include an authentication statement, and (iii) sign with RSA/SHA1, we know from last week’s report that a WIF IDP built in 10 minutes with Visual Studio can now talk to Ping Federate’s ws-fedp SP.

But Ping Federate is a pain to set up, as it does not support importing the metadata emitted by such IDPs!

So we decided to play the game back. Our IDP now dynamically emits its true role descriptor plus a second role descriptor intended for consumption by Ping Federate (pretending to consume a SAML2 IDP in its excellent metadata-driven console).

Remember, we are just issuing something that Ping Federate can import (so there are no typing errors). So we take our ws-fedp descriptor and re-export it as a SAML2 IDP descriptor:
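A sketch of that re-export, assuming the transformation amounts to relabelling the role element and its protocol claim (the real descriptors also carry keys and endpoints, which must be preserved untouched; the function name is mine):

```python
import xml.etree.ElementTree as ET

MD = "urn:oasis:names:tc:SAML:2.0:metadata"
SAML2_PROTO = "urn:oasis:names:tc:SAML:2.0:protocol"

def reexport_as_saml2(metadata_xml):
    """Rewrite a ws-fedp RoleDescriptor so a SAML2 metadata importer
    will accept it: relabel the role as an IDPSSODescriptor claiming
    SAML 2.0 protocol support. Illustrative only."""
    ET.register_namespace("md", MD)
    root = ET.fromstring(metadata_xml)
    for role in root.iter():
        if role.tag.endswith("RoleDescriptor"):
            role.tag = "{%s}IDPSSODescriptor" % MD
            role.set("protocolSupportEnumeration", SAML2_PROTO)
    return ET.tostring(root, encoding="unicode")
```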


Then we use Ping Federate to import a SAML2 IDP, as normal. Once it is saved and ready, we change it to be a ws-fedp connection simply by editing (while the server is offline) the sourceid.saml2-metadata file. We give it the desired protocol and binding values:


So we get around a vendor’s biases (implicit or intended) against WIF IDPs – with only minimal fuss.

Posted in pingfederate

Substituting an ACS OAUTH AS for the PingFederate OAUTH AS in the OAuth2Playground


As shown above, we took a working Ping Federate installation with the OAUTH feature and its working OAuth2Playground site – which showcases the protocol. Then we copied the expanded WAR directory for the playground, so that a parallel UI can use our own AS (rather than Ping Federate’s AS).

Within the copy, we amended the “case1-authorization.jsp” and “token-endpoint-proxy.jsp” – to use the URL paths of our own OAUTH AS – and our own https port.

Then we changed the SSL cert on the IIS Express instance that hosts our debuggable site to use our GoDaddy cert (using the Windows netsh tools, for the port in question). We also allowed IIS Express to bind to the certified domain name.


Now the PingFederate-hosted oauthplayground site will work, even when proxying token requests (using Java chain building). Everything uses the GoDaddy cert, so we add it to the trust-point list that Ping Federate manages – also used by the playground site, evidently.


We also now alter the settings of the substitute playground to point to our endpoints


Next we register the Client (a vendor consuming the API) to have the right callback URI:


Thus we can now try the playground, first seeking to get an authorization code:
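For reference, the front-channel request the playground emits at this step is, in outline, just a GET with a handful of query parameters. A sketch (the /as/authorization.oauth2 path follows Ping Federate’s convention; the client and callback values are made up):

```python
from urllib.parse import urlencode

def authorization_request(as_base, client_id, redirect_uri, scope=""):
    """Compose the front-channel authorization-code request."""
    params = {
        "response_type": "code",   # ask for the authorization-code grant
        "client_id": client_id,
        "redirect_uri": redirect_uri,
    }
    if scope:
        params["scope"] = scope
    return as_base + "/as/authorization.oauth2?" + urlencode(params)
```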





The consent process returns control to Ping Federate hosted OAuth2Playground:


Since we have now enabled the proxy in the playground WAR app to trust our SSL cert, we can pass along the JWT issued by ACS:


Posted in oauth, pingfederate

acs signing a jwt ; wstrust token verification

The token we receive from the OAUTH endpoint of our Azure ACS namespace has a (decoded) header field given below.


Using Fiddler’s base64 decoder, we change – and _ back to + and /, and add the padding char(s):
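That manual fiddling is mechanical enough to script. A sketch doing exactly the substitution described – assuming a standard base64url-encoded JWT segment as input:

```python
import base64

def b64url_decode(segment):
    """Undo JWT base64url encoding by hand: map '-'/'_' back to
    '+'/'/' and restore the stripped '=' padding, then decode."""
    s = segment.replace("-", "+").replace("_", "/")
    s += "=" * (-len(s) % 4)      # pad length up to a multiple of 4
    return base64.b64decode(s)
```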


It’s supposed to be a hash – probably a SHA-1 hash.


We see our GoDaddy cert is:


Now let’s say that the cert has a critical extension – a URL, say, that demands that the verifier contact a given OCSP responder.

If we now receive the JWT over a ws-trust channel, will the security token resolvers pick up the JWT’s reference, locate the cert AND verify the cert chain?

Posted in SSO

ACS-based OAUTH2 AS augmenting a Google IDP (via ACS)

Let’s see how far we have got in emulating Ping Federate’s OAUTH engine (for the authorization grant).

We start out as would any vendor/client, making a request for the grant type:



At 1 we see our AS consuming our ACS namespace’s store of “service identifiers,” checking up on the redirect URI given on the query (query string, not FORM POST, for now).

Since there is no user session at the AS, the session-creation mechanism kicks in and also shows a list of “IDP Connections” – including Azure AD. Remember, selecting Google here really means sending a request to ACS… which will chain/gateway the request to Google’s WebSSO cloud… which will chain the request to Ping One, which will… Anyway, at the end of the day ACS issues us a SAML assertion that we verify, pulling the metadata on the fly.

Our AS is in account-linking mode for this IDP connection, and thus we get a challenge – to log on at the local IDP that governs linking.



The net result of this one time linking, for an inbound Google identity, is some UI (remember this is a prototype…!)


Using the Return option, we resume the OAUTH AS processing, which consumes the local (rapstaff) account whose profile is driven by the SAML assertions relayed by ACS from Google. We get the consent screen – that which guards release of the authorization code and storage of the persistent grant in ACS.


This gets us back to our simulated vendor site, ready for the vendor to call the access token (STS) service to convert the “code” into a longer-lived access token.
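The back-channel exchange the vendor then performs is, in outline, a single form-encoded POST. A sketch of its body (field names per RFC 6749; the values shown are invented):

```python
from urllib.parse import urlencode

def token_request_body(code, client_id, redirect_uri):
    """Form-encoded POST body that swaps the one-time authorization
    code for an access token at the AS token endpoint."""
    return urlencode({
        "grant_type": "authorization_code",
        "code": code,
        "client_id": client_id,
        "redirect_uri": redirect_uri,  # must match the code request
    })
```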


OK, we see lots of the pieces coming together! When we now use the postback of the token-minting form to actually talk to the ACS access-token minting service, we get as a result:


I suppose the thing to do next is the refresh token, then swap from the symmetric-signed SWT to an RSA-signed JWT and verify the signature as a “client”, pulling the public key from ACS, somehow.

But even now, if I put this site behind the Ping Identity OAuth2Playground UI for developers of client sites, I suspect it will work reasonably well as is.

To expose our old AS – previously available by button pressing – as a protocol engine for real clients, all we really added was a handler for each of the authorization and token endpoints:


authorization endpoint


token endpoint

The trace on the wire tells the whole story:


If we swap the token for a JWT and assign a signing key,


we now get a JWT:


Since the DateTimes are not in a friendly dotNet syntax, one converts (following Microsoft sample code for the SWT notbefore and expiry times):
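A sketch of that conversion – in Python rather than the C# of the Microsoft sample – assuming the fields are plain seconds-since-epoch integers, as SWT/JWT NotBefore and ExpiresOn values are:

```python
from datetime import datetime, timedelta, timezone

EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)

def epoch_to_datetime(seconds):
    """Token NotBefore/ExpiresOn fields are seconds since
    1970-01-01 UTC; convert to a calendar time for display."""
    return EPOCH + timedelta(seconds=int(seconds))

def datetime_to_epoch(dt):
    """Inverse conversion, for minting one's own token lifetimes."""
    return int((dt - EPOCH).total_seconds())
```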



Posted in oauth, pingfederate

virtual machines hosting a DC, and SAML assertion testers (showing time validation issues)

Remember: when the DC is hosted on a virtual machine under Hyper-V, the host of the VMs induces the DC to change its time to sync with the host. Being a DC, it then updates all its domain hosts – which get the same (wrong) time as the Hyper-V host.


You might think the DC host’s time as set by the domain admin was authoritative – but it’s NOT!


Nice military attack vector here. Assume your DC is hosted in Azure VMs – and thus a “request” to Microsoft (Azure) to re-set the time on a given DC VM could induce all sorts of “nice” effects.

Posted in Computer and Internet

Building an Azure ACS OAUTH AS–emulating the AS of Ping Federate–part #1 getting to consent

The goal is to use the OAuth2Playground website of Ping Identity’s Ping Federate server (evaluation edition) to talk no longer to the built-in OAUTH2 AS feature but to an AS we build ourselves – in an ASP.NET website, supported by Azure ACS. We will then know we have largely emulated Ping Federate’s own AS behaviour using its own test site – and have done so at a good enough level for getting a first OAUTH project off the ground (by learning from the best).

Using Visual Studio 2012, create a web forms project, internet type. Register Google as an IDP, leveraging the built-in OAUTH provider that comes with ASP.NET. Now add a class with our own OAUTH provider, and register it alongside Google. This allows us to send OAUTH2 messages to an Azure ACS namespace’s token-issuing endpoint. It also allows the provider to leverage other “support” interfaces that let our AS delegate to ACS much of the work of implementing an AS service.

The account-linking model of the default website thus built allows us to show that we have logged on to a locally-managed account (called rapstaff) having used a Google-asserted identity. In other words, we have a user session on our AS, induced by an inbound ws-fedp SAML1.1 assertion received from ACS – since this is normally how we talk to Google’s IDP. We can now press the CIS button, which emulates an OAUTH client (called Fabrikam2) invoking the AS and its token-issuing endpoint. Obviously, we will make the URLs look and respond like the equivalents in PingFederate’s implementation.


What we have just done with button clicks is the equivalent of using this screen from the oauth2playground:



Now typically, when we want a URL to specify which IDP to use, we invoke our particular pattern (which sends off a ws-fedp request to ACS, requesting further gatewaying to Google’s OpenID endpoints).


Now we can accomplish some, but not all, of what the Ping URL does from the start, by having a page class that redirects:


If no SAML session exists, this lands on an IDP chooser page. Upon the return of the assertion from Google, say, the CIS button is logically pressed… continuing into the AS phase of the flow (invoked also by the CIS button). If a local SP-side session exists, the chooser and authentication process is omitted – passing straight to Go!


Once invoked, we see the (CIS) provider pass control to our implementation of the authorization_code handling component of our AS (which delegates much of its implementation to Azure ACS). At this point, none of the parameters shown are configurable from the Ping-emulating URL.

So, now to get rid of button pressing and make the flow be driven from the formal interface of an OAUTH AS endpoint. We implement a form handler
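A minimal sketch of the checks such a handler performs before handing over to the session/consent machinery (the client-registry shape and the error strings are my own invention, though the error names follow RFC 6749):

```python
def validate_authorization_request(form, registered_clients):
    """Check the bare minimum before showing consent: a known client,
    an exact redirect_uri match, and the code response type.
    Returns None on success, or an OAuth error code string."""
    client = registered_clients.get(form.get("client_id"))
    if client is None:
        return "unauthorized_client"
    if form.get("redirect_uri") != client["redirect_uri"]:
        return "invalid_request"
    if form.get("response_type") != "code":
        return "unsupported_response_type"
    return None  # OK: proceed to session check / consent page
```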



With an existing user session (for a local user “rapstaff” previously linked from Google websso), the user is directed to the consent page – which will authorize creation of an “authorization_code” persistent grant.


If there is no user session, we get a Ping Federate-like experience. The OAUTH AS site invokes an “IDP Connection”, as selected by the user:


Next, we see the storage of an authorization grant – using the (OAUTH-guarded) ACS service – which also mints the authorization code to be issued to user devices.



From the relying party NAME, the token-issuing service will require that the party seeking to swap the code for a token cite the realm of the named party – acting as a scope.

Posted in oauth, pingfederate