let's use access tokens from wordpress at the APIs

Let’s pretend we are a website “app” – trying to get rights and tokens to use at a realty OData API.


the device, sending a first message to get authorized as an app.

The app is assumed to be using a web browser, so our OAUTH AS does something a bit like an openid connect flow is supposed to do – and shows (for this AS tenant) its set of IDPs:


Show a tenant-configurable subset of the IDPs configured into the AS, where the AS is itself often an OAUTH client (with client_id and client_secret)

We have to pass the core challenge at wordpress.com


which gets us a unique bit of OAUTH in the wordpress world – which is to select a wordpress site:


This done, we are directed to that site – which has a non-cloud membership system:


this is the blog site with some locally-registered blog posts (and we may want to use ITS API)

Since our websso/openid-connect style middleman IS the app, it consumes the access token (“iesmrDXy4h*GTblP9D^1iE92ab3(n6GdcbU0PkvynxUWDGBmGoq6bNa3)ViXB)NW”) and uses it to consult the ME API endpoint. From this, the linking experience is introduced:
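In python terms, the ME consultation is just a bearer-authenticated GET against the v1 endpoint; a minimal sketch that builds (but does not send) the request – the token string below is a placeholder, not the real one above:

```python
import urllib.request

WPCOM_ME = "https://public-api.wordpress.com/rest/v1/me/?pretty=1"

def me_request(access_token: str) -> urllib.request.Request:
    # Build the "who am I" lookup our middleman performs with the
    # freshly issued bearer token.
    req = urllib.request.Request(WPCOM_ME)
    req.add_header("Authorization", "Bearer " + access_token)
    return req
```

Sending it (via `urllib.request.urlopen`) returns the JSON profile that drives the linking screen.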


Now, this showed that the access token “associated with” the jetpack-powered site is still viable at the ME interfaces offered by wordpress.com. Is it actually valid at the jetpack site, however?

Our OAUTH AS doesn’t answer that, so we have to use CURL:


wordpress.com API endpoint



C:\>curl.exe -k -H "authorization: Bearer HERE" "https://public-api.wordpress.com/rest/v1/sites/jetpack.azurewebsites.net?pretty=1"

my jetpack.azurewebsites.net API endpoint and some blog post listings:


and indeed our blog post is at http://wp.me/s3CQ9u-51

What is interesting is that the endpoint is still wordpress.com (which presumably proxies requests to the self-hosted server in realtime, or fulfills the API from a local cache pulled from said server).
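Since wordpress.com fronts the API for the self-hosted site, listing its posts is the same shape of call as the curl probe above; a sketch assuming the documented `/sites/<site>/posts` resource of the v1 API (token again a placeholder):

```python
import urllib.request

def posts_request(site: str, access_token: str) -> urllib.request.Request:
    # Build the bearer-authenticated listing call; wordpress.com proxies
    # (or serves from cache) on behalf of the jetpack-connected site.
    url = "https://public-api.wordpress.com/rest/v1/sites/%s/posts?pretty=1" % site
    req = urllib.request.Request(url)
    req.add_header("Authorization", "Bearer " + access_token)
    return req
```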

Perhaps it's time I pulled the azure-websites-hosted site down into my own webmatrix on my PC – so I can do some proper spying on what's going on!

Posted in oauth, ssl

wordpress.com oauth2 and ASP.NET dotnetopenauth provider

Here is a good start on extending the providers in an ASP.NET dotnetopenauth-enabled project to talk to the WordPress.com OAUTH and API endpoints, as part of websso.

namespace WebRole1
{
    using System;
    using System.Collections.Generic;
    using System.IO;
    using System.Net;
    using System.Net.Security;
    using System.Runtime.Serialization.Json;
    using System.Security.Cryptography.X509Certificates;
    using System.Text;
    using System.Web;
    using DotNetOpenAuth.AspNet;
    using DotNetOpenAuth.AspNet.Clients;
    using DotNetOpenAuth.Messaging;
    using Newtonsoft.Json;
    using Newtonsoft.Json.Linq;

    public class WordPressComCustomClient : OAuth2Client
    {
        private OAuth2AccessTokenData data;

        private InMemoryOAuthTokenManager imoatm = null;
        private InMemoryOAuthTokenManager rsoatm = null; // resource server
        private string ep;
        private string blogsite;

        // The chained-to arguments were lost in the original posting; chaining to the
        // full constructor with the public wordpress.com endpoint is an assumption.
        public WordPressComCustomClient(string site, string consumerKey, string consumerSecret, string pn, string RSKey, string RSSecret)
            : this(site, "https://public-api.wordpress.com", pn, consumerKey, consumerSecret, RSKey, RSSecret)
        {
        }

        public WordPressComCustomClient(string site, string accountlinkingURL, string pn, string consumerKey, string consumerSecret, string RSKey, string RSSecret)
            : base("WP_" + pn)
        {
            imoatm = new InMemoryOAuthTokenManager(consumerKey, consumerSecret);
            rsoatm = new InMemoryOAuthTokenManager(RSKey, RSSecret);
            ep = accountlinkingURL;
            blogsite = site;
        }

        protected override Uri GetServiceLoginUrl(Uri returnUrl)
        {
            // The __sid__ query parameter identifies the outstanding request;
            // it rides to wordpress.com and back in the OAUTH state parameter.
            var coll = HttpUtility.ParseQueryString(returnUrl.Query);
            string __sid__ = coll["__sid__"];

            const string formatState = "{0}";
            var state = String.Format(formatState, __sid__);

            UriBuilder u = new UriBuilder(returnUrl);
            u.Query = "";

            UriBuilder authzUrl = new UriBuilder(ep);
            authzUrl.Path = "/oauth2/authorize";
            // The format arguments were elided in the original posting; they are
            // reconstructed here from the placeholders in the format string.
            authzUrl.Query = string.Format(
                "response_type=code&redirect_uri={0}&client_id={1}&client_secret={2}&state={3}&blog={4}",
                HttpUtility.UrlEncode(u.Uri.AbsoluteUri),
                imoatm.ConsumerKey,
                imoatm.ConsumerSecret,
                state,
                blogsite);
            return authzUrl.Uri;
        }

        protected override IDictionary<string, string> GetUserData(string accessToken)
        {
            var extraData = new Dictionary<string, string>();

            // DEV ONLY: trust any server certificate.
            ServicePointManager.ServerCertificateValidationCallback
                += delegate(object sender, X509Certificate certificate, X509Chain chain, SslPolicyErrors sslPolicyErrors)
                {
                    return true;
                };

            WebRequest tokenRequest = WebRequest.Create("https://public-api.wordpress.com/rest/v1/me/?pretty=1");
            tokenRequest.Headers.Add(HttpRequestHeader.Authorization, "Bearer " + accessToken);
            tokenRequest.Method = "GET";

            HttpWebResponse tokenResponse = (HttpWebResponse)tokenRequest.GetResponse();
            if (tokenResponse.StatusCode == HttpStatusCode.OK)
            {
                using (Stream responseStream = tokenResponse.GetResponseStream())
                {
                    StreamReader s = new StreamReader(responseStream);
                    JObject j = JObject.Load(new JsonTextReader(s));

                    foreach (var el in j)
                    {
                        extraData.Add(el.Key, el.Value.ToString());
                        if (el.Key.StartsWith("username"))
                        {
                            // DotNetOpenAuth expects an "id" entry
                            extraData.Add("id", el.Value.ToString());
                        }
                    }
                }
            }

            extraData.Add("accesstoken", accessToken);

            return extraData;
        }

        protected override string QueryAccessToken(Uri returnUrl, string authorizationCode)
        {
            const string oauthsts = "/oauth2/token";
            UriBuilder tokenissuingpoint = new UriBuilder(ep);
            tokenissuingpoint.Path = oauthsts;

            // DEV ONLY: trust any server certificate.
            ServicePointManager.ServerCertificateValidationCallback
                += delegate(object sender, X509Certificate certificate, X509Chain chain, SslPolicyErrors sslPolicyErrors)
                {
                    return true;
                };

            UriBuilder retpath = new UriBuilder(returnUrl);
            retpath.Query = "";

            // Build the form body as a query string; the leading "?" is stripped on write.
            UriBuilder entity = new UriBuilder();
            entity.AppendQueryArgument("client_id", imoatm.ConsumerKey);
            entity.AppendQueryArgument("redirect_uri", retpath.ToString());
            entity.AppendQueryArgument("client_secret", imoatm.ConsumerSecret);
            entity.AppendQueryArgument("code", authorizationCode);
            entity.AppendQueryArgument("grant_type", "authorization_code");

            WebRequest tokenRequest = WebRequest.Create(tokenissuingpoint.Uri);
            //tokenRequest.Headers.Add(HttpRequestHeader.Authorization, "basic " + Convert.ToBase64String(Encoding.ASCII.GetBytes(imoatm.ConsumerKey + ":" + imoatm.ConsumerSecret)));
            tokenRequest.ContentType = "application/x-www-form-urlencoded";
            tokenRequest.ContentLength = entity.Query.Length - 1;
            tokenRequest.Method = "POST";
            using (Stream requestStream = tokenRequest.GetRequestStream())
            {
                var writer = new StreamWriter(requestStream);
                writer.Write(entity.Query.Substring(1));
                writer.Flush();
            }

            HttpWebResponse tokenResponse = (HttpWebResponse)tokenRequest.GetResponse();
            if (tokenResponse.StatusCode == HttpStatusCode.OK)
            {
                using (Stream responseStream = tokenResponse.GetResponseStream())
                {
                    var serializer = new DataContractJsonSerializer(typeof(OAuth2AccessTokenData));
                    var tokenData = (OAuth2AccessTokenData)serializer.ReadObject(responseStream);

                    if (tokenData != null)
                    {
                        data = tokenData;
                        return tokenData.AccessToken;
                    }
                }
            }

            return null;
        }

        public override void RequestAuthentication(HttpContextBase context, Uri returnUrl)
        {
            var coll = HttpUtility.ParseQueryString(returnUrl.Query);
            string __sid__ = coll["__sid__"];

            // Park the full return URL in a cookie named after the session id;
            // the OAUTH state parameter carries that name back to us.
            var contextCookie = new HttpCookie(__sid__, returnUrl.AbsoluteUri);
            context.Response.Cookies.Add(contextCookie);

            base.RequestAuthentication(context, returnUrl);
        }

        public override AuthenticationResult VerifyAuthentication(HttpContextBase context, Uri returnPageUrl)
        {
            AuthenticationResult res = base.VerifyAuthentication(context, returnPageUrl);
            if (res.IsSuccessful && res.ExtraData != null)
            {
                var contextId = context.Request.QueryString["__sid__"];
                var ctxCookie = context.Request.Cookies[contextId];
                if (ctxCookie == null)
                {
                    throw new InvalidOperationException("Context cookie not found");
                }

                var originalRequestUri = new Uri(ctxCookie.Value);

                // Expire the context cookie now that it has served its purpose.
                var contextCookie = new HttpCookie(contextId)
                {
                    Expires = DateTime.UtcNow.AddDays(-1)
                };
                context.Response.Cookies.Add(contextCookie);

                var extraData = res.ExtraData.IsReadOnly
                                    ? new Dictionary<string, string>(res.ExtraData)
                                    : res.ExtraData;

                var nvc = HttpUtility.ParseQueryString(originalRequestUri.Query);

                extraData.Add("ReturnUrl", nvc["ReturnUrl"]);

                // The remaining arguments were lost in the original posting; this follows
                // the standard AuthenticationResult(isSuccessful, provider, providerUserId,
                // userName, extraData) shape.
                res = new AuthenticationResult(
                    true,
                    res.Provider,
                    res.Provider + extraData["id"],
                    res.UserName,
                    extraData);
            }

            return res;
        }
    }
}

To make use of it, note that the wordpress AS returns control to the ASP.NET template's "RegisterExternalLogin" handler, as is normal. However, because of the way the wordpress.com token-issuing and authorization endpoints handle the registered value of the redirect URI, we were not able to let the authorization endpoint redirect back to the handler with the (required) additional query string parameters – used within RegisterExternalLogin to locate the outstanding request. However, OAUTH itself saves the day – by handling state – within which we can transfer cookie names, allowing RegisterExternalLogin to recapture the full URI required to resume processing of the inbound OAUTH code message.

      private void ProcessProviderResult()
      {
            // Process the result from an auth provider in the request.
            ProviderName = OpenAuth.GetProviderNameFromCurrentRequest();

            if (String.IsNullOrEmpty(ProviderName))
            {
                // No provider name on the inbound request: recover the original
                // request URI from the cookie whose name rides in the OAUTH
                // state parameter, then re-apply the inbound query string.
                var contextId = Request.QueryString["state"];
                var ctxCookie = Request.Cookies[contextId];
                if (ctxCookie == null)
                {
                    throw new InvalidOperationException("WordPress Context cookie not found");
                }

                UriBuilder entity = new UriBuilder(ctxCookie.Value);
                foreach (var s in Request.QueryString.AllKeys)
                {
                    entity.AppendQueryArgument(s, Request.QueryString[s]);
                }

                // ... processing then resumes against entity.Uri
            }
      }


Posted in oauth | 1 Comment

openid connect basic client profile

We don’t have a Ping Federate server that processes openid connect flows – but Ping Identity did give us the clients that exercise those endpoints. Let’s see what happens when we use our own AS/STS, which delegates to Azure ACS, as the endpoints. Perhaps we can learn more about what is driving openid connect.

Unlike Ping Federate, our own Authorization service is multi-tenant (each tenant being supported by a custom SQL db, membership provider and unique ACS namespace). So we connect the client to one such tenant:-


What this does is require us to have registered the special scope “openid”. So let’s add that:






The pass-through rule connects the scope(s) to the issuer that tags valid delegations of authority


We fire off the Ping Federate form that attempts to land on our consent page – guarded by Google webSSO. As usual, this page takes the claims from Google and writes a delegation record to the ACS namespace, before redirecting to the (ping federate) form registered as the redirect URI tied to the service principal of this “client”. Presumably, when using a Ping Federate AS for openid connect flows, it notes the scope “openid” and does “special things” – behind the scenes. One day, we will find out what they are!


The request and response look pretty normal (except for the additional fields…that enable the client to instruct how the websso is to happen or be processed upon result, presumably).



After some fiddling (and some bug fixing), we get an openid-connect-specific followup:


ok. so we have learned a little (which is slightly more than the zero we knew yesterday); and also learned that the ACS token issuing point will not accept a scope of “openid” (though urn:openid is fine); and also learned how to mint a symmetrically-mac’ed JWT. We also see what is supposed to happen next… in the openid connect world (get user info):
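For reference, minting a symmetrically-mac’ed JWT takes nothing more than HMAC-SHA256 over the base64url-encoded header and claims; a sketch (the claim values below are made up for illustration):

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    # base64url without padding, per the JWT spec
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

def mint_hs256_jwt(claims: dict, key: bytes) -> str:
    # header.claims, each base64url-encoded, then MAC'ed with HMAC-SHA256
    header = {"alg": "HS256", "typ": "JWT"}
    signing_input = ".".join(
        b64url(json.dumps(part, separators=(",", ":")).encode("utf-8"))
        for part in (header, claims)
    )
    sig = hmac.new(key, signing_input.encode("ascii"), hashlib.sha256).digest()
    return signing_input + "." + b64url(sig)
```

The relying party recomputes the MAC with the shared key and compares – which is exactly what makes it symmetric (both parties can mint, not just verify).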


For now, our result! Of course, a real Ping Federate server would work (unlike our simulator)

Posted in openid connect, pingfederate

nothing like a picture to wire a rotor



We can look at Turing’s schoolboyish appreciation of higher math, as it leads to notions of programming.

One sees why Turing would be good at codes, as he was first taught to reason geometrically – rather than having to study notions of generators, spanning sets, bases etc. from linear algebra. We can see now that to a Turing, the notion of an automorphism group is akin to a software program – especially when you take the semi-direct product of the group and the automorphism group (think input, and program). To a Turing, the automorphism group is just a set of card indexes organized to translate English to Chinese using an English-to-German and then a German-to-Chinese dictionary.

Similarly, the notion of a foundation group or “structural subgroup” is particularly evident in dihedral groups, which so clearly factor – as one breaks up factor chains of subgroups. The notion of the center then becomes also a programming-style concept – that which is entirely self-describing and self-sufficient. One can also see the notion of basis, when one then considers quotient groups.

Clearly, we can imagine re-wiring 8 of the 26 positions on an enigma “drum” according to the picture above; and we can imagine stringing several such rotors in a rotor array. From Turing’s own analysis, one looks at the array as a composite operator, reducing the variance-distance between the relative proportions found in the input alphabet and the uniform proportion.

Something about the next 2 pictures totally makes the point that it’s fine to relabel things (using your card indexes). And indeed, a suitable re-labeling really shows up certain underlying cycles:



Getting to the idea that an operation and a code for that operation can be seen as one and the same thing (so one has operations on numbers – which are themselves operations) is the obvious next step (without resorting to more advanced von Neumann linear operators, etc.)



It seems pretty clear that Turing was “well taught” quaternions – not as some abstruse branch of math but as a reasoning system for calculating.

Posted in early computing

netsh to add a ssl binding on a port


netsh http add sslcert ipport=0.0.0.0:443 certhash=<cert thumbprint> appid={00112233-4455-6677-8899-aabbccddeeff}

don't know why I can never remember the above; I have to relearn it each time.

Similarly, to list the ssl-enabled bindings:


netsh http show sslcert >c:\foo

Posted in ssl

a Guardian Tell–woops it’s all a China-Policy cover




It's occurred to me before now that the goal of the UK/US – assuming the whole NSA snooping “revelation” is an intended policy – is to persuade the world that it's normal to do what the US does. And, it should be the new “public trust” norm.

Which explains why the Utah data center is SO LARGE – being intended to be a “service” center to be outsourced to countries like Jordan, so they too can demonstrate they meet the new international norms – yet to be agreed – of a normal public trust founded on “state surveillance”. As that all sounds rather Orwellian, we will need to find new words (that are NOT double-speak, somehow).

To re-educate the American public – brought up on the very expensive-to-mount propaganda years of Animal Farm and 1984 – that the new norm is actually (i) what the US has been doing for years (in secret) and (ii) in their own interests SO LONG AS EVERYONE DOES IT, one uses the press to deliver the change in the one and only property of the policy that is in need of change: the need for secrecy. Having outlived its usefulness, secrecy is now getting in the way of snooping on the scale America wants – so that it is NOT AT A DISADVANTAGE when it spies on folks (but others don't).

Might surprise folks to learn that I don’t object to such a world – so long as spying is the new norm, that everyone is subject to and therefore DOES NOT CREATE innate tensions, I don’t actually have a fundamental objection. It’s actually very American (and that was meant, for once, as a compliment).

Now, speaking up for Britain, I'd like to say that Britain's record on outsourcing spying is BETTER THAN the US version. Come to Britain for spying knowhow; we have centuries of practice. Catholics, Protestants, Irish, Ugandans, Tasmanians. You name it, we have spied on them. And lots of them didn't make it to 70 – which is an additional service you can buy.

Posted in rant, spying

1940s speech ‘decoding’

Cryptome distributes an interesting link:




Let’s have a look at the language game in use back in the US in the 1944 era. In contrast to Friedman’s military cryptanalytical texts (focused on generic analysis of Vigenère squares), we see what we would have expected all along: lots of signals terminology!


One sees that the very notion of privacy – in this field – is tied up with speech (and the telephone as the means of conveying speech). It’s distinguished from the world of codes and ciphers – involved in the data world of telegraphy. We have to remember that A2D is only just about to be invented (in a viable form).


The most interesting feature for me was the characterization of TDM as a scrambling system (vs a multiplexing notion). We have to recall just how primitive “privacy” was in those days – thought of as denying “real-time” exploitation of the eavesdropping.

Finally, we note just how critical was the (voice) spectrogram – using classical phase-space ideas to build (literally) a picture of the noise, whose evolution towards the solution shows the analyst how to solve the puzzle of the scrambling elements used by the target system (today).

One sees that the constraints are, unlike in telegraphy decoding, based on the nature of the voice box and how each human language forms sounds. In combination with the spectrogram, these constraints enable the (military) analyst to cut down the search space of puzzling elements.

Posted in early computing

spying dam is breaking under the water pressure



Talking Points:

“no agency” vs ‘no contractor’. The contractor-vs-agency handoff is a common trick. An even dirtier semantic trick is the use of vendors (who are not even personal-service “contractors”).

“direct access” does not deny ‘indirect access’. A CALEA tap is defined as indirect. CEOs with IM and voice services have to know they have CALEA taps in their core networks.

“servers” are not ‘databases’ (and SANs used within database server architectures are ‘communications’ techniques – not servers). Again, you see American CEOs conspiring to subvert the English language (to hide thereby, and not get hauled off to prison for breaking the policy – not to campaign against the program – as agreed with the FBI). Remember, their personal billion is at stake – which is a pretty solid bribe and personal threat.

“facilitate” rather than ‘provide’. Facilitating means you host the FBI/NSA/GFE equipment and provide for connectivity. Again you see the attempt to mount a trust gap via language games that Wittgenstein would be proud of – which goes to the heart of the political lie. Remember, in the US folks typically study philosophy and politics together (which I always thought more natural and effective than the Euro-style academic study of philosophy). You have an entire cadre of trained liars, that is.

“fast data transfer” – begs the question “how fast?”. It's only $20 million in total (from the NSA budget). So are Facebook investors paying for the bandwidth consumed by the FBI (and don't they have a right to know)? Follow the money. Is it proper to make the prisoner pay for his own (improper) incarceration? Or is an (amorphous) human right violated, thereby?

Only FISA requests are the topic of controversy, apparently. Well, that covers only 3 billion web users (i.e. everyone who is not American or in America). And, as the comedy joke goes, tossing a coin is what defines whether there is “confidence” you are not American (i.e. foreign), since a statistics audit would not be able to tell the difference between your actual process and coin tossing. Of course, cryptanalysis is all about finding loopholes in that stats argument!

So thanks, Facebook. You just threw 3 billion people under the bus when “facilitating” your fast access. Are none of them exceptional (not being Americans)? Not even one? Not even the brilliant concert pianist (who is not American)?

Can the world population trust Americans (when it comes to spying)? The answer seems perfectly clear, from the in your face subversion of the very English language. The affront is clear – subvert language; no trust.

Oh how the elite and the exceptional have fallen. All along, the exceptional grades in the exam were due to cheating! Or, as my mock American English dictionary entry defines cheating: winning by any means, including lying to yourself about your innate exceptionalism.

Can we assume that the assumed broad FISA ‘scope’ means that the fast access that Facebook “facilitates” (and possibly pays for) includes basically the majority of information about internet users, transferred to a government-controlled “store” – upon which the queries are made? How is the copy being made, AND WHO IS PAYING FOR THE BANDWIDTH? If the phone metadata scope is universal, why would one not believe that the email and web tracking (thanks Tim! for nothing!) has similar scope?

So, is Google participating in the “fast access” program to (indirectly) “make a copy” of the ‘communications’ data (that NSA then trawls when having “direct access” to what are then its own IBM and DELL server arrays)? Are folks going to hide behind technical inaccuracies concerning the verb “collect” (missing above)?

Now we know why I had in-person surveillance 3 weeks ago. Perhaps I should post the video (of the incompetent attempt). Talk about Keystone Cops – from the Reno (outsourced) brigade of military intel! But then, I was just a cover-our-asses (so we can show we spied, dutifully) target. Still, it probably shows for how long folks were spying on the journalists (before they broke the story FORMALLY, by asking for permission to publish!). It would be interesting to know if it was the Hong Kongers or the Chinese who were asked to spy on the “hotel” interviews – another story to follow up. Bet it was the latter.

Let’s look around today for my tail. I'm somewhere where it's MUCH harder to hide oneself. Let's see if it's different to yesterday.

Posted in spying

windows 8 wordpress and the JSON API of a jetpack-powered blogsite; cloud based discovery

One of the things that openid connect does well (and who knows what the elements discussed in secret do, do not, will not, or will do) is articulate the role of identifiers and discovery. We see the ideas already manifesting in the wordpress world as one “connects” up the windows 8 app to a self-hosted blog site that itself hosts a server-side app called jetpack. Jetpack itself connects up to the wordpress cloud, being one of several sites now associated with an account managed by wordpress.com.

Let's get practical!

Logon to wordpress.com (getting an account if required). Have wordpress.com host a blog site for you. Go to windowsazure.com and login (getting an account too, if required). Have azurewebsites host an azure website for you (provisioned with wordpress). Configure both so they operate classically, and then connect them up – which is the novel bit. Do that as I outlined in an earlier memo: by installing, configuring and “connecting” the jetpack plugin for the azurewebsite-hosted site to wordpress.com.

The net effect is this:- on presenting your wordpress.com identity you will now see listings of 2 sites. One happens to be hosted by wordpress.com; the other is hosted in azurewebsites. You may be presenting your identity while, for example, using the share button in windows 8 that augments the internet browser (but not the desktop browser). Share with the wordpress app that you presumably installed into your windows 8 experience and see that a wordpress.com login prompt is displayed. Use your wordpress.com account, see the selector for the multiple “sites” now “discovered” to be associated with that account, and pick one.

If you pick the azurewebsite-hosted wordpress, you will eventually see a login prompt from that site and you may logon using the identity credentials you were assigned THERE. Note how there is NO SSO (at this point in the pre-“openid connect” rollout).

So note that one “cloud” identity was used for discovery – of your “social network of sites” – and another was used for local logon to those self-hosted wordpress sites that are autonomous of the cloud identity system. But note that the autonomy is strained. For the azurewebsite site can only leverage cloud services (tied to the cloud identity) when, in some sense, the cloud “governs” the locally-hosted instance.

Now we have used this all successfully with the jetpack-powered install we made yesterday and with the windows 8 wordpress app. The latter apparently uses the JSON API of our azurewebsite-hosted wordpress site – and indeed we see stats from that site appear in the cloud portal (tied to our cloud identity). Similarly, we see stats from the wordpress.com-hosted site (yorkporc) to which we can actually logon directly, with the cloud identity, too (unlike our self-hosted wordpress instance).

What I had expected to find, since this is a usage of the JSON API, is that the OAUTH authorization and sts endpoints of wordpress.com would be controlling access to the API endpoints in my locally-hosted wordpress instance. And I'd expect to see some “applications” listed now, for at least the windows 8 app – which is obviously a semi-trusted “interactive app”, vs a website app that plugs a frame into the likes of a facebook page.



Posted in oauth, openid connect


This link is shared by the windows 8 wordpress app, having “connected” to wordpress.com using the oauth2 authorization mechanisms.

Link | Posted on by

NSA's worst enemy: Booz Allen and Dell!

I think NSA may be critically doomed unless they change their hiring practices, given Booz Allen and Dell may supply (evidently cleared) in-house contracting folks whose countermeasures against eavesdropping (in Chinese Hong Kong!) are pillows under the door!




Now, in the defense of NSA, from what (little) I know, I'd like to say that it's JUST NOT FAIR to label all folks like this (employee OR contractor)… and this SHOULD not be taken as the average capability in the art of spying, or defending against spying! Rather, one should assume a dumb journalist – who cannot tell one end of a spying channel from the other – probably given the way the English editors treat the whole topic of institutional spying (given their knighthood is at stake).

Generically, when are typical Americans going to learn – generally – that a billion Chinese folks are intelligent and resourceful and rather focused on fulfilling their own and their still-tied-to-the-plough compatriots’ full human potential (chinese style) – and are not folks wearing 17th-century pigtails, worrying about talking cats and hiding from feudal monkeys?

Posted in spying

wordpress, jetpack.me and blogging adoption of oauth2

It’s been my assumption for a while that the main mission of the cloud – in public policy – was to change how the internet is governed. First, the policy would be to nationalize it; and second, to ensure that a few TTPs could seize control (when needed). Only by this means can the internet – as the core of a service economy – allow post-industrial powers like the UK (and US) to grow (and thus keep up). Otherwise, as manufacturing jobs transfer from high-cost economies to low-cost economies (with 300 million recent middle-class joiners to embrace), one loses.

And it's in that context that we write this blog – mostly to play with all the tools: since tool making is actually a service!

Now, blogging is still evolving (as shown by the evolution of the toolware). And slowly, I'm easing off the PC-era blogging tools and onto the newer paradigms – built into windows 8 and related devices (my windows phone, and android kindle). And so we get to play with “jetpack” – to understand where wordpress.com (that evil cloud player, now) is going with its OAUTH2 strategy for a post-imperial US role in world domination.

Hmm. Starting to sound like Cryptome, now. Best get off that shared joint, before I catch the American disease of gushing continual depressed rantings as a (middle-aged – actually old-aged) “geezer”. Oh well. Let's not rant but merely bloviate a bit (and forgive me if it has insufficient bloviation; I'm still learning how to be a depressive operating at the end of the American empire – one of the shortest-lived, history will note. But fortunately, having caught the tail end of the end of the British empire, I have *something* to guide me…).

So on installing the wordpress app onto my windows 8 PC, I used what I learned from my android tablet: the share button, available after a suitable gesture. Up pops the wordpress “addon” to the browser. And, I'm invited to “connect” to the wordpress.com services – i.e. my (hosted) blogsite. I get to use either my (old and boring) wordpress.com account at the (hosted) login screen or use “my jetpack” – whatever that is.

Well, it turns out that jetpack is a hookup between my self-hosted blog (that I will now create) and wordpress.com – the evil “cloud vendor”, recall – who will support my “self-hosting”. Presumably, it will exercise “continuing governance”, so that “self-hosting” won't delegate too much power (to me, the evil, non-exceptional foreigner playing the role of the visigoth, at the fall of Rome). Of course, since all this “connection” information is “metadata” – one assumes wordpress.com will give it straight to NSA!

But let's get back to computers (since they are just as much fun as annoying folk by proxy).


I have a paid-up MSDN subscription in Azure bound to my home_pw@msn.com login account. But I also have a 3-month (free) subscription bound to home_pw@outlook.com (to which I made home_pw@msn.com a co-administrator). Thus, from home_pw@msn.com's console, I can provision an azure website with wordpress installed into “my” free subscription (doesn't this sound like a loophole!?). This is the first step before I get to use “jetpack” – I assume.

So, from the scaffolding, we initialize wordpress itself:


Once I get a site (and logon classically to the self-hosted site) I access the site admin area, which allows me to install jetpack:



Note that what I'm going to get (in addition to the things highlighted above) is the ability in windows 8 to use the “share button” between IE and my blogging tool (an app loaded onto the PC) that in turn will use my hosted site's (jetpack.azurewebsites.com) membership system and associated login processes. If I understand the billing properly, this will connect up to wordpress.com's oauth for-cloud-subscriptions endpoints – which will enable my hosted site to expose APIs (guarded by tokens minted by wordpress.com).

That is… this is wordpress.com's equivalent of the Google IDPs and Azure ACS namespaces! Anyway, enough theory. Let's see what works when we “activate” the plugin:


Not surprisingly, we have to bind our site immediately to the equivalent of a new Azure ACS namespace… and hopefully learn what wordpress.com calls such OAUTH2 tenants.

So let’s get past the first hurdle (ssl certs):


Now, the whole reason I selected azure websites was so that the SSL provisioning problem got solved as in:


So, what’s up (and what induces wordpress.com to object to the chain above)? Well, my conclusion is that the host environment of azure websites cannot process the wordpress.com trust chain (and never will, being a shared hosting platform). So we turn off https:




getting us to a “tenant authorization” screen:




Using some SSL magic I’d get shot if I told you about, we get past a blockage and onto:


…after a classical sso handshake and request for, delivery, and then use of an authorization_code:



We see that the JSON API component (of our self-hosted site) is now active, due to jetpack, and that wordpress.com offers a client-id/password registration site (for third party apps wanting to connect up to my new hosted site, courtesy of the offloading between jetpack and the wordpress.com cloud):



Let’s play more, once I’ve stopped laughing about an NSA contractor’s “pillow countermeasure” – the all-American grandmothers’ defense of the right to think only about apple pie.

OK. Back to something important (like making an authenticated internet work right):

Let’s create an OAUTH client record (intending to authorize some webapp client (foo) trying to access the new wordpress server at jetpack.azurewebsites.net):-



id: 3955

secret: k60ViyceYPCNynguCfHLIifaQUM7Sywil6fCconKBgfBUlw3fqUTDjFmOZnwvMn5

giving an authorize URI of:



using this authorize URI at a browser gives


after selecting “jetpack” (from my choice of this “connected” site and my hosted wordpress site “yorkporc”):


we get a challenge (to logon at the server’s IDP – which is the server itself):-


delivering a code (ready for a token client to use at the wordpress cloud’s token issuing endpoint).
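For completeness, the token client’s half of that dance is tiny: POST the authorization_code back to the wordpress.com token issuing endpoint, along with the client id/secret registered above. A minimal Python sketch (the endpoint URL is the documented wordpress.com one; the helper name and placeholder values are mine):

```python
from urllib.parse import urlencode

# wordpress.com's OAUTH2 token issuing endpoint
TOKEN_ENDPOINT = "https://public-api.wordpress.com/oauth2/token"

def build_token_request(client_id, client_secret, code, redirect_uri):
    """Build the POST that swaps an authorization_code for an access token.

    client_id/client_secret are the values from the wordpress.com client
    registration page (e.g. id 3955 above); 'code' is what arrived on the
    redirect URI. Returns (url, urlencoded_body) ready to be POSTed.
    """
    body = urlencode({
        "client_id": client_id,
        "client_secret": client_secret,
        "code": code,
        "redirect_uri": redirect_uri,
        "grant_type": "authorization_code",
    })
    return TOKEN_ENDPOINT, body

url, body = build_token_request("3955", "CLIENT_SECRET_HERE", "CODE_HERE",
                                "https://foo.example/callback")
```

A real client would then POST `body` to `url` and parse the JSON reply for `access_token`.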


OK. I think we see how things work. What we need to do is try again later, with a decent client (perhaps built into a second install of wordpress). It can consume this API.

Anyways, where we started was merely using a share button… that invokes the wordpress IDP (which we just proved works).


giving us a sharing link:


Posted in oauth

Protected: No evidence of NSA’s ‘direct access’ to tech companies | Politics and Law – CNET News

This content is password protected. To view it please enter your password below:

Posted in spying

Americana and spying on citizens by proxy audit

Americana is what it is – and it’s right that one finds americana where else but in the american (internet) heartland. These days, it’s vital that the terms spying, surveillance, interception and stored data access be applied to the Americana seen and now felt by Americans themselves – and not just those on the end of the big stick wielded by some US-backed dictator.

Note the terms of art: interception, distinct from surveillance, distinct from stored data.

The PoP (point of presence) in facebook and google (and any other firm with a broadband isp license) is there to facilitate surveillance (not interception). Surveillance is scanning (for defined terms to be found in SANs and LANs). Interception is blanket capture, usually by tapping a cable or a relay switch operated by a carrier (vs a “service provider”).

Finally, note that surveillance does not need access to the SANs and LANs depositing your email bytes onto a remote disk drive, say. It may more simply survey the (LAN carrying the) audit logs generated by the process, which trace the steps. For within is the juicy stuff (being even more juicy than the message and its “public” metadata). One scans or samples the audit data… dummy! A cleared employee (authorized to lie to the ceo) operates the “capability”.

Posted in coding theory

What spying questions are the uk press failing to pose, properly?

Spying is compartmentalized by its nature – so spies can spy on each other (including your own side: ex-presidents (to be) with an apparently fatuous comprehension of what little the us constitution is actually worth, lawyerly presidents who didn’t have “sex” with “that woman”, camelot presidents who sleep with movie stars, and presidents who attempt to maintain the compartments by rationalizing about what the meaning of ‘”is” is’). The ridiculousness is a function of the need to lie.

Assume the silicon valley ceo hires folks with clearances in order that s/he can disclaim knowledge of “secret ongoings” in the firm – since the ceo has no clearance. This is a typical corporate trick (one the typical board should be looking for). Why not ask about it? Or don’t, if you want to keep your board perks (the bribe).

Assume that it’s a secret that what is “against policy” for the us to do in the us when targeting us persons is not illegal for the uk to do (and share back with the us). The secret is the bilateral agreement to mutually skirt public policy, with the uk receiving reciprocal benefits, including getting a good story line that appears to do other than what is the intent of the (secret protocols of the) policy. Why not ask about the history of such practices (and the intent to mislead parliament)? Forget your knighthood if you do.

Follow the money on the funding of the “semantic web”, and note where most of it came from. Why? Who in the standards setting knew the ulterior motives, or should have known? How deeply involved are engineering groups in “facilitating” spying?

Ask why intel is on the list of firms to receive special attention from the fbi. Goes to the heart of the pc (technical) trust model! Crypto is only as good as the machine doing the computation, remember.

Posted in coding theory

Interesting sync between openid connect voting, Ping federate testing and embargo release


Just reading between the lines.

Nsa indoctrination takes years, and the openid connect folk seem to have fallen for it hook, line, and sinker.

Openid connect was born in secret, and retains secret components. Vote against until folk apologise for that legacy.

Posted in coding theory

Protected: Silicon valley complicity in lying (about nsa/fbi systemic spying)

This content is password protected. To view it please enter your password below:

Posted in spying

delegating handler for JWT-guarded webAPI – for dotNet 4.0

Sample code for dotNet 4.5 shows how to insert a delegating handler for webAPI controllers into webAPI projects. Furthermore, the handler then validates the token, inducing appropriate http errors when things go wrong at the guard.

Because our project uses lots of WIF, we cannot move to dotNet 4.5 – and thus cannot use the sample code (or the JWT securitytokenhandler). So we have to approximate the right way of doing things (for now). Our delegating handler thus far looks like this – and at least it compiles, and guards just the webAPI calls (in our web forms project):



Obviously, one now adds a roll-your-own token validation routine….

But at least we have a basic guard, without having to insert an IIS handler.

Note, I’ve absolutely no idea what I’m really doing with parallel tasks, the APIs, etc. So there may be lots of better ways of doing what is shown.
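The roll-your-own validation step mentioned above amounts to splitting the JWT on dots, base64url-decoding the claims, and checking expiry – without verifying the signature. A purely illustrative Python sketch of that logic (the production version would of course live in the C# delegating handler, and signature verification would still be needed):

```python
import base64
import json
import time

def parse_jwt_unverified(token):
    """Decode a JWT's header and claims WITHOUT checking the signature.

    Mirrors the dotNet 4.0 stopgap in the post: no JWT securitytokenhandler,
    so the guard just decodes and checks expiry. NOT safe as-is in production.
    """
    def b64url_decode(seg):
        seg += "=" * (-len(seg) % 4)  # restore the stripped '=' padding
        return base64.urlsafe_b64decode(seg.encode())

    header_b64, claims_b64, _sig = token.split(".")
    return json.loads(b64url_decode(header_b64)), json.loads(b64url_decode(claims_b64))

def is_expired(claims, now=None):
    """True when the token's 'exp' (epoch seconds) has passed."""
    now = time.time() if now is None else now
    return "exp" in claims and now >= claims["exp"]

def make_demo_jwt(claims):
    """Build an unsigned demo token, just to exercise the parser."""
    enc = lambda d: base64.urlsafe_b64encode(
        json.dumps(d).encode()).rstrip(b"=").decode()
    return enc({"alg": "none"}) + "." + enc(claims) + ".sig"
```

Usage: `parse_jwt_unverified(make_demo_jwt({"sub": "alice", "exp": 1000}))` yields the claims dict back, ready for the expiry check.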

Posted in oauth

cannot stop laughing… on UK spying


What a stupid question.

But the rules have evolved since I got to “see inside”. There is no longer the pretense (where the UK contractor surveilled the American public’s “metadata” for the US govt; and vice versa).

Ah, but the real fool is Tim Berners-Lee – whose “open” web concept was designed to be hijacked. But he got a knighthood for facilitating “openness”. Very British. Hope he enjoys clanking around in his armor. Now he can join his father in the crypto ‘establishment’ (and spy on all of us, not just nazis). One better than dad!

Posted in spying

Webforms WebAPI + todoItem Phone App + ACS AS for XAML client

We are now taking 3 sample apps from Microsoft and using them to tell our local OAUTH AS story.

That is, FIRST, we have managed to repurpose the XAML client that previously showcased popping up a web browser in order to invoke websso and thus get hold of the redirect URI issued by the ACS namespace serving the windows data marketplace; upon which event the code shows a background task converting the code to a first access token (using the token issuing endpoint of ACS).

Second, we altered the XAML code’s settings so that the client and its web browser login process now point away from the marketplace endpoints and at our own OAUTH AS endpoints. (Recall that our own AS wraps an ACS namespace whose management service and token issuing endpoint do most of the real work.)

Third, we added the todoList webAPI controller to our dotNet 4.0 windows forms project – necessarily dotNet 4.0, since we make heavy use of the WIF extension libraries. The point is that this API just happens to be the one that a certain demo Windows Phone app wants to talk to, having talked to an OAUTH AS. While the original sample wanted to showcase the app handling the JWT from the OAUTH STS using the JWT securitytokenhandler (available for dotNet 4.5 only), in our case we will be content to simply manually parse the JWT – without verifying its RSA signatures etc.


So there seem two steps:

1. finish up the XAML client so it uses the todoApp API (rather than a much more complex Zillow API). We want to see it pass across the JSON token that our own OAUTH AS/STS has minted, and for the app to handle the token. It should also use the userid from the token to help select a subset of the todo items.

2. replace the role of the XAML client with the windows phone app, doing essentially the same thing. This project showcases the windows phone app using an embedded browser.
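The userid-scoping in step 1 amounts to something like the following (Python sketch; the field names `owner` and `text` are my assumptions for illustration, not taken from the Microsoft sample):

```python
def todos_for_user(todo_items, userid):
    """Select the subset of todo items belonging to the token's userid.

    Sketches step 1 above: once the API trusts the AS-minted token, the
    controller uses the token's userid claim to scope the query, so each
    caller only ever sees their own items.
    """
    return [item for item in todo_items if item.get("owner") == userid]

# hypothetical store, standing in for the todoList table
items = [
    {"owner": "alice", "text": "buy milk"},
    {"owner": "bob",   "text": "fix build"},
    {"owner": "alice", "text": "ship app"},
]
```

So `todos_for_user(items, "alice")` would hand back only alice’s two items.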


Now we have seen some design points occur already. First, the browser popup in the XAML case shows a kind of address bar (so there is webby feedback about trust). We need to see how the model is continued in the phone app case. Second, the embedded app may not send a useragent header (as in the XAML case), triggering bugs… in code that failed to allow for it being a null string.

Posted in oauth

Using Dallas OAUTH CLIENT to talk to ACS OAUTH2 endpoints

We have moved away from using the dotnetopenauth framework for oauth2 providers (which has to date allowed us to emulate a Ping Federate AS, using the Microsoft ACS OAUTH endpoint and management services). Our plugin is still compatible with the dotnetopenauth ASP.NET pattern, but we now use objects from the “dallas” sample code to talk to the token issuing endpoint of ACS.

The reason is simple – we needed our own token issuing endpoints (delegating to the token issuing endpoint of ACS, remember) to fully handle expiry dates of refresh tokens – or access tokens, when no refresh token is signaled.

Having done that, two benefits accrue: (i) the ping federate demo client correctly shows expiry fields, and (ii) the dallas API sample works to get authorization headers. The latter is the demo of a thick XAML client firing up a web-browser window so as to complete websso and interact with (our) OAUTH2 AS and token issuing endpoints, then accessing a data API in the Azure data marketplace (showing zillow mortgage data, as it happens).


using a better client to talk to ACS token issuing endpoint

The ping federate page showing the authorization response now handles delivering  the (renewal) expiry issued by ACS:


Validation by our own endpoint then shows the remaining time to expiry of the access token, which can be longer than the renewal token’s expiry time (when the clock skew between ACS and the token issuing computer works against us!)
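The expiry arithmetic being described is small but easy to get backwards. A Python sketch (function name and the skew parameter are illustrative – the point is that any skew allowance gets subtracted from, never added to, the usable lifetime):

```python
def remaining_lifetime(expires_at, now, skew_allowance=0):
    """Seconds of usable validity left on a token.

    'expires_at' and 'now' are epoch seconds. 'skew_allowance' models the
    clock offset between ACS and our own token issuing endpoint (the case
    the post notes can work against us): we discount it so we never treat
    a token as live that the other side already considers expired.
    """
    remaining = expires_at - now - skew_allowance
    return max(0, remaining)  # never report negative lifetime
```

With a 150-second skew allowance, a token that nominally has 100 seconds left is reported as already unusable – conservative, which is the behavior we want at a guard.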


Posted in oauth, pingfederate

Protected: Verizon and windows phone spying

This content is password protected. To view it please enter your password below:

Posted in spying

[0804.0011] Classical and Quantum Tensor Product Expanders


Building on Seberry’s paper on bent functions and the math background of unitary processes, permutation matrices, spectral gaps, trace eigenbases etc, we get to glimpse material and application topics withheld from any such paper: cryptographic methods for wheel wiring and mixer analysis. The overlap between maximal quantum mixing states, supercomputers with hamming-weight sieving capabilities augmented by 1980-era probabilistic search capabilities (using actual nuclear chain reactions to cover the combinatorics), and more modern attempts to now define a reliable multi-qbit processor array for indeterministic processes with 1950-era wheel wiring, is fascinating.

One sees why even the topic of rotor wiring and analysis (of what constitutes a good set) is even today so highly controlled, with the usual academic deceptions and participation.

Posted in coding theory

254B, Notes 2: Cayley graphs and Kazhdan’s property (T) | What’s new


Cryptographic design method for sigaba index rotors?

Well within Rowlett’s capabilities.

Posted in coding theory

NSTIC the reality–wot a mess (typically); the free lunch to be avoided

So once I had 19 login accounts, to each of which was bound some authorization information. It was a pain, and sso was born. Now I have one login account and life is good.

Or it would be, if I didn’t still have 19 authorization licenses bound to the one login.

The point is that the mess didn’t go down with NSTIC-inspired nationally-coordinated login – it went UP. Folks just moved the complexity point along the chain a little. Now there are just lots MORE places where control is exercised – any one of which can go wrong.

So, now I have an MSDN account, bound to home_pw@msn.com – which issues various codes that can pay for authorization licenses – such as being a developer/publisher of store apps and/or phone apps (which are different!). I have azure subscriptions tied to home_pw@msn.com and others tied to home_pw@outlook.com. Logging on to the azure portal can also happen via the integration with Office365 (as an IDP), which can delegate to a corporate ADFS – so logon at that ADFS issues the office365 account’s name. When I “deploy” to the Azure hosting world, however, anyone who owns a .publisher credential can do so. When I want to RDP to the resulting site, I have to use a username/password (that is not tied in any way to microsoft accounts, Office 365, ADFS, or anything else).

To make visual studio 2012 local debugging work properly, I have to run it as Administrator – except when running store apps (which don’t allow the process to run in a group tied to the local Administrator). The windows accounts are actually Microsoft Accounts (formerly known as LiveIDs), in which Administrator@domainAdmin@localAdmin –> home_pw@msn.com and store@domainAdmin –> home_pw@outlook.com. While store apps running on the windows server box MAY leverage the SSO that comes from logging onto the PC with a microsoft account, this is not true for webapps – which constantly prompt me for live credentials, such as when doing all the above.


Of course, I have absolutely no interest in the above – except that I have to do it to get anything done. What I’m really interested in is having my own SSO network projected through ACS which is NOT “governed” by any of the above. It’s offensive, it’s in the way, and it’s very American (a giant mess). All that happened was that there is now even greater granularity of governance control – for absolutely no benefit to me. Whatever TRUST the above may be intending to confer upon me in the eyes of consumers is wasted (since their trust in me has nothing to do with Microsoft’s proxy-NSTIC governance story). The number of third party apps that need to connect up to the data service that mine connects up to is zero. I am not facebook, and not attempting to be a facebook. I just want to present an app on a device… so it goes klunk – just like on the PC.

The problem with the OpenID Connect style “device” story is that it seems to be MOSTLY about governance – attempting to have a few TTPs (acting as proxies for regulators) run stores – that project national social policies onto the “trusted space” of devices – vs those evil PCs that are an open platform.

Until the devices behave more openly, I think we avoid their fancier “integration” features. Something about free lunches not being free…

Posted in openid connect

Azure AD OAUTH (vs ACS OAUTH) from Windows Phone



The picture above shows the AD client recently added to the ASP.NET “component” in the dotnetopenauth open source project. One sees, by default, that this provider is set up to protect the graph API endpoint (the “resource”).

We see that the endpoint supports the websso-friendly authorization code grant type:


We see that the token endpoint has a few quirks:


and we see that the provider client retains the tenant ID and the user OID in instance variables.

One notes how the cert handling fails to conform to PKIX…and one notes the imposition of SHA256.


We see no handling of the refresh token (assuming there is one). We also see no evidence of “id tokens” either.

Now, what we do NOT see is how the provider and SID variables would be handled on the redirect from the authorization endpoint (since proper use of the OAUTH2 state mechanism is not in evidence). Evidently, the service principal’s redirect is not configured to go back to the ASP.NET template’s ExternalLogin page but to the AbsoluteUri of the page hosting the requestAuthorization handler (normally login.aspx). This will be interesting to track, to see how it was done PROPERLY. Login must maintain state somehow (and presumably it will be via viewstate).
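For reference, proper use of the OAUTH2 state mechanism – the thing not in evidence here – is small. A Python sketch (names are mine; the stored value would live in session state or viewstate):

```python
import hmac
import secrets

def mint_state():
    """Mint an unguessable state value before redirecting the browser to
    the authorization endpoint. The server stashes it (session/viewstate)
    and also appends it to the authorize URI as &state=..."""
    return secrets.token_urlsafe(32)

def state_matches(stored, returned):
    """On the redirect back, require an exact match between the stashed
    value and the state echoed by the authorization server; a mismatch
    means the authorization_code may have been injected by an attacker."""
    # constant-time compare, so the check itself leaks nothing
    return stored is not None and hmac.compare_digest(stored, returned)
```

The whole point is that the code arriving on the redirect is only honored when the state round-trips intact – which is what binds it to the browser session that started the flow.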

We also see the article at http://www.cloudidentity.com/blog/2013/04/29/fun-with-windows-azure-ad-calling-rest-services-from-a-windows-phone-8-app/ that consumes the (Azure AD OAUTH2) endpoints directly, too. The latter looks like the next thing to emulate, if only because I can now so trivially swap out the native use of Azure AD endpoints and substitute my own (which happen to delegate to Azure ACS’s OAUTH2 endpoint “support”).

Posted in Azure AD, oauth

Using PingFederate-style OAUTH2 in Azure Mobile Services apps

The job code article series nicely lays out how to think about nodeJS scripts in Azure Mobile Services and their interaction with the OAUTH2 protocol. In one case, shown below, we see that perhaps our own JWT could be used, as already minted by our Ping-Federate OAUTH AS emulator site (itself hosted as an Azure cloud service supported by the Azure ACS OAUTH2 and management service endpoints).



So the steps to get here would seem to be:-

  • Take our working smarteragent iOS application and build our own equivalent working on an iphone – using the Azure Mobile Services starter project for iOS apps. Of course, this means we need a web site delivering json services to the app, too – a role that can be played by the mobile service site.
  • We need to change the app’s client-side code so that the embedded browser goes to our ping federate-AS-emulator /authorization/as.oauth2 endpoint looking for the redirect bearing the authorization_code. Perhaps this redirect should target a page on our authorization server – one that induces a suitable (javascript) push notification to the various apps built with the microsoft mobile client library. We can study how ACS’s HRD support and then Azure mobile scripts do this, for example. The net result is that the app’s native code gets control back from the embedded browser.
  • Do we really want the app then to use the PingFederate-AS-emulator site to convert the code into a JWT …
  • …which is then used as given in the article above?
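The last step above is just standard OAUTH2 bearer usage: once the app holds the JWT minted by our PingFederate-AS emulator, every JSON call to the mobile service carries it in the authorization header. A Python sketch (the URL is a placeholder; only the header shape matters):

```python
from urllib.request import Request

def bearer_request(url, jwt):
    """Build the JSON API request the app sends once it holds the JWT.

    The Authorization header shape is the standard OAUTH2 bearer scheme;
    the mobile-service URL and token value here are illustrative.
    """
    return Request(url, headers={
        "Authorization": "Bearer " + jwt,   # the AS-minted JWT travels here
        "Accept": "application/json",
    })

req = bearer_request("https://example.azure-mobile.net/tables/todoitem",
                     "JWT_GOES_HERE")
```

The nodeJS script on the mobile-service side then pulls the token back out of that header and validates it, per the article.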
Posted in oauth, pingfederate

254B, Notes 1: Basic theory of expander graphs | What’s new


This time around I’m understanding most of it.

Different in tone from the stanford prof. Here, it’s a mix of textbook and commentary (on writing textbooks). It’s quite an effective style – focusing on drilling the connectives of the brain to align with those found in the material.

Posted in coding theory

Cypherpunk sexistential tizzy

Cypherpunks have got themselves into a uk-communist-party-style tizzy – destined to create the usual “final schism” that befalls all false religions founded on inconsistent (and self-serving) doctrine.

For at least the last 5 years (if not since 9/11), cypherpunks have lived with self-denial: the desire to perpetuate the myth that anarchy won, in full knowledge that it didn’t, actually.

Today we know that ssl, ipsec and skype were/are compromised not by the (figuratively evil) us federal govt – that easy-to-target amorphous evil (whose evilness magnitude is a function of size) – but by corporate vendors, out to make a buck… while employing and thus feeding one or two folk and their dependents. Of course the very nature of governance means that corporate profit is a function of general compliance with public or secret policy – stated explicitly, “or otherwise”. Such is the nature of democratic politics (hardly ideal, even in its academic form).

Is cypherpunks dead… like the corpse of the (academic relic of the) uk communist party I happened to encounter in my first job (at qmc-cs)?

It certainly seems so – from the dynamics of the end game.

Good riddance (though I’ll miss it – not that I read the endless drivel beyond, as with Marx’s first pamphlet, the founding prophecy).

Posted in coding theory

a little civil servant joke



originally cited by Cryptome.

Posted in early computing