Acoustic cryptanalysis – valicert root keys

http://tau.ac.il/~tromer/acoustic/ A little-known fact is that I was MC at the generation of three CA/SSL root keys still widely used. The CPUs were spied on by a truck/trailer placed in the parking lot of the adjacent building. My assumption at the time was that what folks wanted was the primality-testing evidence. The ceremony was observed by Price Waterhouse. I always assumed that one of the individuals, not acting for the firm but for others to whom the firm owed a 'higher duty,' participated in facilitating recording activities, at the required fidelity. It was interesting to watch the charade (on video playback). Not citable. No permission granted for linking or downloading content. This is marked 'peter fouo'. You may not precis, paraphrase or quote even 1 word.

Posted in coding theory

build on the working resource/client grants and test an HTTP client talking to an OData server – to build out an OData proxy client

Now that we have two clients working in command-line windows (i.e. CGIs), each able to talk to the authorization server to swap their vendor's UA-usernametoken or the record owner's User-usernametoken for an access token, we have to focus next on the authorization-grant case. But BEFORE WE DO, let's imagine we were just going to use the client-credentials grant – i.e. be a (server-side) application (such as a CGI) that has UA-rights to talk at a privileged "application" level of permission to the OData API.
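Before moving on, it helps to see what the client-credentials swap looks like on the wire. A hedged Python sketch of the form body such a CGI would POST to the authorization server's token endpoint (the client id and secret here are hypothetical, not from our projects):

```python
import urllib.parse

def client_credentials_body(client_id: str, client_secret: str) -> str:
    # Form-encoded body a confidential client POSTs to the token endpoint
    # to swap its OWN credentials (no resource owner involved) for an
    # access token - the client-credentials grant of OAuth 2.0.
    return urllib.parse.urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
    })

body = client_credentials_body("shipper-cgi", "s3cret")
```

The response, if the swap succeeds, is a JSON object carrying the access token the CGI then presents to the OData API.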

Let's update our shipper client, a WPF app, to prove that it can apply the resource-grant swapping of usernametokens for access tokens. Rather than hand-code the test call, let's use the correct mechanism – an OData proxy. So, as usual, let's follow the instructions and figure out why we don't understand what is said.

First, update the tooling in Visual Studio – since, for unknown reasons, the service reference tool apparently cannot read the metadata from OData and produce the proxy.

image

http://blogs.msdn.com/b/odatateam/archive/2014/03/11/how-to-use-odata-client-code-generator-to-generate-client-side-proxy-class.aspx

image

To ensure it all works with the beta release of code we are using for OData hosted in webAPI 2.2, let's follow how to modify the above:

image

http://blogs.msdn.com/b/webdev/archive/2014/03/13/getting-started-with-asp-net-web-api-2-2-for-odata-v4-0.aspx

image

image

With the OData server known to be running (without debug) and working as the webAPI v2.2-based resource server (on http://localhost:38385/odata, per the metadata reference above), we run the custom tool, as commanded:

image

to get our client proxy:

image

How we test and secure it is a DIFFERENT question! BUT IT'S not too hard (having removed the security guard at the controller, for now).

image
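With the guard removed, a test is just a plain OData v4 GET. A Python sketch of the request shape the generated proxy is expected to issue (the service root is the one quoted above; the entity-set name "Products" is hypothetical):

```python
from urllib.parse import urljoin

# Service root per the metadata reference in this post.
SERVICE_ROOT = "http://localhost:38385/odata/"

def odata_get(entity_set: str, query: str = ""):
    # Compose the entity-set URL plus the headers an OData v4 JSON client
    # (or the generated proxy) sends on a read.
    url = urljoin(SERVICE_ROOT, entity_set) + (("?" + query) if query else "")
    headers = {
        "Accept": "application/json;odata.metadata=minimal",
        "OData-Version": "4.0",
    }
    return url, headers

url, headers = odata_get("Products", "$top=2")
```

Securing the call then amounts to adding the bearer access token as an Authorization header on the same request.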

Posted in oauth, odata

RESO, odata and oauth

The RESO group of the National Association of Realtors (a trade group, which has to be careful not to fall foul of antitrust rules re software standards) is looking at OData and OAuth – mostly for server-to-server transfer of bulk data.

We built a model – an authorization server using machine tokens and machine-token protection (from sample code); its resource server augmented with an OData v4 controller (from a blog post); all mixed in with a thick client talking to a normal API, using JWTs for bearer tokens obtained from ACS, which talked to AAD, which talked to the realty IDP with Agent data.

https://onedrive.live.com/redir?resid=5061D4609325B60!7038&authkey=!ABH5yl6V4EYqqz4&ithint=file%2c.zip

Posted in RETS

scaffolding for an authorization server

image

http://www.asp.net/aspnet/overview/owin-and-katana/owin-oauth-20-authorization-server

Posted in oauth, OpenID, owin

security policy label (negotiation)

It's interesting to see how, from the days when we realized that "CORS" in the world of Cisco phone/PBX protocol negotiations was nothing more than a security-label negotiation, the "CORS" now seen in the web world IS THE SAME THING.

It's just being presented, first off, as IBAC (even though its true intent is RBAC, where R is as in Rule-BAC, where rule means security policy label). It's just fascinating to see the way that the US/UK are militarizing the web WITH THE EXPLICIT UNDERSTANDING of the likes of Microsoft etc. Presumably they want the military dollars.

But at least now I have a client project – a web project fashioning a JavaScript UA running in a browser – talking to a server project – a webAPI project with a webAPI controller augmented with CORS sub-negotiation at layer 6.

It's just interesting to see the concept of CORS, in which one requires the browser (or its equivalent in the app world) to reject the inbound response. Also fun to see how trivially easy it would be to impose a security-policy-label ordering world:

image

So, just to test this out, we amend the project a little – to make it a bit more like a military security-labeling world:

 

using System;
using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Web.Http;

namespace WebApplication3.Controllers
{
    using System.Net.Http;
    using System.Threading.Tasks;
    using System.Web.Cors;
    using System.Web.Http;
    using System.Web.Http.Cors;

    namespace WebService.Controllers
    {
        [MyCorsPolicy(PetersMilitaryCorsPolicy.MyLevels.Level1, PetersMilitaryCorsPolicy.MyCaveats.Caveat1)]
        public class TestController : ApiController
        {
            public HttpResponseMessage Get()
            {
                return new HttpResponseMessage() { Content = new StringContent("GET: Test message") };
            }

            public HttpResponseMessage Post()
            {
                return new HttpResponseMessage() { Content = new StringContent("POST: Test message") };
            }

            public HttpResponseMessage Put()
            {
                return new HttpResponseMessage() { Content = new StringContent("PUT: Test message") };
            }
        }
    }

    [AttributeUsage(AttributeTargets.Method | AttributeTargets.Class, AllowMultiple = false)]
    public class MyCorsPolicyAttribute : Attribute, ICorsPolicyProvider
    {
        private PetersMilitaryCorsPolicy _policy;

        public MyCorsPolicyAttribute(PetersMilitaryCorsPolicy.MyLevels level, PetersMilitaryCorsPolicy.MyCaveats caveat)
        {
            // Create a CORS policy carrying a security label.
            _policy = new PetersMilitaryCorsPolicy(level, caveat)
            {
                AllowAnyMethod = true,
                AllowAnyHeader = true,
            };

            // Add allowed origins.
            _policy.Origins.Add("http://webapplication51006.azurewebsites.net");
        }

        public Task<CorsPolicy> GetCorsPolicyAsync(HttpRequestMessage request, System.Threading.CancellationToken token)
        {
            return Task.FromResult(_policy as CorsPolicy);
        }
    }

    public class PetersMilitaryCorsPolicy : CorsPolicy
    {
        public enum MyLevels
        {
            Level1 = 1,
            Level2 = 2
        }

        public enum MyCaveats
        {
            Caveat1 = 100, // "room only"
            Caveat2 = 200  // "house only"
        }

        public class MyLabel
        {
            public MyLevels level { get; set; }
            public MyCaveats caveat { get; set; }
        }

        public MyLabel PolicyLabel { get; set; }

        public PetersMilitaryCorsPolicy(MyLevels level, MyCaveats caveat)
        {
            PolicyLabel = new MyLabel();
            PolicyLabel.caveat = caveat;
            PolicyLabel.level = level;
        }
    }
}
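The "security-policy-label ordering" idea boils down to a dominance check between labels. A minimal Python sketch of that rule, modeled on the level-plus-caveat MyLabel shape in the C# above (the dominance rule itself and all names here are my illustration, not part of the post's code):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Label:
    level: int                        # MyLevels analogue: 1 = Level1, 2 = Level2
    caveats: frozenset = frozenset()  # MyCaveats analogue, as a set of names

def dominates(subject: Label, obj: Label) -> bool:
    # Classic "read" rule in a labeled world: the subject's level must be at
    # least the object's, and the subject must hold every caveat the object
    # carries. A labeled CORS policy would allow the exchange only when the
    # requester's label dominates the resource's.
    return subject.level >= obj.level and subject.caveats >= obj.caveats

room = Label(2, frozenset({"Caveat1"}))
clerk = Label(1, frozenset({"Caveat1"}))
officer = Label(2, frozenset({"Caveat1", "Caveat2"}))
```

The ordering is a partial order, which is exactly why two labels can be incomparable – neither party may read the other's data.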

Posted in owin

Visual Studio 2013 webAPI (MVC-based) project – with individual authentication

What this means is that the project is really two: an authorization server (doing something like the oauth protocol, in pattern terms) and a webAPI. The components doing each function are all jumbled together in the source tree.

One sees an AccountController implementing the guts of the OAuth endpoints, and the new ASP.NET libraries helping the authorize endpoint challenge users (and validate credentials using a db of challenge evidence). One also sees in the webAPI the mechanism used to introduce a bearer-token interceptor into the pipeline, which can do the OAuth dance with what we just described .. when no bearer token is recovered from the HTTP request.

Fortunately, an article helps out! See http://www.asp.net/web-api/overview/security/individual-accounts-in-web-api

In short, we will use Fiddler to pretend that we have built the "missing" link – a third project that one can think of as the missing Windows Store app, the Windows Phone app, or an Azure mobile site's app, etc.

http://localhost:2288/api/Account/Register

Content-Type: application/json

{
  "UserName": "Alice",
  "Password": "password123",
  "ConfirmPassword": "password123"
}

Our having updated libraries to use pre-release builds probably explains why we get the following error:

image

So let's add an Email Address to the registration information. Let's also change the password, since later one would otherwise get "Passwords must have at least one non letter or digit character. Passwords must have at least one uppercase ('A'-'Z')."

http://localhost:2288/api/Account/Register

Content-Type: application/json

{
  "UserName": "Alice",
  "Password": "Password!23",
  "ConfirmPassword": "Password!23",
  "Email": "home_pw@msn.com"
}

image

 

Getting back to the article: we now do NOT use the authorize phase of the OAuth handshake, moving straight on to what is normally phase #2: token swapping. So we provide one token (the uid/pwd) and seek to get back another (a JWT), asking the engine to be a swapper – since that's what the grant type of 'password' does. Complicated OAuth formalisms about that also exist, if you want to confuse yourself.
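Concretely, that phase-#2 swap is a single form-encoded POST to the token endpoint. A minimal Python sketch of the body (credentials reuse the registration example; this is my illustration, not the article's code):

```python
import urllib.parse

def password_grant_body(username: str, password: str) -> str:
    # The "swap" request of the resource-owner-password grant: one token in
    # (the uid/pwd), one token out (a bearer access token).
    return urllib.parse.urlencode({
        "grant_type": "password",
        "username": username,
        "password": password,
    })

body = password_grant_body("Alice", "Password!23")
```

The engine, acting as swapper, replies with a JSON body containing the access token (and, in the JWT case, the token is the JWT itself).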

But the token method doesn't work (and no AccountController method exists for it); whereas methods do exist for "external" sources of the login event. It's almost as if the internal authorization functionality was removed.

So let's abandon THAT article and go with the flow, now figuring on using external authentication. Presumably, all the fabric left in AccountController and the ASP.NET identity model exists to do account linking (of that yet-to-be-obtained token from an external provider). Perhaps, post account linking, our own code will mint a "local" bearer token. Let's see!

image

http://www.asp.net/web-api/overview/security/external-authentication-services

First, we just do the obvious:

image

But it doesn't do anything. So what gives?

Somehow we cannot talk to Token and we cannot redirect to our external source. Let's assume that all such logic is supposed to be in the webAPI consumer.

Posted in owin

interactive authentication gives simple demo passive STS

Nice code showing a simple passive STS, known as InteractiveAuthentication.

http://code.msdn.microsoft.com/AAL-Native-App-to-REST-de57f2cc

image

Posted in wcf

Poor Microsoft OWIN ws-federation security model

image

If you do give a metadata address, it doesn't bother confirming whether the certificate used to sign the metadata is valid (ever).

Seems poorly thought out – since lots of folks are NOT going to know to write their own validator.

Posted in owin

certs and FOAF+SSL yet?

At least it self-signs a client cert (and is a self-contained demo).

 

image

 

https://aspnet.codeplex.com/SourceControl/latest#Samples/WebApi/ClientCertificateSample/CustomCertificateMessageHandler.cs

Posted in FOAF+SSL

Custom JWT Security Token Handler (for ws*) based on OWIN metadata based validation

image

Image | Posted on by

WebApplication7.zip is the project, ready to compile etc.

https://onedrive.live.com/redir?resid=5061D4609325B60!6977&authkey=!AG5Up32lehBgsNI&ithint=file%2c.zip

 

image

http://blogs.msdn.com/b/webdev/archive/2014/03/13/getting-started-with-asp-net-web-api-2-2-for-odata-v4-0.aspx

now to add the oauth authentication element, based on understanding how the single-page application does it.

Once we have the webAPI/odata using an oauth endpoint provided by the very same app, we can have it consume one of microsoft’s “external providers” (google etc). Then we can substitute in our own, either AAD-based or ACS-based.

Link | Posted on by

owin webapi authorization server bearer provider

 

image

From http://stackoverflow.com/questions/19938947/web-api-2-owin-bearer-token-authentication-accesstokenformat-null

These seem worth saving only in that they are a great example of how to roll your own stuff.

[RoutePrefix("api")]
public class AccountController : ApiController
{        
    public AccountController() {}

    // POST api/login
    [HttpPost]
    [Route("login")]
    public HttpResponseMessage Login(int id, string pwd)
    {
        if (id > 0) // testing - not authenticating right now
        {
            var identity = new ClaimsIdentity(Startup.OAuthBearerOptions.AuthenticationType);
            identity.AddClaim(new Claim(ClaimTypes.Name, id.ToString()));
            AuthenticationTicket ticket = new AuthenticationTicket(identity, new AuthenticationProperties());
            var currentUtc = new SystemClock().UtcNow;
            ticket.Properties.IssuedUtc = currentUtc;
            ticket.Properties.ExpiresUtc = currentUtc.Add(TimeSpan.FromMinutes(30));
            var token = Startup.OAuthBearerOptions.AccessTokenFormat.Protect(ticket);
            return new HttpResponseMessage(HttpStatusCode.OK)
            {
                Content = new ObjectContent<object>(new
                {
                    UserName = id.ToString(),
                    AccessToken = token
                }, Configuration.Formatters.JsonFormatter)
            };
        }

        return new HttpResponseMessage(HttpStatusCode.BadRequest);
    }

    // POST api/token
    [Route("token")]
    [HttpPost]
    public HttpResponseMessage Token(int id, string pwd)
    {
        // Never reaches here. Do I need this method?
        return new HttpResponseMessage(HttpStatusCode.OK);
    }
}

Startup class:

public class Startup
{
    private static readonly ILog _log = log4net.LogManager.GetLogger(System.Reflection.MethodBase.GetCurrentMethod().DeclaringType);
    public static OAuthBearerAuthenticationOptions OAuthBearerOptions { get; private set; }
    public static OAuthAuthorizationServerOptions OAuthOptions { get; private set; }
    public static Func<MyUserManager> UserManagerFactory { get; set; }
    public static string PublicClientId { get; private set; }

    static Startup()
    {
        PublicClientId = "MyWeb";

        UserManagerFactory = () => new MyUserManager(new UserStore<MyIdentityUser>());

        OAuthBearerOptions = new OAuthBearerAuthenticationOptions();

        OAuthOptions = new OAuthAuthorizationServerOptions
        {
            TokenEndpointPath = new PathString("/api/token"),
            Provider = new MyWebOAuthProvider(PublicClientId, UserManagerFactory),
            AuthorizeEndpointPath = new PathString("/api/login"),
            AccessTokenExpireTimeSpan = TimeSpan.FromDays(14),
            AllowInsecureHttp = true
        };
    }

    public void Configuration(IAppBuilder app)
    {         
        // Enable the application to use bearer tokens to authenticate users
        app.UseOAuthBearerTokens(OAuthOptions);
        app.UseCookieAuthentication(new CookieAuthenticationOptions
        {
            AuthenticationType = DefaultAuthenticationTypes.ApplicationCookie,
            LoginPath = new PathString("/api/login")
        });

        // Configure Web API to use only bearer token authentication.
        var config = GlobalConfiguration.Configuration;            
        config.SuppressDefaultHostAuthentication();
        config.Filters.Add(new HostAuthenticationFilter(OAuthBearerOptions.AuthenticationType));

        app.UseWebApi(config);                          
    }
}

MyIdentityUser just adds an extra property:

public class MyIdentityUser : IdentityUser
{
    public int SecurityLevel { get; set; }
}

MyUserManager calls my custom user authentication method to an internal server:

public class MyUserManager : UserManager<MyIdentityUser>
{
    public MyUserManager(IUserStore<MyIdentityUser> store) : base(store) { }

    public MyIdentityUser ValidateUser(int id, string pwd)
    {
        LoginIdentityUser user = null;

        if (MyApplication.ValidateUser(id, pwd))
        {
            // user = ??? - not yet implemented
        }

        return user;
    }
}   

MyWebOAuthProvider (I took this from the SPA template. Only GrantResourceOwnerCredentials has been changed):

public class MyWebOAuthProvider : OAuthAuthorizationServerProvider
{
    private readonly string _publicClientId;
    private readonly Func<MyUserManager> _userManagerFactory;

    public MyWebOAuthProvider(string publicClientId, Func<MyUserManager> userManagerFactory)
    {
        if (publicClientId == null)
        {
            throw new ArgumentNullException("publicClientId");
        }

        if (userManagerFactory == null)
        {
            throw new ArgumentNullException("userManagerFactory");
        }

        _publicClientId = publicClientId;
        _userManagerFactory = userManagerFactory;
    }

    public override async Task GrantResourceOwnerCredentials(OAuthGrantResourceOwnerCredentialsContext context)
    {
        using (MyUserManager userManager = _userManagerFactory())
        {
            MyIdentityUser user = null;
            var ctx = context as MyWebOAuthGrantResourceOwnerCredentialsContext;

            if (ctx != null)
            {
                user = userManager.ValidateUser(ctx.Id, ctx.Pwd);
            }                

            if (user == null)
            {
                context.SetError("invalid_grant", "The user name or password is incorrect.");
                return;
            }

            ClaimsIdentity oAuthIdentity = await userManager.CreateIdentityAsync(user,
                context.Options.AuthenticationType);
            ClaimsIdentity cookiesIdentity = await userManager.CreateIdentityAsync(user,
                CookieAuthenticationDefaults.AuthenticationType);
            AuthenticationProperties properties = CreateProperties(user.UserName);
            AuthenticationTicket ticket = new AuthenticationTicket(oAuthIdentity, properties);
            context.Validated(ticket);
            context.Request.Context.Authentication.SignIn(cookiesIdentity);
        }
    }

    public override Task TokenEndpoint(OAuthTokenEndpointContext context)
    {
        ...  // unchanged from SPA template
    }

    public override Task ValidateClientAuthentication(OAuthValidateClientAuthenticationContext context)
    {
        ...  // unchanged from SPA template
    }

    public override Task ValidateClientRedirectUri(OAuthValidateClientRedirectUriContext context)
    {
        ...  // unchanged from SPA template
    }

    public static AuthenticationProperties CreateProperties(string userName)
    {
        ...  // unchanged from SPA template
    }
}

MyWebOAuthGrantResourceOwnerCredentialsContext:

public class MyWebOAuthGrantResourceOwnerCredentialsContext : OAuthGrantResourceOwnerCredentialsContext
{
    public MyWebOAuthGrantResourceOwnerCredentialsContext (IOwinContext context, OAuthAuthorizationServerOptions options, string clientId, string userName, string password, IList<string> scope)
        : base(context, options, clientId, userName, password, scope)
    { }

    public int Id { get; set; }        
    public string Pwd { get; set; }
}
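To round out the roll-your-own example, the consumer of these endpoints would POST id/pwd to /api/login and then replay the returned AccessToken as a bearer header on later calls. A minimal Python sketch of that client-side step (the JSON shape mirrors the Login action above; the token value is a placeholder):

```python
import json

def bearer_header(login_response_json: str) -> dict:
    # Pull AccessToken out of the JSON the /api/login action returns and
    # shape it into the Authorization header subsequent API calls present.
    token = json.loads(login_response_json)["AccessToken"]
    return {"Authorization": "Bearer " + token}

# What the Login action above serializes: UserName plus the protected ticket.
reply = '{"UserName": "42", "AccessToken": "opaque-protected-ticket"}'
headers = bearer_header(reply)
```

The HostAuthenticationFilter registered in Startup is what unprotects that ticket on the way back in.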
Posted in katana, OpenID, SSO

Turing, actions from rep of permutation swap, Chi/Characteristic, expander graph family

 

image

http://yorkporc.wordpress.com/2014/04/05/quantum-random-walks-along-turing-era-world-lines-with-swaps-built-into-1950s-rotor-machine/

We can see now that Turing was modeling this, using a rotor wiring plan etc. The alternating group is a manifestation of the constant group.

So we can see that GCHQ has had qudit computational models for 50+ years (even for discrete computing and their algorithm designs, for cryptanalytical searching).

We can just look at the above picture as 3 oscillators, with the line of wave1 swapping between +1/-1, wave2 between -2/+2, etc. Each is a branching space, and the 3 oscillators combine to create a tree of simple combinations that expands leftwards, from origin point 1.

Now one can imagine that the field in which this is all calculated is some RS ring, so that it can be implemented using a finite state machine. Now we can easily see how, by the 1950s, the pure rotor cage was augmented by a simple set of shift registers that allow the production of binary linear convolutional codes. They can be producing systematic codebooks (when one wants signatures) or non-systematic ones (for an OAEP-style padding regime). Taking a couple of shift-register boards from the Colossus design and applying them to a rotor cage sounds eminently doable in 1946, for things like aircraft I&A. This feels like the kind of thing Feistel was doing.

Posted in crypto, enigma

enigma-era representation theory, constant spaces and orthogonal duals

enigma-era math is famous for the work of the Polish cryptographers, who exploited the notion of conjugacy classes (which relate to the character functions of permutations).

In Turing's only known math-centric treatment of enigma-era theories about crypto, his On Permutations manuscript, we see him also focusing on representation theory of the symmetric group.

When Turing modeled U+x.U-y, we now see that his whole argument is about swapping the order of rotors (by the action of the representation of the input group member – a cycle from the upright, recall).

What is more, we finally have a solid mental model for why, in sub-representation theory, one wishes to constrain the +x and -y (etc.) so they are 0 – before and after the permuting effect of the input cycle. We see how his H is what these days might be called Vper, in contrast to V and Vconst.

The constraining of the representation of the input group "cycle" to Vper impacts the degree of the resulting Cayley graph – reminding us somewhat of those NKD codes of the form (n, n-1, n-1). But, since we are in a q-ary world already with enigma wheels (which makes q=26), we should not be thinking in binary (i.e. q=2) but in terms of Reed-Solomon (where each of the sequence of rotors, above, is the nth degree of the Reed-Solomon world polynomial).

With this perspective we see how rotors, with suitable feedback, can be seen in that Reed-Solomon coding sense. They are calculating in a particular field (or ring), in which "rotors" can divide "rotors" to form residue classes – in analogy to polynomial fields.

We have to recall that the unitary condition means that distances measured after the transform are the same as the distance between the terms used in the domain applied to the transform. And we do recall that Turing’s main argument hinged on such numbers – formed by considering the pairwise distances between the terms in the image space, figured after computing the rod specified by the power term (conjugating the original upright’s alphabet, upon rotating it relative to the diagonal by +x, or –y, etc).

Now, it was always a little confusing what Turing meant by his use of the alpha term, and the associated 1-dimensional argument. But that becomes a little clearer if we think in terms of "diagonalizing a matrix, having computed eigenvalues and their multiplicities". If we now take Vconst as a (rotationally "scaled", by alpha degrees) Sn-invariant subspace (just as Vper is), then we have another clear analogy to coding theory: Vper is the space orthogonal to Vconst.

Take a circle, and take the line from origin to the rightmost point. Now rotate that line a bit anticlockwise, thereby "scaling it" by "alpha" degrees. Next, create cones of future and historical tree development at right angles to that (rotated, translated, scaled) line… and this is our coding space – with bi-infinite sequences of historical and future branching events and a p-adic distance measure between points on the cones' leading edges.

I've actually purchased an e-copy of a good book that discusses the background theory to qudits in the context of expander graphs – all expressed in terms of group theory and permutation representations of symmetric groups (without mentioning enigma and rotors, of course). The text hardly says "eigenvalue" (or makes one do boring linear-algebra fiddling) unless the eigenvalue means something of relevance, either to diagonalizing matrices to form quantum gates or to noting how relations between eigenvalues in a spectral basis relate to the conditions of expander graph families.


Posted in early computing, enigma

signing powershell

We signed our PowerShell script, having identified the index of the signing cert in the list enumerated by PowerShell.

image

We simply followed the instructions here to make the signing credentials – in the Visual Studio command tool.

image

image

This gives us:

 

param([string[]]$args)

$msolcred = Get-Credential -UserName admin@netmagic.onmicrosoft.com `
    -Message "password for netmagic is Rapattoni1!"
Connect-MsolService -Credential $msolcred -ErrorAction Stop

$setfed = Get-MsolDomainFederationSettings -DomainName "rapmlsqa.com"
$alog = $setfed.ActiveLogOnUri

$strarr = $alog.Split('/')
$len = $strarr.Length

#colc/8/BARS
#appid/linkid/mlsid

$mlsid = $strarr[$len - 1]
$linkid = $strarr[$len - 2]
$appid = $strarr[$len - 3]

Get-MsolDomainFederationSettings -DomainName "rapmlsqa.com" -Verbose

echo $mlsid
echo $linkid
echo $appid

foreach ($name in $args) {

    $upn = $name + "@rapmlsqa.com"
    $displayname = $name + "_at_Rapattoni"

    $someString = $name + $appID + $mlsID
    $bytes = [System.Text.Encoding]::Default.GetBytes($somestring)
    $md5 = new-object -TypeName System.Security.Cryptography.MD5CryptoServiceProvider
    $hashbytes = $md5.ComputeHash($bytes)
    $result = [GUID]($hashbytes)
    $resultstring = $result.ToString();
    $resultstringbytes = [System.Text.Encoding]::Default.GetBytes($resultstring)

    $base64 = [System.Convert]::ToBase64String($resultstringbytes)

    $msoluser = Get-MsolUser -UserPrincipalName $upn

    Get-MsolUser -UserPrincipalName $upn -Verbose

    echo "new-msolUser –userprincipalname $upn -immutableID $base64 -lastname At_Rapattoni –firstname $name –Displayname $displayname -BlockCredential `$false"
}

# SIG # Begin signature block
# MIIFuQYJKoZIhvcNAQcCoIIFqjCCBaYCAQExCzAJBgUrDgMCGgUAMGkGCisGAQQB
# gjcCAQSgWzBZMDQGCisGAQQBgjcCAR4wJgIDAQAABBAfzDtgWUsITrck0sYpfvNR
# AgEAAgEAAgEAAgEAAgEAMCEwCQYFKw4DAhoFAAQUNcK5l7KQymdpsvSK5ykEqcG6
# GH+gggNCMIIDPjCCAiqgAwIBAgIQ+o34q/izeYlBb9C8iTKNxDAJBgUrDgMCHQUA
# MCwxKjAoBgNVBAMTIVBvd2VyU2hlbGwgTG9jYWwgQ2VydGlmaWNhdGUgUm9vdDAe
# Fw0xNDA0MTkyMzA4MzFaFw0zOTEyMzEyMzU5NTlaMBoxGDAWBgNVBAMTD1Bvd2Vy
# U2hlbGwgVXNlcjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAJZVtSj5
# K+VOD8tpAc5SCGM5EsYdFXhQgrjyLY3QdfEoQ1N9vuUPc2xIpzQpbY5cRNMSw1sz
# qFQqmbxLYcDKa3Q0cTcKrj66EuV7U2uJaYQP7N6WLkuZMM8NTdu7PlkbDt/2bVN1
# GN2MSg556lfwaZQjPfAY8PzVXWzDEGqeoCXOZ7awITGLMg4vD0nIT8PH1kGwq9gB
# lmo/++S+UmJ9DofAp9lFRxC388fv2dmzWHAWT3rRO3DYUhrzQEVkv9JN8ik2RQuY
# sQUW9J57NMDsLOYudsB9AqMd4i6KdYgQQtG0Cc8ndTKScp1yc3Lk+evPATxA0cHk
# bv877CXUtcHH1isCAwEAAaN2MHQwEwYDVR0lBAwwCgYIKwYBBQUHAwMwXQYDVR0B
# BFYwVIAQ4p70s+RMprL+FlVWTNz0vaEuMCwxKjAoBgNVBAMTIVBvd2VyU2hlbGwg
# TG9jYWwgQ2VydGlmaWNhdGUgUm9vdIIQJMvpPbZDPbNM2uQJ9+W12jAJBgUrDgMC
# HQUAA4IBAQAktpH6aQEu5QKKmxlWHfpFKOkCT2awy7RLIdbNp6YtMICzn9bumU6a
# jpNaMi/Apo/IAfrIpqsPv6yoJjmmtKaUgja6mR13xyesudXbWLvVrAXE9NcbDzmO
# RqF6Yk2C1Lf/A7yOBq8GJTaGwgaf9LI8Z7wGfqLpGJ92j2S6uIAk3Ww8HSB4TyvF
# ZrHAx1YIcFnKUk6ItY0ElOVnUzPc6OaFmO+jHtAXqNWLwpyPBF5d4ZoxPBSEqBWp
# ARzxqpXtXvdnB0zqMMdmaW6raC3BVzslOHpC8GdUkNV7vakbzf60BNy5cWwuc7FB
# ckrf1oZWvkXgF24T1S1yOhjq+jp+5OaiMYIB4TCCAd0CAQEwQDAsMSowKAYDVQQD
# EyFQb3dlclNoZWxsIExvY2FsIENlcnRpZmljYXRlIFJvb3QCEPqN+Kv4s3mJQW/Q
# vIkyjcQwCQYFKw4DAhoFAKB4MBgGCisGAQQBgjcCAQwxCjAIoAKAAKECgAAwGQYJ
# KoZIhvcNAQkDMQwGCisGAQQBgjcCAQQwHAYKKwYBBAGCNwIBCzEOMAwGCisGAQQB
# gjcCARUwIwYJKoZIhvcNAQkEMRYEFHnUbgCdPI1Dj56IhWGiLZ0UdC3YMA0GCSqG
# SIb3DQEBAQUABIIBAA6FcaBW+FqtOEpeM1I5KrIo77+1cbQeStkO1Iij0AERKXMv
# ay+Vedh8Zp9skwXrZ8CDSVbr+B1oOchahR665SRX7E194vIMpzeECqo1Wr0gtDWy
# ikMXslVx+dx1u2SFbgEvcwCS9ZBic6lYQutWTua06AKl6ebnFm3NEbIgOjxwot0X
# eaQq+Z6YvClMjBnctP8YqidRtrN8NL1c1pMErdG9FmtEIiXWsDmWcixV3qUHxHqz
# PM89IcpdqWtGoZl94VV4jOAtpaq5kUhYpPdcdmizvmTixo1lBvfVBggbr+2sAbpQ
# XinVRtVs/N75bvMloy3XnsCJzg7OaAivjwMKu90=
# SIG # End signature block
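The interesting part of the script is the ImmutableID derivation. A Python re-derivation of that computation, assuming .NET's Encoding.Default is ASCII-compatible for these inputs and that the [GUID] cast matches the Guid(byte[]) constructor's mixed-endian reading of the digest:

```python
import base64
import hashlib
import uuid

def immutable_id(name: str, appid: str, mlsid: str) -> str:
    # MD5 the concatenated name + appid + mlsid, read the 16 digest bytes
    # the way .NET's Guid(byte[]) does (first three fields little-endian,
    # which uuid's bytes_le layout mirrors), then base64 the GUID's string
    # form - matching the script's $base64 value step for step.
    digest = hashlib.md5((name + appid + mlsid).encode("ascii")).digest()
    guid = uuid.UUID(bytes_le=digest)
    return base64.b64encode(str(guid).encode("ascii")).decode("ascii")

iid = immutable_id("alice", "colc", "BARS")
```

Note the result is deterministic per (name, appid, mlsid), which is the whole point of an ImmutableID: the same user always maps to the same anchor.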

Posted in dunno

Depressing

Eyebrows going grey!

Posted in coding theory

cryptome gushing.

 

image

 

Cryptome has fallen in love with the semblance of nonconformity – some great engineers and writers simply failing to do what they know how to do: spill the beans.

Ross Anderson knows how to communicate, and teach, even. He is just a natural. He can get an advanced class up to examination grade and get them all to pass – probably with flying colors. Unfortunately, the material they learn is second-class stuff. It SKIRTS how systems are compromised. Until Anderson teaches HOW the typical (and technically and intellectually unimpressive) GCHQ-er undermines ALL the techniques he teaches, he is failing to be a true non-conformist. Such failure puts him in the usual bag: the academic who whines about a system he doesn't really want to study, since what he wants to study is not HOW to defeat the compromise arts but the theoretical basis of systems that *would* be naturally resistant (in an ideal world). He wants to eliminate the problem at source, rather than apply a fix. Meantime, none of his current knowhow from the "theoretical art" of fixing stuff actually works with practical systems. Having indoctrinated the brightest and the best, the net result is JUST to facilitate the deployment of MORE systems that are just as penetrated by 50-year-old techniques – which he knows full well, but doesn't discuss (strange, that, no?) – as before – all of which provides a nice fertile study ground.

I'VE ALREADY said that Peter Gutmann's draft book is a masterly work of ranting about internet PKI, with some good anecdotes learned in the course of us all finding out what an Internet-scale PKI even is.

Concerning Kahn, I have not read the work in question – but have read other essays. At least it all focuses on the main topic: cryptographic penetration.

Snowden is also somewhat disingenuous – in propagandizing that crypto works – IF implemented properly. If you are on a commodity PC, crypto CANNOT be implemented "properly". If you are using the crypto on a bank-issued smartcard (or one from the same vendor that manufactures for those banks), you're hosed from the outset. He knows that; but somehow, just like Anderson, he just cannot get around to delivering the main point to the public. It's too much fun whining – and being a(nother) part of the billion-dollar boondoggle. Or you're a CIA implant whose mission is to disclose that new reality (of American surveillance of its own folk) and to argue the case "via spy hysteria" – that it's still the best of a bad lot (compared to the Chinese and Russian equivalents).

Posted in rant

azure virtual machine with MSDN image – iis express

image

image

 

Out of the box, an SSL-using web application running on the default IIS Express host – installed as a result of running the Visual Studio 2013 Update 1-enabled image (and updating to the RC of Update 2, even) – does not work.

To fix:

Run mmc and load the certificates plugin. Delete the localhost cert.

 

image

If you are interested, note, on using the manage private keys operation available on the right-click menu for the cert, that one gets:

image

Now repair IIS Express 8 (regenerating the cert, under YOUR context):

image

We loaded the new localhost cert into the trusted root cert store (and Trusted People) and created a new project with new IIS Express bindings:

image
Posted in ssl

nsa spying on ssh admins?

 

If I were NSA, I would want the keys to the kingdom. Which means you spy on the folks who run the kingdom. Who are they? The system administrators. If THEY use crypto, it's their crypto that you first want broken (so you can steal their privileges and exploit the systems that they run, for spying then on others who happen to use those American-grade systems).

image

So, as I use openssl to make a crypto key for system admin purposes, am I “unwittingly” assisting NSA/GCHQ in their mission – assuming that this “commodity-grade” security software only ASSISTS them, covertly?

Posted in coding theory

from authorize to error–resource not authorized for the account

image

 

Location: ms-app://s-1-15-2-368411030-1769956373-826299661-4019874439-3442704750-222489034-3800660787/?error=invalid_resource&error_description=AADSTS50001%3a+Resource+%27https%3a%2f%2freso987.azure-mobile.net%2flogin%2faad%27+is+not+registered+for+the+account.%0d%0aTrace+ID%3a+c2a75697-0623-4ca1-9e04-8e1954d8b571%0d%0aCorrelation+ID%3a+a08d4056-0081-4fa7-b3aa-ecdf5341c3ba%0d%0aTimestamp%3a+2014-04-15+01%3a29%3a52Z
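The URL-encoded error_description is easier to read once decoded. A minimal decoding sketch (the location string below is a shortened excerpt of the header above, with the trace ID, correlation ID and timestamp omitted):

```python
from urllib.parse import urlparse, parse_qs

# Shortened excerpt of the Location header shown above (trace/correlation
# IDs and timestamp omitted for brevity).
location = ("ms-app://s-1-15-2-368411030-1769956373-826299661-4019874439-"
            "3442704750-222489034-3800660787/"
            "?error=invalid_resource"
            "&error_description=AADSTS50001%3a+Resource+%27https%3a%2f%2f"
            "reso987.azure-mobile.net%2flogin%2faad%27+is+not+registered+"
            "for+the+account.")

qs = parse_qs(urlparse(location).query)
print(qs["error"][0])              # invalid_resource
print(qs["error_description"][0])  # AADSTS50001: Resource '...' is not registered for the account.
```

parse_qs handles both the %xx escapes and the +-for-space convention, so the AADSTS50001 message comes out in plain text.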

The solution is to NOT do what the instructions call for! (This is the second time being too truthful has hurt me, thinking like an NSA contractor TRYING to show one is trustworthy.)

image

 

Sample instructions suggest, for the webAPI part of the equation, that you use a different sign-in and appid (than the name of the redirect URI at the mobile site, for AAD). DON’T DO IT. MAKE ALL THREE THE SAME.

Ignore step 8 of http://azure.microsoft.com/en-us/documentation/articles/mobile-services-how-to-register-active-directory-authentication/ when having “sso”. Just use AAD.

Posted in coding theory

Adding Azure AD to a Mobile site with .net backend, and store-integrated windows 8.1 app.

Having built ourselves a known-working .net backend for an Azure Mobile site (and having updated quite a few packages in order to make it all compile with security attributes on guarded interface methods) we managed to follow along and also do the AAD-part of the process, as discussed at http://azure.microsoft.com/en-us/documentation/articles/mobile-services-windows-store-dotnet-adal-sso-authentication/

 image

 

string authority = "https://login.windows.net/rapmlsqa.com";
string resourceURI = "https://reso987.azure-mobile.net/login/aad";
string clientID = "563cb644-1918-4c35-8a9f-800f4e31c5f9";

image

The figures above show, on the right, the mobile site configuration (the OAuth client) being accessed logically by the configured desktop application on the left, which has delegated rights to the ToListApp webAPI hosted in the mobile site using the .NET backend. This of course exposes an odata interface to some domain entities, using an http binding.

Running all this, we get

image

image

image

and …

 

image

 

Looking at this on the wire, we see a websso token,

image

 

image

But then a failure to issue an oauth-mediated access token:

image

Posted in Azure AD

graph explorer has many more application permissions than Azure AAD console

 

image

Posted in Azure AD

Kernel mental model

An MIT video, at minute 22 or so, asks for an elementary proof. Can’t say I can prove anything, but I do have a very simple mental model for the point raised: that if a code is the image of a function, the dual of the code is the kernel.

I think of that geometrically. From the classical 3-bit Hamming cube (the domain), constrain the space to that of the classical tetrahedron (the code mapping’s “image”). Now project all the nodes of the cube from the zero vector and get the Fano plane geometry (the code’s “dual”) whose inner circle line represents the kernel (when the dual is a Lie group).

The above is a tone poem, not math! The inner space between the triangle’s lines that is NOT occupied by the circle represents uncertainty. One should think now of joining up each of the lines, making nominally three new circles by rounding out the straight lines. Then join the 2 outlier points of each circle to the point adjacent to it. I think of it as a table top (the original circle) and three loopy legs! (and there goes the math reputation I don’t have in the first place…)

Finally, I think about the relationship between the lines of the triangle (now the loopy legs) versus the line of the circle (the table top). As minimum distance in code space comes into the right ratio with the covering error graph (the tetrahedron) and its nearest-neighbor average distances, so the triangle compresses to the area of the circle (the legs retract up towards the table top), representing attaining the Shannon limit. We have maximized the coding gain by making the circle line indistinguishable from the triangle sides (the loopy legs have FULLY retracted, like an airplane’s wheels).
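For the claim itself (code = image, dual = kernel) there is at least a concrete finite check one can run, even without a proof. A minimal GF(2) sketch of my own (using the standard [7,4] Hamming generator matrix, not anything from the video): the code is the image of the encoding map, and its dual comes out as exactly the kernel of the map defined by the generator rows.

```python
from itertools import product

# A GF(2) sanity check of "code = image, dual = kernel", using the
# standard [7,4] Hamming generator matrix in systematic form [I | P].
G = [
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]
n = 7

def dot(u, v):
    """Inner product over GF(2)."""
    return sum(a * b for a, b in zip(u, v)) % 2

# The code C is the *image* of the encoding map m -> mG.
code = {tuple(sum(m[i] * G[i][j] for i in range(4)) % 2 for j in range(n))
        for m in product([0, 1], repeat=4)}

# The dual code: all words orthogonal to every codeword...
dual = {x for x in product([0, 1], repeat=n)
        if all(dot(x, c) == 0 for c in code)}

# ...which is exactly the *kernel* of the map x -> (x.g1, x.g2, x.g3, x.g4)
# defined by the generator rows (checking a spanning set suffices).
kernel = {x for x in product([0, 1], repeat=n)
          if all(dot(x, g) == 0 for g in G)}

print(len(code), len(dual), dual == kernel)  # 16 8 True
```

Brute force over all 2^7 words is perfectly fine at this size; the 16-word code’s dual is the 8-word [7,3] simplex code, and the two sets built from “orthogonal to all codewords” and “in the kernel of the generator map” coincide.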

Posted in coding theory

From sequency to modern acoustic device identification–Berlin Embassy

Back at http://yorkporc.wordpress.com/2012/02/23/fundamenta-of-keystream-generation/ we took a look at sequency. Having created an additive signal from a set of individually weighted Walsh functions taken from the Hadamard matrix (of orthonormal basis functions), one learns how the inverse WHT identifies the weightings.

image

http://yorkporc.wordpress.com/2012/02/23/fundamenta-of-keystream-generation/
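A minimal sketch of that earlier exercise, assuming nothing beyond the standard Sylvester construction (the weights below are made-up examples): build the Hadamard matrix, form the additive signal from weighted Walsh rows, and let the inverse WHT hand back the weightings. The zero-crossing (sequency) count per row is included too, since that ordering is the point of the picture.

```python
# Sylvester construction of an N x N Hadamard matrix (N a power of two);
# the rows are the +1/-1-valued Walsh functions, in natural order.
def hadamard(n):
    H = [[1]]
    while len(H) < n:
        H = [row + row for row in H] + \
            [row + [-x for x in row] for row in H]
    return H

N = 8
H = hadamard(N)

# Made-up example weightings attached to each Walsh function.
weights = [0, 3, 0, -2, 0, 0, 5, 0]

# The additive signal: a weighted sum of the Walsh rows.
signal = [sum(w * H[k][t] for k, w in enumerate(weights)) for t in range(N)]

# Inverse WHT: the rows are mutually orthogonal with squared norm N, so
# projecting the signal onto each row and dividing by N recovers the weights.
recovered = [sum(H[k][t] * signal[t] for t in range(N)) // N for k in range(N)]
print(recovered)  # [0, 3, 0, -2, 0, 0, 5, 0]

# Sequency = number of zero-crossings (sign changes) along each row;
# across the N rows every count 0..N-1 appears exactly once.
sequency = [sum(H[k][t] != H[k][t - 1] for t in range(1, N)) for k in range(N)]
print(sorted(sequency))  # [0, 1, 2, 3, 4, 5, 6, 7]
```

Reordering the rows by that sequency count is what turns the natural (Hadamard) ordering into the sequency-ordered picture shown above.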

Two other things now occur to me.

First, look at the form of the sequency matrix, above. Imagine that on the right and bottom sides you have Tunny wheel bits – those for Chi1 and those for Chi2, say. Now recall how convergence was performed. Take the guessed patterns for one side, the start, and perform an inner product with each row/column (of probability info). Use the result to update the reliability weighting attached to the corresponding bit on the other wheel. Also, if the weighting improved that bit (or whatever was the rule), update the row in the rectangle by using a swap that more probably aligns the average value with the rules of the mechanism.

Now look at the form of the sequency picture. One can imagine that the dependency between wheel bits, with respect to the cipherstream, is represented by the area as one’s eye moves from bottom right to top left. The area of probability increases… as more and more bit values come into play and affect the average – in that coordinate.

As I looked at it, I thought to myself: just look at the rate of zero-crossings in the high-order bits. Don’t they remind one of the columns of the Tunny alphabet, as ordered here?

image

http://yorkporc.wordpress.com/2012/02/23/fundamenta-of-keystream-generation/

 

Tunny has the following sequence of zero-crossings, moving from left to right: (1, 2, 16, 4, 8) – remember this ordering is NOT there for wheel breaking but is there to help do 16-counts, 4-counts, etc, so that one may compare the proportion of dot-flows to cross-flows (and see if the proposed SETTING of that Chi wheel is correct).

 

image

When counting, this means for a count of 16 chars at a time, producing 2 outputs:

image

For the next count, one has to imagine the rectangle is wrapped around a stick and gummed one side to the other, so that the 1/4 contributions of the first/last columns (in this plane) combine.

image

The latter thought reminds me of gumming both edges to each other, making a donut – recalling the relationship between that complex torus and factorization.

see http://yorkporc.wordpress.com/2014/03/15/colossus-runtc-tonal-centroid/.

Fingerprinting devices… using acoustics – the kind the microphones in phones would be picking up! Can we assume that an intel would be tuning the video signal uniquely for each CPU/motherboard to facilitate device identification in the IOT? Sounds like Intel! to me (pun pun).

Posted in colossus, spying

NSA/GCHQ packet staining vs crypto staining

There seem to be 2 ways to exploit staining:

1) The method of http://cryptome.org/2013/10/packet-stain/packet-staining.htm in which IPv4 packets from “intelligence sources” are stained by putting them within an IPv6 tunnel whose headers are processed by carriers supporting the NSA mission. Of course, Microsoft windows comes with just such tunneling capacity at the PC, too.

2) a cryptographic mechanism that does not depend on the carrier – except in the sense that the carrier’s cables are tapped. This model assumes that an NSA has to scan a lot of raw packet dumps, looking for those of interest, using “suitable hardware” for the search problem.

In either case, we have what PTK used to take DARPA money for: “active network” research!

We have to remember that it all starts with the reluctant victim having his PC compromised through the insertion of behavior, upon leveraging a suitable exploit. If we believe the snowdonia campaign from NSA, the victim visits the radical site (e.g. NRA) whose javascript pages duly insert the bug back on the browsing PC.

So what kind of bug would support 1) and 2)?

To hide this all, it feels like a combination of forces would be applied. First, the compromised PC would be induced to use cryptographic staining, hidden to the non-technical eye behind Reed-Solomon coding erasures, intending that these signals be detected by the PMO.

image

Now, note the path taken in the picture given above. From the PC in Y (the target of intel) there is a peer-peer relationship within the internet cloud – one NOT assumed to pass through a particular network. On visiting the radical website, infected by the botnet in autonomous system 666, Y’s PC eventually uses the internet, and some of the packets between Y and Z will go over the “compromised” edge router at the carrier’s internet/backbone handoff point. This router, too, is owned by the botnet – in the sense that the botnet is biasing its routing tables with AS-AS updates via BGP, etc. Thus packets from a given IPv4 address hit the edge router’s first rule set – where we should recall that the (relatively persistent DHCP) address of Y’s PC was recently learned by the botnet listening in to Z’s visitor log (hosted at the compromised, radical site that Y just visited). The router directs the packet “flow” via the packet staining device that wraps the flow in IPv6 tunnels. NSA/GCHQ upstream will later leverage the staining tags… to help isolate these flows, obtained from general purpose fiber taps fixed at certain locations, that target Y now visiting google.com, in the US, say.

Now I know enough – being once, along with the DOD/WH folks I trained with, a certified CCSP specially trained in cisco HIDS/NIDS – to know how the cisco IOS world can apply policy-based routing that does real-time deep packet inspection. So assume that the first edge router is so tuned, with policy-based routing set up to detect cryptographic headers on the first hop (NOT SHOWN on the picture above). How might we accomplish this?

One thing we know is that in 1980 NASA was listening to a signal – whose power is less than that emitted by your watch, from a craft 2 billion miles away – delivering a data rate of about 20kbits/s. Think about that! This means that the phone in your pocket is MORE than able to use its microphone to listen to the channel between PC and screen, which has far greater power consumption than your watch display and is probably at a distance of 2 yards (rather than 2 billion miles). These days, with 4G, the data rates of mobile phone circuits are excellent of course (making them an ideal ACTIVE SENSOR network for remote spying on the signals emitted by all the devices of the world). If yours is not on, there is no reason not to use that of your neighbor, known also to be in proximity to the PC in the same coffee shop (internet cafes being a favorite GCHQ targeting space).

So, let’s say that a compromised PC of Y, now, is induced to DROP bits in the packet checksum. This is known in the RS world as an erasure – for which the math is able to recover. Now assume that the dropping rate is ITSELF a unique code – or stain. That is, as the cisco router does what it’s supposed to do IN HARDWARE VLSI – error correction on packet checksums that THEMSELVES HAVE ERRORS – the stats collected PER FLOW are themselves being analyzed by the IOS process that detects the timing signal within the drop rate – and thus detects the particular PC. Though it may correct the packet as it flows across interfaces, being a botnet-owned router assume that this is also enough to induce routing via the PMD. One can imagine that the mechanism might also be inducing the botnet to refocus its efforts – on Y’s PC directly.
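To make the erasure-stain idea concrete, here is a toy entirely of my own construction (nothing here is a documented NSA/GCHQ mechanism, and the STAIN pattern and block sizes are made up): a single XOR parity symbol stands in for Reed-Solomon, so one erasure per block is recoverable, while the presence or absence of an erasure per block spells out a per-device stain that an on-path observer can read straight from the error statistics.

```python
import random

# Toy model: one XOR parity symbol per block recovers any ONE erasure at a
# known position, while the per-block erasure pattern itself carries a stain.

STAIN = [1, 0, 1, 1, 0, 0, 1, 0]   # hypothetical per-device identifier
K = 4                              # data symbols per block

def make_block(data):
    """Append one XOR parity symbol to K data symbols."""
    parity = 0
    for b in data:
        parity ^= b
    return data + [parity]

def recover(block):
    """Recover the K data symbols, tolerating one erasure (None)."""
    if None in block:
        i = block.index(None)
        fill = 0
        for j, b in enumerate(block):
            if j != i:
                fill ^= b          # XOR of the survivors restores the gap
        block = block[:i] + [fill] + block[i + 1:]
    return block[:K]

rng = random.Random(42)
payload = [[rng.randrange(256) for _ in range(K)] for _ in STAIN]

# Sender: one block per stain bit; drop one symbol iff the bit is 1.
wire = []
for bit, data in zip(STAIN, payload):
    block = make_block(list(data))
    if bit:
        block[rng.randrange(K + 1)] = None
    wire.append(block)

# On-path observer: reads the stain off the erasure statistics alone...
observed = [int(None in block) for block in wire]

# ...while the legitimate receiver still recovers every data symbol.
received = [recover(list(block)) for block in wire]

print(observed == STAIN, received == payload)  # True True
```

The point of the sketch is only the separation of channels: the correction math makes the erasures harmless to the payload, so the stain rides for free in the per-flow error counters – exactly the kind of statistic a policy-routing process already collects.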

Posted in spying

UK is rattled over home router ssl; wavering public confidence; BBC malfeasance

 

image

photo credit: withheld at the request of multiple national security agencies.

In a major if somewhat technically embarrassing puff piece FOR GCHQ and co, the BBC does its duty as a state broadcaster: push the government line and cow the UK public.

“You would have to be a semi-professional to have…”… sayeth the seer, a doctorate at (and even perhaps FROM) “Cambridge University”. No! You have to buy it at a supermarket – 2 aisles over from the cat food – or get it from the phone/cable company when they install it for you. ALL of them come with SSL capability. This is SUPERMARKET-grade stuff, valued at 5 pints of beer. For obvious reasons, at that price point one SHOULD NOT EXPECT… too much strength or assurance in the encryption!

Ah, but you’d have to be a technically-minded semi-professional to turn it all on – since it’s typically not on by default! Well, that IS true – and was probably the line the spooky Dr. was SUPPOSED to deliver. Perhaps the BBC journalist, wanting to join the rather posh BBC establishment, asked several questions – to get the quote she wanted – and then only published the one that fit the desired policy line. This is normal use of media, by spooks trained in the propaganda arts, leveraging their 1930s superman will that SHALL “control the internet”.

But even that is ONLY half true – as there are several variants of SSL used commonly in wifi routers. Because the cable company remotely controls the configuration of the router, if you have broadband service, the “semi-professional” technician *can* turn it on REMOTELY and with trivial levels of skill – without you being involved. And so can the spook, with or without the participation of the telco. It’s normal exploit land to gain such access in the 5-pints-of-beer-grade wifi router (and then alter the configuration or the radio or crypto built into the firmware used by the programmable electronics). Think of it as changing the circuit board in your car radio… to filter out Radio Moscow so one heareth not other than a voice of the BBC (british “bias” corporation?) – noting that these days the whole process of making a software-based radio that tunes in to the spooks (needing to store your porn usage/search history for a rainy day, when blackmail is called for) and also tunes OUT any undesired voices… is about as hard as loading a new music file onto your $10 mobile music player! (This higher-end and interestingly cheaper crypto device, obtained from the checkout line at the supermarket, is probably more crypto-capable than the wifi home router over by the cat food – since music firms actually have something they don’t want you to have: copy power!)

BUT YOU’D NOTICE IT (or the phone company charged with “PROTECTING YOU (sic)” would). Well, two lies abound here. First, the strange (free) BLACK MAGIC of self-signed certs WOULD WARD OFF GCHQ, being very frightening to them as they use their browsers to connect (sic). Second, having re-flashed the firmware, the semi-professional screens – that admittedly mom never uses – WOULD NOW SHOW that the feature had been turned on… giving the game away.

NOTE here the attempt to divert (away from certain technical areas onto something semi-technical and VISIBLE, called the remote administration feature of home routers). The issue with home ROUTERS is not that one connects TO THEM (as ssl sites or servers) using browsers. Rather, PCs often induce the router to open communication ports (including secure ports) to allow outsiders IN… to your PC – to, ahem, let the KIDS play multi-player games… that arrange for the backchannel opening and the realtime play experience with voice and video (kid terrorists, assumed, of course).

Ah, GCHQ… leveraging the kids’ behaviours to snoop on others; such are the rights of children in the UK. Just another vector.

Secondly, home routers are typically now WIFI home routers – taking encrypted wireless signals and DECRYPTING them. Don’t forget that the spooks want THOSE decryption keys too (not that this has anything to do with openssl, unless the wifi is using something called EAP-TLS…). Don’t forget how they rigged the original secure wifi standard – so it took, urr, 4s to cryptanalyze the keys – assuming, as spooks can, one sends 45k malformed packets that roll through the crypto period.

Now, realize that it’s HARDER to first-time exploit an uncompromised PC BEHIND the home wifi router doing the SSL than to compromise the home router itself (though only marginally harder). If the PC is doing the encryption, it’s harder to get an “in”. So you WANT the home router to be doing the SSL FOR your PCs (so that the stealing of the keys happens at the most vulnerable point). And here the nature of PC-to-router auto-configuration helps – as it turns out that PCs regularly configure the secure ports on your router for you (an SSL handshake, delivered by guess-what… openssl, typically. It’s a UDP-SSL handshake, if you care to know, that allows the PC to request the ports be opened).

SO, in summary, you see GCHQ and its spook friends in the BBC doing a typical UK psychology job. Since it won’t work, you will now see the NEXT phase of UK policy – as it controls the issue. THREATS and FEAR; with some DEMONIZATION. One can be sure the BBC will be there to cover it (or the answers that fit the prepared script, anyways).

My advice? Beware the Cambridge doctor.

Posted in spying

WCF server, for JWT handling/validation

 

<system.identityModel>
  <identityConfiguration>
    <audienceUris mode="Never">
      <add value="http://localhost:1500/Service.svc" />
      <add value="https://rapmlsqa.com/TodoListService" />
    </audienceUris>
    <issuerNameRegistry type="WcfServiceJWT.Utils.DatabaseIssuerNameRegistry, WcfServiceJWT" />
    <certificateValidation certificateValidationMode="None" />
    <securityTokenHandlers>
      <add type="System.IdentityModel.Services.Tokens.MachineKeySessionSecurityTokenHandler, System.IdentityModel.Services, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" />
      <remove type="System.IdentityModel.Tokens.SessionSecurityTokenHandler, System.IdentityModel, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" />
      <add type="WcfServiceJWT.CustomJWT, WcfServiceJWT" />
    </securityTokenHandlers>
  </identityConfiguration>
</system.identityModel>

and

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.IdentityModel.Tokens;
using System.Security.Claims;
using System.Xml;
using System.Text;
using System.IO;
using System.IdentityModel.Metadata;
using System.Security.Cryptography.X509Certificates;
using System.ServiceModel.Security;
using System.IdentityModel.Services;
using System.Net.Http;
using System.Threading.Tasks;
using System.Web.Configuration;
using System.Threading;
using System.Net;
using System.IdentityModel.Selectors;

namespace WcfServiceJWT
{
    public class CustomJWT : JwtSecurityTokenHandler
    {
        public override ClaimsPrincipal ValidateToken(JwtSecurityToken jwt)
        {
            ClaimsPrincipal v2;
            string stsMetadataAddress = String.Format(
                "https://login.windows.net/{0}/federationmetadata/2007-06/federationmetadata.xml",
                jwt.Payload["tid"]);

            MetadataSerializer serializer = new MetadataSerializer()
            {
                CertificateValidationMode = X509CertificateValidationMode.None,
            };
            List<X509SecurityToken> signingTokens = new List<X509SecurityToken>();

            MetadataBase metadata = serializer.ReadMetadata(XmlReader.Create(stsMetadataAddress));
            EntityDescriptor entityDescriptor = (EntityDescriptor)metadata;

            // get the signing certs.
            signingTokens = ReadSigningCertsFromMetadata(entityDescriptor);

            var vparms = new TokenValidationParameters
            {
                ValidIssuer = entityDescriptor.EntityId.Id,
                IssuerSigningTokens = signingTokens,
                ValidAudiences = Configuration.AudienceRestriction.AllowedAudienceUris.Select(s => s.ToString())
            };
            try
            {
                v2 = base.ValidateToken(jwt, vparms);
            }
            catch (Exception ex)
            {
                throw new ApplicationException("didnt validate", ex);
            }
            return v2;
        }

        //public override ClaimsPrincipal ValidateToken(JwtSecurityToken jwt, TokenValidationParameters validationParameters)
        //{
        //    // set up valid issuers
        //    if ((validationParameters.ValidIssuer == null) &&
        //        (validationParameters.ValidIssuers == null || !validationParameters.ValidIssuers.Any()))
        //    {
        //        validationParameters.ValidIssuers = new List<string> { ValidIssuerString };
        //    }
        //    // and signing token.
        //    if (validationParameters.IssuerSigningToken == null)
        //    {
        //        var resolver = (System.IdentityModel.Tokens.NamedKeyIssuerTokenResolver)this.Configuration.IssuerTokenResolver;
        //        if (resolver.SecurityKeys != null)
        //        {
        //            IList<SecurityKey> skeys;
        //            if (resolver.SecurityKeys.TryGetValue(KeyName, out skeys))
        //            {
        //                var tok = new NamedKeySecurityToken(KeyName, skeys);
        //                validationParameters.IssuerSigningToken = tok;
        //            }
        //        }
        //    }
        //    return base.ValidateToken(jwt, validationParameters);
        //}

        static List<X509SecurityToken> ReadSigningCertsFromMetadata(EntityDescriptor entityDescriptor)
        {
            List<X509SecurityToken> stsSigningTokens = new List<X509SecurityToken>();

            SecurityTokenServiceDescriptor stsd =
                entityDescriptor.RoleDescriptors.OfType<SecurityTokenServiceDescriptor>().First();

            if (stsd != null)
            {
                // read non-null X509Data keyInfo elements meant for Signing
                IEnumerable<X509RawDataKeyIdentifierClause> x509DataClauses =
                    stsd.Keys.Where(key => key.KeyInfo != null &&
                            (key.Use == KeyType.Signing || key.Use == KeyType.Unspecified))
                        .Select(key => key.KeyInfo.OfType<X509RawDataKeyIdentifierClause>().First());

                stsSigningTokens.AddRange(x509DataClauses.Select(token =>
                    new X509SecurityToken(new X509Certificate2(token.GetX509RawData()))));
            }
            else
            {
                throw new InvalidOperationException("There is no RoleDescriptor of type SecurityTokenServiceType in the metadata");
            }

            return stsSigningTokens;
        }
    }
}

Posted in office365

client side of WCF using JWT for bearer

 

image

 

//
// GET: /TodoList/
public async Task<ActionResult> Index()
{
    ServiceReference1.ServiceClient sc = new ServiceReference1.ServiceClient();

    sc.ClientCredentials.SupportInteractive = false;
    sc.ClientCredentials.UserName.UserName = "support170";
    sc.ClientCredentials.UserName.Password = "FRED";

    // var cssdf = sc.GetData(45);

    //
    // Retrieve the user's tenantID and access token since they are parameters used
    // to call the To Do service.
    //
    string tenantId = ClaimsPrincipal.Current.FindFirst(TenantIdClaimType).Value;
    string accessToken = TokenCacheUtils.GetAccessTokenFromCacheOrRefreshToken(tenantId, todoListResourceId);

    var tokenHandler = new JwtSecurityTokenHandler();
    SecurityToken st = tokenHandler.ReadToken(accessToken);

    // from http://stackoverflow.com/questions/16312907/delivering-a-jwt-securitytoken-to-a-wcf-client
    //
    XmlDocument document = new XmlDocument();
    XmlElement element = document.CreateElement("wsse", "BinarySecurityToken",
        "http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd");
    element.SetAttribute("ValueType", "urn:ietf:params:oauth:token-type:jwt");
    element.SetAttribute("EncodingType",
        "http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary");
    UTF8Encoding encoding = new UTF8Encoding();
    element.InnerText = Convert.ToBase64String(encoding.GetBytes(accessToken));

    GenericXmlSecurityToken genericst = new GenericXmlSecurityToken(
        element,
        null,
        st.ValidFrom,
        st.ValidTo,
        null,
        null,
        null);

    WS2007FederationHttpBinding fedbinding
        = new WS2007FederationHttpBinding("WS2007FederationHttpBinding_IService");
    fedbinding.Security.Mode = WSFederationHttpSecurityMode.TransportWithMessageCredential;
    fedbinding.Security.Message.IssuedKeyType = System.IdentityModel.Tokens.SecurityKeyType.BearerKey;
    fedbinding.Security.Message.IssuedTokenType = "urn:ietf:params:oauth:token-type:jwt";
    fedbinding.Security.Message.EstablishSecurityContext = false;
    fedbinding.Security.Message.NegotiateServiceCredential = false;

    UriBuilder ub = new UriBuilder(sc.Endpoint.Address.Uri);
    ub.Scheme = Uri.UriSchemeHttps;
    ub.Port = 44307;
    ub.Host = "localhost";

    ServiceReference1.ServiceClient scbearer
        = new ServiceReference1.ServiceClient(fedbinding, new EndpointAddress(ub.Uri));

    var svcChannel = scbearer.ChannelFactory.CreateChannelWithIssuedToken(genericst);
    var cssdf2 = svcChannel.GetData(45);

    return View();
}

Posted in office365

webmatrix and azure AD based organizational IDs

Not sure why, but we logged into webmatrix, hosting joomla, using our Azure AD-integrated account

Screenshot (94)

then we enabled SSL.

image

So is this vulnerable?

Well, IIS Express seemed to be the entity delivering the SSL (which means windows is doing the work). It didn’t SEEM to be joomla doing its own.

On the matter of logging into Azure AD, the publication to an azure website was lovely. Well done microsoft azure!

Posted in rant

and as it happens (NSA) word games: more evidence of public untrustworthiness.

image

Doing one thing (“ensuring integrity”) just happens to require doing something else – such that the NEW PROCESS, NOT SUBJECT TO THE COURT ORDER AS DEFINED AT MOMENT ONE AFTER THE COURT ORDER CAME INTO EFFECT, happens to do the exact, urrrrrr., opposite. But that’s “modern” NSA method! Word games within word games.

Now we know. Stasi.

Posted in rant

my heart bleeds for NSA and GCHQ, wholly still able to steal your passwords

So folks are happily patching the exploit-laden openssl NSA engineered into open source a couple of years ago. Of course, it dumped memory. Now, folks are happily upgrading to the new NSA-engineered openssl exploit, since the old one is widely known to others. And lots of boondoggle vendors are telling you to “check” which server centers have or have not updated (i.e. don’t use those who have not!), and change your password!

Of course, then your home router – which has the same bug, and is not patched and never will be – is still open to, ahem, a memory dumping mechanism on your passwords AS they transit over to the server farm.

OF COURSE, the (compromised home) router cannot see anything of the cleartext, since it’s got its SSL passthrough ports enabled and they duly pass through the information from the browser, encrypted end-end by server and browser!

Which is fine until you realize that the typical corporate browser learns its connect proxy automatically. Strange that, no? And it’s the corporate browser NSA wants (it wants you in work mode, not social mode, while socializing with other “workers of interest”).

So what is a connect proxy? It’s a way of offloading SSL to the (home) router, in the clear. The train tunnel starts at the home router (and heads for the server), that is. The path between your browser and your router is clear, and the memory of the router is full of the plaintext and the cryptovariable used to THEN establish the forward tunnel.

In general, American home routers are connected to broadband. Just like NASA Ames ran a huge intelligence collection infrastructure for NSA in the 1980s (to BRING BACK the exfil data) by having dedicated management ports on the then-internet backbone routers (think admin port!), so too home routers are managed by the cable company – who can reflash the firmware whenever they want. This means they may participate on demand in connect-path discovery, assuming the corp browser is set to be willing to try to find a happy spying port – which they all are!

What is fun about the US approach to stasification is the SHEER degree of the penetration, at multiple levels, through the society and its vendors. The UK approach is much less sophisticated technologically – and relies much more on deception and social engineering.

Which probably explains why so much money has been thrown in the UK at cybercenters hiring computer science-related psychologists.

Posted in spying