bitlocker recovery on a Microsoft Surface Pro 3

I got to experience the bitlocker recovery process on a Microsoft Surface Pro 3 computer.

On booting, the bootloader reported that the CNG driver was corrupt (and none of the rest of the loading sequence would work). Traditionally, though it may no longer be the case, CNG stood for Cryptography Next Generation.

Aha, I thought. Perhaps the recent firmware update – talking to the TPM – didn’t go well, leaving things in a strange state.


So I learned to boot the machine into the UEFI setup manager, and disabled both TPM and secure boot. On rebooting, the machine detected that it could not decrypt the bitlocker-protected volume, and prompted me to find my bitlocker recovery key.

Presumably, the FBI can do the same… with or without my cooperation. More useless American assurances in the crypto regime.


I wonder what state my PC is in, re TPM etc?

Capture

Posted in coding theory

Simple inner products

Outer products are intuitively obvious, being the combinatoric explosion of each branch.

Inner products can also be intuitive, when considering the product of one branch node and another. It’s just the length of the edge connecting the two points.

If you think about the V made by branching, and the nodes at the next branching points, to connect them we can go across the inside of the V, or outside and around.
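A sketch in symbols (my own rendering of that picture): for the two branch vectors u and v leaving a common node, the chord across the inside of the V has a length fixed by the inner product, while the outer product is the full table of component pairings:

```latex
\|u - v\|^{2} \;=\; \|u\|^{2} + \|v\|^{2} - 2\,\langle u, v\rangle,
\qquad
(u \otimes v)_{ij} \;=\; u_i\, v_j .
```

So “going across the inside of the V” (the chord u − v) and going “outside and around” (the two branch lengths) differ by exactly twice the inner product.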

Posted in coding theory

Realty and mifi/wifi

Folks should have figured out by now that when you reinstall a PC these days, it comes back with the personal settings you once had (before the crash). The cloud stores a copy of the settings (ready for recovery). On a tablet, that includes the apps that you once downloaded (and may need re-downloading).

And, similarly, it stores your apps data – such as contact lists – to rebuild the local device’s store of app data (after the app’s re-installation, or the app’s second installation on a second or nth device).

One of the more fun modern side effects of sharing “local” contact data in contact-powered apps concerns “wifi sharing”. Thinking of the wifi on your PC/tablet/phone now as an “app”, its app data is, of course, the SSIDs and logon passwords you use (and want your contacts to have, so they too can connect when in range). The cloud copy of those access credentials, stored encrypted in the Microsoft cloud, can now be auto-shared with your named contacts (of certain contact apps that base their contact manager on Office 365 APIs).

One of the big points about this is that your contacts’ access to that access point (using your access credentials, now borrowed by them via the cloud) is controlled by a terms-and-conditions process that their device AUTO-SIGNS for them, and whose terms and legal consequences are SPECIFIC to that wifi access point.

The point is this: there may be a world in which you, as a realtor client, visit realtor R in the broker’s office, whereupon you might be added as a contact in R’s contact app, and thus automatically be enabled to have your phone bind to the broker’s office wifi – or the realtor’s mifi box, similarly. This act of binding your phone to the mifi/wifi, which happens automatically as the phone searches out free connectivity, establishes certain legal protocols (that would otherwise be an old-fashioned process of signing bits of paper, copying disclosures, etc.). The terms and conditions can be whatever the wifi access point operator wants, being generic (like Starbucks) or business specific.

The concept of automatically connecting to wifi, whether to save cell tower packet costs or otherwise, is actually quite an area of innovation – being a “very local” event, guarded by contact processes.

As we know, nothing will change the root business of realty, or MLS. But it’s always useful to see how technology changes the business services one might offer to the ever-churning realtors group and their short-term relationships with an ever-revolving group of realtor clients.

Posted in coding theory

IOT, home wifis and openid discovery in a world of automatic T&Cs

One of the big things in the openid connect world, compared to its forerunner designs, concerns the automatic connection to metadata sources and, indeed, the discovery of endpoints offering the metadata. We can apply this to real estate.

In the modern phone world, voice-calling apps want to bind to WiFi access points (rather than cell towers). Part of a long term DARPA plan, this WiFi-enabling of internet apps has a long pedigree of thought behind it.

If we think of an IOT idea for a business venture, it could be this.

The home router is probably not going to be the gateway for internet packet flow. But it could be the IOT representative, performing discovery support for everyone in its locality.

We know that windows phones and tablets are now willing to connect to access points that are guarded by login terms/conditions, automatically providing login credentials and click-through sign-up to the legal arrangement.

Let’s now assume that such a handshake, imposing legal obligations, occurs at any home router (broadcasting WiFi). It’s not that the home router then grants internet access – it merely imposes terms and conditions. It’s a wandering, almost universal notary, that is.

So, assume I’m an MLS website and I want you to agree to certain terms and conditions. Perhaps I can require you to have first visited a certain class of WiFi access point (that imposes the conditions, on behalf of the MLS, as a side effect of trying to get connected to voice or data services through that WiFi access point).

So, if I now think like a security architect, I want to obtain lots of openid connect discovery information from keying authorities OTHER than the usual cloud candidate STSs. I want my MLS STS metadata made available for learning (and without those cloud vendors having any say).

Here we have the value proposition. Not only is the MLS delivering its terms and conditions (which of course it has been doing for years, by other means), it is tying the learning of, and then the use of, the MLS’s openid discovery information into that now-localized agreement process.
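To make the discovery step concrete: an openid connect relying party learns an issuer’s endpoints by fetching a well-known metadata document derived from the issuer URL. A minimal sketch (the MLS issuer address here is hypothetical, my own placeholder):

```csharp
using System;

class OidcDiscovery
{
    // Given an issuer URL, derive the standard OpenID Connect discovery
    // endpoint: <issuer>/.well-known/openid-configuration. Fetching that
    // document yields the authorization, token, and jwks endpoints.
    public static string MetadataUrl(string issuer)
    {
        return issuer.TrimEnd('/') + "/.well-known/openid-configuration";
    }

    static void Main()
    {
        // Hypothetical MLS-run STS, independent of the big cloud vendors.
        Console.WriteLine(MetadataUrl("https://sts.example-mls.com/"));
    }
}
```

The point of the post is that the *learning* of this URL could itself be gated by the wifi access point’s terms-and-conditions handshake, rather than by a cloud vendor’s registry.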

Posted in coding theory

Office on android phones, and nsa-friendly token caches due to DARPA foresight

It’s fascinating to see how the world has changed in telematic security for open systems over 20 years. Gone are the international standards bodies (in favor of IETF/IESG, which are proxies for American interests); and largely gone are the types of telematics standards designs that thought about everyone’s needs (not just the land of the exceptional, and its perhaps “friends and allies” – providing they spy on each other).

If I think back 20 years, when folks were thinking about how the security architecture of today could be so “structured” as to preserve information and telecommunications dominance (without getting in the way of innovation, job and wealth creation), I recall how the goal was to maintain a gap. No, not a gap of being better or faster. But a gap as in gap analysis – within a security model. Somehow, SOMETHING is not in the standard (that should be). It’s a lie by omission.

What is that, then? It’s the failure to deliver distributed cache protection. This is the intended gap – the lack of countermeasures facilitates the desire to subvert and contaminate the security-critical cache contents, the cache addressing model, and the cache’s own integrity and trustworthiness. The absence of the countermeasures is put there so that one – with unique information dominance – has the potential to subvert all the other countermeasures (lauded in silly IETF RFCs, pompously marketed as security standards for everyone).

When you look at the technology of layer-7 application building today vs 20 years ago, one notes that the platform has changed. I expect soon to be using an android phone, and indeed logging into the phone (and its native apps) using a Gmail account accessed via the openid connect protocol. But I expect the phone to come bundled with Office 365 apps too – from Microsoft. These apps will not talk at launch time to Google’s security infrastructure – despite being on a Google-controlled phone. They will talk to Azure AD (AAD), albeit in much the same way that the Google apps on the Google-controlled phone talk to Google’s cloud-based security services.

But it’s a pain to have to log in again and again to each Office 365 app, particularly now when using a live or organizational/work credential rather than the Google credential one just used to open up the app desktop, replete with a mixture of Google native, Play Store, Office, and third-party apps that may use Google or Microsoft cloud security infrastructure.

As the Microsoft identity story comes out, we can see how it’s all being addressed – for both personal and work model phones.

Through caching, apps in a sub-family such as “office apps” on an android phone can simulate single sign-on. Launch a second app in the family, and there will be no second login challenge. Of course, launching the first app, especially the very first time, may well present a challenge that typically would demand one’s live or work account credentials. AAD would cooperate with one of those IDPs to act as an Authorization Server, and duly mint tokens that get stored in that cache, controlling app loading and app behavior when talking to remote APIs.

Now, if I were creating a “hole” in the internet identity infrastructure as a DARPA thinker – doing his job of having foresight, 20 years ago – I’d know (today) that the token cache is the gating factor on opening apps (and their local data store), and on their interaction with web services. And I’d also know that tokens minted by AAD may no longer be supported only by the live or organizational IDPs. It may be, by user consent, the Google IDP (or the Google AS token already in the phone’s system cache) that is leveraged by AAD to grant opening power to the office apps … on the Google phone!

Ok, we see how the world of the US national identity infrastructure works now. And we see how Microsoft further allows non-office apps and non-office sites to also fit into this “office family” … resident on the Google phone. Since AAD is the gatekeeper on office app culture, other third-party apps (e.g. MLS and real estate apps) also tied into AAD might also be authorized to access the windows-app-only cache (on android) and thus get all the same single sign-on, opening, and data-wiping features as do the Microsoft-delivered apps of Word and Excel.

So, clearly, the cache is king.

And if DARPA is playing kingmaker, as is its function, it will be doing what we scoped out 20 years ago as a “useful gap” (albeit for the tokens and protocols of the X.500 world, now aped and updated a bit by openid connect and AAD).

What I should not really talk about is how we thought about countering the strategic gap vulnerability (at a technical level, since obviously we were powerless to prevent the strategic vulnerability insertion). That’s because it requires an enhanced https – one that protects the cache as a function of the security connections that derive from its keying/token material.

 

 

Posted in coding theory

ASP.Net session as web app token store

This note looks at storing tokens in ASP.NET session state, when deposited by the OWIN pipeline rather than a page handler.

Screenshot (55)

see https://github.com/AzureADSamples/WebApp-WebAPI-OAuth2-UserIdentity-DotNet

The project seems to hint that once upon a time it stored tokens in ASP.NET session. But it now seems to persist tokens in a custom database.
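A sketch of what the session-storage variant might look like (my own fragment, assuming the standard OWIN openid connect middleware; note that HttpContext.Current.Session is often not yet available when OWIN notifications fire, which is presumably why the sample moved to a database):

```csharp
using System.Threading.Tasks;
using System.Web;
using Microsoft.Owin.Security.OpenIdConnect;

// Fragment for OpenIdConnectAuthenticationOptions: stash the id_token in
// ASP.NET session when the middleware validates it, instead of persisting
// it to a custom database.
Notifications = new OpenIdConnectAuthenticationNotifications
{
    SecurityTokenValidated = context =>
    {
        // Session may be null this early in the integrated pipeline;
        // that timing problem is the crux of the note above.
        var session = HttpContext.Current == null ? null : HttpContext.Current.Session;
        if (session != null)
        {
            session["id_token"] = context.ProtocolMessage.IdToken;
        }
        return Task.FromResult(0);
    }
};
```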

 

Posted in coding theory

Openid connect messages via dot net library (vs owin!)

Sometimes you don’t want a pipeline such as owin to be doing protocol handling. You just want a library to help you formulate request and response messages. For openid connect, this is what the library shown next does:

 

Screenshot (56)

https://github.com/AzureAD/azure-activedirectory-identitymodel-extensions-for-dotnet/blob/master/src/Microsoft.IdentityModel.Protocol.Extensions/OpenIdConnectMessage.cs
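For instance (a sketch of my own; the client id and URLs are placeholders), the class lets you compose an authorization request URL with no middleware involved:

```csharp
using Microsoft.IdentityModel.Protocols;

// Compose an openid connect authorization request by hand — no OWIN
// pipeline. All values below are placeholders.
var message = new OpenIdConnectMessage
{
    IssuerAddress = "https://login.windows.net/common/oauth2/authorize",
    ClientId = "2452697f-d1c2-431d-98ae-f93353f04f2e",
    ResponseType = "code id_token",
    ResponseMode = "form_post",
    Scope = "openid profile",
    RedirectUri = "https://localhost/signin-oidc"
};

// Serializes the message parameters into the query string of a
// GET-able authorization URL.
string url = message.CreateAuthenticationRequestUrl();
```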

 

 

Posted in coding theory

oauth2 tgt and graph API resource

In the ASP.NET/OWIN world, I was missing the right mental model when distinguishing what was so special about converting an auth code (plus appid and appkey) to a token with GraphAPI permissions (on the directory just used for SSO authn), versus doing an oauth2 client_credentials grant citing the same appid and appkey to get a token targeting one’s own webAPI-guarded resources.

A video helped – with the mental model. It puts together the whole:

Capture

TGT

One should think of the process of converting an auth_code to a graph-capable access token as something that is “pipeline-time”. It’s put there because it’s really pre-caching the TGT that WILL get the latest token suitable for the graph API, for your API, for the office API, or whatever API. In the oauth2 world, the TGT is the refresh token obtained by swapping the auth code.

Evidently, only auth code grants get the refresh token (and TGT).  Or at least, only the auth code grant gets a TGT that is multi-API capable.

This really puts the graphAPI, with its TGT and its cache-side proxy, in the role of an API meta-guard – allowing the graph to really decide which API tokens are issued PER USER. Some users may get more API tokens than others, gated on what the “app-tuned” graph service (not the baseline resource security model) indicates.

Hmm. I bet this doesn’t sound right to folks who don’t have a well developed understanding of security models, when evaluating security software. It makes good sense, if you do!

The disclosure is not new (see http://www.cloudidentity.com/blog/2013/10/14/adal-windows-azure-ad-and-multi-resource-refresh-tokens/). What is new is the placement of the TGT-obtaining process in the owin pipeline itself – before app code fires.
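In oauth2 terms, the “TGT swap” is just a second token request that cites the refresh token and names a different resource. A sketch of the form body (my own illustration; ids and secrets are placeholders):

```csharp
using System.Collections.Generic;

class MultiResourceRefresh
{
    // Build the form body that redeems a (multi-resource) refresh token
    // for an access token targeting a *different* resource than the one
    // named in the original auth code grant.
    public static Dictionary<string, string> TokenRequest(
        string refreshToken, string resource, string clientId, string clientSecret)
    {
        return new Dictionary<string, string>
        {
            ["grant_type"]    = "refresh_token",
            ["refresh_token"] = refreshToken,
            ["resource"]      = resource,      // e.g. the graph API, or your own webAPI
            ["client_id"]     = clientId,      // the appid
            ["client_secret"] = clientSecret   // the appkey
        };
    }
}
```

POSTing this form to the AAD token endpoint lets AAD decide, per user, whether a token for that resource gets minted – the meta-guard role.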

 

Posted in coding theory

SSOLogon from owin cookie middleware. The End() problem

This updates the earlier note on manual ticket creation in an OWIN cookie middleware context.

Note how I add false to calls such as Response.Redirect(strRedirect, false); otherwise, the SSOLogon page gets visited multiple times.


using Microsoft.Owin.Security;
using Microsoft.Owin.Security.Cookies;
using System;
using System.Security.Claims;
using System.Web;
using System.Web.UI;


namespace WebApplication2
{
    public partial class _SSOLOGON : Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {

            HttpContext.Current.GetOwinContext().Authentication.SignIn(
                new AuthenticationProperties { IsPersistent = true },
                new ClaimsIdentity(new[] {
                                            new Claim(ClaimsIdentity.DefaultNameClaimType, "peter")
                                          },
                                    CookieAuthenticationDefaults.AuthenticationType)
            );

            var strRedirect = Request["ReturnUrl"];

            if (strRedirect == null)
            {
                Response.Redirect("AccessDenied.aspx", false);
                return; // without this, the null redirect below would throw
            }

            // endResponse: false avoids Response.End()'s ThreadAbortException,
            // which otherwise causes this page to be visited repeatedly.
            Response.Redirect(strRedirect, false);
        }
    }
}

Posted in coding theory

dns, the cybercrime off-switch

Capture

http://www.circleid.com/posts/20150513_diving_into_the_dns/

Posted in coding theory

page_prerender and owin openid connect

In the earlier article we saw how to amend a simple web forms project, built in modern style from owin components. Rather than use forms auth and WIF to guard authenticated access to a web form, the site uses OWIN cookie middleware and OWIN openid connect middleware (talking to AAD) similarly. The architecture is one in which the site has both a local signin page, activated by the cookie middleware, and an optional openid connect resource that invokes the sending of oauth2 requests and the processing of the grants that come back.

Screenshot (48)

Now, we want to reapply a pattern we learned from a WIF SDK sample. This code lets a resource invoke the signin login page, as a consequence of the interaction of URL authorization and forms authn interceptors. However, the signin page acts as a protocol gateway rather than a credential validator. It invokes SSO requests (albeit WIF-style request APIs, in its era). So can we take some of that know-how, and in OUR project’s signin page make it too act as an openid connect gateway (gatewaying to owin-managed tickets/cookies!)

Well, the interesting bit of the sample is in the login page (invoked by the forms authn module), where we see a packaged control deliver the gatewaying component:

Screenshot (49)

A more interesting sample unwraps the component. This we get by installing the templates into Visual Studio 2015 RC (per advice here), and running the claims-aware web SITE template:

 

Screenshot (50)


protected void Page_PreRender( object sender, EventArgs e )
    {
        string action = Request.QueryString[WSFederationConstants.Parameters.Action];
        try
        {
            if ( action == WSFederationConstants.Actions.SignIn )
            {
                // Process signin request.
                SignInRequestMessage requestMessage = (SignInRequestMessage)WSFederationMessage.CreateFromUri( Request.Url );
                if ( User != null && User.Identity != null && User.Identity.IsAuthenticated )
                {
                    SecurityTokenService sts = new CustomSecurityTokenService( CustomSecurityTokenServiceConfiguration.Current );
                    SignInResponseMessage responseMessage = FederatedPassiveSecurityTokenServiceOperations.ProcessSignInRequest( requestMessage, User, sts );
                    FederatedPassiveSecurityTokenServiceOperations.ProcessSignInResponse( responseMessage, Response );
                }
                else
                {
                    throw new UnauthorizedAccessException();
                }
            }
            else if ( action == WSFederationConstants.Actions.SignOut )
            {
                // Process signout request.
                SignOutRequestMessage requestMessage = (SignOutRequestMessage)WSFederationMessage.CreateFromUri( Request.Url );
                FederatedPassiveSecurityTokenServiceOperations.ProcessSignOutRequest( requestMessage, User, requestMessage.Reply, Response );
            }
            else
            {
                throw new InvalidOperationException(
                    String.Format( CultureInfo.InvariantCulture,
                                   "The action '{0}' (Request.QueryString['{1}']) is unexpected. Expected actions are: '{2}' or '{3}'.",
                                   String.IsNullOrEmpty(action) ? "" : action,
                                   WSFederationConstants.Parameters.Action,
                                   WSFederationConstants.Actions.SignIn,
                                   WSFederationConstants.Actions.SignOut ) );
            }
        }
        catch (System.Threading.ThreadAbortException) { } // Thrown by redirect, safe to ignore
        catch ( Exception exception )
        {
            throw new Exception( "An unexpected error occurred when processing the request. See inner exception for details.", exception );
        }
    }

From the federated website example (where an FP proxies ws-fedp requests to several IDPs), we get other code patterns:


//-----------------------------------------------------------------------------
//
// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF
// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
// PARTICULAR PURPOSE.
//
// Copyright (c) Microsoft Corporation. All rights reserved.
//
//
//-----------------------------------------------------------------------------


using System;
using System.Web.UI;
using Microsoft.IdentityModel.Claims;
using Microsoft.IdentityModel.Configuration;
using Microsoft.IdentityModel.Protocols.WSFederation;
using Microsoft.IdentityModel.Protocols.WSTrust;
using Microsoft.IdentityModel.SecurityTokenService;
using Microsoft.IdentityModel.Web;
using System.Threading;

namespace PassiveFlowSTS
{
    public partial class _Default : System.Web.UI.Page
    {
        /// <summary>
        /// Returns whether the user is authenticated or not.
        /// </summary>
        private bool IsAuthenticatedUser
        {
            get
            {
                return ( ( Page.User ) != null && ( Page.User.Identity ) != null && ( Page.User.Identity.IsAuthenticated ) );
            }
        }


        /// <summary>
        /// We perform WS-Federation passive protocol logic in this method and call out to the appropriate request handlers.
        /// </summary>
        protected void Page_PreRender( object sender, EventArgs e )
        {
            if ( IsAuthenticatedUser )
            {
                IClaimsIdentity id = (IClaimsIdentity)( (IClaimsPrincipal)Thread.CurrentPrincipal ).Identities[0];

                // Use the WSFederationMessage.CreateFromUri to parse the request and create a SignInRequestMessage object.
                Uri requestUri = new Uri( Request.Url.ToString() + "?" + Request.Params["wctx"] );
                SignInRequestMessage requestMessage = WSFederationMessage.CreateFromUri( requestUri ) as SignInRequestMessage;

                if ( requestMessage != null )
                {
                    // Process the sign in request. 
                    SignInResponseMessage responseMessage = ProcessSignInRequest( requestMessage );
                    // Write the response message. 
                    responseMessage.Context = requestMessage.Context;
                    responseMessage.Write( Page.Response.Output );
                    Response.Flush();
                    Response.End();
                }
                string action = Request.QueryString["wa"];

                if ( action == WSFederationConstants.Actions.SignOut )
                {
                    FederatedAuthentication.SessionAuthenticationModule.CookieHandler.Delete();
                    string redirectUrl = Request.QueryString["wreply"];
                    if ( IsValidReplyUrl( redirectUrl ) )
                    {
                        Response.Redirect( redirectUrl );
                    }
                }   

            }
            // forward to an identity provider
            else
            {
                // Check if 'whr' parameter is specified.
                string identityProviderUri = Request.QueryString["whr"];
                string action = Request.QueryString["wa"];
                                              
                if ( action == WSFederationConstants.Actions.SignIn )
                {
                    if ( String.IsNullOrEmpty( identityProviderUri ) )
                    {
// Forward the user to the IdentityProvider selection page.
                        identityProviderUri = "https://localhost/PassiveFPSTS/homeRealmSelectionPage.aspx";
                    }
                    SignInRequestMessage signInMessage = new SignInRequestMessage( new Uri( identityProviderUri ), "https://localhost/PassiveFPSTS/Default.aspx" );
                    signInMessage.Context = Request.QueryString.ToString();
                   Response.Redirect(signInMessage.RequestUrl);
                }                              
            }
        }

        /// <summary>
        /// Validate the URL received in a wreply query string.
        /// </summary>
        /// <param name="reply">The desired URL.</param>
        /// <returns>true if URL is valid</returns>
        /// <remarks>
        /// Because the wreply directs the STS to redirect to a supplied value, it can
        /// be used as a tool to obscure redirects to malicious sites. For this reason,
        /// the STS should make some attempt to validate the value of reply request.
        ///
        /// DO NOT use this sample code ‘as is’ in production code.
        /// This is only sample code. A production site would need to validate the request
        /// against known Urls for the current connection.
        /// </remarks>
        bool IsValidReplyUrl( string reply )    
        {
            Uri replyUri;
            if ( Uri.TryCreate( reply, UriKind.Absolute, out replyUri ) )
            {
                if ( replyUri.Host.Equals( "localhost", StringComparison.OrdinalIgnoreCase ) )
                {
                    return true;
                }
            }

            return false;
        }

        /// <summary>
        /// Call the STS to get an appropriate token for a request and build a response.
        /// </summary>
        private SignInResponseMessage ProcessSignInRequest( SignInRequestMessage requestMessage )
        {
            // Ensure that the requestMessage has the required wtrealm parameter
            if ( String.IsNullOrEmpty( requestMessage.Realm ) )
            {
                throw new InvalidOperationException( "Missing realm" );
            }

            SecurityTokenServiceConfiguration stsconfig = new SecurityTokenServiceConfiguration( "PassiveFlowSTS" );

            // Create our STS backend
            SecurityTokenService sts = new CustomSecurityTokenService( stsconfig );

            // Create the WS-Federation serializer to process the request and create the response
            WSFederationSerializer federationSerializer = new WSFederationSerializer();

            // Create RST from the request
            RequestSecurityToken request = federationSerializer.CreateRequest( requestMessage, new WSTrustSerializationContext() );

            // Get RSTR from our STS backend
            RequestSecurityTokenResponse response = sts.Issue( (IClaimsPrincipal)Thread.CurrentPrincipal, request );

            // Create Response message from the RSTR
            return new SignInResponseMessage( new Uri( response.ReplyTo ),
                    federationSerializer.GetResponseAsString( response, new WSTrustSerializationContext() ) );

        }

    }
}

Posted in coding theory

Azure disk encryption

http://blogs.msdn.com/b/azuresecurity/archive/2015/05/11/azure-disk-encryption-management-for-windows-and-linux-virtual-machines.aspx

http://channel9.msdn.com/Events/Ignite/2015/BRK3490

Has some info on how someone hosting their own cardholder environment (on an Azure VM) can leverage Azure’s own penetration testing, scanning, etc., in their own PCI DSS audit.

Capture

Posted in coding theory

pci dss backgrounders on amazon’s iaas

Posted in coding theory

Simple web forms cookies and openid connect

Assume things are as they once were (before things like ASP.NET Identity came along). That is, you want a simple web forms site with a logon page (that sets cookies based on validating a couple of form field values), and optionally another page, open to anyone, that can induce an openid connect handshake with an IDP … whose grant types ultimately induce the site to create the same kind of cookie as was produced by the local logon page.

Here is what we learned to do (not that it’s perfect).

In owin startup class:


using System;
using System.Threading.Tasks;
using Microsoft.Owin;
using Owin;
using System.Web;
using System.IO;
using Microsoft.Owin.Extensions;
using Microsoft.Owin.Security;
using Microsoft.Owin.Security.Cookies;
using Microsoft.Owin.Security.OpenIdConnect;
using System.IdentityModel.Tokens;
using System.Security.Claims;

[assembly: OwinStartup(typeof(WebApplication2.Startup1))]

namespace WebApplication2
{
    public class Startup1
    {

        public void Configuration(IAppBuilder app)
        {
            app.SetDefaultSignInAsAuthenticationType(CookieAuthenticationDefaults.AuthenticationType);

            app.UseCookieAuthentication(new CookieAuthenticationOptions
            {
                AuthenticationType = CookieAuthenticationDefaults.AuthenticationType,
                LoginPath = new PathString("/SSOLogon.aspx"),
                CookieName = ".ASPX_AUTH_COOKIE",
                SlidingExpiration = true,
                ExpireTimeSpan = new TimeSpan(0, 90, 0)
            });

            app.UseOpenIdConnectAuthentication(
                new OpenIdConnectAuthenticationOptions
                {
                    ClientId = "2452697f-d1c2-431d-98ae-f93353f04f2e",
                    Authority = "https://login.windows.net/bcbf53cf-af9a-4584-b4c9-6d8b01b3781d",
                    SignInAsAuthenticationType = CookieAuthenticationDefaults.AuthenticationType,

                    TokenValidationParameters = new TokenValidationParameters
                    {
                        // WARNING: this accepts any issuer. Fine for a demo;
                        // production code should validate against known issuers.
                        IssuerValidator = (issuer, token, parameters) =>
                        {
                            return issuer;
                        }
                    Notifications = new OpenIdConnectAuthenticationNotifications()
                    {
                        RedirectToIdentityProvider = (context) =>
                        {
                            string appBaseUrl = context.Request.Scheme + "://" + context.Request.Host + context.Request.PathBase;
                            context.ProtocolMessage.RedirectUri = appBaseUrl;
                            return Task.FromResult(0);
                        }
                        ,
                        SecurityTokenValidated = (context) =>
                      {
                          ClaimsIdentity t = context.AuthenticationTicket.Identity;

                          t.AddClaims(new[] {
                                new Claim("authnContext", "UserAuthenticated"),
                                new Claim("RapAuthnContext", "UserAuthenticated")
                          });
                          return Task.FromResult(0);
                      }
                    }
                });           

            app.Use((context, next) =>
            {
                PrintCurrentIntegratedPipelineStage(context, "3rd MW");
                return next.Invoke();
            });

        }

        private void PrintCurrentIntegratedPipelineStage(IOwinContext context, string msg)
        {
            var currentIntegratedpipelineStage = HttpContext.Current.CurrentNotification;
            context.Get("host.TraceOutput").WriteLine(
                "Current IIS event: " + currentIntegratedpipelineStage
                + " Msg: " + msg);
        }

    }
}


Note the use of SignInAsAuthenticationType. This says: mint the claimsidentity resulting from the openid connect handshake so that it’s treated as if it were the result of presenting the cookie. In this way, we are “injecting” the result into the cookie handling process, just as if we had minted it locally using a local login page.

Now the urlauthorization module of ASP.NET partially works, in the sense that a page resource tagged with “deny ?” (deny anonymous users) can intercept an authenticated claims principal.

What urlauthz does not do, despite issuing a 401, is invoke the owin pipeline to issue an openid connect request (even if the middleware is marked active, uniquely). Alternatively, when the cookie module is given a login path that redirects unauthenticated users to the local page, it will do said redirect … to the login page, which might itself have code to issue the session cookie.
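One way to bridge that gap (a sketch, assuming the startup configuration above) is for the local login page itself to issue an explicit challenge to the openid connect middleware, which then redirects the browser to the IDP:

```csharp
using Microsoft.Owin.Security;
using Microsoft.Owin.Security.OpenIdConnect;
using System.Web;

// In the SSOLogon page: explicitly challenge the openid connect
// middleware. On return from the IDP, the SignInAsAuthenticationType
// setting converts the result into our cookie, as discussed above.
HttpContext.Current.GetOwinContext().Authentication.Challenge(
    new AuthenticationProperties { RedirectUri = "/" },
    OpenIdConnectAuthenticationDefaults.AuthenticationType);
```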

Posted in coding theory

Manual FormsAuthenticationTicket creation … in a OWIN Cookie middleware context

Traditionally, the urlauthorization module of ASP.NET induced the forms authn module to search for session cookies, the lack of which would redirect to one’s login page. Having validated credentials, one would use the library to issue a forms auth ticket (that would persist in the cookie).

So how do we do that in a OWIN cookie middleware environment (assuming the owin pipeline is sanely configured)?

Here is my attempt:


using Microsoft.Owin.Security;
using Microsoft.Owin.Security.Cookies;
using System;
using System.Security.Claims;
using System.Web;
using System.Web.UI;


namespace WebApplication2
{
    public partial class _SSOLOGON : Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {

            // Issue the session cookie directly: sign in a locally minted
            // ClaimsIdentity under the cookie middleware's authentication type.
            HttpContext.Current.GetOwinContext().Authentication.SignIn(
                new AuthenticationProperties { IsPersistent = true },
                new ClaimsIdentity(new[] {
                                            new Claim(ClaimsIdentity.DefaultNameClaimType, "peter")
                                          },
                                    CookieAuthenticationDefaults.AuthenticationType)
            );
        }
    }
}

Posted in coding theory

Leveraging the Azure Service Management REST API with Azure Active Directory and PowerShell / List Azure Administrators – KeithMayer.com – Site Home – TechNet Blogs

http://blogs.technet.com/b/keithmayer/archive/2014/12/30/authenticating-to-the-azure-service-management-rest-api-using-azure-active-directory-via-powershell-list-azure-administrators.aspx

Posted in AAD

QSA building on Azure’s own attestations

http://neohapsis.com/data_sheets/get_file/pci-services-for-msft-azure

Posted in coding theory

PCI Compliance in the Public IaaS Cloud: How I Did It

http://www.rightscale.com/blog/cloud-management-best-practices/pci-compliance-public-iaas-cloud-how-i-did-it

Posted in coding theory

Peter upsetting gchq

Pretty obvious that I’ve upset some GCHQer, now full of vim and verve.

Twitter hacked.

Windows 10 login hacked.

Quantum reset of email TLS.

Wait till next memo. Then they will double down.

Posted in coding theory

RETS WebAPI OData for Andy Rapattoni

Screenshot (42) Screenshot (44) Screenshot (45) Screenshot (46)

Posted in coding theory

extending AAD schema

We explore, once again, the directory extensions project at

https://github.com/AzureADSamples/WebApp-GraphAPI-DirectoryExtensions-DotNet

Screenshot (22)

Screenshot (21)

 

Per the instructions, we configure the software to leverage an application, using the one from the last post (which explored group claims and graph objects). We use Graph Explorer to learn the value of the appObjectId:

Screenshot (19)

and use clientid/clientsecret from the azure portals page about that application:

Screenshot (20)

We build the project, once we have managed the various NuGet dependencies (by adding them to the project):

Screenshot (23)

While the project doesn’t seem to work (on the Windows 10 beta) the first time, we do see that it can obtain credentials as we debug.

Screenshot (24)

To debug, we first figure out that we need to alter the canned user for our tenant (hardcoded to be “admin”).

Screenshot (28)

Since we get a runtime error related to extensions binding, we note next that there are no extensions for this user class (which perhaps explains the issue).

Screenshot (29)

see https://graph.windows.net/rapmlsqa.com/applications/4079c44a-e4c0-4e45-9ec3-e11d7f259f74/extensionProperties (SEE BELOW FOR NOTES ON THIS TOPIC)

Anyway, we fiddle around and eventually get the form to load for the default user. We then add a form parameter so we can focus on our chosen user:

Screenshot (32)

Some notes: we add application permissions, which may or may not be relevant…

Screenshot (26)

We also learned that the appObjectId is the objectId value from the Graph Explorer view. We amend our project.

Screenshot (35)Screenshot (38)

 

We can now look at graph extensions in the graphexplorer site and also we can run the tool to create new schema extensions:

 

Screenshot (36)

 


Screenshot (38)

 

Having created skypeId as a user-class extension property and allowed this application to use it, we can populate a value for the rapstaff user using a form that cues off the schema/metadata of the (now extended) user class:

Screenshot (39)

Obviously, each MLS tenant can be its own “application” and receive different user-class extensions, with local names, etc.
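The extension-registration step in the walkthrough can be sketched as a raw AAD Graph API request, using the application objectId seen in Graph Explorer above (the api-version and bearer token are assumptions):

```http
POST https://graph.windows.net/rapmlsqa.com/applications/4079c44a-e4c0-4e45-9ec3-e11d7f259f74/extensionProperties?api-version=1.5
Authorization: Bearer <access token>
Content-Type: application/json

{
  "name": "skypeId",
  "dataType": "String",
  "targetObjects": [ "User" ]
}
```

AAD Graph then surfaces the property on user objects under a decorated name of the form extension_{clientId}_skypeId (client id with the dashes removed).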

Posted in AAD

AAD group claims in real estate

We use two users rapstaff and rapagent from a given AAD tenant and common namespace within that tenant : rapmlsqa.com.

Screenshot (6)

When users sign in via the federated management process, their AAD record is updated – including the leaf groups to which they belong, for example billtypeA.

We then define a super group, for example billtypeB. B includes whatever A includes, plus some nominated users.

In our case, we have rapstaff in billtypeA, and define B to include A and rapagent. Thus B in effect contains rapstaff and rapagent. But note the inclusion relationship is not symmetric.

Screenshot (7)Screenshot (8)

When we run the sample code at https://github.com/AzureADSamples/WebApp-GroupClaims-DotNet

Screenshot (9)

we get to see what happens when rapagent shares with group A

Screenshot (12)

Then, rapstaff shares with group B (noting that the display already shows the task shared with those in A, from rapagent)

Screenshot (11)

 

If we log on as rapagent, we see the sharing just accomplished:

Screenshot (12)a

One notes how this particular app design presents shared objects, of which one is not the owner, in a form that does not allow one to share further.

If we use a user from a different tenant (rapstaff@metrolistrapmlsqa.com)

Screenshot (13)

Screenshot (14)

 

If we add rapstaff@metrolistmlsqa.com to the billtypeB group

Screenshot (16)

 

we see suddenly that the tasks shared with the metrolist rapstaff include:

Screenshot (17)

(after a sign-out/sign-in process that refreshes the group claims in the user’s session)

When we look at the JWT from the id token, we see that the group claims are present – as OIDs.

Screenshot (18)

 

Posted in Azure AD

Hacked twitter account

Skimming from RSA conference processes, no doubt.

Posted in coding theory

Final thought on rsa 2015

It’s become a farce. But one I’ll be back to.

The biggest takeaways were from a frank Alexander and a finally happy Carney. Some guy now in charge of RSA bleated on like the CEOs Alexander and Charbey most respect: those who are ex-military and take a classified briefing resulting in key escrow and log-sharing policies based on reasons that cannot even be shared with lawyers (and certainly not shareholders!)

What I want to know is how dirt is used to bully or punish CEOs who don’t toe the line.

Remembering Hoover’s pogroms against gays in 1950s government, presumably they have a lot to fear.

Posted in rsa conf 2015

Annual isc event

Let’s see if it’s some old-timer praising his office staff. Or if there is “something” afoot that could require conferring. Err, any ideas?

Or perhaps ISC is just there to lead (on key escrow).

Posted in coding theory

moscone connivance with NSA surveillance

If you are going to socially engineer anyone (as a means to get elsewhere), you might as well be skimming RSA conference attendees.

A normal GCHQ technique, preparing the ground for cryptanalysis (based on exploiting cipher-clerk – or any other human – weaknesses).

There is other evidence that Moscone/RSA are using undisclosed RFID tracking (since they can tell which plenary session you went to, and didn’t attend in person).

Must be America.

Posted in rsa conf 2015

using public azure trust to bootstrap private trust

http://www.cloudidentity.com/blog/2015/02/06/requesting-an-aad-token-with-a-certificate-without-adal

Shows how to exchange a privately signed blob for a publicly signed blob.

Think about that, again.

Two peers may wish to use their privately signed blobs to create a private trust channel – one that induces secure channels when, say, the SSL handshake leverages that authenticated key distribution.

To initialize that private trust, one borrows (and then abandons) the public trust that introduces the security critical private trust parameters to the parties.

Assume the SSL handshake uses the private blobs. To get the verification keys into the trust stores of the peers, borrow the re-signing of the blob by a public trust provider. Then drop further use of the bootstrap token.
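A minimal sketch of the “drop the public trust” step, in the era-appropriate .NET API. The pinned thumbprint is assumed to have been learned during the one-time, publicly trusted bootstrap exchange; the value here is illustrative:

```csharp
using System;
using System.Net;

static class PrivateTrust
{
    // Thumbprint of the peer's privately signed certificate, captured
    // during the publicly trusted bootstrap exchange (illustrative value).
    const string PinnedThumbprint = "A1B2C3D4E5F6...";

    static void UsePinnedTrustOnly()
    {
        // From here on, ignore the public CA chain entirely: the SSL
        // handshake succeeds only if the peer presents the pinned cert.
        ServicePointManager.ServerCertificateValidationCallback =
            (sender, certificate, chain, sslPolicyErrors) =>
                certificate != null &&
                string.Equals(certificate.GetCertHashString(), PinnedThumbprint,
                              StringComparison.OrdinalIgnoreCase);
    }
}
```

The public trust root is thus used only to introduce the security-critical private parameter (the thumbprint), then abandoned.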

Posted in coding theory

Microsoft azure NSA cop out

Played word games distinguishing between customer-controlled leaf keys and NSA access *where allowed to inform*.

Useless transparency.

Sorry Microsoft.

Posted in rsa conf 2015

RSA show indoctrination

The show so far indicates that US vendors have kowtowed to “cybersecurity” and key escrow. You can see it in the fear (that the end is nigh for the current business of sham countermeasures). They have to make money now from being the enforcer (of government policy).

You see the usual instruments, perfected in Australia already.

. Liability for non-compliance
. Vendors’ spying backdoors
. Expectation of total monitoring

Just fascinating to see the internet turn into a huge control system. DARPA will be proud.

Posted in coding theory

Propaganda feel to RSA conference 2015

The initial impression of the conference is that it has changed tone this year.

I see lots of thought leadership, with a common theme (insular America is under attack from thought crimes).

Posted in rsa conf 2015