rapmlsqa cloud membership system

 

our cloud membership system (based on netmagic and mls) can be viewed at http://graphexplorer.cloudapp.net/. It’s a very technical view, not a customer view; its login is rather raw in UI terms, but functional.

To log in, click signin and identify yourself as rapstaff@rapmlsqa.com.

Eventually you will see a familiar logon screen, requiring the rapstaff password.

Alternatively, you can play with the metrolist view of this, using rapstaff@metrolistmlsqa.com (and the metrolist rapstaff password).

You can see the underlying user records, in their single-record cloud form now, by copying the URI https://graph.windows.net/rapmlsqa.com/users into the resource box.
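Since the resource box simply issues a GET against that URI, here is a minimal sketch of how the tenant-scoped Graph URI is composed (the api-version value is an assumption on my part; Graph Explorer adds its own version parameter when you paste the bare URI):

```python
from urllib.parse import urlencode

def graph_users_url(tenant, api_version="2013-11-08"):
    """Compose the (old) Azure AD Graph URI listing a tenant's user records.

    The api-version default is an assumption; Graph Explorer appends its
    own version parameter when you paste the bare URI into the resource box.
    """
    return "https://graph.windows.net/%s/users?%s" % (
        tenant, urlencode({"api-version": api_version}))

url = graph_users_url("rapmlsqa.com")
```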

image

Posted in AAD

owin debugging

image

http://coding.abel.nu/2014/06/understanding-the-owin-external-authentication-pipeline/

We simply made a modern visual studio 2013 (updated) MVC app, with individual authentication. This gives us an owin-pipeline based application. To this we then added openid connect middleware, in order to talk to our IDP:

 

app.UseOpenIdConnectAuthentication(
    new OpenIdConnectAuthenticationOptions
    {
        ClientId = "b16d9e8c-3a9e-4eac-a4ca-6400da4f5367",
        RedirectUri = "https://localhost:44305/",
        MetadataAddress = "https://login.windows.net/rapmlsqa.com/.well-known/openid-configuration"
    });

// no-op middleware: a handy breakpoint site for watching the owin pipeline run
app.Use(async (context, next) =>
{
    await next.Invoke();
});

rereading the architectural primer on owin also helped: http://www.cloudidentity.com/blog/2014/05/11/openid-connect-and-ws-fed-owin-components-design-principles-object-model-and-pipeline/

The look and feel is such that our AAD-proxied IDP is treated as just another external IDP – similar to google.

image

The difference between this and “organizational authentication” is subtle – since there are two meanings of that term. If one creates the MVC project USING the organizational authentication option, one gets a WIF pipeline that talks to the AAD/IDP. Here we are talking to the same AAD/IDP using the owin pipeline (and using the openid connect protocol, if it matters).

We see local account linking take place:

image

image

image

Posted in owin

Individual accounts and oauth AS

Dissecting the Web API Individual Accounts Template–Part 1: Overview http://leastprivilege.com/2013/11/25/dissecting-the-web-api-individual-accounts-templatepart-1-overview/

Posted in coding theory

Cert based token granting, for azure hosted webapi

https://msdn.microsoft.com/en-us/magazine/dn948107.aspx

Posted in AAD

cordova with adal on windows 10 preview build

Could not figure out how to get the custom android platform to be created, following instructions at https://github.com/AzureADSamples/NativeClient-MultiTarget-Cordova

On building the windows version (using the appropriate build tool chain), we can debug the windows project, seeing that it struggles to find (on the latest windows 10 preview build) the token cache classes it expects.

image

Presumably, it works on windows 8.1.

image

Posted in Azure AD

English food in America

Done well.

Sprouts are awful, carrots soggy, peas undercooked.

The Ship, Fullerton, CA.

Authentically bad food.

Posted in coding theory

firetv app, simple sample

Having purchased and expensed a fire tv device, we played with the sample app

image

https://github.com/amzn/web-app-starter-kit-for-fire-tv

Eventually, we installed all the build prerequisites and tooling, built the sample projects, and hosted the “simple” site in the npm server hosting environment (as instructed).

image

to make chrome on windows 10 preview (with ripple tools installed) show the device options, we had to invoke developer tools and then use the device button to invoke device mode:

 

image

To run this in a visual studio iis express project, we created an empty (2012) website (not a web application). We used the solution explorer to view ALL files, and imported the directories js/ and assets/.

image

Using chrome to view the site fails; as fiddler shows, iis express does not serve the data configuration file (something.json).
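IIS Express only serves extensions it has a registered MIME type for. The post doesn’t show the resolution used, but a common remedy (an assumption here, not taken from the post) is a static-content entry in web.config:

```xml
<configuration>
  <system.webServer>
    <staticContent>
      <!-- let IIS Express serve the app's .json data configuration file -->
      <mimeMap fileExtension=".json" mimeType="application/json" />
    </staticContent>
  </system.webServer>
</configuration>
```

A matching `mimeMap` entry for `.mp4` (`video/mp4`) is the usual cure when mp4 streaming fails the same way.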

image

image

With that, we can see the site in chrome’s device emulator, as we saw earlier when hosted in npm’s server (site hosting) application.

image

we can post this to azure websites:

image

And we add it to our new app service (hosting plan), so logically it shares a security context with other server side apps.

image

which launches our now public site, hosted at

image

http://firetv2.azurewebsites.net/

This struggles to stream the mp4 files, note, which we ignore for now (see below for the fix).

image

using our fire tablet, we run the web app testing app – to which we have added the azure URI hosting our now-published site/content.

WP_20150405_14_52_33_Pro[1]

Then we also enable developer mode (which allows a chrome browser on the PC to connect to the app’s debug monitor, running on the wifi-connected device) and use its debugging tools while updating the display of the device. This we do ONCE we launch/test the app and it is running in the device browser.

image

PC view, above

image

WP_20150405_14_58_31_Pro[1]

WP_20150405_14_59_01_Rich[1]

we fix the mp4 issue by doing as advised:

image

WP_20150405_15_10_22_Rich[1]

Posted in firetv

hamlet and cryptanalysis (ivy league version)

image

it gets weirder by the day.

Posted in aliens

kindle fire app from azure mobile, via android studio and AAD

We fixed the strange inline error from the app shown below – which is evidently caused by the device having no internet connection.

WP_20150309_11_33_30_Pro

default user discovery screens, from AAD

 

WP_20150309_11_34_02_Pro

user types the UPN of the identity whose AAD tenant endpoints are to be discovered – for use in the oauth2 handshake
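A hedged sketch of what that discovery amounts to: split the domain half off the UPN and fetch that tenant’s openid-configuration document (the endpoint shape matches the MetadataAddress used in the owin post earlier; the real AAD flow performs this home-realm discovery server-side):

```python
def discovery_url(upn):
    """Map a UPN to its tenant's OpenID Connect discovery document URI.

    Illustrative only: AAD does home-realm discovery server-side, but the
    endpoint shape matches the MetadataAddress used elsewhere in these notes.
    """
    domain = upn.split("@", 1)[1]
    return "https://login.windows.net/%s/.well-known/openid-configuration" % domain
```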

 

WP_20150309_11_34_11_Pro

Net result of discovery is redirection to Metrolist SSO Portal (in rapmlsqa)

 

WP_20150309_11_34_36_Pro

we see the mobile screen rendering, done quite nicely for the screen size encountered

 

WP_20150309_11_37_55_Pro

we see the password screen (though the submit button needs to be more agile, being hard to press with the keyboard covering it)

 

WP_20150309_11_38_10_Pro

we see password processing (once the keyboard removes itself)

 

WP_20150309_11_38_26_Pro

And we see the particular app’s indication that login succeeded, creating a connection between this client and a backend web app supporting the mobile app with data services.

 

WP_20150309_11_38_44_Pro

This particular app auto-provisions a data service for the rapstaff user, to which we add a todo record

 

WP_20150309_11_38_57_Pro

… which is shown to have been stored, when all records are enumerated.

 

Nothing requires an app vendor to maintain the particular User Experience shown in the app design described above. That design simply showed that one can make an android application using the new metrolist login for APPs feature. Other user experience concepts are possible, including the elimination of the user discovery process.

Posted in Azure AD, azuremobile

kindle fire usb driver on windows 10 preview; android studio based azure mobile project

 

image

We managed to install it, after some difficulties.

We had to run the android SDK manager tool as administrator to succeed in installing the kindle fire prerequisites.

We found that we could not install the driver, using the instructions given.

We ended up deleting all devices related to fire/android, INCLUDING hidden devices.

Upon enumerating the android kindle, again, we used device manager to install/update the device drivers. To do that we searched for the drivers

image

 

First we installed the composite and then the amazon driver. This required a PC hardware restart.

When the PC restarted, we saw that the kindle fire device did prompt us to accept the PC as a trusted ADB debugging partner.

running android studio as administrator, we can now apparently run the app on the now-listed device – after another prompt on the device itself.

image

we can now at least run our azure mobile app:

WP_20150309_11_27_33_Pro__highres[1]

Posted in Azure AD

metrolistmlsqa server-side app authentication flow

Azure mobile provides a nice easy way to create an android app, using android studio. It also hosts a backend (javascript) website that supports that app, particularly when doing “server-side login”.

image

image

image

 

This flow is one in which the code provided by the AS as a result of the app-invoked embedded browser is delivered to a server-side endpoint – via an HTTP redirect. This last step is espied by the app, which shuts down the browser and interacts with the website to get itself a local session, app to site. The site’s own webapp process has, meantime, converted the code to a token, to support the maintenance of the local session. This interaction is configured by setting identity values in the mobile app configuration to align with those of an associated AAD application (and vice versa):
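As a minimal sketch (parameter names are standard oauth2; the values are placeholders, not our real registration), the form body the site’s webapp process posts to the token endpoint to convert the code looks like:

```python
from urllib.parse import urlencode

def code_exchange_body(code, client_id, client_secret, redirect_uri):
    """Build the x-www-form-urlencoded body for the oauth2
    authorization-code-for-token exchange (spec parameter names;
    the argument values a caller passes are placeholders here)."""
    return urlencode({
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": redirect_uri,
        "client_id": client_id,
        "client_secret": client_secret,
    })
```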

we configure the mobile backend site to talk to the “homespotter webapp” AAD application registration, in our netmagic.onmicrosoft.com AAD tenant.

image

image

Posted in AAD, azuremobile

metrolist custom domain of netmagic.onmicrosoft.com with app

We added a second custom domain to our AAD tenant and bug-fixed our IDP for the metrolist (IDP) tenant’s particular UX.

We then built an oauth-based application using the metrolist oauth configuration. The screen shots show the app’s login screen, in the windows UI, and a form showing records, after login, for the authenticated user. This app emulates what a metrolist vendor will do, similarly, when building an iOS or Android application.

image

image

At the server API supporting the client, the server code shown below indicates that the client provided an authorization token on the api call to GET records – having obtained that token previously from metrolist’s infrastructure using the oauth/openid connect handshake.

image

 

A trace of the embedded browser talking to the AS is shown below:

image

Posted in AAD

visual studio CTP 6 and cordova for woodgrove claims sample

The visual studio 2015 CTP6 edition on Windows Server 2012 hosted in Azure comes with a set of code samples – for the cordova tool chain. One of these is woodgrove claims – a project we learned about back here: https://yorkporc.wordpress.com/2014/05/29/office365-adal-with-backbone-mvvc-and-cordova-build/

We got the project to load in the latest CTP, once we changed the project file to add the term “Apache” in front of CordovaTools in a couple of path names.

We did make it talk to the authorization endpoint of AAD, but the ripple emulator could not successfully invoke the token swap, at the token endpoint of AAD. This could be a project or a CTP6 issue – I don’t know.

Posted in cordova

getting api manager to use AAD STS, finally

image

image

The key was NOT to have the proxy web app use an authorization code grant.

It must use only a client credentials grant:

image

The awful azure api manager documentation didn’t help, sending me the wrong way for a week.

image

Posted in api manager

debugging azure api manager’s use of the AAD token endpoint

 

To see what parameters are sent to AAD STS, for token issuing as part of the oauth2 handshake, we configured API manager to use an azure website – rather than AAD – as the token issuer.

image

 

The website is just a hosted ASP.NET web form project, running in the remote debugger, to which we added a BeginRequest handler – so we can inspect the request.

image

 

image

The posted form has the following parameters:
       {grant_type=authorization_code&code=AAABAAAAvPM1KaPlrEqdFSBzjqfTGKuDarHWcVaE_gAW-T8bBEhryxGinvUkA66Jt-uxbmRl8J5rjUc0aFJi94eDUoO9Bnt-6NR2sXVQYRXhygLUhjdLYrV9UmlBKZu2U_WZFSXO1_6oeIr-1Phz7VoooKJm0Vmh-N4lfUYdPTsbpgbWMhqA60jkFdiGbAwL0ocUrPw-4V8-8PwddLb1mcFOcGERx1jKa62ffZ9L22tJwkAgHhQPvk4K4TDAq60YY1JMWMgUeL9zc3oT_C6AXv6BkiK-cDm6mE9vx3ZTqz6oHP6LdUqE4QO6hukp7ptcr2Tl15WpJus-Ro4hM4gmdXer7hlwBVM22RLdPBKKZOsm649q12SokmOTdhgHcUX0y2aDxqNPhcTwy0z1QNj6pdZ4PiEVJ9i-qxvZrdB2MUSUNrJ7Lw5bEvzD1rM_eSOPjx-rKwu6gSWqYTNFbXcaBgEoQA6m8PULBdItUNwVwjcyeXTHvEhqrYJLBGdhjpucFGTDYqiteM5zyhFj-GiRkS–9x0kv4vg9TbYl0fLFv8bJwjkG19yZIwVKCVelzZ3TVvsQfyT9srcFCCv6BGu2QnLgA-la0Vksu9NnXHh1hpnO1drt7QLXj6p2FTHhCIDEKv1EobQJwFol8yrsTSdi4wJnYa-dvObvFmXn_8nBw57qKFRp-ogAA&redirect_uri=https%3a%2f%2frapmlsqa.portal.azure-api.net%2fdocs%2fservices%2f54e4f45e73c60f106453dac3%2fconsole%2foauth2%2fauthorizationcode%2fcallback&client_id=0bc904ae-3f2c-4ec7-8b71-40f7207112f0&client_secret=fV1OJsfRFOTDdIqTzs%2fdZCRJkHvcPr9fZGJhWo1dQNg%3d}  
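That blob is ordinary application/x-www-form-urlencoded content; a quick way to pick it apart (the long opaque values are trimmed here for readability):

```python
from urllib.parse import parse_qs

# the captured form body, with the code and secret values trimmed
raw = ("grant_type=authorization_code"
       "&code=AAABAAAAvPM1...trimmed..."
       "&redirect_uri=https%3a%2f%2frapmlsqa.portal.azure-api.net%2f...trimmed..."
       "&client_id=0bc904ae-3f2c-4ec7-8b71-40f7207112f0"
       "&client_secret=...trimmed...")

params = parse_qs(raw)            # each field maps to a list of values
grant_type = params["grant_type"][0]
client_id = params["client_id"][0]
```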

Posted in api manager

TOTAL INFORMATION UNAWARENESS–Gemalto style

image 

http://www.cnet.com/news/sim-card-maker-gemalto-says-its-cards-are-secure-despite-hack/

Folks in the know may recall TIA: total information awareness. This was the codeword for metadata collection and scanning (the process by which one becomes aware about information, described by the metadata).

 

its general practice for the CISO to be officially unaware (of what his NSA-paid staff do). That is, deniability is a built-in protocol. I can be unaware, and totally unaware (where the ‘totally’ is a codeword for “I have ‘legal cover’ for saying I’m unaware – or have ‘no reason’ to believe – because it’s not FORMALLY part of my job description to make such utterances that perpetuate deniability, truthfully or otherwise”). What matters is that I do not lie (not that the statement is truthful). This is DARPA at its best (indoctrinating folks in the distinction between telling a lie and stating something [useful to a policy position] that you don’t know to be truthful – or otherwise).

“to the best of his knowledge”, quotes the journalist – without even commenting on the PR protocol being invoked.

Our SIMs are secure! says Gemalto (the supplier of NSA/DoD’s 67464C CAC cards). That the purpose of the card (making private calls) doesn’t work has NOTHING to do with the claim of the “security” of the technical countermeasure (note the doublespeak).

Posted in Computers and Internet

hogwash in the self-signed cert arena for OEMs

bad journalism and self-centered experts. That’s all I see

image

http://www.cnet.com/news/superfish-torments-lenovo-owners-with-more-than-adware/

It was always intended that OEMs and browser distributors should insert their own trust points into the windows trust stores.

If Sprint licensed the original netscape browser and tuned things up so it was easy to dial up Sprint ISP internet access points, it was absolutely intended that Sprint would also tune up the root list for SSL – adding and removing entries as they saw fit, for their value-added network. If they wished to offer an SSL CONNECT proxy service (to only those consumers who dialed up internet via the sprint ISP) they were absolutely entitled and expected to offer a CONNECT service, with their own trusted roots.

This foofoo is all about nothing, with an OEM doing what it’s supposed to be doing – selecting trusted drivers, vetting them, and this includes “CAs”.

If you “buy” a corporate laptop, it comes with the enterprise root too, all set up for MITM and spying on you (the employee) by the evil Enterprise. Whether it’s OEM or Enterprise, the model is the same. You trust the source of the hardware to vet things, and they will probably vet themselves highly – mostly so they can spy or otherwise add value (like shove ads in your face).

Posted in spying, ssl

nsa version of XNS

was actually quite interesting, since it mixed type I waveform protections with type II ciphering.

not as interesting but very nostalgic is the picture that shows a workgroup supported by a “300 MByte” file server.

image

2015-0133.pdf (via cryptome’s site)

Which is less than the storage on your phone, full of your photos, probably!

ok ok, neither the XT nor the fileserver was ever used for cryptanalytical jobs – even though the first Manchester computer (built 1946), with drum memory for storing colossus tape, had less io than the XT + 4Mbps ethernet.

Posted in Computers and Internet

solving ki with tls

“The only effective way for individuals to protect themselves from Ki theft-enabled surveillance is to use secure communications software, rather than relying on SIM card-based security. Secure software includes email and other apps that use Transport Layer Security (TLS), the mechanism underlying the secure HTTPS web protocol. The email clients included with Android phones and iPhones support TLS, as do large email providers like Yahoo and Google.”

https://firstlook.org/theintercept/2015/02/19/great-sim-heist/

 

hah hah hah.

 

““We need to stop assuming that the phone companies will provide us with a secure method of making calls or exchanging text messages,” says Soghoian.”

well durr!

And don’t assume – unless you are wholly stupid – that the “new” Telco “TLS suppliers” are any different to the old Telcos. Read Google, think compromised by design. Read Microsoft, think compromised by praxis.

Posted in dunno

microsoft azure ws-fedp login, and FISMA/FEDRAMP

The login button on the azure portal screen invokes oauth (and its openid connect profile) in order to land on the “old” management portal site:

image

image

We see that this process uses an authorization server, with an authorization grant endpoint, at https://login.microsoftonline.com/common/oauth2/authorize?response_type=code+id_token&redirect_uri=https%3a%2f%2fmanage.windowsazure.com%2f&client_id=00000013-0000-0000-c000-000000000000&resource=https%3a%2f%2fmanagement.core.windows.net%2f&scope=user_impersonation+openid&nonce=172c6ab7-7231-4759-ad39-35d9650894c6&domain_hint=&site_id=500879&response_mode=query which takes an id_token parameter for the response type.
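A small check of what that query string says (trimmed here of the nonce/domain_hint/site_id noise; note the hybrid-flow response_type, where ‘+’ decodes to a space):

```python
from urllib.parse import urlsplit, parse_qs

# the authorize URL from the trace, trimmed of nonce/domain_hint/site_id
authorize = ("https://login.microsoftonline.com/common/oauth2/authorize"
             "?response_type=code+id_token"
             "&redirect_uri=https%3a%2f%2fmanage.windowsazure.com%2f"
             "&client_id=00000013-0000-0000-c000-000000000000"
             "&resource=https%3a%2f%2fmanagement.core.windows.net%2f"
             "&scope=user_impersonation+openid&response_mode=query")

q = parse_qs(urlsplit(authorize).query)
response_type = q["response_type"][0]   # the '+' decodes to a space
```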

image

The server leverages user discovery

image

which generates the ws-fedp request (with its mix of proprietary and standard query string parameters).

As per the office 365 login experience, the site with web resources uses microsoftonline AS (that invokes ws-fedp websso on our IDP).

 

We can contrast this site login experience with the api management “manager site” (having just logged into the azure management portal, note).

 

image

here we see that the site uses an authorization server at  https://login.windows.net/common/oauth2/authorize?client_id=aad2a9dc-2d78-4a34-a6ea-8b535b177cd9&response_mode=form_post&response_type=code+id_token&scope=openid+profile&state=OpenIdConnect.AuthenticationProperties%3dAQAAANCMnd8BFdERjHoAwE_Cl-sBAAAAAZIdGDtugkSxUpRQIN9TaQAAAAACAAAAAAAQZgAAAAEAACAAAAC6kFh2Ew2gnA_lt7qgl9cQgkatiwwZLr3h4jftLw8WPQAAAAAOgAAAAAIAACAAAADmfY1WdHg4ve5Q4UJR2kB9eBK9hsEwPmJqG7MLskx-bkAAAAAWdW7-gkC9APajNuDG5yD0RTf14rcJ2M1ajbPkrTs0quWG_heb0jKaGyvp6vD_c2QJvHC-BZ_owkNcS1NSQNgmQAAAALe55wqpF8jbWMpJBAmnUozok7XZ_1Ftm-eSPzEKROF_UgDTt4lViOKkWLuwRa7l4x_GzvuON3jmRCj8t4U92gg&nonce=635594561610091501.NzA5MTQxZmEtM2ZjNC00MThmLWE2YzItZDRiMjU3YWY0NzQwZmZmZjg4NzctNjQ2NS00MWFiLTlmOTMtY2MyYzZjMjZhOWQy

image

by means unknown, this server knows it has a session (associated with microsoftonline.com’s AS, presumably)

If we delete cookies in the browser, we see the microsoft.net AS invoke microsoftonline.com’s ws-fedp responder (note well, this is not the AS used earlier), which invokes our IDP in an IDP proxying model.

image

image

This duly lands the user (who now visits the IDP) on the api management site:

image

Posted in Azure AD

VRSN, Azure logging, cybersecurity exec order, NSA data center

So the US did nothing today other than what it promised a while ago: start to build a next generation comsec economy. This follows the API economy (which failed). The comsec economy is a cold war gamble – and likely to succeed.

image

http://www.nasdaq.com/symbol/vrsn/real-time

The chart of VeriSign’s stock price is one indicator of just how well the plan is working, since DNS authority is at the heart of metadata scanning in the new world of the comsec economy.

We can point out Azure machine learning tools, too – the raw infrastructure for processing huge numbers of (rather mundane) log file events. Of course, events that log activity are better known as metadata.

image

http://azure.microsoft.com/en-us/services/machine-learning/

The focus by Microsoft on this area is similar to the focus a few years ago on office 365 infrastructure – assuming that government agencies would eventually outsource their (legacy) mail and intranet servers (and voice switches, ideally). This time, the assumption is that each such agency will be outsourcing its log processing.

The executive order binding stooge companies (bank of America, my bank, amongst them) to the federal initiative to collect log statistics, apply machine learning, and specifically predict cyber attacks aimed at American private firms… is now a reality. This has been in the works for years.

image

http://www.whitehouse.gov/the-press-office/2015/02/12/fact-sheet-executive-order-promoting-private-sector-cybersecurity-inform

A word of warning on that initiative, since it sounds almost identical to the post-Clinton exec order issued after the FBI were denied mandatory key escrow: the establishment of a MITRE-led initiative to create the metadata scanning program (instead).

In the audit arena, folks have been gearing up for this cybersecurity focus for a while, led by the revitalized FISMA process:

image

http://csrc.nist.gov/groups/SMA/fisma/Risk-Management-Framework/index.html

note how the changes of orientation towards cybersecurity-motivated logging are now downplayed, in light of increased public understanding of metadata scanning threats. It’s quite hard now to link up fisma (revised) with fedramp:

image

http://www.gsa.gov/portal/category/102375

one notes the outcome of the NSA vs DHS war over who is to control metadata scanning!

image

http://www.dhs.gov/news/2010/11/18/dhs-highlights-two-cybersecurity-initiatives-enhance-coordination-state-and-local

of course, this is all dominated by the investment in Utah

image

https://nsa.gov1.info/utah-data-center/

All of which goes back years, to the decision to use subversion (of internet security programs) rather than mandatory key escrow.

image

https://www.schneier.com/crypto-gram/archives/1999/1015.html

Posted in Computers and Internet

using AAD and oauth2 for Azure API Management’s Developer console need for access tokens

In the last post, we noted how one configures the api manager in azure to communicate with the AAD authorization server for the purpose of enabling developers to log on to the api developer site using their federated credentials. Of course, behind the scenes one is simply configuring the site to talk to AAD using oauth2 – as augmented for multi-tenant access by Microsoft.

In this post, we configure another aspect of api management: oauth2 authorization servers for the purpose of supporting authenticated API requests. That is, the api client and authorization server will communicate to get access tokens that are then attached to the request. The additional authorization header in the request is processed by the server as an access token, as normal.
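A minimal sketch of what “attached to the request” means in practice: the access token rides in the standard bearer Authorization header (RFC 6750); the token value below is a placeholder:

```python
def with_bearer(headers, access_token):
    """Return a copy of the request headers with the oauth2 access
    token attached using the RFC 6750 bearer scheme."""
    out = dict(headers)
    out["Authorization"] = "Bearer " + access_token
    return out

h = with_bearer({"Accept": "application/json"}, "eyJ...placeholder...")
```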

image

image

 

we guess at the right parameters and then move on to configure the AAD application so that we obtain the missing clientid/secretkey values.

image

image

 

moving to AAD application configuration…

image

image

https://rapmlsqa.portal.azure-api.net/docs/services/54dd05ad73c60f1110d802b1/console/oauth2/authorizationcode/callback

image

we enabled all app and delegation permissions (not knowing any better)

clientid: 0bc904ae-3f2c-4ec7-8b71-40f7207112f0

key: p4BAp2OLoQf4PR7AAMIR6TY+j7/033DgrVhb4A80XFc=

image

 

giving

image

Now we arm the api endpoints to demand the access token and interact with an authorization server, such as that configured above. Using the API screen, in the management console:

image

image

To test the setup, we follow the instructions and use the developer portal to invoke a client of the my echo API:

image

selecting authorization code induces display of the websso experience at the authorization server (which is good!)

image

image

 

image

once we fix a bug or two, we can continue this to completion.

image

note the audience fields (doesn’t feel right!)

note how the wtrealm field (provided by azure’s initiator) becomes confused in the IDP:

image

image

Posted in Azure AD

Azure AD and API enforcement after an openid connect (oauth2) handshake

Azure offers a simple webapi proxying service that consumes authenticated requests and relays responses to clients.

We created the “rapmlsqa” portal for api management VIA THE AZURE CLOUD; where we see a default api – provided as an element of the ‘starter product’

image

image

We learn to invoke this api using the developer console. We also learned, first, to obtain from the management console “users” panel the subscription key for the “starter product”.

image

image

image

we see that the invocation requires a subscription key, even though no (proxy) security requirement is set.
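A hedged sketch of that requirement: the product subscription key travels on every call in its own header (Ocp-Apim-Subscription-Key is the name the developer console shows; the gateway host and key value below are placeholders):

```python
def subscribed_request(path, subscription_key):
    """Assemble the pieces of a call to an API Management facade: the
    subscription key goes in the Ocp-Apim-Subscription-Key header.
    Host and key here are placeholders, not real values."""
    return {
        "url": "https://rapmlsqa.azure-api.net" + path,
        "headers": {"Ocp-Apim-Subscription-Key": subscription_key},
    }

req = subscribed_request("/echo/resource", "placeholder-key")
```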

The first use we make of AAD – and its oauth2/openid-connect handshake – is for the purpose of user login to the management/developer portal site:

image

http://azure.microsoft.com/en-us/documentation/articles/api-management-howto-aad/

image

https://rapmlsqa.portal.azure-api.net/signin-aad

image

image

image

image

image

RpJnlhSWfRkFCg9h04rmk5N3INmCpNzmua/Uh7UWQjg= (key)

aad2a9dc-2d78-4a34-a6ea-8b535b177cd9 (clientid)

image

This gives us a signin experience set up for local account and AAD (rapmlsqa federated) login:

image

image

https://rapmlsqa.portal.azure-api.net/signin

After the usual azure cloud login experience, we  land on the site which has auto-created a (federated-local) account

image

image

back on the management site, we can exploit our admin privileges to modify the subscriptions to which the new user has access

image

image

we see that this new (federated) user – with its distinct subscription ids now – can invoke the API:

image

Of course, this is not an oauth2 handshake (to access the API). In realty terms, we have a portal that enables us to assign vendorids (to MLS federated users) and grant said users subscription power to certain APIs (eg RETS).

In the next effort, we WILL APPLY oauth2 handshakes between API CLIENT and API SERVER/PROXY.

Posted in azure, Azure AD

Iiw and crypto subversion

Is a reference to a document whose content describes how lots of americans (mostly) were cooperating to define a path for identity management using, so-called, community consensus processes.

Under the assumption that the “would if we could” sector of America representing natsec interests will have subverted such processes to suborn them (probably covertly), we might ask: who in the iiw process is (secretly) an nsa/fbi/dod/cia/etc subversive?

Not all standards-related groups need subversion. The IESG has no problem being infiltrated by parties whose goals are at odds with stated positions. ANSI security committees are well known for being openly stuffed with stooges, who openly flaunt their loyalty to US natsec interests.

Posted in coding theory

Ps and Qs (and an unexceptional understanding of KEA)

http://blog.cryptographyengineering.com/2015/01/hopefully-last-post-ill-ever-write-on.html is interesting for its discussion of Ps and Qs.

Long ago, I got a briefing (from NSA folk) on the security architecture and crypto algorithms used in the security mechanisms for the type II version of MSP – the layer 7 forerunner of S/MIME (in DoD). One might recall how DoD certs, in the fortezza era, contained signing and KEA keys. KEA has a DH-like computation, and cooperates with a particular block formation process to compute a master session key. It’s really quite cute, and shows that NSA has a lot of class. At the time, I was quite in awe of NSA’s ability to subtly tune up algorithms, mechanisms and services (and still am, truth be told).

Now, in the KEA process based on discrete log problems (vs curve based problems), the so-called random numbers from the certs’ keying blocks were augmented by values supplied by the protocol …. from which master keys were derived (after certain blocking processes were used, and signing processes verified, note well). The keys from the UKM list (a list of secondary “randoms” that “changed” the master key for the crypto periods desired) were the source of the protocol’s contribution of (partial) keying material, generating the master secret.

When used in interactive protocols (such as TLS these days), both parties would contribute the KEA key and a nominated value from their UKM. In the case of email protocols, only the sender would contribute such material, with values from the device’s RNG satisfying the formal requirements.
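Purely to illustrate the role the UKM plays (this is NOT the actual KEA blocking process, which was never published in detail – just a toy KDF-style sketch of how a per-period random changes the master key without new certificates):

```python
import hashlib

def master_key(pairwise_secret: bytes, ukm: bytes) -> bytes:
    """Toy derivation: hash the long-term pairwise (DH-style) secret
    together with a per-crypto-period UKM value. Rotating the UKM yields
    a fresh master key; the certs (and their KEA keys) stay unchanged.
    NOT the real KEA algorithm - an illustration only."""
    return hashlib.sha256(b"kea-sketch|" + pairwise_secret + b"|" + ukm).digest()

k1 = master_key(b"pairwise-secret", b"ukm-period-1")
k2 = master_key(b"pairwise-secret", b"ukm-period-2")
```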

It was made quite clear that the KEA values in the certs were public keys, which could be treated as random numbers at one level (and public keys at another).

While the security mechanisms were identical, the security services that result from these different procedures for leveraging keying material were different. And there was nothing held back about why such differences were necessary – based on pretty obvious (but also subtle) requirements analysis. This was “security engineering” – which I learned from the agency (and loved to learn, from folks evidently well skilled). That the fortezza card could be applied to the different security services was, furthermore, absolutely intended – since folks wanted different communication security semantics, to suit the different operational theatres the devices were intended to serve.

I kind of miss professional NSA security engineering, given the drivel I read from the academic sector.

Posted in crypto

HSM, EAL4 and worthless assurances from Microsoft Azure on crypto

image

http://azure.microsoft.com/blog/2015/01/08/azure-is-now-bigger-faster-more-open-and-more-secure/

Trust is hard to obtain and maintain – especially in a world now well-tuned to the cynicism that expects large corporations to be working with governments in the execution of a systemic, trust-based deception plan. If you must, call it a national identity program, or a national cybersecurity plan. Anything (so long as it’s not PKI – joke).

If I put 15 more deadbolts on my front door, I am truly “more protected”. Yes, no one lied. But it’s still worthless (despite being “better” than merely 1 deadbolt).

Why? Because one smashes the weak door jamb down (not the door) when one “disables” all 15 of the $20 deadbolts with the $10 sledgehammer wielded by an ape with an exceptionalist attitude. That is: the deadbolts make no difference to security. They just sell worthless American assurances.

A FIPS level-2 HSM is nice to have, but about as useful to crypto-security and confidentiality as the windows security manager in your desktop when faced with an FBI sledgehammer. Level 2 means it relies on a password scheme (and anyone knows that a “little bit of legal duress” can induce you to hand over your password).

It’s true that having certs and signing keys protected in a cloud-based HSM is nicer than on-premise HSMs. And one sees how it supports encryption of tuples by a hosted SQL server. But, as for “security and safety”, the HSM feature is as worthless as any other American-sourced crypto/security product – since the vendor (microsoft) is under the thumb of the crypto regulator.

Sorry Microsoft. Right idea, wrong marketing. You must admit that – like a front door and its deadbolt – the security is illusory.

your own trustworthiness depends on saying what some don’t want you to say – including the above clear statement of limits. Each and every attempt to use marketing terms and clever parsing to hide the reality leads one to put microsoft in the american bucket (it’s just another deception program, a few exceptionals seeking to mislead the rest of humanity – the non-exceptionals – on crypto).

Posted in crypto

windows technical preview tablet mode: missing toolbars

I like it – in general.

Well compared to the mode switch that came with windows 8!

Also interesting to see bugs/issues with older Win32 apps – even ones “properly engineered (back then)” by Microsoft itself to work properly with future OS changes.

image

Posted in dunno

steve crocker, darpa, x500 and DNS and trust

Steve Crocker was head of IETF internet security activities at the time when I was involved in standards work. He was also a founder and thought leader of the internet movement itself – that civilian form of the military internet he helped foment. As a DARPA program manager, he fit the bill perfectly: strong-willed, academic, and able to think 25 years ahead.

We have learned what DARPA was aiming for (25 years ago) with the internet; and, in particular, what it was aiming for in how the name servers (DNS) were to evolve, so that the civilian world (of 25 years hence) would aid and enable the military posture of the USA (and nominally its allies). Steve was one who despised the ISO/CCITT form of name serving (the X.500 system), wanting “anything US and internet” to win (at all costs) in the name-serving arena. Stuff from international standards bodies was essentially evil, per se (not being American).

TO BE FAIR, in the technical arena of name serving I think he really believed in the wisdom of using packet technologies to scale services (like DNS) – and it’s 100% true that the DNS we have today is very much an internet-era, packet-switched technology that leverages the internet architecture to deliver its feature set. What we have learned since, however, is that the benefit of this gunboat approach to design has a sharp kickback when fired – especially when the gunboat is all about American exceptionalism. The gunboat is really a spying ship.

I’m prompted to write this missive having just overheard a Starbucks barista castigate a customer for wanting to pay $1.50 for a paper newspaper rather than use the “free” internet to obtain news. This is the world of 25 years later; the world that Steve wanted to foment and indeed did foment – acting as that special class of exceptional: the DARPA program manager.

DARPA indoctrinated its folks in deception – the use of intelligence and foresight to aim “to win” at all costs by leveraging deep, programmatic deception, winning hearts and minds through technology seeding that spread American ideas themselves. We see now that Steve, as with many of the DARPA-paid folks of the era, did his job well – inducing civilian infrastructure to be conjoined with American military infrastructure. Knowing that open communications for the masses would interfere with military spying, it was critical to ensure that internet (vs. milnet) technologies in name serving would aid and abet a military advantage to the USA. Since international standards provided no such advantage, they were to be skewered, denied, and undermined – particularly by spying-related agencies such as the NSA.

It’s not my place to discuss how (what was known as) milnet DNS works – protecting name serving via packet technologies in a way that the DNS service is not protected in the internet form of packet switching. After all, if there is a real war, I will be ON NSA’s side, doing whatever it takes to exploit an intelligence advantage. As a civilian today, however, with a technical political opinion, I can properly ask questions and point out the issues.

It comes down to trust. And America is not doing a good job of being seen to be trustworthy. The likes of DARPA are now perceived to be the antithesis of trust – seen to have existed for the very purpose of executing technology-based deception plans at a systemic level. The internet, and the thesis of open technology dependency, is the evidence.

Posted in dunno, rant

someone attacks my phone and pc, overnight

And, they are good.

While attacking a US-sourced phone is not hard – since all the firmware and motherboard vendors work with the NSA/CIA to enable remote disablement of the core chips (let alone the OS running on the CPU) – disabling the PC was quite a feat. It required a remote start from the wifi.

This was a military capability, leveraging preparedness to attack civilian infrastructure. This was beyond the FBI’s (own) capabilities.

Wonder why!

Using such against me is really not on. First off, I’m not a target on which it’s worth wasting such knowhow. This leaves only the desire to collect dirt, plant something, etc. – which is a CIA function (not an NSA one).

Posted in dunno

1940s crypto secrets

Things that were considered worthy of being classified (as 1930s-era secrets in the field of cryptanalysis) included what we would now call iterative methods (in algorithm design).

One example of an iterative method is the highly crypto-centric method of Banburismus – the updating of Bayesian estimates of the likelihood of potential solutions in a search set, used to direct the direction of further search. One thinks also of counter-intelligence examples from the 1940s era, when planes were sent out to search, intended to be observed by the enemy conducting searches using known search methods; this hid the more advanced search processes used only in well-protected crypto facilities.
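As a modern illustration only (not the actual wartime procedure), the core pattern behind Banburismus – repeatedly re-weighting candidate hypotheses by Bayes’ rule as evidence arrives, so that the search can be steered toward the most likely candidates – can be sketched as follows. The hypothesis names and likelihood figures are invented for the example:

```python
# Sketch of sequential Bayesian updating over a candidate set.
# Not Banburismus itself; just the general update pattern it used.

def bayes_update(priors, likelihoods):
    """One round of Bayes' rule: posterior proportional to prior x likelihood."""
    unnorm = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

# Three candidate "settings", initially equally likely; each observation
# is assumed (for illustration) to favour candidate B.
posterior = {"A": 1 / 3, "B": 1 / 3, "C": 1 / 3}
evidence = {"A": 0.2, "B": 0.6, "C": 0.2}

for _ in range(5):
    posterior = bayes_update(posterior, evidence)

# After a few rounds, the posterior concentrates on B, telling the
# searcher where to spend effort next.
best = max(posterior, key=posterior.get)
```

Each round multiplies B’s odds against the others by a factor of three here, so the posterior concentrates quickly – the same economy of effort that made the method worth classifying.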

Iterative methods used for calculating eigenvalues/eigenvectors from approximations, for quantum-mechanics-related matrices/operators, were also “guarded secrets”. The secrets concerned both the raw “methods” themselves and the “means” used to complete the processes in a cost-effective time. One thinks of processes based on Rayleigh quotients and minimization to find the minimum eigenvalue (and the second-least eigenvalue) – given the special role this distance plays in certain groups when allowing convergence to occur only after almost the maximum number of iterations.
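For a flavour of the kind of iterative method being described, here is a minimal power-iteration sketch that uses the Rayleigh quotient to estimate the dominant eigenvalue of a small symmetric matrix. This is a textbook technique chosen for illustration (the historical material is not available); the matrix and starting vector are arbitrary:

```python
import math

def matvec(A, v):
    """Multiply matrix A (list of rows) by vector v."""
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

def rayleigh_quotient(A, v):
    """Rayleigh quotient (v.Av)/(v.v): the eigenvalue estimate for v."""
    Av = matvec(A, v)
    return sum(x * y for x, y in zip(v, Av)) / sum(x * x for x in v)

def power_iteration(A, v, steps=50):
    """Iterate v <- Av / |Av|; converges to the dominant eigenvector."""
    for _ in range(steps):
        v = matvec(A, v)
        norm = math.sqrt(sum(x * x for x in v))
        v = [x / norm for x in v]
    return v, rayleigh_quotient(A, v)

# Symmetric 2x2 example with known eigenvalues 1 and 3.
A = [[2.0, 1.0], [1.0, 2.0]]
vec, lam = power_iteration(A, [1.0, 0.0])
```

The convergence rate depends on the ratio of the second eigenvalue to the first – exactly the kind of eigenvalue-separation property the post says was exploited (in the other direction) to force near-maximal iteration counts.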

The special spatial relations that exist between solution spaces and search spaces were known to characterize complexity and difficulty metrics – allowing development of key-wrapping ciphers whose key security could be measured in terms of how long it WOULD take to reverse the mathematics (assuming the ciphertext had been compromised by capture). This obviously gives a maximum operational window for use of the keying material, in a worst-case-analysis world.
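The worst-case operational window described above reduces to simple arithmetic over the key space. A toy sketch, where the adversary trial rate is an entirely assumed figure for illustration:

```python
# Worst-case window: time for an adversary to exhaust the full key space,
# given an assumed trial rate. Both inputs below are illustrative only.

def worst_case_hours(key_bits, trials_per_second):
    """Hours to try every key of the given size at the given rate."""
    return (2 ** key_bits) / trials_per_second / 3600

# A toy 40-bit key against an assumed 1e9-trials/second adversary:
window = worst_case_hours(40, 1e9)   # roughly 0.3 hours
```

Each extra key bit doubles the window, which is why the analysis yields a clean maximum lifetime for the keying material rather than a probabilistic one.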

Posted in crypto