power spectral density curve; LTI filtering book

image

Smoothing, Filtering and Prediction – Estimating The Past, Present and Future

Edited by Garry A. Einicke, ISBN 978-953-307-752-9, Hard cover, 276 pages, Publisher: InTech, Published: February 24, 2012 under CC BY 3.0 license, in subject Electrical and Electronic Engineering
DOI: 10.5772/2706

http://www.intechopen.com/books/smoothing-filtering-and-prediction-estimating-the-past-present-and-future/continuous-time-minimum-mean-square-error-filtering

It’s interesting to reflect on the “conjugate form” of the definition – recalling how often we see this type of argument used in rotor-machine cryptanalysis.

The book seems to be published in online form for all to consume:

Chapter 1 Continuous-Time Minimum-Mean-Square-Error Filtering

Chapter 2 Discrete-Time Minimum-Mean-Square-Error Filtering

Chapter 3 Continuous-Time Minimum-Variance Filtering

Chapter 4 Discrete-Time Minimum-Variance Prediction and Filtering

Chapter 5 Discrete-Time Steady-State Minimum-Variance Prediction and Filtering

Chapter 6 Continuous-Time Smoothing

Chapter 7 Discrete-Time Smoothing

Chapter 8 Parameter Estimation

Chapter 9 Robust Prediction, Filtering and Smoothing

Chapter 10 Nonlinear Prediction, Filtering and Smoothing

Posted in coding theory

FT of data, checksum constraint

image

image

http://arxiv.org/ftp/cs/papers/0511/0511100.pdf

Now we see the same thing in Tunny, 1945 – though NOT in the context of defining the check node operation:

image

http://www.ellsbury.com/tunny/tunny-075.htm

Now what I’ve not seen is the continuation – in a modern context. That is, when the boolean function is itself a PB function:

image

http://www.ellsbury.com/tunny/tunny-075.htm

Let’s remember how PB is defined – as the deviation of a stream’s bulge from the random case defined by beta = .5. Also, let’s recall the context of the Tunny discussion: wanting to apply PB algebra to a conditional probability expression (linking Plaintext to hidden Dechi) in the absence of knowing the prior distribution accurately.
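
As a reminder of that algebra (a sketch in my own notation, assuming the usual convention that a stream whose dot-probability is p has proportional bulge β = 2p − 1, i.e. β = 0 in the random case p = .5): the PB of an XOR of independent streams is just the product of the individual PBs, which is exactly the check-node style argument above.

```python
# Sketch (my notation, not the report's): proportional-bulge algebra for
# binary streams, assuming beta = 2p - 1 where p is the dot-probability.

def bulge(p_dot):
    """Proportional bulge of a stream with dot-probability p_dot."""
    return 2.0 * p_dot - 1.0

def prob_from_bulge(beta):
    """Invert: p = (1 + beta) / 2."""
    return (1.0 + beta) / 2.0

def xor_bulge(*betas):
    """PB of the XOR of independent streams: bulges simply multiply
    (the check-node / piling-up rule)."""
    out = 1.0
    for b in betas:
        out *= b
    return out

# Example: two streams each with dot-rate 0.6 (beta = 0.2);
# their XOR has beta = 0.04, i.e. dot-rate 0.52 -- closer to random.
b = xor_bulge(bulge(0.6), bulge(0.6))
print(b, prob_from_bulge(b))
```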

But, I do think I better understand the reasoning (finally!) of the last step in X5

image

… given

image

This makes us understand the “horizontal sum” instruction of the Manchester computer much better. See https://yorkporc.wordpress.com/2012/07/21/finding-the-bulge-in-the-waist-of-the-ferranti-2/

And we get to see a limited view of working in and extending the model to G4:

image

image

See MacKay, at http://www.inference.phy.cam.ac.uk/mackay/itila/

(Recall Preparata, and the Gray map, before doing PB4*().)
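
(For that aside, a hedged reminder sketch: the Gray map usually quoted in the Z4-linearity story around the Preparata codes sends 0↦00, 1↦01, 2↦11, 3↦10.)

```python
# Sketch: the standard Gray map Z4 -> F2^2 used in the Z4-linearity
# account of the (nonlinear) Preparata/Kerdock codes.
GRAY = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}

def gray_map(word_z4):
    """Map a Z4 word to a binary word of twice the length."""
    return tuple(bit for sym in word_z4 for bit in GRAY[sym])

print(gray_map((0, 1, 2, 3)))   # (0, 0, 0, 1, 1, 1, 1, 0)
```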

Concerning the actual bulge calculation (and sigma calculation, for sampling) for a particular Colossus run:

image

http://www.ellsbury.com/tunny/tunny-098.htm
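
(A minimal sketch of the arithmetic, with made-up numbers rather than the report’s: under the null hypothesis a run of N counted letters scores with probability ½, so the expectation is N/2, sigma is √N/2, and the bulge is quoted in multiples of sigma.)

```python
import math

# Sketch: bulge and sigma for a Colossus-style run of N counted letters,
# under the null hypothesis that each letter scores with probability 1/2.
def run_statistics(count, n_letters):
    expected = n_letters / 2.0
    sigma = math.sqrt(n_letters) / 2.0          # binomial std dev, p = 1/2
    bulge = count - expected                    # excess over the random case
    return bulge, sigma, bulge / sigma          # score in "sigma" units

# Example (made-up numbers): 2,650 hits in a 5,000-letter run.
print(run_statistics(2650, 5000))
```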

Posted in coding theory, colossus

real estate and foaf. Is it time?

For various reasons, I’m starting to get back into thinking about foaf protocols (for realty). In part it’s because we now have a focus for applying the research output of about 2-3 years ago (culminating in JSON APIs + OAUTH). With that issue settled (for this year’s dev agenda), what gets added to the queue next becomes the pertinent question. Is linked-data, the semantic web, or just some lightweight foaf protocol reaching “maturity”?

By playing with my wordpress blog, as a site administrator and blogger, I’ve slowly come to understand the webby design principles behind it. While not exactly a direct representation of pure foaf culture, one can see the parallels. The webid exercise showed me how NOT to get entrapped in idealism.

While Kingsley probably knew 10 years ago what I’m about to write, for me the praxis of it is brand new. It’s all relevant as it’s now trivial for us, operating backroom real estate services, to run a [public/private/hybrid] cloud – instrumented by Microsoft System Center and Azure-based cloud platforms. And, it’s then trivial to run a million wordpress instances, with instances hosted in our private cloud one day, and the public cloud the next. Being wordpress, at the end of the day… a data export takes you into a linux cloud (assuming a “consumer” wordpress user could care less).

  1. In short, wordpress plus cloud is a game changer, and thus the design principles behind wordpress become more important to understand. Of course, these are analogous to foaf principles – long discussed here. As is typical, the pure research model of something (like the semantic web) may take a form upon adoption that is “rather distinct” from the ideal model. But that worries me not, being so typical. So long as the core principles are there, I don’t mind dropping research models. I expect electricity once invented to have its 120v or 230v proponents, and still have different plug/socket designs per protectionist market. This is all “reality”.
  2. Having made a blog post whose title maps to a URI, I often change the title. Other blog posts have already referred to the old URI. This change doesn’t worry wordpress, which has a built-in polyarchical naming model. Without worrying about the computer science, the commenting, the liking, the referencing, the identification, the addressing, the persistent identification-ing, all “just works, naturally”.
  3. On “connecting” to a graph-API (say Google’s) via OAUTH, the purpose is not to facilitate said person’s websso logon as a wordpress user. Rather, the point is merely to search the profiles of “friends” (defined externally to wordpress) for properties that imply they have a wordpress blog. This then auto-populates the blog reader app, and even the data widget on one’s own site that now auto-refers to friends’ blog sites. This differentiation is important. The main purpose is to configure a reader app for the site’s registered users. Only wordpress-account holders can be users (unlike blogger), distinguished from friends, connected persons, liking/following persons, etc.
  4. Following and liking have nothing to do with any of the above… Specifically, one can follow without becoming a site user. And one can follow without the site owner connecting up to some graph API to “learn” who is likely to be a followable party (due to friendship and likely compatibility of interests). Following tends to integrate with email events; liking, and reputation derived from liking/previous commenting, comes down to rights to make comments (without review).
  5. Users of the site (not merely connected graph-API endpoints, or liked, or followed, or comment-enabled persons) can then set up more advanced webhooks, enabling server-to-server event signaling. At heart, this facilitates the value-add economy (and integration with legacy constructs).
  6. Those who are into “app-culture” can connect up webhooks and OAUTH-enablement to allow a third-party site to i) receive an event for some named event and resource, ii) pull the resource, iii) sign it, and iv) store it under legal-repository semantics under a novel, now crypto-identified URI (persistent-id) that claims to be tied to the user’s private signing key (activated, originally-authorized and originally-consented via OAUTH).

Now lots of these points 1-6 make sense to me (having taken 2 years of actually blogging, as a blogger user). I think I can now comprehend what this “webbiness” is all about – without getting engaged in the endless debate found among those using research methods.

When I look at real estate now, I want to stay away from foaf-cards (and webid). What I can see a need for, when making the page/post in wordpress, is to be using semantic markup (using the foaf vocab). At least then, upon receiving a webhook event, the party pulling the resource can exploit their OAUTH empowerments from the user to then do semantic web app mashups. Such composite resources can then themselves be web-hooked (allowing the signing “app”) to be signing the composition of resources. I think this means simply extending wordpress with SOMETHING that auto-generates semantic markup – hopefully using the same kind of proxy-resource model so wonderfully exposited by Kingsley and his linked-data view of markup.

So, given understanding from playing with it all and then realizing the change in scaling due to wordpress-in-cloud, things are looking up again. As always, the trick is to focus on that formulation of the problem…whose very structure implies the solution that just “fits”.

Posted in FOAF

“Condensation points”, Haar Measures, Turing’s On permutations

In his On Permutations manuscript (title probably “censored” by Gandy to avoid the UK crypto-censors), Turing uses the term “condensation point”. Going back to 1925 lectures, we see where the term originates. Of course, we recall that this is back in the days when ALL of Cambridge was in “foundations of mathematics” mode, seeking a logical definition of what a real number even is.

image

image

http://turingarchive.org/viewer/?id=506&title=o

When reading all that for the first time, it helps to see it in modern terminology: http://press.princeton.edu/chapters/s8008.pdf and http://www.docstoc.com/docs/69816261/Mathematical-Analysis-by-Apostol.

For the purposes of applicability in Turing’s paper, it becomes ultra clear from:

image

http://math.harvard.edu/~ctm/home/text/class/harvard/212a/03/html/home/course/course.pdf

Having grasped the topic area, it’s then useful to go into the mind of those working in 1925, and figure out what “research grade” mathematicians were conceiving – which 1945-era engineers would end up applying.

Of course, we recall the concentration on L2 spaces and Gallager’s focus on the Lebesgue measure (remembering his intuitive model of how it differs from the kind of integration we all learned at 14).

Looking at Turing’s construction in the last section of On Permutations where he links up his group element generator with markov chains and measure theory, it feels more like a Haar measure:

image

http://en.wikipedia.org/wiki/Haar_measure

Now what we are missing is some material that links permutation groups to probability distributions and thence to measures on real functions over point spaces. Of course, Turing is ultra familiar with the linkup in quantum mechanics, linear algebra, and probability theory.

The article on Haar measure gives a hint of why Turing changed his mind and turned away from modeling the difference sets built into his rotors as the appropriate way to model the source of his probabilities/ratios. (This may have been a consequence of reverse engineering the early American Sigaba machine’s probability stream generator, too.)

image

http://turingarchive.org/viewer/?id=133&title=27

One also notes how little Turing uses probability-theory formalisms, always reasoning instead about geometric ratios. This links up nicely with the “natural” form of expression of channel-based probability models, as in

image

See https://yorkporc.wordpress.com/2012/08/24/conjugate-priors-and-colossus/

Given the relationship of detecting harmonic cycles via convolutional-code decoding for cryptanalysis, one can also see how conjugate priors play into professional cryptanalysis when “operator theory” is rebuilt, now enabling one to be “using a hyperprior, [which] corresponds to using a mixture density of conjugate priors, rather than a single conjugate prior” [wikipedia].
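
To pin down what the conjugate prior buys in that setting (a minimal sketch, not tied to the Tunny text; the function names are mine): for a Bernoulli dot/cross stream the Beta family is conjugate, so updating is just adding observed counts to pseudo-counts, and a weighted set of Beta components is the “mixture density of conjugate priors” the quote refers to.

```python
# Sketch: Beta-Bernoulli conjugacy, plus a tiny mixture-of-conjugate-priors
# update of the kind the hyperprior remark alludes to.

def beta_update(alpha, beta, dots, crosses):
    """Posterior Beta parameters after observing dot/cross counts."""
    return alpha + dots, beta + crosses

def mixture_update(components, dots, crosses):
    """components: list of (weight, alpha, beta).  Each component updates
    conjugately; weights are rescaled by each component's marginal likelihood
    (a Beta-function ratio; the binomial coefficient cancels)."""
    from math import lgamma, exp

    def log_beta(a, b):
        return lgamma(a) + lgamma(b) - lgamma(a + b)

    updated = []
    for w, a, b in components:
        log_ml = log_beta(a + dots, b + crosses) - log_beta(a, b)
        updated.append((w * exp(log_ml), a + dots, b + crosses))
    z = sum(w for w, _, _ in updated)
    return [(w / z, a, b) for w, a, b in updated]

print(beta_update(1, 1, dots=30, crosses=20))
print(mixture_update([(0.5, 1, 1), (0.5, 5, 2)], dots=30, crosses=20))
```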

Posted in coding theory, early computing

Quantum Shannon theory

20120825-170143.jpg

While resting in an old wild west saloon, Virginia City.

Timeless..

Image | Posted on by

ratio

image

image_thumb3

http://www.inference.phy.cam.ac.uk/mackay/itila/

The “ratio” 247/371 captures the two tap sequences, expressed in octal. That it has ratio form implies the feedback relationship: we have 1 bit of feedback. In some sense, we also have the rational number 1 + 247/371.
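
A hedged sketch of what those octal tap specs mean operationally (my own toy, with made-up names like octal_taps and rsc_encode, not MacKay’s code): each octal number unpacks to a binary tap vector, and the ratio form corresponds to a recursive systematic encoder in which one polynomial supplies the feedback taps and the other the feed-forward taps.

```python
# Sketch: unpack octal tap specs and run a rate-1/2 recursive systematic
# convolutional (RSC) encoder with those taps.

def octal_taps(octal_str):
    """'247' -> tuple of bits, most significant tap first."""
    value = int(octal_str, 8)
    return tuple(int(b) for b in bin(value)[2:])

def rsc_encode(bits, feedback="247", feedforward="371"):
    fb = octal_taps(feedback)
    ff = octal_taps(feedforward)
    k = max(len(fb), len(ff))                 # constraint length
    fb = (0,) * (k - len(fb)) + fb
    ff = (0,) * (k - len(ff)) + ff
    state = [0] * (k - 1)                     # shift-register contents
    out = []
    for u in bits:
        s = u                                 # feedback bit: input xor taps
        for tap, reg in zip(fb[1:], state):
            s ^= tap & reg
        p = ff[0] & s                         # parity: feed-forward taps
        for tap, reg in zip(ff[1:], state):
            p ^= tap & reg
        out.append((u, p))                    # systematic bit, parity bit
        state = [s] + state[:-1]
    return out

print(rsc_encode([1, 0, 1, 1, 0, 0, 1, 0]))
```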

This reminds us, now we have found it, of

image

http://www.intechopen.com/books/smoothing-filtering-and-prediction-estimating-the-past-present-and-future/continuous-time-minimum-mean-square-error-filtering (11)

Can we look at that, in some weird space, as 1 + SNR? Or, rather, as the log odds for 2 hypotheses about D – the source bits?

image

MacKay, see https://yorkporc.wordpress.com/2012/08/23/entropy-to-learning/

It would also be interesting to have an octal version of the Farey number sequence, given the distribution is uniform.
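
(For that aside, a throwaway sketch of the ordinary Farey sequence F_n – all reduced fractions in [0, 1] with denominator at most n – which is what an octal variant would generalize.)

```python
from fractions import Fraction

# Sketch: the Farey sequence F_n, i.e. reduced fractions in [0, 1]
# with denominator at most n, in increasing order.
def farey(n):
    terms = {Fraction(0, 1), Fraction(1, 1)}
    for q in range(1, n + 1):
        for p in range(1, q):
            terms.add(Fraction(p, q))
    return sorted(terms)

print(farey(5))   # F_5: 0, 1/5, 1/4, 1/3, 2/5, 1/2, 3/5, 2/3, 3/4, 4/5, 1
```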

But getting back to MacKay’s book, we can see how the ratio itself relates to the 2-tap register:

image

image_thumb3

http://www.inference.phy.cam.ac.uk/mackay/itila/

From this one gets to the general case for the turbo-code construction, in which each source input – treated as a block – gets permuted (if only by identity):

image

image_thumb3

http://www.inference.phy.cam.ac.uk/mackay/itila/

Each permutation transformation is an automorphism, I suppose. The collection of such morphisms is what is interesting, though.

Assuming now that C1 is a recursive encoder, we get harmonics – which we can think of as a phase offset in the complex lattice points being generated. C2 then does the same, with some other phase generator, and lattice overlay – giving us a “mixed” Gaussian response, I suppose. Each is a “factor” for the composite mapper. It’s like taking two Farey sequences, each starting point identified by the source block, and doing a cross-correlation, defining a unique frequency response for the mixer.
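
To make the “permuted (if only by identity)” step concrete, here’s a minimal sketch of that turbo construction (entirely my own toy; the constituent code here is the simplest recursive encoder, a one-bit accumulator, not the actual C1/C2 of the figure):

```python
import random

# Sketch: one source block feeds two recursive constituent encoders, the
# second through an interleaver ("permuted, if only by identity").
def accumulator(bits):
    state, parity = 0, []
    for u in bits:
        state ^= u                 # 1-bit feedback: p_t = p_{t-1} XOR u_t
        parity.append(state)
    return parity

def turbo_encode(block, interleaver=None):
    if interleaver is None:        # identity permutation
        interleaver = list(range(len(block)))
    permuted = [block[i] for i in interleaver]
    # transmit the systematic bits once, plus the two parity streams
    return block, accumulator(block), accumulator(permuted)

block = [random.randint(0, 1) for _ in range(16)]
pi = list(range(16)); random.shuffle(pi)       # a random interleaver
print(turbo_encode(block, pi))
```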

Anyway, the second picture below just strikes me as akin to how DES works, satisfying all the constraints in parallel, while the two stages of the Feistel cycle “represent” the interleaving of the parity process (and facilitate half a block getting permuted). The second picture in particular shows the effect of a particular sbox+permutation that systematically drops bits.

image

image_thumb3

http://www.inference.phy.cam.ac.uk/mackay/itila/

Presumably, in DES, the action of the permutation+sbox playing the role labeled 1 is a function of the previous round.

In the Canadian world, it makes sense to have a subcycle of Feistel cycles for a long block, playing the role of a turbo code with now 4 (or more) constraint sets…

Hmm. Quite fun.

Posted in coding theory

home_pw@msn.com:

webhooks may be my first encounter with the events paradigm for webby systems.

Originally posted on blogrium:

Oh, webhooks. What have you become? Just another buzzword for the rising realtime web trend? I admit, without a spec or obvious definition, it seems to lend itself to such a fate. It’s kind of like AJAX. Although, those that know the true meaning of AJAX exists mostly in first letter, and know that it is actually a significant and useful pattern, should know the same of webhooks.

In fact, those that are familiar with the heart of AJAX can even compare webhooks mechanically to AJAX. It’s like an inverted, backend, server-to-server version. Yeah? Okay, that’s a stretch. Maybe I won’t open my SXSW talk with that description.

Actually, I do want to start my SXSW talk backwards. What I usually leave until the end of my webhooks talks, and as such tend to skim over for lack of time, I’m going to make the focus on my next…


Posted in coding theory

AUTOMORPHISM for 12 year olds

image

http://unapologetic.wordpress.com/2007/02/10/group-homomorphisms/

The author does a superb job, particularly when mapping a permutation domain to a binary domain in the first example.

The three tenets are clearly motivated: that the closure notion is sustained between G the group and H the image even when the group operators are distinct, as in f3; that the set types of G and H may or may not be the same; and that epi-/mono-/iso-morphisms build up much like sur-/inj-/bi-jections do.
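
As a worked version of that permutation-to-binary example (a sketch, not the article’s own code): the sign map from S_n to {+1, −1} respects composition, which is the structure-preservation tenet in miniature.

```python
from itertools import permutations

# Sketch: the sign homomorphism from S_n (under composition) to {+1, -1}
# (under multiplication): sign(g o h) == sign(g) * sign(h).

def sign(perm):
    """Parity of a permutation given as a tuple mapping i -> perm[i]."""
    seen, parity = set(), 1
    for start in range(len(perm)):
        if start in seen:
            continue
        length, i = 0, start
        while i not in seen:                 # walk one cycle
            seen.add(i)
            i = perm[i]
            length += 1
        if length % 2 == 0:                  # an even-length cycle is odd
            parity = -parity
    return parity

def compose(g, h):
    """(g o h)(i) = g(h(i))."""
    return tuple(g[h[i]] for i in range(len(h)))

for g in permutations(range(3)):
    for h in permutations(range(3)):
        assert sign(compose(g, h)) == sign(g) * sign(h)
print("sign is a homomorphism on S_3")
```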

As I remember, we got a quick lesson on it when I was 13/14. I can remember the room now. We were our (new) math teacher’s first ever class (and he actually took us all the way through to 18).

It’s just so clearly laid out in this article, without the faffy stuff from textbooks or research papers.

I think I’ll summarize what he captured about ideals, kernels, and co-kernels too. Once one thinks of H in terms of parity matrices for dual spaces… and then sum-product algorithms and conditional probabilities for inference, it all becomes rather more interesting!

Posted in coding theory

LI–VeriSign as spy for America

Track 2: ISS for Social Network Monitoring and Investigations

This track is for Law Enforcement, Intelligence and Public Safety Authorities who are responsible for cyber investigations, monitoring, analysis and creating actionable intelligence from today’s social networks including Facebook and Twitter. Note some of these sessions denoted by “LEA and Intel Only” are only open to Law Enforcement, Public Safety and the Government Intelligence Community

Tuesday, 14 February 2012

9:00-10:00

15:30-16:00 Session B

Cloud Developments and LI/RD as a Service Opportunities
Tony Rutkowski, VP, Yaana Technology

Posted in rant

my first cloud (and vm)

Using the System Center 2012 SP1 CTP2 VHD distribution, I’ve managed to build out a “managed cloud”. This cloud happens to host the System Center APP center – which can now talk to (the rest of) the cloud, and move VMs from it to Azure.

image

It was not easy. The tool still feels like a better wrapper for the hyper-v RSAT tool at times, trying to have vsphere-style profiles. But, then it also tries to introduce some workflows, with the notion of service. And, then, it tries to be a cloud manager, exposing itself to APP Center (and hosting providers).

So formally, now I have a vm sitting in a cloud supported by a load balancer, with network virtualization. Hmm. So far it just feels like a cloud wrapper around the fabric.

Having dominated all the screens and the tools, perhaps it’s time to go find a “worked example” build for such a cloud, architected from the ground up. Before that, it would be nice just to move a VM to Azure!

Posted in cloud

conjugate priors, dynamical generalization; Tunny

As nice as it is seeing the communication angle on the topics, it’s also nice to see the “experimental design” angle:

image

image

http://www.gatsby.ucl.ac.uk/~ywteh/teaching/probmodels/lecture1.pdf

We remember the part about odds, stated less analytically, from the General Report on Tunny:

image

http://www.ellsbury.com/tunny/tunny-038.htm

(Note: I always struggled with elementary Bayesian probability, not understanding how a posterior (the view of the channel from the perspective of the receiver) could depend not only on the source encoder but on the “hypothesis,” too. I bet this confused 90% of the class, being improperly distinguished.)
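
Stated analytically (my sketch of the standard odds form, not the report’s wording): posterior odds = prior odds × likelihood ratio, and the likelihood terms are exactly where both the encoder/channel model and the hypothesis enter – which is the distinction that confused the class.

```python
# Sketch: Bayes' rule in odds form for two hypotheses H1, H2 given evidence E:
#   posterior_odds = prior_odds * likelihood_ratio,
# where likelihood_ratio = P(E | H1) / P(E | H2).

def posterior_odds(prior_odds, p_e_given_h1, p_e_given_h2):
    return prior_odds * (p_e_given_h1 / p_e_given_h2)

# Example with made-up numbers: prior odds 1:4 against H1, but the evidence
# is ten times more likely under H1 than under H2.
print(posterior_odds(1 / 4, 0.5, 0.05))   # 2.5 : 1 in favour of H1
```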

Very obviously – and very well – stated is the relationship between the constraint form (for message passing) and the state-machine form:

image

image

http://www.gatsby.ucl.ac.uk/~ywteh/teaching/probmodels/lecture2simple.pdf

All the links seem great!

You feel like you are programming in the 1930s, where the programming language is only there to characterize a state machine – which is itself a codification of an inference process that can be analyzed in terms of Bayesian networks. If we want to smooth, we do this kind of graph design and reduction; if we wish to predict, we do the other. Etc.

Obviously, the Tunny-break method using Colossus was a first implementation of that concept set. Now the question was… did the concept set precede 1939?

Posted in coding theory, colossus, early computing

ha ha ha (it hurts): symantec CA

image

http://news.cnet.com/2300-11386_3-10013290.html#2300-11386_3-10013290-15.html?&_suid=134576535210506810445168908688ha

After a bunch of pictures about physical security (found in 1000 data centers around the world), Symantec fesses up to using a 10-year-old chip security solution (an alleged ripoff from a close NSA partner, no less! Jan’s firm!). And then it uses the same PIN reader I have in my garage (lying around on the floor). There is even a picture of the same (really old, and no longer supported) Luna box! Honestly.

Boy that’s sad. There is even a KSD-64 on show (yes that 64 bits of flash memory…once a world record – in 1980…)

God that journalist is brain dead. Presumably, s/he got a stock option, american style.

There is even … wait for it… a picture of an empty plastic bag (with a serial number printed in ink on the side).

Posted in rant

enrolling for certs using ws-trust

A year ago, I learned to use the default enrollment policy for domain computers and invoke the UI for certificate requesting – as built into Windows. If one recalls, I used this to formulate custom requests, against a custom cert template, all for the CA on the same machine as that used to make the requests. What I didn’t understand well enough was how to distribute some of that functionality. Now, with Windows 2012, I’ve managed to do it (finally!)

Let’s make a suitable template for issuing, and auto-enrollment specifically:

imageimage

Then we arm the template, using the CA snap-in – by using the “certificate template to issue” option (and selector):

image

So, on a machine with no AD CS role, I installed the CEP and CES IIS applications. CEP is the policy web service, and CES is the enrollment service. The latter is essentially a custom ws-trust issuing endpoint (minting certs, as a kind of token). The former basically enumerates the cert-issuing templates that drive the UI wizard.

Instructions in Windows 2012 for setting things up are good. One can point to the real CA – listening on DCOM ports – from the IIS machine hosting the endpoints, ensuring that the computer account that will be the DCOM client can delegate credentials suitably, to authenticate the DCOM requests.

Now we use mmc to see the certificate store of the user, and in the personal store seek to request a new cert:

image

On what is a domain machine, we wish to leverage our new enrollment endpoints. So we start by specifying the CEP endpoint. One figures that out… according to the instructions… by consulting the app properties of the virtual directory for the policy endpoint:

image

It’s https://winhy85.rappaw.com/rappaw-WINHY81-CA_CEP_UsernamePassword/service.svc/CEP on my box.

We also get the URI of the enrollment endpoint, this time – again merely following instructions – by consulting the config of the CA itself.

image

It’s https://winhy85.rappaw.com/rappaw-WINHY81-CA_CES_UsernamePassword/service.svc/CES on my box.

Getting back to the request UI, we type in the URI and select the CORRECT authentication type (it doesn’t “find” the endpoint if you don’t make it match the binding…)

image

I now create an account with reversible encryption (so windows can do basic auth)

image

Which is as far as I can get. The wizard tries to pass on the authn challenge values, but the endpoint is not recognized (probably because the credentials cannot be validated). Hmm.

Should I go back to Windows identities – at least to get started?

So how do I reinstall the CEP? Well, it can be done (and reconfigured). The net result is we now have the CEP at https://winhy85.rappaw.com/ADPolicyProvider_CEP_Kerberos/service.svc/CEP and the cert CN for the domain even matches (just in case that’s relevant).

It’s not. Something is wrong.

Posted in cloud

entropy, to learning

In his book, MacKay did a good job motivating the definition of H2 – before introducing H. The point is … that raw combinatorics preceded any engineering worried about signal power, etc.

image

image

http://www.inference.phy.cam.ac.uk/mackay/itila/

He then goes on to say what American textbooks don’t: that the core “unit” of information _content_ is about surprise in the face of a hypothesis – and always a hypothesis (or two). It is not just the minimum width of bits into which one might reliably compress a datum, encoded. He is distinguishing between units for the (information) content of the container and the container as a formal store for the mere ‘data’ component of the information.

image

image

http://www.inference.phy.cam.ac.uk/mackay/itila/

So, from H2 and the world of combinatorics we get up and into the world of probability – and the relative-odds version of probability, at that.
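
Putting the same point in symbols (a sketch using the standard definitions, not MacKay’s exact notation): the information content of an outcome is its surprisal, entropy is its expectation, and the evidence between two hypotheses is a difference of surprisals, i.e. a log odds.

```python
import math

# Sketch: surprisal (Shannon information content), entropy, and the
# log-odds between two hypotheses, all in bits.

def surprisal(p):
    """Information content h(x) = log2(1/p) of an outcome with probability p."""
    return -math.log2(p)

def entropy(dist):
    """H = expected surprisal of a distribution given as {outcome: prob}."""
    return sum(p * surprisal(p) for p in dist.values() if p > 0)

def log_odds(p_e_h1, p_e_h2):
    """Evidence (in bits) for H1 over H2 = difference of surprisals."""
    return surprisal(p_e_h2) - surprisal(p_e_h1)

print(entropy({"dot": 0.5, "cross": 0.5}))      # 1.0 bit
print(entropy({"dot": 0.9, "cross": 0.1}))      # ~0.47 bits
print(log_odds(0.8, 0.2))                       # 2 bits in favour of H1
```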

It’s interesting that Forney and co. do a better job than MacKay of linking those two perspectives – thinking much as Turing probably was in 1935. Forney introduces formalist models of signal spaces, multi-dimensional representations, transforms… and allows us to keep going with coding theory… into and onto the quantum computing world. Interestingly, studying how folks thought in 1930 about space-time etc. allows us to grasp modern technology more easily. It’s like the computer took us backwards, once it made us think mostly in terms only of computable functions.

image

image

image

http://www.gatsby.ucl.ac.uk/~dayan/papers/huysnips09.pdf

We seem now to be heading for the reasoning behind the breaking of the bigram tables, for naval Enigma. The goal is to identify valid “bigrams” by noting that they constrain the statistics of the underlying trigrams – taken from a limited set and such that there are repeats. The actual study is about whether the experimental model actually reveals something, of course.

more at http://www.gatsby.ucl.ac.uk/~ywteh/teaching/npbayes/mlss2007.pdf

Posted in coding theory | 2 Comments

updating banburismus for DES era

If we were teaching a 12 year old the essence of banburismus, we’d be focusing on the fact that counting up the matches where two depths have coinciding characters turns out to be what we need to break the key. Of course, they have to learn that the rotor order is the part of the key being broken – and yet more work is required for the remaining elements.

Back in the world of DES, assume that tricky-dicky has orchestrated standards so that once again – for some pretty normal mode of a communication protocol applying DES – we are in a position to ask: do the two pieces of data correspond to a machine in the same state? Assume the protocols and their use of crypto modes is contrived so as to create the data streams that can be treated in this manner.

If you think about it, the role of rotor order is played by the n keys of triple des – and/or the particular sequence of operations performed in some triple-DES scheme.

Now what else do we make our 12 year olds recite? We require they know the coincidence properties only get us to the discriminating statistical model WHEN one knows the language characteristics of the plaintext. Thus, the coincidences are in some sense a statistical reflection of those underlying characteristics. Then we can get the kids to have the idea that there is a different weight, in that model, for different tetragrams etc. of roman-character text. And, it’s easy to visualize Wehrmacht German using the typewriter conventions, for numbers, shifts, double chars etc., as the cause of why there are so many such predictable bigrams, trigrams, etc. … in this “military/coded German” language.
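
The coincidence count itself is trivial to write down (a toy sketch, not the actual Banburismus scoring with its decibans): count agreements between two overlapped streams and compare against the flat-random rate of 1/26; depths produced in the same machine state score visibly above it.

```python
import random, string

# Sketch: index-of-coincidence style scoring for two aligned streams.
def coincidences(a, b):
    n = min(len(a), len(b))
    hits = sum(1 for x, y in zip(a, b) if x == y)
    return hits, n, hits / n

# Two unrelated random 26-letter streams agree about 1/26 of the time;
# two encipherments of language-like plaintext in depth agree more often.
a = "".join(random.choice(string.ascii_uppercase) for _ in range(1000))
b = "".join(random.choice(string.ascii_uppercase) for _ in range(1000))
print(coincidences(a, b))      # rate near 1/26 ~ 0.038 in the random case
```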

So how would tricky-dicky arrange for modern IP protocols to have such plaintext characteristics? If the protocol framework is CMS, with signing and enveloping primitives for data objects, can we figure out how – even in the layer-7 world – an interleaved sequence of such primitives, combined with stereotyped source patterns, would allow a source-coded world to combine with a channel-coding world and then a message-passing algorithm focusing on the likelihood model, to get the DES-break problem down to a complexity where linear cryptanalysis can take over?

Posted in DES

on quantum codes and networks

notes for later

image

image

http://www.ma.rhul.ac.uk/static/techrep/2009/RHUL-MA-2009-11.pdf

Posted in quantum

UK spying on Assange; spying on those who spy.

If the UK can be shown to be spying on conversations between a medic and a patient (e.g. Assange and his doctor), or counsel and counseled (e.g. Assange and his legal team), what position does that place the UK in? Isn’t it a bit like a lawyer for the defense having evidence of guilt – and thus being unable to be true to the court?

Surely all one has to do now is prove the UK has (covert) spying apparatus in place – that contaminates its own ability to present a case to the Royal Courts (as a dis-interested party).

So it’s time for Assange to get naked, in front of his doctor, and establish that the UK is breaching his confidentiality – with little grounds that there is some national security prerogative at stake (because it’s on notice that it’s a minor sex crime, at worst). If it does invoke national security prerogative (because of his wikileaks activities), then we again have a not-disinterested party that is in a position to, and would have the benefit of, confidential information abuse.

OK, the trick now has to be to get physical proof of the spying (by getting to the cameras in the wall). One can start with a simple mirror that scans the obvious placement points, and look for the glint of a reflection. As one starts to look, assume the material will be withdrawn – which will create sound.

Is there any reason why Assange and co. cannot be using mics to track those on the other side of the wall? Any reason why they cannot poke a hole in the wall the other way… to gather evidence of the intent to compromise medical confidentiality, say?

Posted in assange

PCI and Window 2012 with System Center 2012 SP1 (CTP2)

In an earlier post, we discussed getting easily to a PCI-friendly platform – built on the raw roles available in the RC build of the Windows 2012 server (datacenter edition). We left off contemplating where System Center 2012 components would fit in … to PCI preparation; and also wondered just where cloud-related considerations would play there, too.

In this memo, we can briefly consider our improvements, having (i) dominated the multi-server monitoring concept of Windows 2012 itself, and then (ii) having deployed a number of System Center components including SCOM and VMM, and also SC and APP Server.

The key to basic PCI and better audit frameworks such as the FISMA FedRAMP process is to have an “enterprise architecture”. This means something simpler than, but of the ilk of

image

See http://channel9.msdn.com/Events/TechEd/NorthAmerica/2011/SIM212

At heart it all means being able to manage a number of servers as a group, under policy control. To start with, we now have the ability to see the state of any server-class machine – measured by best practices, events and performance thresholds. We pass the pertinent PCI criteria on the basis that we have (working) dashboards, that is.

image

The next issue is to be able, from one management PC (logically in a NOC room), to see any such machine – where access to such dashboards is restricted to administrative-class users. Folks have to be able to at least rdp – to those machines – with an SSO experience – and review the dashboard at the host’s console. Or, folks can use summary tools from a review PC that launches remote tools to the same effect – with components running remotely on the hosts of the cluster.

image

The red arrows show us being able to look at subsets of servers (and launch one or other tool targeting some selection); and look at a role-based selection of computers (with said role). One such role (in blue) is all the hosts running hyper-v – the baseline for our little cloud.

Now the key to hyper-v was to use the NetApp storage array (and a multi-path’ed iscsi target) to mount a remote volume on one hyper-v host (acting as file server, obviously). Once the volume is shared, with permissions on share and volume GRANTED TO the machines in the cluster, one can rdp to each cluster host for VMs and launch its hyper-v configuration tool. There, one creates VMs whose config is all stored on the NetApp volume (for cluster members other than that host acting as the SMB3 file server). To configure some host’s VMs from the NOC machine, set up constrained delegation (it’s not hard…) so one gets an “integrated” PCI-like, enterprise-grade configuration and performance management solution.

We used the above knowhow to install System Center 2012 SP1 servers, running on various hosts in the cluster – some of which had their VMs’ hard drives together or split up on various local and remote volumes, where in some cases those volumes were the VHD share on the NetApp! With a gig channel to the NetApp, things worked fine in IO terms. Of course System Center goes beyond the minimal (but PCI-satisfying) enterprise monitoring concept.

Since firewalls are a big thing in PCI, we used group policy to set the rule that no domain-firewall rules are present, but private and internet policies are in effect. This area of firewalling does not meet PCI (which has special rules on what IP addresses and channels are visible and armed to which machines in different enclosures – so as to enforce the really old-fashioned bastion-host firewall concept). Oh well. We will let the cloud fix that, with its vlans and network virtualization.

Since PCI places great store on baselining, we leveraged the (non System Center) PXE server and baselining feature of the 2012 platform: WDS. This allows us to create a VM, on a NetApp device that auto-backs up the VM drives in SAN-land, and simply let PXE boot establish the baseline instance.

image

As stated before, the update service is responsible for reporting on and managing the patching and update process, allowing admins to first test out patches and changes. This is all standard enterprise update.

image

So far, we have not made much use of the more advanced solution of System Center SCCM. And so, onto the value-add of system center- now a “cloud-enabling concept”.

In PCI, inventory management is a critical management control, so we see SCOM 2012 playing the basic role, enabling us to see the Windows-centric aspects of the operating system. Other WMI features (from the baseboard controllers, and to do with asset tags, firmware etc.) are elsewhere.

image

Of course, we need a view of the assurance available from the “management infrastructure” too, to ensure we are not deceiving ourselves concerning its own effectiveness.

 

image

To ensure Administrators operate without root passwords and with minimal privilege, and with segmentation of duties, we see what we can do (and have NOT done yet, NOTE!) to leverage the runAs capability (for managing privileged users). Yes, System Center stores the passwords of admins to non-SSO-enabled devices (e.g. a Cisco router, or Oracle server), segregating the classes of admin.

image

For helping the auditor gather evidence at audit time for the last 6 months, we can go back in time and look at all configuration events, for a given machine… aiming to display compliance with the configuration control policy objectives:

image

Now, in terms of showing control over resource planning, we leverage our cloud and vm strategy. The VM replication allows us to send vm images to the remote data center (and test the start up of the latest replica whenever we want), and the core resource limiting features show what PCI cares about – concerning availability planning.

We get to link up the more advanced Virtual Machine Manager (focused now on private and public cloud uses of VMs, rather than merely hosting) with the Operations Manager:

image

From VMM manager we get to the “state of the mainframe” at a high level:

image

If I were running this all on real server-class hardware, with properly set up networking and IPMI and IP access to the DRAC and blade array’s baseboard controller chip, we could get at the real motherboards of the hosts, too – allowing a decent PCI audit to evaluate the state of the firmware and drives and BIOS, etc. This goes beyond seeing the logical devices assigned to the VM, given the host.

image

For VM-baselining controls, we see how to run a library server (and baseline configuration of bare metal hardware):

image

The cloud-centric aspects of this fall a little beyond PCI (But fully into FISMA and FedRamp). There we get to showcase how standard service models and service desks come into the picture, with run books etc. But, none of that is required by PCI.

For our next trick, we will now really get to grips with the APP manager component of System Center, so we can migrate a VM between our cloud and the Azure cloud.

Posted in cloud

Criptopunk drivel – now in Spanish

From http://www.revistaenie.clarin.com/ideas/tecnologia-comunicacion/Cultura-digital-criptopunks_0_757724237.html (via cryptome)

Trasladado a un contexto virtual donde la existencia se rige por la construcción y el control
Transferred to a virtual context where existence is governed by the construction and the control

de la información, en ese orden de poder subterráneo, casi burocrático y a la vez
of information, in that order of subterranean power, almost bureaucratic and at the same time

estructural donde seguros, ambulancias y comidas se mezclan con secretos militares,
structural, where insurance, ambulances and meals mix with secrets – military,

financieros y gubernamentales, el criptopunk es, precisamente, la clase de persona con la
financial and governmental – the cryptopunk is, precisely, the kind of person with

que uno nunca querría meterse. Y no porque su amenaza sea física.
whom one would never want to tangle. And not because their threat is physical. 

Los criptopunks son científicos: matemáticos e ingenieros en sistemas de información
The cryptopunks are scientists: mathematicians and engineers in information systems

especializados en criptografía que dividen su tiempo entre manuales de programación –
specializing in cryptography who divide their time between programming manuals –

escribiéndolos antes que leyéndolos– y el teclado a través del cual materializan su poder
writing them rather than reading them – and the keyboard through which they materialize their power,

dando forma a ese universo plástico y omnicomprensivo que llamamos Internet.
giving form to that plastic and all-encompassing universe that we call the Internet.

 

When is this false Robin Hood characterization going to stop? The rest of the essay is the usual cryptopolitics. Cryptome gets a mention as “founder”, since from 1996 he has republished the same government documents you used to find openly distributed by the USG itself (as part of open government). Eventually folks figured that the gestalt of the information being released about daily *operations* led to a classical opsec threat – and thus the modern versions, almost the same documents (ever recycled) with little or no valuable content by themselves now, are “leaked” and re-published with great James Bond pomposity and “bloviation” – precisely by those who object to such bloviation.

Posted in assange

Sweden meta-gropes Ecuador

The Kingdom Of Sweden has certain formality inaccuracies concerning “British proceedings”. These may reflect the essential contempt held by Sweden for Assange’s rights – possibly conspiring with Britain to dupe the Royal Courts of Justice (High Courts).

First, Assange left Britain – on entering the Ecuadorian consulate. Whatever “proceedings” there have been since that point, they are not those of Britain. A “proceeding” is a court term, and one needs to get it right (since it’s part of the duping of the Royal Courts).

image

http://www.aklagare.se/In-English/Media/The-Assange-Matter/The-Assange-Matter/

Though Sweden and Ecuador have been in contact, seeking a perfectly reasonable assurance concerning extradition to third countries (assumed part of the alleged conspiracy to dupe the Royal Courts by HM Government), Sweden is not involved in “proceedings”. Granting diplomatic (not political) asylum is a sovereign act (by those that inherit the mantle of the Inca).

One then notes how Sweden refuses to use the term diplomatic asylum, showing its contempt for the Ecuadorian act. Like Britain, and probably in coordination, it seeks to deny the legitimacy of the act formally. An act of asylum granting is recognized, not that the nature of the asylum is recognized.

Sweden has every opportunity to now seek extradition of Assange from Ecuador. Failing to do so… has a legal consequence. The extradition request sent to Britain becomes increasingly legally void, the more one fails to follow up on events.

Balzar needs to be (and surely will be) tuned into these subtleties, being “matters of barristers” that involve invoking the contorted argumentation protocols of the Royal Courts (and their obligation to consider, un-American-like, all due “equities” – including failure to follow up in a timely fashion).

One can assume the UK justice dept is contriving to prevent barristers arguing the equity cases, showing itself a party to the action.

Posted in assange

from edi exchange, to diagonal exchange

image

http://infoscience.epfl.ch/record/126412/files/GueritaudFuter.pdf

Posted in coding theory

sum, product, quantum

image

From image

and

image

http://www.theory.caltech.edu/people/preskill/ph229/notes/chap1.pdf

In the sum-product case, one goes from counting the down-up transitions and the up-down transitions to simply seeing the effect of the superimposed transitions (i.e. the square wave) on the sign. In the quantum case, one wishes each of the two Hamming weights to independently influence its associated qubit.
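
In symbols (a sketch of the usual tanh form of the check-node update, not Preskill’s notation): signs multiply and magnitudes shrink, which is the “effect of the superimposed transitions on the sign” reading above.

```python
import math

# Sketch: the standard sum-product check-node update in log-likelihood-ratio
# form: tanh(L_out / 2) = product of tanh(L_in / 2), so signs multiply and
# the magnitude shrinks toward the least reliable input.

def check_node(llrs):
    t = 1.0
    for l in llrs:
        t *= math.tanh(l / 2.0)
    # clamp to avoid atanh(+/-1) at extreme reliabilities
    t = max(min(t, 1 - 1e-12), -1 + 1e-12)
    return 2.0 * math.atanh(t)

print(check_node([+2.0, -3.0]))   # negative: an odd number of negative inputs
print(check_node([+2.0, +3.0]))   # positive, magnitude below min(|inputs|)
```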

Posted in coding theory | 1 Comment

Eisenstein integers and complex planes

So I was reasonably well taught applied math at high school. Concerning complex numbers, we did away with the real and imaginary rubbish quickly. We learned to calculate angular X by looking at “radar screen” phasors wandering around the circle, projecting shadows onto an axis; calculating first in terms of cos/sin and then later e to the something i. In the e**i/n form, the n naturally related to how one wandered through the phase space (at what period… or fractional movement of the phasor through 2pi, etc.).

image

http://en.wikipedia.org/wiki/Eisenstein_integer

The phase space “dissection” mental model supports forming a custom plotting surface from the omega thingy, above – we know that what matters is just the a and b, a couple of integers “to be plotted” (much like one had long plotted x and y).
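
(A throwaway sketch of that plotting-surface point: an Eisenstein integer is just a + b·ω with ω = e^(2πi/3), so the pair (a, b) is the whole datum, exactly as (x, y) was.)

```python
import cmath

# Sketch: Eisenstein integers a + b*omega, omega = exp(2*pi*i/3).
# The pair (a, b) is the whole datum; the complex value is just its "plot".
OMEGA = cmath.exp(2j * cmath.pi / 3)

def eisenstein(a, b):
    return a + b * OMEGA

# A small patch of the triangular lattice.
points = [eisenstein(a, b) for a in range(-2, 3) for b in range(-2, 3)]
for z in points[:5]:
    print(round(z.real, 3), round(z.imag, 3))
```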

Later we saw a more modern radar tube now acting as an oscilloscope, with each x and y impulse controlled independently giving us lissajous figures, being invited to go beyond the 2d scope and imagine the hyper-figure (in the nth dimension). This consideration of complex plotting paper and plotting on a cube made out of such paper … was all preparation for the core math… for the basic quantum mechanics element of the physics course.

http://rlv.zcache.com/lissajous_box_poster-r439f97ae81f74c0ba47d507842e1d705_tos_152.jpg

So it comes as no surprise that back in the 2-d plane, a lattice built from complex vectors creates a point space on the “surface”. If one connects the dots, the dots are always at lattice points. One denotes the point not in terms of x and y, but in terms of how many path connectors one must traverse from 0 to reach the point, and then how many fundamental lattice “units” of “displacement” are involved.

We knew that as we wandered along the paths, we were wandering along a vector “component” in the next dimension of the nth dimensional plane, one per path element. Regardless of its scale, each path element was “a unit”.

We also knew that in a pseudo-cartesian manner, we now had captured the notion of phase space – on which one could plot things (and calculate) like any other.

It now gets very easy to conceive of quantum computations. Having marked in red one particular path to a point, the cube above turns itself inside out in one “transformation”. As the illustration shows, now shine a light and see where the shadow lands. One can view the pattern laid down as the result.

Of course, it lays down on lots of planes, in the n-ary case, including “internal” planes formed from sub-dimensional spaces of the space itself – allowing for “intrinsic” measurement.

Posted in coding theory | 1 Comment

erasure as gas compression

image

http://www.theory.caltech.edu/people/preskill/ph229/notes/chap1.pdf

We remember from the sum-product algorithm how there are cycles of left-right-left compression – as trajectories are forced to coalesce, with the constraints on trajectories flowing through a discrete finite-state system trapping paths that have more than a certain threshold of evidence in favor of a decision, so that they only get close to a nearby attractor state (in a chaotic system).

One can imagine DES compacting the gas so it loses information: as each path factor falls into a nearby discontinuity, heat is lost, and erasure of bits occurs. Upon expansion, by adding in deterministic noise generated from the key, the process is repeated until the information component has been reduced to the Shannon limit, with time-sequence, state-specifying information captured in the sequence of additive-noise states.

Hmm.

Posted in coding theory, DES

golden ratios with quaternion algebras, for lattice geometry descriptions

Quoting from

image

Thanks to Forney, we get the basic gist now of a two-parameter performance curve:

image

Thanks to Turing for phyllotaxis geometries and then signal spaces using complex planes, we have some model to understand:

 

image

image

As I don’t really fully dominate good ol’ modem modulation, best not go towards wireless yet.

Pretty obviously, Turing (his masking-codes work in the phyllotaxis papers) was in the same area.

Posted in coding theory

Forney tells us to grasp the group of units…why?

image

http://en.wikipedia.org/wiki/Group_of_units#Group_of_units

Well, apart from its intuitive appeal, we see it linking up with the study of Turing’s example of using the quaternion group to form up a suitable algebra for his modulation scheme:

image

http://en.wikipedia.org/wiki/Lipschitz_integer

For all its complexity, it’s just an abstract integer… in a funky space. So we have 1 as the unit, and 2..n forming a group, a ring for indexing purposes and labelling/ordering equations, …
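
As a concrete reminder (my own sketch, nothing to do with Turing’s notation): the eight Lipschitz units {±1, ±i, ±j, ±k} close under quaternion multiplication and fail to commute, which is the group of units Forney points at.

```python
# Sketch: quaternions as 4-tuples (w, x, y, z) = w + xi + yj + zk, and the
# eight Lipschitz units {+-1, +-i, +-j, +-k} closing into the group Q8.

def qmul(p, q):
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

ONE, I, J, K = (1,0,0,0), (0,1,0,0), (0,0,1,0), (0,0,0,1)
units = [ONE, I, J, K] + [tuple(-c for c in u) for u in (ONE, I, J, K)]

print(qmul(I, J))            # k
print(qmul(J, I))            # -k : non-commutative
print(all(qmul(a, b) in units for a in units for b in units))   # closure
```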

Turing was probably still highly under the influence of quaternion thinking from his Cambridge tripos days

So does this suggest that the drums on the Delilah mixer are generating linear expressions in a quaternion algebra (rather than enigma-style permutations)? Why not…! Are there 4 sets of 2 drums (1 per i, per j)? After all, we are “channel coding”, wanting to use the average properties of the probability mass function as a way of getting the (geometric) coding of sequences to its limiting rate. The quaternion algebra has the same commutator nature as enigma, giving conjugates, etc.

This all seems a special case of

image

http://www.isrn.com/journals/algebra/2012/956017/#B2

It’s interesting that Turing’s reasoning starts out with classical enigma themes (how to make a characteristic function of the linear equations – producing “invariants” – using difference-set notions), considers mass functions and Markov conditions for sequences being en/coded by a finite state machine, and evolves into geometric lattices for 4-d space – with monomorphisms, even.

http://upload.wikimedia.org/wikipedia/commons/f/f4/24-cell.gif

Posted in coding theory

spying on Assange (or Ecuador, rather)

http://worldnews.nbcnews.com/_news/2012/08/19/13360956-assange-in-balcony-appeal-to-obama-release-leak-suspect-bradley-manning?lite

So when has one “invaded” another country (vs. merely spied upon it)?

I don’t have a lot of sympathy for Assange. On his honour to the same system of jurisprudence we (British ) all live by, he made a promise, and failed to live up to it by reneging on the liberal concept of bail. For that particular delito, alone, he can stay in self-imposed house arrest for the rest of his life. He is not a “fugitive” – from that bail. For one, I can accept on claim alone that some kind of asylum trumps bail and even extradition. But, it only holds as a non-act of “fleeing” while he is in Ecuador – albeit a tiny corner of Ecuador in a backstreet of London. The moment he steps into Britain, he gets fugitive status having received a demand to present himself – and her majesty’s copper force can deal with him much as the Americans deal with Manning – being unrepentant and entirely likely to repeat…the act of showing contempt.

Now, more interestingly, what were Her Majesty’s Coppers doing in the heating/cooling vents of the Building?

Well of course, they were bugging the place; probably with optical fiber cameras stuffed through holes made in the walls. The fiber optical cable itself, with camera on the end, will be 1mm inside the wall (plaster probably) covered by a film (ahem transparent) of Ecuadorian dust, so one is not *formally* spying from within the country. One should assume the sewer line, the cold water line and any other tubular “line” has a cable stuck up it… by now. Assume the place is being irradiated too – much like some contract firms irradiate houses during a “home inspection”.

So assume every single last movement of the rest of his “not-incarcerated” life is being filmed, from 50 angles. And this includes, as I find proper in these “foreign affairs” circumstances (being now an agent of Ecuador), monitoring of his “foreign” control of wikileaks activities – which can now properly be called subversive. Every last keystroke, every last video frame from his computer screen… all video-taped from the eye in the ceiling (and the floor), and the window, and the …

So Julian COULD now play…hunt the camera…which should not be THAT difficult. But HOW many are there? And don’t they easily multiply?

Now, in terms of microphony, let’s assume that the beams for the floor above the Ecuadorian space have been “converted” into a set of microphones – listening for electronic noise; the type that comes from the electrical signal of a keyboard, say. Is there any reason why folks should not “noise irradiate” the space – with a beam targeted at HIS computer (versus the functionary trying to process someone’s visa to go see the Galapagos)?

So finally to torture. If one wants to not-torture someone in a confined space, and that someone is rather obsessed with his privacy (which is why I assume he didn’t want to do the aids test in the first place, nominally to resist digital fingerprinting), one could not-torture by “privacy–leakage”.

Why cannot HMC (Her Majesty’s Coppers), much like the Americans broadcast the BP spill, now broadcast on the web … EVERY last moment of his life, including pooping? Let there be no respite from privacy invasion. None. Never. For the rest of his time in that large apartment, for the rest of his life. If folks build a room within a room, let the very sound be transmitted…

One cannot say it’s degrading – or that it’s torture. But psychological pressure it is. And, I’m sure it would get to him.

If you want to pressure a hacker type you have to get inside their core belief system about themselves – and deny them their anonymity.

A few months of that and I’m sure a standard cell in Wandsworth prison will seem like heaven, before being off to sunny Viking land (and then a US cell, with … mics and video all around one…).

Let’s now see what our Spanish ex-judge is made of. He has to make it so embarrassing and unpalatable for Britain that they take the Queen route (and have her “use executive privilege” or “royal prerogative”) to dispose of the problem on the grounds that Britain’s better off. If the affair causes lots of prominent UK folks to go look-see-ing for MI5 bugs in their premises… (once they know what to look for)… this could cause LOTS of embarrassment. I doubt the UK could stand much of that – being seen to be less than Olympic, behind closed doors. The Olympic afterglow, having burnished the vision of a suffragist Britain, might present a different image once the glare of a rather grungy deer in the headlights comes into full view.

Posted in assange

quaternion algebras

Some notes:

image

See http://www.isrn.com/journals/algebra/2012/956017/#B2

Posted in webid

H, H1, Quaternion Group, Group of functions, factor groups; phyllotaxis

Boy is there a lot of math knowhow assumed in http://turingarchive.org/viewer/?id=133&title=31. And, you are supposed to cue into the knowledge sets underpinning the arguments from the cues – the letters chosen, and the context set by special terms used in the sentence before that one. If, on arguing about the ratios of geometric means, one suddenly talks about units, didn’t you know that was a reference to continuity theorems (which will then be applied in the next sentence, not that it’s exactly pointed out)?

So let’s assume we are in a world of what we would call QAM modulation. That is, a complex space of signal points, arranged in a constellation pattern. Turing doesn’t use such recognizable terms, they not having been invented yet.

But we are in a world of generators of a group, just like the 3 generators of the classical Hamming code expressed in matrix form. And any one generator can generate a cyclic group, of course.

We are also in a world in which real function-groups exist, where the elements of the group (say H, or its subgroups) are real functions. In suitably metric’ed spaces, real numbers add, multiply etc. – and, one may assume, map nicely into points in the 4 quadrants about the origin. For what we would call alpha (the length of a side in such a point space), Turing might use K, note.

Now assume that the space is either complex or real, but even if complex the group attached to H is self-conjugate. Or, perhaps better stated, assume that a coset of H1 has the self-conjugacy property as a function of n copies of a single generator (with thus a common set of factors – enigma permutation cycles for n upright, for the generator), being in “most-refined” form.

Now assume there is, for a group (e.g. H), a characteristic function – the so-called indicator function that can reduce any group element in cycle-notation to a real-metric form, irrespective of the order in which the factors are presented. Thus, though we may be reasoning about an H that is, say, the quaternion group, any group element has a real function attached to it (from the indicator argument).

Now, indicator functions are only useful if there is a unique mapping – and for this one may want to discuss what we would now call Tutte polynomials, or weight-enumerator polynomials – or rather the relationship of a particular enumerator set to the world of Designs – where a meta-algebra “about” codewords exists, considering automorphic forms of the group – built by considering permutations of coordinates, etc. We might show that certain groups gain the unique mapping required – to help us out with signal-point ideas, mapping group elements (in Hamming space, say, in some hypercube) onto Euclidean space.

Now, we throw in some theory from the world of finite state machines, relating walks through state and branch spaces as a probability mass distribution. Rather than reduce things to a matrix and do matrix operations, keep things algebraic and conceive of each point (signal point, ultimately) having a functional/probabilistic contribution to the overall mass function, each one modeled independently (and randomly) as the random walk of a point through the combinatoric state-space laid out by the finite state machine.

Considering the average effect of such walks, viewed as a 1930s operator acting upon some simple function of ratios, use ratio-arguments from geometry to consider limiting functions of geometric means (or squared means, rather).

Establish conditions, should two particles be modelled as a sequence (where each exists as some ratio of possible particles in a q-ary world), that there is no dependency between what happens to the second, given the first. We have sequential independence, that is.

Given such a world (a markov-condition-like world, note), fashion a world of geometric-mean limits, in which a sequence of operators acting on the combinatoric space only works to average out the noise (information content) as each acts on the previous operator’s results.

The final part of the puzzle seems to come when considering that, back in group land, those group elements (subject to real-function characterization, and related to average probabilistic modelling) are H1, a subgroup of H. H1 thus has cosets, and a particular generator (U1) may be considered in all its self-multiples (cyclic form), and one may thus consider the terms of each of the cosets, partitioned by each multiple. They MAY cover H entirely. In the middle of this, consider the inverse of the cyclic element, establishing that a group of cosets multiplying each other is itself a group… Of course, we may be interested in H/H1 being a cyclic (quotient) group, giving a nice “congruential” form to the signal space.
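
Here is a tiny sketch (entirely my own, and only loosely in the spirit of that step) of cosets of a subgroup H1 = ⟨i⟩ inside H = Q8, with H/H1 coming out cyclic of order 2:

```python
# Sketch: the quaternion group Q8, the cyclic subgroup H1 = <i>, its cosets,
# and the fact that the quotient Q8 / H1 has just two elements.

def qmul(p, q):
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

ONE, I, J, K = (1,0,0,0), (0,1,0,0), (0,0,1,0), (0,0,0,1)
NEG = lambda u: tuple(-c for c in u)
Q8 = [ONE, I, J, K, NEG(ONE), NEG(I), NEG(J), NEG(K)]

H1 = {ONE, I, NEG(ONE), NEG(I)}                 # the cyclic subgroup <i>

cosets = set()
for g in Q8:
    cosets.add(frozenset(qmul(g, h) for h in H1))

print(len(cosets))                               # 2 cosets: H1 and j*H1
for c in cosets:
    print(sorted(c))
```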

Now, go back to the underlying quaternion group as one that might be underpinning H, and consider (remembering it’s one of those non-abelian groups) the terms i and j to be 2 generators, U1 & U2. Having established that the real functions attached to the generated group elements have limiting distribution properties in the Euclidean space as group elements (as particles) evolve – as the energy averages out (the information disperses/diffuses to its coding gain, and ultimately the “Shannon limit”) – consider that the limiting distribution converges equi-probably over the involved group elements as each operator “round” is performed. One notes how the conjugate terms of (i, j) in the dual space (i’, j’) are part of this equi-distribution.

Don’t forget to throw back in some factors, of the generated lattices, so we see how there is a cycle of 2 in how each operator works either on ij, or lk.

This factorization has something to do with being “generated” by k – in that quaternion (i, j, k), non-abelian group.

Sigh!

Well at least I get the overall argument, now – even if 90% of the above is total crap, mathematically.

It’s like he was thinking in terms of PAM (the 1d representation of the group generator), but holding back from stating QAM (on generalizing the argument to the complex vs. real plane).

Of course, by the time phyllotaxis comes along he is just working in lattices with complex generators, where his “surface coordinates” play the role of signal points.

Posted in coding theory