In an earlier post, we discussed getting easily to a PCI-friendly platform, built on the raw roles available in the RC build of Windows Server 2012 (Datacenter edition). We left off contemplating where the System Center 2012 components would fit into PCI preparation, and wondering just where cloud-related considerations would come into play, too.
In this post, we briefly take stock of our progress, having (i) mastered the multi-server monitoring concept of Windows Server 2012 itself, and (ii) deployed a number of System Center components, including SCOM and VMM, plus the SC and App Server components.
The key to basic PCI, and to better audit frameworks such as the FISMA/FedRAMP process, is to have an “enterprise architecture” – something simpler than, but of the same ilk as, the formal frameworks.
At heart, it all means being able to manage a number of servers as a group, under policy control. To start with, we now have the ability to see the state of any server-class machine, measured by best practices, events, and performance thresholds. That is, we pass the pertinent PCI criteria on the basis that we have working dashboards.
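As a concrete sketch of that multi-server view, the in-box BestPractices module plus PowerShell remoting can pull best-practice scan results from a list of hosts. This is illustrative only – the server list file and the choice of the Hyper-V BPA model are assumptions, not our exact setup:

```powershell
# Sketch: scan a set of hosts against the Hyper-V best-practices model and
# collect anything above informational severity. Server list is hypothetical.
$servers = Get-Content .\servers.txt
Invoke-Command -ComputerName $servers -ScriptBlock {
    Invoke-BpaModel -ModelId Microsoft/Windows/Hyper-V | Out-Null
    Get-BpaResult -ModelId Microsoft/Windows/Hyper-V |
        Where-Object { $_.Severity -ne 'Information' } |
        Select-Object Title, Severity
}
```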
The next issue is being able, from one management PC (logically sited in a NOC room), to see any such machine – with access to those dashboards restricted to administrative-class users. Folks have to be able to at least RDP to those machines, with an SSO experience, and review the dashboard at the host’s console. Alternatively, folks can use summary tools from a review PC that launch remote tools to the same effect, with components running remotely on the hosts of the cluster.
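A minimal sketch of that SSO-ish RDP experience from the review PC – host and account names are invented, and cmdkey simply caches the credential that mstsc will present to the terminal server (it prompts for the password when /pass has no value):

```powershell
# Cache a credential for the target host (hypothetical names), then connect;
# mstsc picks up the stored TERMSRV credential, giving a single-sign-on feel.
cmdkey /generic:TERMSRV/hv01.contoso.com /user:CONTOSO\nocadmin /pass
mstsc /v:hv01.contoso.com
```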
The red arrows show us being able to look at subsets of servers (and launch one or another tool targeting some selection), and to look at a role-based selection of computers (those holding said role). One such role (in blue) is all the hosts running Hyper-V – the baseline for our little cloud.
Now, the key to Hyper-V was to use the NetApp storage array (and a multi-pathed iSCSI target) to mount a remote volume on one Hyper-V host (acting as file server, obviously). Once the volume is shared, with permissions on both share and volume GRANTED TO the machines in the cluster, one can RDP to each cluster host and launch its Hyper-V configuration tool. There, one creates VMs whose configuration is all stored on the NetApp volume (for cluster members other than the host acting as the SMB3 file server). To configure a given host’s VMs from the NOC machine, set up constrained delegation (it’s not hard – see the sketch below), so one gets an “integrated”, PCI-like, enterprise-grade configuration and performance management solution.
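Here’s a hedged sketch of those two steps – share the NetApp-backed volume to the cluster hosts’ computer accounts, then set up the resource-based flavor of constrained delegation (new to 2012 AD) so remote management works. All host, domain, and share names are invented for illustration:

```powershell
# On HV01, the host acting as SMB3 file server for the NetApp-backed volume.
# Both the share ACL and the NTFS ACL must grant the other hosts' machine accounts.
New-SmbShare -Name VMStore -Path D:\VMStore `
    -FullAccess 'CONTOSO\HV01$','CONTOSO\HV02$','CONTOSO\Domain Admins'
icacls D:\VMStore /grant 'CONTOSO\HV02$:(OI)(CI)F'

# Resource-based constrained delegation: allow HV02 to delegate to the
# file server's account when its VM files live on \\HV01\VMStore.
Set-ADComputer HV01 -PrincipalsAllowedToDelegateToAccount (Get-ADComputer HV02)
```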
We used the above know-how to install System Center 2012 SP2 servers, running on various hosts in the cluster – some of which had their VMs’ hard drives kept together or split across various local and remote volumes, where in some cases those volumes were the VHD share on the NetApp! With a gigabit channel to the NetApp, things worked fine in I/O terms. Of course, System Center goes beyond the minimal (but PCI-satisfying) enterprise monitoring concept.
Since firewalls are a big thing in PCI, we used group policy to set the rule that no domain-profile firewall rules are present, while the private and public policies are in effect. This area of firewalling does not meet PCI (which has special rules on what IP addresses and channels are visible and armed to which machines in different enclaves – so as to enforce the really old-fashioned bastion-host firewall concept). Oh well. We will let the cloud fix that, with its VLANs and network virtualization.
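In cmdlet terms, that group-policy rule looks roughly like this – a sketch, assuming a domain GPO named PCI-Firewall-Policy (the name is made up; -PolicyStore writes the settings straight into the GPO):

```powershell
# Domain profile: firewall off (no domain-firewall rules in effect).
Set-NetFirewallProfile -Profile Domain -Enabled False `
    -PolicyStore 'contoso.com\PCI-Firewall-Policy'
# Private and Public profiles: on, default-deny inbound.
Set-NetFirewallProfile -Profile Private,Public -Enabled True `
    -DefaultInboundAction Block -PolicyStore 'contoso.com\PCI-Firewall-Policy'
```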
Since PCI places great store on baselining, we leveraged the (non-System Center) PXE server and baselining feature of the 2012 platform: WDS. This allows us to create a VM on a NetApp device that auto-backs-up the VM drives in SAN-land, and simply let PXE boot establish the baseline instance.
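For the WDS side, the in-box wdsutil CLI does the work – the paths and image file below are placeholders, not our actual layout:

```powershell
# Initialize WDS and add the baseline install image (placeholder paths);
# wdsutil is the in-box WDS management CLI on Server 2012.
wdsutil /Initialize-Server /RemInst:"D:\RemoteInstall"
wdsutil /Add-Image /ImageFile:"D:\Baselines\install.wim" /ImageType:Install
```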
As stated before, the update service is responsible for reporting on and managing the patching and update process, allowing admins to first test out patches and changes. This is all standard enterprise update management.
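A sketch of that test-first approval flow with the in-box UpdateServices cmdlets – the target-group name is an assumption:

```powershell
# Approve everything still needed to a test ring first; promote to production
# groups only after the test machines report healthy. Group name is hypothetical.
Get-WsusUpdate -Approval Unapproved -Status FailedOrNeeded |
    Approve-WsusUpdate -Action Install -TargetGroupName 'PCI-Test-Ring'
```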
So far, we have not made much use of the more advanced solution, System Center’s SCCM. And so, on to the value-add of System Center – now a “cloud-enabling concept”.
In PCI, inventory management is a critical management control, so we see SCOM 2012 playing the basic role of enabling us to see the Windows-centric aspects of the operating system. Other WMI features (from the baseboard controllers, and to do with asset tags, firmware, etc.) come from elsewhere.
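Those “elsewhere” WMI features are still reachable with a line or two of CIM querying – a sketch, with a hypothetical host list:

```powershell
# Pull asset tags, serials, and BIOS versions straight from WMI/CIM.
$hosts = 'HV01','HV02'
Get-CimInstance Win32_SystemEnclosure -ComputerName $hosts |
    Select-Object PSComputerName, SMBIOSAssetTag, SerialNumber
Get-CimInstance Win32_BIOS -ComputerName $hosts |
    Select-Object PSComputerName, SMBIOSBIOSVersion, ReleaseDate
```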
Of course, we need a view of the assurance available from the management infrastructure too, to ensure we are not deceiving ourselves concerning its own effectiveness.
To ensure administrators operate without root passwords, with minimal privilege, and with segregation of duties, we see what we can do (and have NOT done yet, NOTE!) to leverage the RunAs capability (for managing privileged users). Yes, System Center stores the passwords of admins to non-SSO-enabled devices (e.g., a Cisco router or an Oracle server), segregating the classes of admin.
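If we do go down that road, a sketch with the OperationsManager shell might look like this – the account name and its purpose are assumptions, not something we’ve deployed:

```powershell
Import-Module OperationsManager
# Store a privileged credential centrally; hypothetical service account.
$cred = Get-Credential 'CONTOSO\svc-ora-mon'
Add-SCOMRunAsAccount -Windows -Name 'Oracle Monitoring' -RunAsCredential $cred
# Review what RunAs accounts exist and how they're classed.
Get-SCOMRunAsAccount | Select-Object Name, AccountType
```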
To help the auditor gather evidence at audit time for the last six months, we can go back in time and look at all configuration events for a given machine, aiming to display compliance with the configuration-control policy objectives:
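As a command-line equivalent, a sketch like this pulls six months of policy-change events off one host – the host name, log, and provider filter are illustrative choices:

```powershell
# Six months of Group Policy-related events from a single host (HV01 is made up).
$since = (Get-Date).AddMonths(-6)
Get-WinEvent -ComputerName HV01 -FilterHashtable @{ LogName = 'System'; StartTime = $since } |
    Where-Object { $_.ProviderName -like '*GroupPolicy*' } |
    Select-Object TimeCreated, Id, Message
```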
Now, in terms of showing control over resource planning, we leverage our cloud and VM strategy. VM replication allows us to send VM images to the remote data center (and test the start-up of the latest replica whenever we want), and the core resource-limiting features show what PCI cares about concerning availability planning.
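In Hyper-V cmdlet terms, that availability story is a few lines – the VM and DR server names are invented for the sketch:

```powershell
# Primary site: replicate a VM to the DR host, then seed the initial copy.
Enable-VMReplication -VMName 'PCI-App01' -ReplicaServerName 'dr-hv01.contoso.com' `
    -ReplicaServerPort 80 -AuthenticationType Kerberos
Start-VMInitialReplication -VMName 'PCI-App01'
# DR site: exercise the latest replica without touching production.
Start-VMFailover -VMName 'PCI-App01' -AsTest
# Resource limiting for the availability-planning story PCI cares about:
# reserve 10% of a core, cap at 50%.
Set-VMProcessor -VMName 'PCI-App01' -Reserve 10 -Maximum 50
```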
We get to link up the more advanced Virtual Machine Manager (focused now on private and public cloud uses of VMs, rather than mere hosting) with Operations Manager:
From the VMM console we get to the “state of the mainframe” at a high level:
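The same high-level view is scriptable from the VMM shell – a sketch, assuming the VMM console (and its virtualmachinemanager module) is installed and connected to the management server:

```powershell
Import-Module virtualmachinemanager
# One-line 'state of the mainframe': every VM, its status, and its host.
Get-SCVirtualMachine | Select-Object Name, Status, VMHost | Format-Table -AutoSize
```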
If I were running this all on real server-class hardware, with properly set-up networking, IPMI, and IP access to the DRAC and the blade array’s baseboard management controller, we could get at the real motherboards of the hosts too – allowing a decent PCI audit to evaluate the state of the firmware, drives, BIOS, etc. This goes beyond seeing the logical devices assigned to the VM on a given host.
For VM-baselining controls, we see how to run a library server (and the baseline configuration of bare-metal hardware):
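From the shell, checking on the library shares that hold those baselines is straightforward – a sketch; the refresh just re-indexes the share contents:

```powershell
# Enumerate the VMM library shares, then force a refresh of the first one
# so newly dropped baseline VHDs and templates show up.
Get-SCLibraryShare | Select-Object Name, Path
Read-SCLibraryShare -LibraryShare (Get-SCLibraryShare | Select-Object -First 1)
```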
The cloud-centric aspects of this fall a little beyond PCI (but squarely into FISMA and FedRAMP). There we get to showcase how standard service models and service desks come into the picture, with runbooks, etc. But none of that is required by PCI.
For our next trick, we will really get to grips with the App manager component of System Center, so we can migrate a VM between our cloud and the Azure cloud.