Wednesday, June 25, 2014

Alternative Transaction Logging Technique for Faster Legal/Exacting Searches of Custom Application Databases

Due to developments over the last several years in compliance (following the 2007 financial fallout) and internal auditing (driven by the increase in data breaches), it has become much more challenging for organizations to address the time and cost of discovery and documentation.  These searches may require exact information about what was done, by whom, when, on which server, from which workstation, and in what sequence.  Given that searches may require historical queries going back 12 or more years (due to compliance requirements), and given the tendency of enterprise systems to evolve, this can require thousands of man-hours to reload the appropriate system backups (if they are available) and roll through transaction logs, all of it done by certifiably competent specialists, with lawyers and auditors looking over their shoulders to ensure that every requirement has been met.


As far back as 2001 I had been bothered by this issue and had been mulling over practical solutions, but kept putting the problem back on the shelf as other projects took priority.  Finally, a year or so ago I had to address this specific issue again while consulting for a startup here in Chicago.  Due to the nature of the business, it was imperative that they be able to meet every possible auditing and search need at minimum cost.  The solution had to be universal to every line-of-business application we employed, and work across database and server platforms.  It also had to have a structure flexible enough to survive unaltered into the foreseeable future.


While the startup never launched, and we never got any further than basic workflows for the applications, I finally found a solution to this conundrum when I started writing an ITILv3 support package last year.


To make it work required:
  • Implementation in a relational database environment.
  • No routine customization (common code for every use across the entire implementation).
  • Searchable variable metadata content.
  • Needed to work seamlessly for complex transactions.
  • Needed to integrate with the RAD tools I was using.
I began by selecting a suitable RAD tool which employed a customizable coding template in a language with which I was comfortable.  For me this was IronSpeed Designer.  This RAD tool permits the user to build fairly complex Web applications from MS SQL, Oracle and other databases in either the VB or C# language.  As it is template driven, using boilerplate code for all components (presentation, data view, business rules, etc.), and can address complex views, it was the perfect tool for me.


I started by determining the audit log structure.  This is a short synopsis of what I came up with as the base requirements:


  • User Identification
    • User ID
      • User SID
      • User Name
    • Workstation ID
      • HostName
      • IP
    • Server Information
      • IP
      • HostName
  • Time/Date of Transaction
  • Package Name
  • Originating Program
    • Name
    • Version
    • Last Update Date
  • Database Platform
    • Database Name
  • Transaction Information
    • Table Name
    • Transaction Type
    • Key Field Names
    • Key Field Data
    • Changed Field Names
    • Changed Field Data
In order to address the variable number of elements and their contents in the structure, in addition to the fact that I might want to add to it in the future, I decided to fold the polymorphic data into an XML datagram.  This datagram was then treated as a single large binary field within the relational record.
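To make the idea concrete, here is a minimal C# sketch of folding the variable portion of an audit entry into an XML datagram and handing it back as bytes for a single binary column.  It is illustrative only - the element names and method signature are my own for this post, not the actual code generated for the IronSpeed libraries.

using System;
using System.Text;
using System.Xml.Linq;

public static class AuditDatagram
{
    // Fold the variable (polymorphic) part of an audit entry into XML and
    // return it as bytes suitable for a single binary column in the journal row.
    public static byte[] Build(
        string userId, string userName, string workstation, string server,
        string package, string program, string database, string table,
        string transactionType,
        (string Name, string Value)[] keyFields,
        (string Name, string OldValue, string NewValue)[] changedFields)
    {
        var xml = new XElement("AuditEntry",
            new XElement("User",
                new XElement("Id", userId),
                new XElement("Name", userName),
                new XElement("Workstation", workstation),
                new XElement("Server", server)),
            new XElement("Source",
                new XElement("Package", package),
                new XElement("Program", program),
                new XElement("Database", database)),
            new XElement("Transaction",
                new XAttribute("table", table),
                new XAttribute("type", transactionType),
                new XElement("Keys", Array.ConvertAll(keyFields, k =>
                    new XElement("Key", new XAttribute("name", k.Name), k.Value))),
                new XElement("Changes", Array.ConvertAll(changedFields, c =>
                    new XElement("Field", new XAttribute("name", c.Name),
                        new XElement("Old", c.OldValue),
                        new XElement("New", c.NewValue))))),
            new XElement("TimestampUtc", DateTime.UtcNow.ToString("o")));

        return Encoding.UTF8.GetBytes(xml.ToString(SaveOptions.DisableFormatting));
    }
}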


On the programming side there were several challenges:
  • Writing the necessary code only once (all transactions employ the same code).
  • Integrating it into the support libraries for IronSpeed.
  • Ensuring that proper error processing was not compromised (critical!).
  • Writing the code in VB (IronSpeed source code libraries were provided in VB only).
I was able to solve the first challenge rather elegantly, thanks to the excellent engineering of the IronSpeed team, by employing reflection at the core data access library routine level.  This permitted a data-structure invariant solution that worked for both simple and complex (multi-table) transactions.
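The reflection piece can be sketched the same way.  The routine below is a hypothetical C# equivalent (the real library had to be written in VB, as noted later); it walks any record object's public properties so that a single logging routine can serve every table without table-specific code:

using System;
using System.Collections.Generic;
using System.Reflection;

public static class RecordInspector
{
    // Enumerate any record object's public instance properties as name/value
    // pairs, so one logging routine can serve every table in the application.
    public static IEnumerable<KeyValuePair<string, string>> Fields(object record)
    {
        foreach (PropertyInfo p in record.GetType()
                     .GetProperties(BindingFlags.Public | BindingFlags.Instance))
        {
            if (!p.CanRead || p.GetIndexParameters().Length > 0)
                continue; // skip write-only properties and indexers

            object value = p.GetValue(record, null);
            yield return new KeyValuePair<string, string>(
                p.Name, value == null ? "<null>" : value.ToString());
        }
    }
}

Anything returned by a routine like this can be fed straight into the datagram builder sketched earlier.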


Integrating this custom module was very straightforward once I waded through the reference material on how to do this with IronSpeed.


Ensuring that proper error processing was not compromised was simply a matter of making sure that every possible error was handled, even if only with a generic unknown-error code.  No direct exits to the debugger were permitted.
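As a rough illustration of that discipline, the wrapper below (placeholder names, not the real library's API) gives every failure an error code - even an unknown one - and rethrows it through the normal error path instead of swallowing it or dropping into the debugger:

using System;
using System.Data.Common;

public static class AuditErrorHandling
{
    // Run the audit write so that every failure is logged with some error code
    // (even "unknown") and then rethrown through the normal error pipeline.
    public static void SafeWrite(Action writeAuditRecord,
                                 Action<string, Exception> logError)
    {
        try
        {
            writeAuditRecord();
        }
        catch (DbException ex)
        {
            logError("AUDIT-DB-ERROR", ex);      // recognized database failure
            throw;                               // surface it, don't swallow it
        }
        catch (Exception ex)
        {
            logError("AUDIT-UNKNOWN-ERROR", ex); // anything unexpected still gets a code
            throw;                               // and never a direct exit to the debugger
        }
    }
}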


The most challenging aspect turned out to be writing the new support library in VB, a language that I despise.




By assigning relational table fields to the relevant search criteria (database, table name, user, date/time, server, package, IP, etc.), one can now do a fast search across the entire platform.  This search capability can even be extended into the XML datagram for an appropriate string search (say, a person's name).  Once the desired dataset has been selected, a reporting program can decode the XML to display the relevant fields in a single comprehensive multi-tiered report, spreadsheet or dataset as appropriate.
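A search over such a journal might look something like the following sketch.  The AuditJournal table and its column names are assumptions for illustration, and the LIKE clause assumes the datagram bytes hold plain ASCII/UTF-8 XML so a cast to varchar is searchable:

using System;
using System.Data;
using System.Data.SqlClient;

public static class AuditSearch
{
    // Search the journal on the relational columns, extending the search into
    // the XML datagram for a free-text match (e.g. a person's name).
    public static DataTable Find(string connectionString, string userId,
                                 string tableName, DateTime from, DateTime to,
                                 string textInDatagram)
    {
        const string sql = @"
            SELECT AuditId, TransactionTime, UserId, ServerName, DatabaseName,
                   TableName, TransactionType, Datagram
            FROM   AuditJournal
            WHERE  UserId = @user
              AND  TableName = @table
              AND  TransactionTime BETWEEN @from AND @to
              AND  CAST(Datagram AS varchar(max)) LIKE '%' + @text + '%'";

        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(sql, conn))
        {
            cmd.Parameters.AddWithValue("@user", userId);
            cmd.Parameters.AddWithValue("@table", tableName);
            cmd.Parameters.AddWithValue("@from", from);
            cmd.Parameters.AddWithValue("@to", to);
            cmd.Parameters.AddWithValue("@text", textInDatagram);

            var result = new DataTable();
            new SqlDataAdapter(cmd).Fill(result);
            return result;
        }
    }
}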




One element my specific implementation lacked was recording the key element data for new records.  As I was doing this in VB as opposed to T-SQL, there would have been significant overhead in moving the data between the database server and the webserver and back again to accomplish this.  If it were done in the core stored procedures for the tables, then it would be practical to add those fields to the audit log.




If you are interested in researching this further, you can find the relevant libraries and support documentation on the IronSpeed site at:




http://sjc.ironspeed.com/search?searchid=33017161&showas=post

Friday, June 13, 2014

Compliance and IT



I was only going to do one blog post today, but decided that this was a bit too relevant to wait.





I recently met with a manager of a local service provider. The senior manager was quite concerned about two things their previous IT Manager had not addressed:
 




  • Compliance (HIPAA/HITECH)
  • Business Continuity

As a result of that conversation I put together the following for her (modified for this audience) to help her address those concerns and frame expectations for her future IT Manager.
 










BC Plan Requirements
  • Must be compliant (HITECH, HIPAA, et al):
    • The organization needs to be compliant with, at a minimum, HITECH and HIPAA. There are probably others that have not surfaced as of yet.
  • Must be maintainable and maintained:
    • As requirements, processes and personnel change, the plan must be maintained.
  • It must work:
    • You need to be able to, at any time, demonstrate the efficacy of the plan.


Compliance
Almost every IT compliance regime addresses the following issues:


  • Availability
  • Applications
  • Reporting
  • Disaster Recovery
  • Business Continuity
  • Access - on an as-needed basis only; should require sign-off by the appropriate department management
  • Security:
    • User
    • Internal
    • External
    • Partners
    • Providers
    • Customers/Clients/Members
    • Network
    • Between Applications and Resources
    • Between Applications and Users
    • Between Departments
    • Between Internal users, external users and the public (Internet)
    • Between sites (WAN)
  • Application
  • Need to know
  • Auditing:
    • Internal
    • Periodic
    • Legal Search
    • Organizational, Professional, Business, Government Audits
  • Documentation:
    • Systems
    • Services
    • Procedures/Workflows
    • Remediation
    • Verification
    • Structure
    • Procedures
    • Functionality
  •  Required as part of BC Implementation:
    • Hot Site - IT and Office


In order to implement the plan, it will be necessary to set up a hot site capability.
 
 


Electronic Document Library


In order to ensure that all requisite documentation is available, it will be necessary to establish an electronic document library with appropriate security and redundancy. A possibly viable solution is already in place, but research needs to be done to ensure that appropriate access control and redundancy measures are in place to address BC/DR issues.
 


As a byproduct of establishing the document library, it might be appropriate to create a position for a dedicated librarian to manage metadata maintenance and assist with search and audit requirements.
 


Recovery Time Limit (to be determined)


Recommendations to Address Business Continuity Requirements


  • Workflows
    • Workflows are already established for business processes.
    • A regular review of existing and new processes needs to be implemented.
  • Provider contact info
    • All service and material providers will need to be notified in the event the primary location is unavailable for any length of time.
  • Partner contact info
    • All partners will need to be notified in the event the primary location is unavailable for any length of time.
  • Staff redundancy
    • All departments should be staffed with trained personnel able to cover any short- or long-term loss of employees in job functions of a critical or sensitive nature, or requiring specific skills, up to and including management.
  • Full systems and process documentation - maintainable
    • Complete, maintainable documentation of all systems, processes and procedures must be kept, which will require a regular, periodic review.
  • For IT, it would be desirable to follow ITILv3, as this would bring the organization into alignment with most major financial institutions and simplify any changeover in personnel.
Food for thought. Feel free to reply with comments, questions or any omissions/errors.





To all Organization Senior Management, repeat after me: You can outsource jobs but you cannot outsource responsibility (read that last as liability).

Apologies for being away so long, but I have seen so many incidents of terrible management lately that I had to address them and also hold my tongue until I could think objectively again.


As to those incidents:
  • Corporate security breaches too numerous to mention.
  • Overzealous spying by our government on everyone.
  • Three- and four-hour waits to discover that not only can you not pay your monthly bill on-line via credit card, you can't do it by phone either - however, you can give your bank routing info over the phone to someone you don't know who already has your social security number, address and phone info (here, take everything I own).
  • Government launch of a healthcare site that is incomplete and can't work for the majority of the people it was designed to help.
Corporate security breaches too numerous to mention
Given the publicity, I am not going add anything else to the mix.  You all know (or should know) the details about these.


Overzealous spying by our government on everyone
My only observation here is why was anyone surprised?  Behave and act accordingly.


Inability to make credit card payments
A very large healthcare provider has, for over a year (since at least version 10, if not before), not been allowing users who employ Internet Explorer to make payments via credit card.  The issue was a website design problem: when you tried to make a payment, you received a GoogleApps error.  Given the number of people who use this tool (600 million or so, if I am recalling correctly), one can expect that to be a problem.  I made them aware of this (via the usual tech support conduit) over a year ago.


Recently I got fed up enough to ask for a call with their CEO.  The conversation, while pointed, was as pleasant as these things go, and I think I got the point across.  It went something like this:


  • You have a problem.
  • It is a bad problem.
  • People can't pay their bills using their credit cards on your website.
  • It has been going on for a long time.
  • Your tech support people are well aware of it.
  • Trying to make a payment via phone takes several hours - and - involves giving your bank routing info to someone you don't know who already has your social security #, address and phone contact info.  Not a good thing.
  • So you were not aware of this.  Not good.
  • Oh, you have outsourced development of the website?  You have no control?
  • Well someone in the organization must be managing them.  I think you have a problem there that you need to address. 
Long story short: about a month later they had a fix for the problem.  They used an outsourced company to handle the web-based credit card payments - for an additional charge.  Questionable, given the recent data breach scandals at several of the credit transaction clearing houses, but a step in the right direction.


Federal healthcare website
Everyone knows about the problems with this site: crashes, links to nowhere, etc.  These were all the result of bad management, both by the project managers and by the client (read: federal government agency bureaucracies).  One issue I found especially glaring.


Sometime late in the development of the website, the powers that be decided that they wanted specific personal information about applicants before they could begin shopping.  I don't know the exact reasons for this, but I suspect it involved the IRS and the various companies providing the policies, so that they could provide the options they thought would serve them (meaning the providers) best (as opposed to showing everything available in the applicant's locale and letting them decide).  One of the key pieces of information required was income year-to-date for 2013.  Which was a big problem...  Why?


A large number of people who would be using the site don't work for the medium or large businesses that tend to provide corporate healthcare plans.  These are people who either had no full-time or reliable employment or are self-employed, which means that, in general, they wouldn't know their 2013 income until their taxes were done (if they did their taxes).  There was no way to say "I don't know my income."  You simply got a failure-to-process message and could not continue.  The same thing happened if you entered a zero income (which a lot of people, particularly in Chicago and Detroit, have these days).  So the site failed at its primary goal - to provide a way for everyone to get health insurance.


I was dismayed by the fact that the site's tech support was aware of this oversight and unable to push it up the ladder.  So earlier this year I injected myself into the process to educate the project's top manager (he lives at 1600 Penn Ave. in DC).


I sent him a rather pointed e-mail explaining the relevant points detailed above:
  • Failures in managing the project - in particular: 
    • Design changes late in the game
      • Why were these not addressed before the design signoff was given?
      • Did anyone in authority actually read the design doc?
    • Launching the site without thorough testing.
  • The primary users of the website probably won't know their correct income, if they had any, until after the end of the year and possibly not until April 15th.
  • If you truly didn't know about these issues, then you need to find out who failed to pass them on and address that failure to communicate critical information.
There was a lot more to it, but the end results were:
  • Calls from the Dept. of Health and Human Services regarding the details of the specific issues mentioned above and how they might be addressed.
  • Removal of individuals from the project management.


So what is all of this caterwauling about?


It has come to the point where Information Technology operations can no longer be treated by the CEO and CFO as an area of ignorance or neglect, or simply as a cost center that drags down profits.
  • It has to be done safely and securely.
  • It has to be managed by people who can not only manage projects, but have a thorough understanding of the platforms, concepts and tools with which their teams are working.
  • Contrarian views, and issues raised with the architecture and implementation, particularly concerning operations and security must be addressed and not suppressed.
  • There must be sufficient staffing of the operations department so that operations people can be thorough in their tasking, and alert in their vigilance as opposed to sleepwalking through their day.
  • Developers have to have sufficient time to analyze, review and test code so problems can be addressed before the code becomes operational.
  • Final testing needs to include expert users, so that important details don't get missed.
  • Everyone involved needs to be able to coherently communicate.  This can impede the use of offshore outsourcing, but can greatly reduce the potential for major delays and operational snafus.
It's very simple.  Senior Management has to learn that you cannot outsource responsibility.  Senior execs at retail chains and those in banking and finance are discovering this now.  Others will follow.
It's hard to build an empire on a house of cards - the slightest gust of wind can blow it away.




Thursday, October 10, 2013

Infrastructure Server Review: The SUPERMICRO SYS-5037MC-H8TRF 3U Rackmount Server

One of my servers recently started acting finicky (e.g. not wanting to boot up), and given that it was 7 years old, I figured it was time for a replacement.  I eventually decided on the SUPERMICRO SYS-5037MC-H8TRF 3U Rackmount Server barebones kit which I acquired from Newegg.com.

Over recent years, I have become a fan of nodal servers over the more prevalent blades that the major manufacturers have been shoving at us for the last 10 years.  The major advantage is that you have better control over the physical environment with this platform than with any blade system I have ever looked at.

The reason for this is that each node can have its own custom switching, limited only by the number of NIC ports you can stick in a PCI slot, whereas the blades usually limit you to a maximum of 4 switches, and those are expensive switches.  Even with network virtualization, the bandwidth issues can be challenging.

Supermicro, in my opinion, is now the leader in the nodal arena.  They produce 1U-5U units containing from 2 to 16 server nodes each.  Depending on the model, you can have single or dual processors per node.  This review provides some details regarding their 3U, 8-node unit for Intel Xeon E3-1200(V2) CPUs.

Features
This server provides 8 nodes, with 2 3.5" SATA drives available per node, and includes RAID support.
Drives are hot-swap, provided RAID 1 is set up, and nodes are individually powered.  IPMI is the primary maintenance mechanism, providing remote session capability, which eases most basic maintenance issues.  Peripherals include built-in video; 2 user NICs and 1 for IPMI; and 2 USB ports, video and a serial port via a UIO cable, one of which is included with the system.  Appropriate heatsinks are included, as are dual universal 1600W power supplies.  There is an optional PCI Express 3.0 x8 low-profile slot, ideal for adding a multiport NIC/HBA card.  This is quite adequate for a virtual hosting environment like mine.

Power supplies are redundant and hot-swappable. 

Additional Acquired Components
Processor:
I chose the Intel E3-1230-V2 processor given its capabilities (on-board virtualization support) and its price point (given the recent release of the V3 version of the processor).

Memory:
4 Kingston KVR1333D3E9S/8G ECC UDIMMs - the docs say unbuffered memory is required, but the ECC function apparently doesn't work with UDIMMs installed.

NIC:
Not acquired as of yet, but I am looking at a dual-port Intel or a proprietary Supermicro NIC; still researching.

Storage  (per node):
1 Kingston SSDNow KC300 60 GB SSD for the OS
1 Crucial M500 960 GB SSD for the virtual images
2 3.5" to 2.5" HD conversion kits.

Physical Setup:
About as simple as it can get.

Node
  • Slid a node out of the chassis.
  • Removed CPU protector cover from the socket.
  • Released the restraining clamp.
  • Inserted the CPU.
  • Locked the restraining clamp arm.
  • Screwed in the provided heatsink.
  • Installed the memory.
  • Removed the PCI slot cover.
Time: 10 minutes the first time (expect to shave a few minutes off for the rest).

Chassis
  • Unscrew the top plate over the fans.
  • Remove the protective plastic film over the plate, verify that the fans have unobstructed air flow.
  • Screw the plate back on.
Time: 5 minutes

Rackmounting
  • Separate the inner rails from the outer rails.
  • Snap the inner rails onto the server at an appropriate height (remember to allow 1U of space above the server for cooling).
  • Remove all nodes, drives and the power supplies.
  • Snap outer rails onto rack.
  • Slide  chassis onto rails.
  • Reinstall nodes, drives and power supplies.
Time: 10 minutes.

Node Configuration
  • Connect UIO cable
  • Connect NIC and IPMI
  • Hookup monitor, keyboard and mouse (or KVM)
  • Power on the node.
  • Enter <DEL> for setup (be quick, you only have 2 seconds - this timeout is adjustable in the setup menu).
  • Modify the standard settings as needed.
  • IPMI setup - if required, enter manual IP info; otherwise, note the address assigned via DHCP.
  • Save changes.
Time: 10 minutes

SSD install
  • Install SSD in conversion kit
  • Mount converted SSD into HD Holder
  • Slide into server.
Time: 3 minutes each


IPMI setup
  • Make sure that the latest version of the JRE is installed on your PC (32-bit).
  • Use your browser to access the IP address recorded above.
  • Login (User ID:  ADMIN, Password: ADMIN - not in manual)
  • Verify that all options, including the remote session capability are functional.
Time: 30 minutes including JRE install

Assembly Comments
Simple, almost foolproof.  The hardest part is getting the heatsink orientation correct (the enclosed setup diagrams help here).  Having the correct heatsinks included makes life simple.  The only issue I had was locating the 3.5-inch to 2.5-inch HD conversion kits.  I found an excellent source via the Amazon.com marketplace: Bravolink sells these in 5-packs for about $40.00.  They are spec'd for Dell units, but work just fine with both this Supermicro server and my Promise Tech SAN.

Thoughts on single versus multiple CPUs per node
I have implemented several nodal and blade systems through the years.  I believe that we have reached a point where a single CPU can adequately and most efficiently address most loads for virtual server host environments.  While AMD and Intel both have very good multiple CPU architectures, with up to 16 virtual cores per node, the overhead (heat, power, space) for supporting the multiple CPU model can be avoided in many cases.  This also can reduce the bandwidth bottleneck out of each physical server.

Perceived Performance

I have been running Hyper-V 2012 on these for about a month now, employing the built-in replication capability that comes with this server-core deployment.  In my development environment, I maintain anywhere from 8 to 20 virtualized Windows servers for ongoing projects, depending upon integration requirements.  With these split between two servers, and replication, some comments on performance:
  • Replication: No loading apparent in single users desktop sessions during replication for either initial copy or hourly updates.
  • Stability: The only hiccup is the occasional Link-Layer Topology Discovery Mapper service collapse, but this was happening even in a straight physical environment (I would love to hear from Microsoft about how to address this, as all network services now seem to depend on it).
  • Power consumption:  Two nodes draw less than 1 AMP @ 110V with this configuration.
  • Noise: Very quiet.  Not noticeable from an adjacent room with the door open (Fans running at 3125RPM).  Excellent for a home or small business office.
  • Heat generation: Very low.  To the point of being barely noticeable if you put your hand at the exhausts.  Core Temp (32C/90F), peripheral temp (42C/108F).
Hyper-V 2012
With the many new features added, and more to come with the R2 release, many organizations are considering moving Hyper-V 2012 to the front line of their server deployments.  This makes particularly good sense for SMBs.  If you are migrating an existing physical server, the existing tools make this pretty painless (particularly if you can take the migrating server out of production - not down, just not in transactional use - for the conversion time).  Even domain controllers can be implemented successfully (with proper care) now that a good reference library of PowerShell scripts exists to handle VM startup and shutdown.  Given that this is a free product (when used with existing licensed servers), it is a hard deal to beat - particularly with legacy systems.  When implemented with the full version of Windows Server 2012, Hyper-V also offers some very nice virtual desktop implementation tools, particularly if the Datacenter edition is acquired.  When implemented with thin clients, this provides a user environment that is easy to provision (simply add a new user to the appropriate groups), administer and support.

Conclusion
Given the low cost of high-performance nodes (less than $1,200/node as configured here), the high density and performance capabilities, and the reduced management headaches, I believe this is a platform most datacenter architects should be looking at very seriously when assembling their next upgrade plan.

Tuesday, June 18, 2013

Audit trails for financial systems - supporting the business

One of the most costly factors involving financial systems is ensuring the ability to provide an accurate audit capability so that, when needed, a legal discovery can be performed that satisfies the opposition's and the court's concerns about the accuracy and reliability of the information.  This is especially important when the possibility of malfeasance exists.

The major cost factor in legal discovery is the lawyers' time - for which the losing side will eventually end up paying.  While there now exist some commercial tools to improve the efficiency of e-mail and document searches (e.g. Proofpoint), performing efficient searches at a system's transaction level (to determine exactly what was done and in what order) is still time consuming and labor intensive - particularly when the incident involves data that was altered outside a standard business transaction process (e.g. a sale), in the core database itself (e.g. the customer table or inventory table).

The problem can best be summed up by the fact that most logging systems trap only the user ID and a timedate stamp of when a change occurred.  Most deletion logs show only the record ID along with that timestamp.  While this can point to a possible incident point, determining exactly what was done requires replaying the database transaction logs from backups (possibly from several years ago) - which takes time and resources that most businesses can better spend on other things.

One can keep a transaction log which mirrors the database - essentially identical in data content to the table in question, with a primary key containing a timedate stamp and the user ID of the person making the change.  However, that would still leave the problem of searching the log for every table within the timeframe in question, still a laborious task.

It would be better to have a single journal that encompasses the entire database that could be readily searched for all changes.  Unfortunately, relational databases don't handle polymorphic data very well.  However, XML does.   A solution may be found by using both in concert.

One of the capabilities of a number of modern programming languages (e.g. C#, Java) is the ability to employ introspection - meaning the ability to examine a generic structure to determine its characteristics.  This permits the programmer to employ a generic routine to process data of all types.

This routine could, for example, parse a record to be updated (or inserted or deleted) and build an XML construct containing the relevant information (e.g. the primary index, the primary index field names, the changed field names, and their old and new values).  This XML construct could be saved as a binary field in a standard RDS journal record, with the other fields being a timestamp, user ID, table name and a primary key.  This permits a single search to focus in on users, tables and time periods.  Also, as we capture the before and after values and use an identity-seeded primary key for the journal table, the effort required to successfully (and illicitly) alter the data increases significantly.  Additionally, one could move the primary key value itself into the RDS structure, where it might be helpful for more focused searches.
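A sketch of that generic routine, in C# (one of the languages mentioned above) with hypothetical names, might compare the before and after copies of a record and emit only the fields that actually changed:

using System;
using System.Reflection;
using System.Xml.Linq;

public static class ChangeCapture
{
    // Compare the "before" and "after" copies of any record via introspection
    // and emit only the fields that actually changed, with old and new values.
    public static XElement Diff(object before, object after, string tableName)
    {
        var changes = new XElement("Changes", new XAttribute("table", tableName));

        foreach (PropertyInfo p in after.GetType()
                     .GetProperties(BindingFlags.Public | BindingFlags.Instance))
        {
            if (!p.CanRead || p.GetIndexParameters().Length > 0)
                continue; // skip indexers and write-only properties

            object oldValue = before == null ? null : p.GetValue(before, null);
            object newValue = p.GetValue(after, null);

            if (!Equals(oldValue, newValue))
                changes.Add(new XElement("Field",
                    new XAttribute("name", p.Name),
                    new XElement("Old", oldValue ?? ""),
                    new XElement("New", newValue ?? "")));
        }
        return changes;
    }
}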

Ideally, this would be implemented on the database server in the appropriate trigger stored procedures for Insert, Delete and Update.  That would maximize the efficiency of the operation and greatly increase the difficulty for any player attempting to manipulate the data.  For those instances where one is addressing cross-database operations - usually occurring where custom applications have to integrate with canned applications (such as an in-house app to ERP interface) - it may make more sense to address it programmatically in the app hosted on the webserver.

A custom reporting utility would be required to decode the XML and return it in an easy-to-read format.  A generic report could separate the changed field data into columns for field name, old value, and new value.
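The reporting side is the mirror image.  A minimal sketch, assuming the same hypothetical element layout as the capture sketch above:

using System.Data;
using System.Xml.Linq;

public static class AuditReport
{
    // Turn one journal entry's XML datagram back into field / old / new rows
    // that a generic report or spreadsheet export can consume directly.
    public static DataTable Decode(string datagramXml)
    {
        var table = new DataTable("Changes");
        table.Columns.Add("Field");
        table.Columns.Add("OldValue");
        table.Columns.Add("NewValue");

        foreach (XElement field in XElement.Parse(datagramXml).Descendants("Field"))
        {
            table.Rows.Add(
                (string)field.Attribute("name"),
                (string)field.Element("Old"),
                (string)field.Element("New"));
        }
        return table;
    }
}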


Many rapid application development tools, such as IronSpeed Designer, lend themselves to the programmatic approach because they employ code templates.  By altering the templates used for code generation, along with a support library to handle the XML formatting, one could easily make this a default operation for the code generator, eliminating the need for unit testing on up to 90% of the code and functionality.

One recommendation related to the journal table itself: remove its update stored procedure, or alter it to do nothing but warn the sysops of an illicit data change and capture the relevant information (user, timedate, table, fields altered and the relevant old and new data) in a security breach report.

Friday, December 7, 2012

Hyper-V 2012

Like most of us these days, I am trying to get the most out of a limited IT budget.  I run three physical servers, enough for development and proof of concept evaluations. Their host OS environments change as often as new hypervisors appear.   When the latest version of Microsoft's Hyper-V 2012 server became available I decided to try it out.

This release is a free, slimmed-down version of Windows 2012 with Hyper-V and minimal GUI support.  In fact, you will want another machine (Windows 2012, Windows 8 or System Center) to handle the virtual machine monitoring and management.  There are good reasons for this.

  • Security - in the world of the public cloud, you don't want datacenter admins peeking into your corporate data.
  • Performance - All that graphic fluff costs memory, disk access (time), power, cooling and cycles.
Some really nice features include a better High Availability option - clustering no longer requires a shared data segment.  But to get the full details, you can go to these links: 

http://blogs.technet.com/b/keithmayer/archive/2012/09/07/getting-started-with-hyper-v-server-2012-hyperv-virtualization-itpro.aspx.

http://technet.microsoft.com/en-us/library/hh833682.aspx

I have been doing long-term evaluations of Citrix Xen and VMware ESXi for the last several years.  With this new Hyper-V release, I decided to add it to the mix.  After several weeks of experimentation, I ported my development environment (about 10 virtualized servers - 2003 and 2008R2) and haven't looked back.

What I liked:
  • Improved Networking
    • virtual switches
  • Improved Security
    • By removing GUI support (among other things) it becomes harder for datacenter workers to steal data.
  • Better performance
    • While slower to start up than its major competitors, once the virtual machines are up and running, and an app or service has been accessed for the first time, user-perceived performance was much better than the competition.
  • Better resource management
    • Dynamic memory permitted better resource planning and allocation.
    • Processor resource management is now on par with VMWare (personal opinion).
  • Ease of setup
    • Total install time was under an hour.  This included setting up SAN-based drives for the virtual image storage (which required significant use of the diskpart command and net share).
  • Scalability
    • Significantly larger memory and processor allowances than competitors for free product version:
      • 64 virtual processors per virtual machine.
      • 1 TB of memory per virtual machine.
      • 64 TB per VHD.
      • 320 logical processors on the computer that runs Hyper-V.
      • 4 TB of memory on the computer that runs Hyper-V.
      • 1024 virtual machines per host server.
  • Migration
    • Live migration.
    • Multiple concurrent migrations permitted in a clustered configuration.
What I didn't like:

Refused to reconnect to iSCSI stores after a reboot.  Had to go in and manually disconnect and reconnect to the SAN (about a 10 second process) after every reboot.

It didn't matter that it had been told to save the settings, or whether the connection was set up as a default or custom configuration (exact initiator and target port specified, and initiator selected).  Likewise, setting up service dependencies (this should be an automatic component of the iSCSI process, guys) didn't help.  However, as soon as I did the disconnect and reconnect, the drives came right up.  I note that this problem, which didn't exist with initial releases (around 2003), has been reported by a lot of people in some variation since Windows 2008 came out.  My guess, given that it isn't a universal problem, is that it is specific to the environments in question (non-HBA), but after doing an extensive web search, I haven't found a solution that works.  I do wonder if it has something to do with the added IPv6 support.  Fortunately, I do not recall seeing an instance of this where HBAs were employed.

With this caveat, I would heartily recommend evaluating this platform for virtual machine hosting in your lab, if not in an iSCSI-based production environment.  The base features now rival those of more expensive competitors, and management is also simpler.


Windows 2012 Remote Desktop Services

I haven't had much time to write lately, as I have been working on a major project.  But while performing recent evaluations for the project, I was surprised by my findings and thought they might be useful to someone else, hence this posting.

Windows 2012 now provides several flavors of VDI, depending on your needs.
  • Traditional VDI with a minimum single server footprint supporting multiple sessions for smaller or less resource intensive environments.
    • Small physical footprint - with Windows 2008, we employed this for a client as a single virtual machine serving 40 users.
  • Advanced VDI employing multiple servers.  Best for very large-scale, highly available or resource-intensive environments.  Servers:
    • Required
      • Connection broker (Physical or virtual)
      • Web Access (Physical or virtual)
      • Hyper-V Host (Physical - either Hyper-V 2012 (free) or Windows 2012 Server with the Hyper-V option installed).
    • Optional
      • Gateway (Physical or virtual)
      • Licensing Server (Physical or virtual)
Likewise, some new or improved features:
  • RemoteFX
    • Improved device transparency (USB).
    • Improved 3-D graphics processing.
    • Multitouch support.
    • Better performance over a wide range of network connections for the entire user experience, including video.
  • Single sign-on

I started experimenting with Windows 2012/8 VDI just over a week ago.  The first step was to define the server set to be employed.  Currently there are two options: an all-in-one single-server option and a three-server option.  I decided to start with the three-server option, as this appeared to be the most scalable choice.  The servers required were:

  • Eval-CB - Connection Broker (Virtual)
  • Eval-WA - Web Access (Virtual)
  • Eval2012-HV - A Hyper-V 2012 server (this is the free version of Microsoft's 2012 virtualization host server)
I found several very good step-by-step guides on this (here is one: http://blog.itvce.com/?p=1569).  Some articles had you installing application support on the Connection Broker server and IIS on the Web Access server, but this is not required with the RTM release.  Even with the guides, there were some issues (user lack of sleep or incomplete documentation), and I had to restart the process a number of times before everything worked according to plan.

The security environment was my 2008R2 development domain.  This caused some problems when I decided to apply roaming profiles later in the process, but I eventually found a solution to this. 

A summary of the process:
  1. Created the 3 servers.
  2. Assigned static IPs, and implemented manual DNS settings.
  3. Set the Time Zones (all)
  4. Joined the servers to the domain (all).
  5. Added the personalization feature (CB, WA).
  6. Configured the desktops (CB, WA)
  7. Started the Server Management process
  8. Collected these servers in the All server management view
  9. Selected the option to create an RDS deployment.
  10. Followed the wizard selecting the three server solution.
  11. Verified all the options and let it run.
  12. After reboots, ran the server manager again to set up the collection.
  13. I had previously set up a Windows 8 template, so I chose this option.  There is a wizard option to create a template from an ISO of the Windows 8 media disk.  The template must be sysprepped before use by the collection-builder process.
  14. Set up each desktop to start with 512M of memory in dynamic mode with a max of 1024M.
  15. Experimented with the user profile options (setting up the user profiles here as opposed to in Active Directory).
At this point it just ran, but there were issues:

Issues
  • Access website Certificate errors - expected and not a problem.
  • Profiles weren't functioning (I had to set up the profile directories in the collection setup; the 2008 AD options didn't work).  I also unchecked the reset-on-exit box in the collection setup.  Make sure that the shared datastore directory where you store the profiles has the appropriate privileges (shared as full access for all (domain) users).
  • Connection to the RDS website was spotty.  Had to go into the Virtualized servers and set their power settings to High performance.
  • Also make sure that the Link Layer Topology Discovery service is started.
  • Had to reboot all the servers after creating the OU for the RDS user pool.  I got tired of waiting for the refresh to reach the servers and didn't feel like chancing a powershell typo at 3AM.
Performance
  • Connections were a bit slow, as each session had to be integrated with its profile, but not terribly so.
  • Once up, speed was quite good.  Almost as fast as working on a live machine.
Comments
  • The setup is more complicated than before, but everything needed is really covered by the wizard, except for establishing connections between users and specific RDS sessions.
  • Resource balancing is quite good.
Coming up: Hyper-V 2012 setup (the free version) and a user's-perspective performance comparison between Hyper-V 2012 and ESXi version 4.1.