This release is a free, slimmed-down version of Windows Server 2012 with Hyper-V and minimal GUI support. In fact, you will want another machine (Windows Server 2012, Windows 8, or System Center) to handle virtual machine monitoring and management. There are good reasons for this:
- Security - in the world of the public cloud, you don't want datacenter admins peeking into your corporate data.
- Performance - all that graphical fluff costs memory, disk access time, power, cooling, and CPU cycles.
http://blogs.technet.com/b/keithmayer/archive/2012/09/07/getting-started-with-hyper-v-server-2012-hyperv-virtualization-itpro.aspx
http://technet.microsoft.com/en-us/library/hh833682.aspx
I have been doing long-term evaluations of Citrix Xen and VMware ESXi for the last several years. With this new Hyper-V release, I decided to add it to the mix. After several weeks of experimentation, I ported my development environment (about 10 virtualized servers running Windows Server 2003 and 2008 R2) and haven't looked back.
What I liked:
- Improved Networking
- Virtual switches.
- Improved Security
- Removing GUI support (among other things) makes it harder for datacenter workers to steal data.
- Better performance
- While slower to start up than its major competitors, once the virtual machines were up and running, and an app or service had been accessed for the first time, user-perceived performance was much better than the competition.
- Better resource management
- Dynamic memory permitted better resource planning and allocation (a short example follows this list).
- Processor resource management is now on par with VMware (personal opinion).
- Ease of setup
- Total install time was under an hour, including setting up SAN-based drives for virtual image storage. (This required significant use of the diskpart and net share commands; a sketch follows this list.)
- Scalability
- Significantly larger memory and processor limits than competitors' free product versions:
- 64 virtual processors per virtual machine.
- 1 TB of memory per virtual machine.
- 64 TB per virtual hard disk (VHDX format).
- 320 logical processors on the computer that runs Hyper-V.
- 4 TB of memory on the computer that runs Hyper-V.
- 1024 virtual machines per host server.
- Migration
- Live migration.
- Multiple concurrent migrations permitted in a clustered configuration.
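To put the dynamic memory point in concrete terms, here is a minimal PowerShell sketch run from a management machine with the Hyper-V module; the VM name and memory sizes are made-up examples, not my actual configuration.

    # Enable dynamic memory on an existing VM (name and sizes are examples;
    # the VM must be powered off to change these settings)
    Set-VMMemory -VMName "Dev-2008R2-01" -DynamicMemoryEnabled $true `
        -StartupBytes 1GB -MinimumBytes 512MB -MaximumBytes 4GB

    # See what the VM is actually consuming at the moment
    Get-VM -Name "Dev-2008R2-01" | Select-Object Name, MemoryAssigned, MemoryDemand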
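And for the SAN-based drive setup mentioned under ease of setup, the work was along these lines; the disk number, drive letter, volume label, share name, and domain group are illustrative placeholders, so adjust them for your own environment before running anything like this.

    rem From the Hyper-V Server console: bring the SAN LUN online and format it
    diskpart
    DISKPART> list disk
    DISKPART> select disk 1
    DISKPART> online disk
    DISKPART> attributes disk clear readonly
    DISKPART> create partition primary
    DISKPART> format fs=ntfs quick label="VMStore"
    DISKPART> assign letter=V
    DISKPART> exit

    rem Share the volume so a management machine can copy images onto it
    rem (the domain group is a placeholder)
    net share VMStore=V:\ /GRANT:CONTOSO\HyperVAdmins,FULL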
What I didn't like:
The server refused to reconnect to iSCSI stores after a reboot. I had to go in and manually disconnect and reconnect to the SAN (about a 10-second process) after every reboot.
It didn't matter that it had been told to save the settings, or whether the connection was set up as a default or custom configuration (exact initiator and target port specified, and initiator selected). Likewise, setting up service dependencies (this should be an automatic part of the iSCSI process, guys) didn't help. However, as soon as I did the disconnect and reconnect, the drives came right up. I note that this problem, which didn't exist in the early releases (around 2003), has been reported by a lot of people in some variation since Windows 2008 came out. My guess, given that it isn't a universal problem, is that it is specific to the environments in question (non-HBA), but after an extensive web search I haven't found a solution that works. I do wonder if it has something to do with the added IPv6 support. Fortunately, I do not recall seeing an instance of this where HBAs were employed.
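For what it's worth, the manual workaround can also be scripted; this is a rough PowerShell sketch of the idea (the portal address and target IQN are placeholders). It does not fix the underlying persistence problem, it just automates the reconnect.

    # Make sure the Microsoft iSCSI initiator service starts automatically
    Set-Service -Name msiscsi -StartupType Automatic
    Start-Service -Name msiscsi

    # Point the initiator at the SAN portal (address is a placeholder)
    New-IscsiTargetPortal -TargetPortalAddress "192.168.1.50"

    # Reconnect the target and ask for the session to persist across reboots
    Get-IscsiTarget -NodeAddress "iqn.2012-01.com.example:vmstore" |
        Connect-IscsiTarget -IsPersistent $true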
With this caveat, I would heartily recommend evaluating this platform for virtual machine hosting in your lab, if not in an iSCSI-based production environment. The base features now rival those of more expensive competitors, and management is also simpler.