Over recent years, I have become a fan of nodal servers over the more prevalent blades that the major manufacturers have been shoving at us for the last 10 years. The major advantage is that you have far better control over the physical environment with this platform than with any blade system I have ever looked at.
The reason is that each node can have its own custom switching, limited only by the number of NIC ports you can fit in a PCI slot, whereas blades usually limit you to a maximum of 4 switches, and those are expensive switches. Even with network virtualization, the bandwidth issues can be challenging.
Supermicro, in my opinion, is now the leader in the nodal arena. They produce 1-5U units containing from 2 to 16 server nodes each. Depending on the model, you can have single or dual processors per node. This review provides some details regarding their 3U, 8-node unit for Intel Xeon E3-1200 (V2) CPUs.
Features
This server provides 8 nodes, with two 3.5" SATA drive bays per node, and includes RAID support.
Drives are hot-swap, provided RAID 1 is set up, and nodes are individually powered. IPMI is the primary maintenance mechanism, providing remote session capability that eases most basic maintenance tasks. Peripherals include built-in video; two user NICs plus one dedicated to IPMI; and two USB ports, video, and a serial port via a UIO cable (one cable is included with the system). Appropriate heatsinks are included, as are dual universal 1600W power supplies. There is an optional PCI Express 3.0 x8 low-profile slot, ideal for adding a multiport NIC/HBA card. This is quite adequate for a virtual hosting environment like mine.
Power supplies are redundant and hot-swappable.
Additional Acquired Components
Processor:
I chose the Intel E3-1230 V2 processor for its capabilities (on-board virtualization support) and its price point (helped by the recent release of the V3 version).
Memory:
4 Kingston KVR1333D3E9S/8G ECC UDIMMs - the docs say unbuffered memory is required, but the ECC function apparently does not work with UDIMMs installed.
NIC:
Not acquired as of yet; I am looking at either a dual-port Intel card or a proprietary Supermicro NIC, and am still researching.
Storage (per node):
1 Kingston SSDNow KC300 60 GB SSD for the OS
1 Crucial M500 960 GB SSD for the virtual images
2 3.5" to 2.5" HD conversion kits.
Physical Setup:
About as simple as it can get.
Node
- Slid a node out of the chassis.
- Removed CPU protector cover from the socket.
- Released the restraining clamp.
- Inserted the CPU.
- Locked the restraining clamp arm.
- Screwed in the provided heatsink.
- Installed the memory.
- Removed the PCI slot cover.
Chassis
- Unscrew the top plate over the fans.
- Remove the protective plastic film from the plate and verify that the fans have unobstructed air flow.
- Screw the plate back on.
Rackmounting
- Separate the inner rails from the outer rails.
- Snap the inner rails onto the server.
- Remove all nodes, drives, and the power supplies.
- Snap the outer rails onto the rack at an appropriate height (remember to allow 1U of space above the server for cooling).
- Slide chassis onto rails.
- Reinstall nodes, drives and power supplies.
Node Configuration
- Connect the UIO cable.
- Connect the NIC and IPMI cables.
- Hook up a monitor, keyboard, and mouse (or KVM).
- Power on the node.
- Press <DEL> to enter setup (be quick - you only have 2 seconds, though the timeout is adjustable in the setup menu).
- Modify the standard settings as needed.
- IPMI setup - enter manual IP info if required; otherwise, note the address assigned via DHCP.
- Save changes.
SSD install
- Install the SSD in a conversion kit.
- Mount the converted SSD into the HD holder.
- Slide it into the server.
IPMI setup
- Make sure that the latest 32-bit version of the JRE is installed on your PC.
- Use your browser to access the IP address recorded above.
- Log in (User ID: ADMIN, Password: ADMIN - not in the manual).
- Verify that all options, including the remote session capability, are functional.
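If you want a quick sanity check from the command line before trusting the web GUI, something like the following PowerShell snippet (run from your admin PC) can confirm a node's BMC is reachable. This is just a sketch: the address is a placeholder for whatever you recorded during node configuration, and Test-NetConnection assumes Windows 8 / Server 2012 or later.

```powershell
# Quick reachability check for a node's BMC from the admin PC.
# 192.168.1.50 is a placeholder - substitute the IPMI address noted earlier.
$bmc = '192.168.1.50'

Test-Connection -ComputerName $bmc -Count 2      # basic ping
Test-NetConnection -ComputerName $bmc -Port 443  # web GUI port (80/443 by default)

# Launch the web interface in the default browser
Start-Process "https://$bmc"
```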
Assembly Comments
Simple, almost foolproof. The hardest part is getting the heatsink orientation correct (the enclosed setup diagrams help here). Having the correct heatsinks included makes life simple. The only issue I had was locating the 3.5-inch to 2.5-inch HD conversion kits. I found an excellent source via the Amazon.com marketplace: Bravolink sells these in 5-packs for about $40.00. They are spec'd for Dell units, but work just fine with both this Supermicro server and my Promise Tech SAN.
Thoughts on single versus multiple CPUs per node
I have implemented several nodal and blade systems through the years. I believe we have reached a point where a single CPU can adequately and most efficiently address most loads for virtual server host environments. While AMD and Intel both have very good multiple-CPU architectures, with up to 16 virtual cores per node, the overhead (heat, power, space) of supporting the multiple-CPU model can be avoided in many cases. This can also reduce the bandwidth bottleneck out of each physical server.
Perceived Performance
I have been running Hyper-V 2012 on these for about a month now, employing the built-in replication capability that comes with this server-core deployment. In my development environment, I maintain anywhere from 8 to 20 virtualized Windows servers for ongoing projects, depending upon integration requirements. With these split between two servers, with replication enabled, some comments on performance:
- Replication: No apparent load on individual users' desktop sessions during replication, for either the initial copy or the hourly updates (a minimal setup sketch follows this list).
- Stability: The only hiccup is the occasional Link-Layer Topology Discovery Mapper service collapse, but this was happening even in a straight physical environment (I would love to hear from Microsoft about how to address this, as all network services now seem to depend on it).
- Power consumption: Two nodes draw less than 1 amp at 110 V with this configuration.
- Noise: Very quiet. Not noticeable from an adjacent room with the door open (fans running at 3,125 RPM). Excellent for a home or small business office.
- Heat generation: Very low, to the point of being barely noticeable if you put your hand at the exhausts. Core temp 32 C/90 F, peripheral temp 42 C/108 F.
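For reference, here is roughly what the replication setup looks like in PowerShell on Hyper-V 2012. This is a minimal sketch rather than my exact configuration: the host name 'HV-NODE2', VM name 'DevWeb01', and storage path are placeholders.

```powershell
# Minimal Hyper-V Replica sketch (Hyper-V 2012 / Hyper-V Server 2012).
# All names and paths below are placeholders.

# On the replica host: allow incoming replication over Kerberos (HTTP, port 80)
Set-VMReplicationServer -ReplicationEnabled $true `
    -AllowedAuthenticationType Kerberos `
    -ReplicationAllowedFromAnyServer $true `
    -DefaultStorageLocation 'D:\Replica'

# On the primary host: enable replication for a VM and kick off the initial copy
Enable-VMReplication -VMName 'DevWeb01' `
    -ReplicaServerName 'HV-NODE2' `
    -ReplicaServerPort 80 `
    -AuthenticationType Kerberos
Start-VMInitialReplication -VMName 'DevWeb01'

# Check replication health afterwards
Measure-VMReplication -VMName 'DevWeb01'
```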
With the many new features added, and more to come with the R2 release, many organizations are considering moving Hyper-V 2012 to the front line of their server deployments. This makes particularly good sense for SMBs. If you are migrating an existing physical server, the existing tools make this pretty painless, particularly if you can take the migrating server out of production (not down, just not in transactional use) for the conversion time. Even domain controllers can be implemented successfully (with proper care) now that there is a good reference library of PowerShell scripts to handle VM startup and shutdown. Given that this is a free product (when used with existing licensed servers), it is a hard deal to beat, particularly with legacy systems. When implemented with the full version of Windows Server 2012, Hyper-V also offers some very nice virtual desktop implementation tools, particularly if the Datacenter edition is acquired. When implemented with thin clients, this provides a user environment that is easy to provision (simply add a new user to the appropriate groups), administer, and support.
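On the subject of those startup/shutdown scripts: the following is a rough sketch of the general pattern rather than a production script. The VM name 'DC01' and the two-minute pause are placeholder assumptions you would adjust for your own domain controllers and workloads.

```powershell
# Sketch of ordered VM startup/shutdown on a Hyper-V 2012 host.
# 'DC01' and the sleep interval are placeholders - adjust for your environment.

# Startup: bring the domain controller up first, give AD time to settle,
# then start everything else that is currently off.
Start-VM -Name 'DC01'
Start-Sleep -Seconds 120
Get-VM | Where-Object { $_.Name -ne 'DC01' -and $_.State -eq 'Off' } | Start-VM

# Shutdown: reverse the order - shut down the other guests first,
# then the domain controller last.
Get-VM | Where-Object { $_.Name -ne 'DC01' -and $_.State -eq 'Running' } | Stop-VM -Force
Stop-VM -Name 'DC01' -Force
```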
Conclusion
Given the low cost per implementation of high-performance nodes (less than $1,200/node as configured here), the high density and performance capabilities, and the reduced management headaches, I believe this is a platform that most datacenter architects should be looking at very seriously when assembling their next upgrade plan.