A breathless deliveryman, having lugged a huge box from HP up the stairs to our lab, asked us: "What the hell's in here?" A server, we told him. A really large one. He nodded, panting, took his signed bill of lading, and left.
HP hadn't told us exactly which server it was sending, noting only that it was new and hadn't been reviewed anywhere before. We didn't even know the model number.
Inside the massive box was a 4U server marked: Hewlett-Packard DL580 Gen8. It weighs just over 100 pounds in the configuration HP sent us. It was crammed to the gills, yet still had room inside. The server sounded like a WhisperJet gone wrong when we powered it up. There's a good reason: the fans have to cool 60 x64 cores on four massive processors. Sixty!
The DL580 Gen8 is HP's most powerful server in the ProLiant line. HP says that it's designed specifically to take on IBM server iron (specifically the IBM Power 750) and one of the biggest beasts from Oracle's Sun server line (the T5-4).
What you get
Two slightly different tiers of the DL580 Gen8 are offered -- basic and high performance. With basic, you get two processors, Intel Xeon E7-4809 v2s at a 1.9GHz clock, which means a dozen cores. HP Insight Control server management software is not included in the basic tier, but twin 1,200-watt power supplies are part of the package.
The high performance selection boosts capacity to four processors, Intel Xeon E7-4850 v2s (48 cores) or E7-4890 v2s (60 cores), clocked at 2.3GHz or 2.8GHz respectively. Increased clock speed also means increased requirements for heat dissipation, and the faster you go, the more power your server will consume.
Insight Control is bundled with the high performance tier. The power supply count doubles to four, and they're 1,500 watts each; in either chassis, they're universal rather than left- or right-side specific.
In either case, the combinations of processors and cores per processor can be ordered specifically to match the requirements of specific hypervisor vendors.
Importantly, the high performance tier also includes two or four 10 Gigabit Ethernet ports using SFP+ connectors. By contrast, the basic tier has four Gigabit Ethernet ports. Our unit came with HP's four-port 10G Ethernet option mounted in the space where the SFP+ connectors usually go. Both chassis also have a Gigabit Ethernet iLO port.
There is massive room inside, chassis-dependent, for PCIe 3.0 slots, which can move a stunning 15GB/s or more across a full x16 slot. Five x16 (full-speed) slots and four x8 (half-that-speed) slots are included. Into these slots can go anything from InfiniBand cards to additional SAS controllers, General Purpose Graphics Processing Units (GPGPUs), or other SAN-target Host Bus Adapters (HBAs).
HP has stuffed an insane amount of the server's CPUs, memory, and cooling into the front of the chassis to provide space in the rear. What was pushed out?
The base prices for the machines include no drives. Yes, drives can be installed in a Small Form Factor (SFF) cage inside. In a fit of modernity, however, HP believes you probably won't put many drives inside, and the drives aren't easily accessible from outside the chassis for hot-swap anyway; the front is covered by the four aforementioned fans. The drives just aren't as easy to service as exterior-mounted drives. Yes, you can get drives, but you might want to boot from iSCSI or via an internally installed HBA.
Kickstarting The Battleship
You can take a coffee break while waiting for the DL580 Gen8 to boot; in our testing and rebooting, it took seven minutes. Should you use the supplied front panel or rear panel connections to a crash cart/console, the system boots to numerous options, including a rudimentary GUI. Function key selections permit alternate boots, although we used PXE, which worked well.
The firmware supports UEFI boot, but UEFI can be disabled to accommodate OS installations that don't want its draconian controls. Media installed locally, or presented through interface cards installed in the chassis, can be pre-selected for setup for Windows, Red Hat, etc. HP's iLO also works here, so chassis monitoring through boot can be controlled remotely.
We were supplied with 256GB of memory. However, the current maximum is 3TB, and with new, not-yet-available memory sticks, the total could reach 6TB. That means with 60 available cores, each core could conceivably be paired with roughly 100GB of DRAM. This is an astonishing amount of memory for any chassis, let alone one just 4U high.
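The per-core arithmetic is easy enough to check; a quick sketch (using the 6TB eventual maximum and 60-core count from above):

```python
# Back-of-the-envelope check: DRAM available per core at the
# maximum (future) memory configuration of the DL580 Gen8.
TOTAL_MEMORY_GB = 6 * 1024   # 6TB, once higher-density DIMMs ship
CORES = 60                   # four 15-core processors

memory_per_core_gb = TOTAL_MEMORY_GB / CORES
print(f"{memory_per_core_gb:.0f}GB per core")  # prints "102GB per core"
```

At the shipping 3TB maximum, the figure is still a healthy 51GB per core.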
HP also includes memory, CPU cache, and PCIe bus error traps, most of which must be supported by operating systems or used in conjunction with HP's Active Health APIs, HP's iLO Intelligent Management System, and/or third-party hooks.
The device also has hardware fault detection and monitoring (and perhaps correction where doable), plus Advanced Memory Protection that ostensibly recovers from CPU, cache, and memory problems. In our test scenario, with up to 60 cores running at clock speeds approaching 3GHz, it was virtually impossible to inject faults into the system to test the efficacy of HP's scheme.
Hypervisors and operating systems could benefit from this information to ensure transactional integrity in and among the operations they support. With luck, a brave hypervisor maker will start to utilize this information, and it would be lovely to track results in a way that would allow decision support for taking a machine out of service should its failures reach a certain threshold.
Moving VMs out of a bad stick or instantly removing it from unused pools could be a healthy way to prevent chassis disasters—especially in a server designed to serve as both a consolidation platform and one for rapid scale-up.
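The threshold policy itself is simple to sketch. This is purely illustrative; the function name, the error metric, and the threshold value are all hypothetical, since no shipping hypervisor consumes this data yet:

```python
# Hypothetical sketch of the decision-support policy described above:
# drain a host's VMs once its corrected-error count crosses a threshold.
# The threshold and metric are illustrative, not HP's or any vendor's.
ERROR_THRESHOLD = 50  # corrected memory-error events per day (arbitrary)

def should_evacuate(corrected_errors_today: int) -> bool:
    """Return True when the host's VMs should be migrated elsewhere."""
    return corrected_errors_today >= ERROR_THRESHOLD

print(should_evacuate(12))  # prints False: below threshold, leave it alone
print(should_evacuate(75))  # prints True: time to drain the host
```

A real implementation would read the error counts from platform health APIs (such as HP's Active Health) and trigger live migration through the hypervisor's own management interface.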
This contrasts with the scale-out thinking of the individual cartridges used in HP’s Moonshot. Scale-up mandates a platform that won’t topple and kill many consolidated processes, or additive functions.
The problem is that the firmware for Advanced Memory Protection is installed, but it isn't yet supported by vendors like Microsoft or VMware. HP told us: soon.
The chassis layouts permit up to nine PCIe cards. There are six auxiliary power connections, and there's a 12Gbps SAS bus available, hopefully for your new crop of hefty flash SSDs. The drives go inside a cage, easily accessible after pulling the chassis out of its rack and popping the lid of this beast. We could put up to 10 Small Form Factor (SFF) drives into the cage.
Intel claims a 1.7x performance increase for this server on VMware's VMmark 2.5.1 benchmark over a previous-generation Xeon processor (specifically the older E7-4870s vs. the DL580 Gen8's E7-4890 v2s). The comparison has a couple of potential flaws: first, the difference in clock speed between the two CPUs; second, it's based on Windows 2008 R2, an aging operating system. Nonetheless, a better clock and more cache, multiplied by an insane number of cores per processor, is like a lit fuse.
What the platform has going for it is not only faster CPU clocks but much larger CPU cache, at 37.5MB, which nearly every operating system can use. With a current maximum of 3TB, and an eventual 6TB, of main memory, each of the 60 cores can get a whopping amount of memory, along with the hedged bet of CPU cache hits, to achieve high muscularity in either virtualization performance or raw computational performance.
Add to this 40Gbps of Ethernet (at max configuration), 12Gbps SAS SSDs inside or iSCSI/etc. outside, and room for full-height/full-width PCIe cards in the chassis, and it looks very good on paper. And it should: the maximum base configuration, before drives and extras, runs about $39,000. The unit we tested was $39,046.
We installed several operating systems, and while Windows 2012/2012 R2 (including Hyper-V V3), Red Hat EL 6.4, and SUSE SLES 11 SP3 can use the Advanced Error Recovery feature, HP says support from other hypervisor and OS platforms is nearing availability. We had no good way to induce errors that could test this feature without possible physical damage to the test server, so we didn't check it.
What we don’t like
We found the power-on self-test (POST) process to be extremely slow. Yes, there are delicious options for OS-specific pre-install setups, although most servers run at most one OS over their entire service life. There's a delightful GUI one can use, too, which isn't quite as terse as the power-on CLI (actually function-key) list of choices.
There is room for 10 SFF drives in a single cage, but gone are the days of front-panel access. To swap out a drive, one must pull the server from the rack, open the top removable panel assembly, and yank drives that no longer sit in custom frames. This means that expensive custom frames are no longer necessary for the drives, but their accessibility has been reduced; a mixed blessing.
Much of the scale-up power of the DL580 Gen8 depends on networking, and although the internal networking options on the larger frame are good, with 4x10G Ethernet, it also means that clients will become more dependent on internal SDNs, and there are varieties of SDNs endemic to each hypervisor/OS family.
Getting maximum output with inherently non-blocking switch architectures will require HP and the hypervisor/OS vendors to pay attention to the Ethernet adapter families that are used.
It's not a criticism, but a mixed blessing, that HP didn't put an actual L2/L3 Ethernet switch inside the box. With such a switch, like the one we reviewed inside the Moonshot 1500 chassis, much configuration work could be done inside the box rather than externally.
This is the healthiest server to cross our path, ever. HP has put much thought into the safety features of the design and its flexibility. We haven’t seen any server with a potential for 6TB of main memory and 60 cores (each with two threads).
While it’s a battleship in a 4U form-factor, HP has employed numerous features to ensure the battleship doesn’t sink. It’s a different chassis layout, not far from the other new Gen8 models.
No more front-loaded drives, instead, hefty fans. Room inside for more guts. Serious density, and rapid-deployment tailored to specific platforms. Industrial, and priced like it. Customization, perhaps in the extreme, especially if you like mixing and matching Intel processors (and not AMD).
Overall: we like battleships, and this one doesn’t need many tenders.
How We Tested
We tested the HP DL580 Gen8 first in our lab, then as a member of our network operations center located at Expedient/nFrame in Carmel, Ind. In the NOC, we tested the DL580 Gen8 using both its internal configuration options for specific hypervisors/operating systems, then as a PXE-booted server, receiving images from several servers to test deployment. These included Windows 2012 R2 (patched to April 2014), VMware ESXi 5.5, CentOS 6.5, and SLES 11 SP3. All loaded with no issues.
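Our PXE deployments used the lab's existing boot infrastructure. As a generic illustration only (not HP's tooling), a minimal dnsmasq configuration for serving PXE boots to a server like this might look like the following; the subnet and paths are placeholder values:

```
# /etc/dnsmasq.conf -- proxy-DHCP plus TFTP for PXE booting (example values)
port=0                          # disable DNS; act only as a PXE proxy
dhcp-range=192.168.1.0,proxy    # hypothetical lab subnet, proxy mode
dhcp-boot=pxelinux.0            # boot file handed to PXE clients
enable-tftp
tftp-root=/srv/tftp             # where pxelinux.0 and images live
pxe-service=x86PC,"Network Boot",pxelinux
```

From there, the pxelinux menu hands off to whichever OS installer image the server should receive.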
In turn, the DL580 Gen8 was connected via Extreme Networks Summit L2/L3 switches to our backbone over 10GBase-T connections, and from there to our local SAN (Dell Compellent) and other NOC hosts (Dell, Lenovo, HP, Tyan, and Apple servers) and resources. We also used HP's iLO management software, which is not a part of this review.
Henderson is principal researcher for ExtremeLabs, of Bloomington, Ind. He can be reached at email@example.com.