Review: HP’s latest blade delivers big punch in small package

ProLiant BL-460 Gen9 blades are tuned for virtual workloads.

HP’s Gen9 servers are out, and we found them incrementally better than the Gen8 versions we recently tested, thanks to both increased processor horsepower and improved efficiency.

The Gen9 server features firmware updates, long-awaited UEFI support for local media on blades, faster CPUs, and robust interconnect paths. There were a few time-wasting bugs, which were surmountable, if frustrating.

To demonstrate the shift from 1U-6U horizontal servers toward blade frames, HP sent us two BL-460 Gen9 blades with a raft of new firmware and software on the side. The new half-height blades contained not quite half the memory and cores of the full-height Gen8 blades we tested previously. The Gen9 blades were sent with 128GB of memory and two processors with a dozen cores each, for a total of 24 cores per blade.

The drives were comparatively tiny but performed well in our tests. Initially we tested the blades booting from SAN, then booting from iSCSI; later still, we added local drives.


The Gen9 CPU performance was very respectable, if not meteoric compared with the Gen8; the Intel Haswell CPUs are incrementally faster. The good news is that Gen9 also gets an integration upgrade.

Overall speed is also noticeably improved. After testing, it’s safe to say deployed systems can expect a 10%-plus kick. And if your OS is up to snuff, the kick could get much better: the clock is faster and the onboard processor cache is larger, so the dozen cores per processor can do more pre-fetching and faster inter-core transfers.

Even the dreaded lunch break, also known as the Power-On Self-Test (POST), is faster, and the BIOS now supports UEFI, which cuts a few more seconds from blade reboots.

What you get

Our test was about blades and density. The Intel Haswells improved OS/hypervisor load times by roughly 20 seconds, with Red Hat 7, Ubuntu 14.04 Server, and Windows 2012 R2 with updates moving more quickly than VMware ESXi 5.5. But the differences, we found, are nominal; they’re all a bit faster.

Inside the half-height blades were the aforementioned two processors, both dozen-core Intel Haswells, 256GB of memory, and a number of lanes to the networking and SAN fabric storage infrastructure linking the rest of the chassis, along with plentiful and newly enhanced ways to link those lanes.

Eight NIC connections are available; some can be used for SAN, the others for Ethernet. We like choices, though few shops will exercise every permutation.

Gen9 as a server program is all about the possibilities. There are plentiful ordering combinations in terms of processors, memory, and onboard media, but the back end is captive to an HP blade server chassis, its mezzanine gear, and its back-of-chassis electronics -- in this test, the HP C7000 chassis, the same one used in our Gen8 review.

What does change is the processor musculature, the firmware, and the level of HP Integrated Lights Out (iLO) options. There is much upside to this, and a few hassles. We used HP’s OneView 1.1 in our review of the Gen8 blades; we updated to OneView 1.2 for this test.

A few problems gave us fits. The short story is that OneView 1.2 can fight with iLO/system BIOS settings -- and lose. OneView 1.2 is supposed to be the GUI salvation for the myriad possibilities relating a Gen8/Gen9 blade to the C7000 chassis. Gen8 worked without a hitch in our last review; with Gen9, we ran into obstacles, eventually surmounted. We ran OneView 1.2 as a VMware appliance, and it’s also available as a Hyper-V appliance. Both run as SUSE 11.x virtual machines, and both would improve dramatically as stripped-down, quickly loading containers.

Let’s set the stage: the updates to OneView 1.2 and iLO 4.0 were largely welcome. A license usually comes with the C7000 chassis, but Gen9 also has onboard firmware that can be viewed from the HP Onboard Administrator engine, which lives underneath each blade in the chassis as a control-plane system.

From a chassis-management perspective, HP has also increased management options via OneView 1.2 (we used 1.1 for Gen8 testing), but only a few minor OneView 1.2 changes relate to the new Gen9 blades, as the Gen9 feature set is incremental.

Although HP’s been shipping the Integrated Lights Out (iLO) architecture for a long time, the intelligence of the onboard system-within-a-system has been nice, if rudimentary. HP engineers will send bricks to us for saying this, as it’s been more functional than the competition’s management infrastructure at the chassis level. This time, however, while they’ve made numerous improvements, they’ve also presented a problem.

Think about it this way: there are two computers before you get to the blades. One is the C7000 chassis controller. The other is the iLO processor in each blade and, indeed, in almost all of HP’s modern servers. The iLO Intelligent Provisioning is easier than ever, and dovetails more evenly with HP’s OneView 1.2. All of these (chassis, blade, and host) have web interfaces if desired, but can now also be controlled from the CLI using REST interfaces.
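As a rough illustration, scripting against the iLO REST interface amounts to authenticated HTTP GETs against JSON resources. This is a minimal sketch, not HP's tooling; the host name, credentials, and resource path below are hypothetical, though iLO 4 firmware did expose resources under /rest/v1.

```python
# Sketch of driving an iLO 4 REST interface from a script.
# Host, username, and password are hypothetical placeholders.
import base64
import json
import urllib.request


def build_ilo_request(host, username, password, path="/rest/v1/Systems/1"):
    """Construct an authenticated GET request for an iLO REST resource."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    req = urllib.request.Request(f"https://{host}{path}")
    req.add_header("Authorization", f"Basic {token}")
    req.add_header("Accept", "application/json")
    return req


def power_state(ilo_response_body):
    """Pull the power state out of a /Systems/1 JSON response body."""
    return json.loads(ilo_response_body).get("Power", "Unknown")


# Usage against real hardware would look like:
#   req = build_ilo_request("ilo-blade1.example.net", "admin", "secret")
#   with urllib.request.urlopen(req) as resp:
#       print(power_state(resp.read()))
```

The same pattern (swap GET for PATCH/POST with a JSON body) covers setting changes, which is what makes the keep-iLO-and-OneView-in-sync problem described below possible in the first place.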

Problem No. 1 is security. Chassis access is dangerous, as access to the chassis can do many things, and some of them are evil: burning new firmware (which is why Cisco ships chassis secretly these days, to avoid firmware tampering), or easily changing underlying settings, like switching a BIOS boot to UEFI, which stops VMware cold in its tracks.


There are a number of ways to stanch this problem, and HP’s taken a large number of them. These steps include a long password found only hidden in the chassis or on a tag hanging from the rear of the server, requiring physical access, and easy VLAN construction that can put the iLO IP ports onto their own network. Nonetheless, much evil can be done without secondary authorization, if you can pound the password long enough to get access to the Motif-like window manager that interfaces with core iLO logic.

Here’s Problem No. 2: if you get OneView out of sync with the enclosure chassis and the iLO within the individual blades, you’ll have lots of homework to do to find the nature of the problem, multiplied by many firmware editions, each with its own hazards.

Admittedly, most users will never see this, because they buy homogeneous chassis that keenly update everything all at once, and will be running Hyper-V, VMware, Red Hat, CentOS, etc., homogeneously. But maybe a blade goes bad, or you get the chassis and then fill it a few blades at a time. This is what happened to us. We went through four firmware upgrades, only to find that OneView 1.2 can’t apply one of those upgrades correctly, and then refuses to load custom ISOs of VMware 6.0.0.

The iLO system on each blade rules, and so OneView, no matter how sweetly you talk to it, won’t change certain settings on the blade; those changes must be made using either iLO-based/firmware-based Intelligent Provisioning or configuration tools on the chassis.

If you want a recipe for, say, VMware 5.5, it’s necessary to work from iLO upward to match OneView 1.2. We tried the reverse and managed to cream our desired settings; only then did we learn which direction to go. A few UI oddities in OneView along the way were additional head-scratchers, but we made our way through them.

The good news

Once the morass of OneView, iLO, and firmware is figured out, the Gen9’s increased horsepower pays off in good ways. As an example, we did a VMware vMotion across the chassis backplane and it blazed. The speed gets a boost from the HP FlexFabric 20/40 backplane and from the number of NIC ports that can be dedicated to whooshing a working VM from blade to blade. It’s awesome to watch; there’s no time to even refill a coffee cup. Find the VM to move, drag it and drop it, and it races to the next blade.

A live Windows 2012 R2 VM move from one Gen9 blade to the other can be roughly 30% faster than on the heftier Gen8 blades. We say “roughly” because, even when we quiet the backplane, periodic background maintenance traffic can cause slight pauses, and our standard deviation was about 14% over 20 observations.
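That 14% figure is simply the sample standard deviation of the transfer times expressed as a percentage of the mean. A quick sketch of the arithmetic (the timing samples below are hypothetical, not our measured runs):

```python
# Relative standard deviation (coefficient of variation) of timing samples.
# The sample values are hypothetical placeholders for VM-move times in seconds.
import statistics


def relative_stdev(samples):
    """Sample standard deviation as a percentage of the sample mean."""
    return 100.0 * statistics.stdev(samples) / statistics.mean(samples)


times = [14.2, 12.9, 16.1, 13.5, 15.0, 12.4, 17.3, 13.8]  # hypothetical seconds
print(f"mean {statistics.mean(times):.1f}s, rel. stdev {relative_stdev(times):.0f}%")
```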

The BL-460 blades were configured with internal drives, which accounted for just a bit of the faster-timed Gen9 result once we routed them via commonly shared Fibre Channel SAN storage.

A four-vCPU SQL Server VM blazed from one blade to the other. Impressive use of the FlexFabric 20/40 back-of-rack switching undoubtedly helped, and we admit the entire chassis was as quiescent as we could make it during the transfer (an optimized result). Nonetheless, when we moved the VM back and forth between blades, it was awesome to watch.

HP has tools for both Windows and VMware to do control work at several levels with Gen9, including add-ins for System Center as well as VMware vCenter. We didn’t test other management platforms, but others are either currently supported or in the queue. HP supports sending syslogs at the iLO level to other hosts, sending information to the aforementioned network management apps, and emitting SNMP traps. All of these can be confined to VLANs to reduce probing traffic.
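For teams collecting those iLO-forwarded syslog messages, the receiving side is ordinary UDP syslog. A minimal toy sketch (the port and handler are assumptions; a real deployment would use a hardened syslog daemon, not this listener):

```python
# Toy collector for syslog messages forwarded from a management processor.
# Parses the standard syslog PRI field; port 514 and the print-only handler
# are illustrative assumptions, not hardened practice.
import re
import socketserver

PRI_RE = re.compile(r"^<(\d{1,3})>")


def parse_priority(message):
    """Split a syslog PRI value into (facility, severity), or None if absent."""
    m = PRI_RE.match(message)
    if not m:
        return None
    pri = int(m.group(1))
    return pri // 8, pri % 8


class SyslogHandler(socketserver.BaseRequestHandler):
    def handle(self):
        # For UDP servers, self.request is a (data, socket) pair.
        data = self.request[0].decode(errors="replace")
        print(self.client_address[0], parse_priority(data), data)


# Usage:
#   socketserver.UDPServer(("0.0.0.0", 514), SyslogHandler).serve_forever()
```

Confining such a listener to the management VLAN, as the article suggests for iLO traffic generally, keeps probe traffic off the production network.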

Only mildly alarming was HP’s choice of remote consoles, which are Java-based; while the access keys are OK, the sessions generated are persistent and don’t time out with inactivity. This means that once we obtained a session, we could keep it good for quite some time before we had to renew with credentials, and secondary authentication methods aren’t supported. We’d like to see this, although we understand we’ll get complaints from NOC personnel that we’re the reason they need to carry key fobs with YubiKeys or other auth devices.


The increased processing speed is discernible, but the backplane traffic kick is awesome. OneView 1.2 and iLO need work, and hobbled us instead of helping us. Integration with specific operating systems gets closer to one-click deployments, although we’d like to see more than just four recipes (Windows, Red Hat, SUSE, and VMware) for the Gen9 Intelligent Provisioning.

How we tested HP Gen9 server blades

We installed two BL-460 Gen9 blades in a rack already containing two previously reviewed full-height Gen8 blades. Our first problem was an older firmware bug in the C7000 chassis requiring an update. Then we received a message that the first blade’s battery was bad; a replacement battery was shipped overnight and was easy to install.

We tested the blades with Windows 2012 R2 with updates, Ubuntu 14.04/14.10, and VMware 5.5, then ran into the aforementioned firmware and synchronization problems in getting VMware 6.0 to run. Software was installed using either onboard iLO or OneView 1.2, rather than our usual PXE boot (which is available, and worked well). Another firmware rev was performed, enabling more testing and the subsequent I/O results.

Performance testing ran in two areas using Hadoop. We benchmarked VMware 5.5 with Ubuntu 14.04 running MapReduce and the dfsio benchmark on equally configured Gen9 and Gen8 instances. Over numerous runs, we found a stabilized 17% I/O throughput increase with Gen9 over Gen8 blades. Using the same configuration with the sort benchmark, we calculated just under a 10% increase in Gen9 performance. Your results may vary. Setup scripts are available upon request.
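The percentage figures above are simple ratios of benchmark throughput. As a worked illustration of the arithmetic (the MB/s values below are hypothetical placeholders, not our measured dfsio numbers):

```python
# Percent improvement of one benchmark result over a baseline.
# The throughput values are hypothetical placeholders.
def pct_increase(baseline, new):
    """Percent improvement of a new result over a baseline result."""
    return 100.0 * (new - baseline) / baseline


gen8_mbps = 100.0  # hypothetical Gen8 dfsio throughput
gen9_mbps = 117.0  # hypothetical Gen9 dfsio throughput
print(f"I/O throughput increase: {pct_increase(gen8_mbps, gen9_mbps):.0f}%")
```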

Tom Henderson runs ExtremeLabs, in Bloomington, Ind. He can be reached at


Copyright © 2015 IDG Communications, Inc.
