
How we did it

Aug 16, 2004 | 2 mins
Computers and Peripherals, Data Center, Servers

How we tested the HP, IBM, and RLX Technologies blade server offerings.

We tested each blade server arrangement in one of the network operations centers at Indianapolis-based nFrame, an ISP/MSP that specializes in rack hosting. We asked each vendor to supply a base chassis with four dual-CPU blades, an onboard hard drive, network connectivity and SAN connectivity. We also asked each vendor to state its preferred operating system platform.

We used Spirent’s WebAvalanche appliance to test each blade type in two areas: the maximum number of SSL sessions that could be supported before 5% dropped or produced errors (to test CPU strength), and the maximum number of open TCP connections before 5% dropped or produced errors (to test network strength). We also performed a simple file I/O test. Both ‘public’ Ethernet ports were the targets of the maximum open TCP connections test. The file I/O test consisted of copying a fresh 1G-byte file (via a Windows batch file and a Linux shell script using cp).
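The article specifies only that the Linux side of the file I/O test was "a fresh 1G-byte file copy" done with cp from a shell script. A minimal sketch of such a script might look like the following; the file names, the use of dd to create the source file, and the timing with `time` are our assumptions, not details from the test.

```shell
#!/bin/sh
# Sketch of a simple file I/O test: copy a fresh 1G-byte file and time it.
# (File paths, dd usage, and timing are illustrative assumptions.)

SRC=/tmp/blade_io_src.bin
DST=/tmp/blade_io_dst.bin

# Create a fresh 1G-byte source file so each run starts from scratch
dd if=/dev/zero of="$SRC" bs=1M count=1024 2>/dev/null

# Time the copy itself
time cp "$SRC" "$DST"

# Remove both files so the next run is not skewed by cached or existing data
rm -f "$SRC" "$DST"
```

Recreating the source file before every run keeps the results comparable across blades, since a pre-existing file could be served largely from the OS page cache.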

We initially tested each blade individually, then tested four blades in combination to see whether bottlenecks occurred; we found that the single-blade scores could be effectively multiplied by four.

We hot-pulled anything that was hot-pluggable: we removed power from redundant sources, yanked hot drives, and otherwise failed every failable device. Every hot-pulled device, without exception, restarted correctly when reinserted into its respective place. We found that HP’s monitoring can track blade servers from slot to slot, a benefit to forgetful installers.