Packed in tightly: Blades get sharpened

by Tom Henderson, Network World Lab Alliance

How-To
Aug 16, 2004

Jamming as much computing power into the smallest possible footprint is the goal behind blade servers. We recently tested three different server blade platforms – HP’s ProLiant BL p-Class Blade Enclosure, IBM’s BladeCenter and RLX Technologies’ 600ex. On performance, we found the blades are pretty close – where they differ is in the companies’ systems management applications and availability options – but IBM wins our Clear Choice Award for providing management and administration that was a cut above the competition.

All three vendors provided systems with architectures that perform about the same and are ready for heavy work.

Applications for blade servers are the same as for any other server, except that disk and storage expansion typically is handled through storage-area networks (SAN) and associated hardware and software. Most often, we’ve found blades used for discrete applications: a blade for mail service, two or three blades for Web services, a blade for a relational database, CRM or accounting applications. Increasingly, clustered blades tackle large data-warehousing and data-mining jobs, video-rendering engines and other computationally intensive applications, and these clusters are easily managed, from both a hardware and a software perspective, by the advanced management applications bundled with or offered as options for blade servers.

We installed the blade servers in one of the network operations centers (NOC) of nFrame, a large ISP/managed-service provider in Indianapolis, which had the power and cooling we needed. Each vendor sent a specific configuration – a dual-CPU blade, a single frame and connectivity for a SAN. Dell declined our invitation because a new version of its system is due out soon, and Sun declined, preferring a different test methodology. We asked for four blades from each vendor but found that, with our metrics, the results for four blades are simply the results for one blade multiplied by four.

Performance between the blades was close. IBM’s offering was slightly better overall than HP’s and RLX’s, although RLX’s OpenSSL under Linux took the computational prize despite a slightly slower CPU clock than the competition. Each vendor also took advantage of partners for “glue” products in networking and/or SAN connectivity. HP used Qlogic Fibre Channel pass-through boards. IBM included Cisco Gigabit Ethernet and Brocade FC switches. And RLX included Qlogic SAN components for its blades and chassis.

Because blade servers are used for a variety of applications, we chose three simple tests to compare them: the number of Secure Sockets Layer (SSL) sessions that could be maintained, the maximum number of open TCP connections that could be held and a rudimentary blade-server disk-copying test. IBM’s connectivity was strongest, but its onboard Integrated Drive Electronics (IDE)-based hard drive was the slowest of the three. The dark-horse RLX HPC 2800i blade server proved a strong performer. But the differences among the three vendors were very small, even insignificant.
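
Our actual harness is described in How we did it; as a rough illustration of the kinds of measurements involved, the Python sketch below opens plain TCP connections until the target refuses more and counts SSL handshakes completed in a fixed window. The host name is a placeholder, and results on your own gear won’t map to our chart.

    import socket, ssl, time

    HOST = "blade.example.net"   # placeholder address for a blade under test
    PORT = 443

    def count_open_tcp(host, port, limit=10000, timeout=2.0):
        """Open plain TCP connections until one fails; return how many stayed open."""
        conns = []
        try:
            for _ in range(limit):
                conns.append(socket.create_connection((host, port), timeout=timeout))
        except OSError:
            pass
        finally:
            for s in conns:
                s.close()
        return len(conns)

    def count_ssl_handshakes(host, port, seconds=10):
        """Count full SSL handshakes completed in a fixed time window."""
        ctx = ssl.create_default_context()
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE      # lab servers often use self-signed certs
        done, deadline = 0, time.time() + seconds
        while time.time() < deadline:
            with socket.create_connection((host, port), timeout=2.0) as raw:
                with ctx.wrap_socket(raw, server_hostname=host):
                    done += 1
        return done

    print("Open TCP connections held:", count_open_tcp(HOST, PORT))
    print("SSL handshakes in 10 seconds:", count_ssl_handshakes(HOST, PORT))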

The blade difference

Because blades pack a lot of computing power into a small space, chassis electrical current requirements are much larger than those of equivalent 1U rack servers. All the chassis required 208V AC power at 30 amps per circuit, and dual (ideally independently fed, redundant) power sources are the rule for blades. Blade servers also cool front to back, not bottom to top as is the norm for data-center equipment racks; we had to swap in doors from other racks in the nFrame NOC to accommodate the blade chassis. Depending on the installation, data-center power-distribution requirements and cooling methods need to be understood before deploying a blade chassis and related components, because no amount of management software will make redundancy pay off if all the blades in a chassis are fried. Fan noise from blade servers is higher than normal, and the front and rear rack-door ventilation holes make it carry farther. The fan noise from the IBM chassis was deafening.
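
As a back-of-the-envelope illustration of why power planning matters (our own rough arithmetic, not any vendor’s specification), the snippet below estimates usable capacity on one 208V AC, 30-amp circuit using the usual 80% continuous-load derating and an assumed per-blade draw.

    # Rough power-budget arithmetic for one 208V AC, 30A blade-chassis circuit.
    # The 80% derating and the per-blade draw are illustrative assumptions.
    VOLTS, AMPS, DERATING = 208, 30, 0.80
    usable_va = VOLTS * AMPS * DERATING       # about 4,992 VA per circuit

    assumed_blade_draw_va = 350               # hypothetical dual-Xeon blade draw
    print(f"Usable per circuit: {usable_va:.0f} VA")
    print(f"Blades one circuit could carry: {int(usable_va // assumed_blade_draw_va)}")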

Each vendor was asked to specify the operating system it preferred we test with. IBM and HP chose Windows 2003 Standard Edition, and RLX chose Red Hat Linux. From a performance perspective, our tests showed the choice didn’t make much difference (see graphic). All three vendors noted that their blades support other operating systems; IBM’s supported list was the longest, but not by much.

PERFORMANCE CHART
                  SSL connections   Max connections/sec   Internal disk I/O
HP (BL20p G2)     622               58,228                23.9M byte/sec (RAID 1)
HP (BL30p)        631               61,883                24.3M byte/sec
IBM (HS20)        639               63,404                19.9M byte/sec
RLX               671               56,315                27.1M byte/sec

OTHER SPECS
                  CPUs                Blades/enclosure   Voltage
HP (BL20p G2)     2 x 3GHz Xeon       8                  208V AC x 2
HP (BL30p)        2 x 3GHz Xeon       16                 208V AC x 2
IBM (HS20)        2 x 3GHz Xeon       14                 208V AC x 4
RLX               2 x 2.8GHz Xeon     10                 240V AC x 4

Common to all the blade systems we tested in our redundant power configuration was the ability to hot-pull a blade server and have it automatically reintegrate when reinserted into the chassis. All the servers support Preboot Execution Environment (PXE) boot, and all offer optional remote deployment for Windows 2003 editions and Red Hat Linux (RLX’s is included with its optional Control Tower XT management hardware/software). We tested PXE boot on all three blades, and all worked as expected. (For more information, see How we did it.)

BladeCenter details

IBM’s BladeCenter chassis uses 208V AC; the unit we tested had power distributed by twin, redundant 208V AC feeds, which in turn branched to the four required 120V AC connections. Up to 14 blades can be inserted in each BladeCenter.

A Management Module blade sits in Bay No. 1 of the BladeCenter and can be accessed via KVM or HTTPS. The module provides a view of all the members of the BladeCenter, lets administrators query their state and can power them on and off. The entire BladeCenter cannot be powered down at once – each blade must be powered down individually. The Management Module also frees up a port on each blade server that otherwise might be dedicated to management.

We tested the HS20, a two-Xeon CPU blade (IBM also sent an HS40, a four-Xeon CPU blade, but we didn’t test it). The HS20 takes up one slot inside the BladeCenter chassis. A daughtercard (called a “mezzanine adapter”) connects blade servers to a SAN, and Brocade provides redundant 16-port SAN switches if the SAN option is chosen for the blades. Like HP’s, the IBM blades come with an onboard drive, and a second drive can be added for a RAID 1 configuration. Also like HP, IBM permits an external, hot-swappable drive to be used with a blade. This can be helpful because a blade must be removed from the chassis (and therefore powered off) to swap an internal hard drive; the external drive can be hot-pulled or failed over to without removing a blade from its chassis. An IBM-branded Cisco blade provides either single or redundant Gigabit Ethernet switching for the BladeCenter enclosure.

IBM Director is the management application for the BladeCenter chassis and its components. As with HP’s management applications, each item, including switches, can be discovered and managed. Managed devices are discovered through a query process and then populate the Director GUI; devices running Director agent software can then have various facets examined or set to trigger error traps. We checked the trapping mechanism by watching CPU temperature and utilization: we blocked a specific blade’s airflow and watched the CPU temperature climb until the blade shut down at the threshold we had set. It took about 4 minutes for the CPU to cool down.
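
We relied on Director’s own traps for the airflow test, but the idea generalizes: poll a temperature sensor and act when it crosses a threshold. The sketch below does this over SNMP with the net-snmp snmpget command; the management-module address, community string, OID and threshold are placeholders you would replace with values from your vendor’s MIB and your own policy.

    import subprocess, time

    MM_HOST = "bladecenter-mm.example.net"   # placeholder management address
    COMMUNITY = "public"                     # read-only community string
    TEMP_OID = "1.3.6.1.4.1.99999.1.1.0"     # placeholder; use the OID from your vendor's MIB
    ALARM_C = 85.0                           # illustrative threshold, not IBM's

    def read_temp_c():
        """Fetch one temperature value via SNMP v2c using the net-snmp CLI."""
        out = subprocess.run(
            ["snmpget", "-v2c", "-c", COMMUNITY, "-Oqv", MM_HOST, TEMP_OID],
            capture_output=True, text=True, check=True)
        return float(out.stdout.strip())

    while True:
        temp = read_temp_c()
        if temp >= ALARM_C:
            print(f"CPU at {temp:.1f} C - over threshold, shut the blade down")
        time.sleep(30)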

Where Director agents are supported, the Director software provides incredibly granular detail about each blade server. Director supplements the Management Module blade and offers a superset of its functionality. Director is the greatest strength of the IBM BladeCenter and was a pleasure to use.

Blading with HP

We first tested HP’s blade servers in May 2003. HP blade-server implementations require a power-distribution subsystem that takes up three rack units, and the blade chassis we tested takes up 6U. (A smaller version, the e-Class Blade Enclosure, wasn’t tested.) A dual, redundant configuration requires two 208V AC, 30-amp feeds; one power-distribution unit can power several full blade chassis and/or other gear, depending on configuration, and up to three might be needed for a maximum 42U configuration. The power-distribution units are connected via a back-of-rack bus. HP recommends 208V AC three-phase power, and telco -48V DC power can be used in lieu of 208V AC single-phase.

HP makes several blades; we tested the BL20p G2 and the BL30p. The BL20p G2 is full-height, compared with the half-height BL30p. Both blades contained dual 3GHz Intel Xeon CPUs with 2G bytes of dynamic RAM and an onboard hard drive, and both can be connected to a SAN through optional dual-channel Fibre Channel boards. Except for slightly better disk I/O on the BL30p, we found the blades performed identically.

The enclosure we tested contained two Gigabit Ethernet switches, along with SAN switches. The Gigabit switches don’t yet support 10G Ethernet, but there’s a placeholder for when that arrives. Blades within the chassis are networked via the Gigabit Ethernet switches through the chassis backplane, and we found the switches don’t create a bottleneck. Using the integrated switches also reduces the number of Gigabit Ethernet cables required.

The backplane is perhaps one of the few single points of failure in the HP design, but all blade-server makers share this problem. We found it would be difficult to damage the backplane unless it were done deliberately.

HP’s Integrated Lights-Out Advanced (iLO) management tool set is a well-known, longtime HP/Compaq server-management application that connects to systems via a (usually dedicated) Ethernet port on the server. The feature extends to other server products in HP’s line, so an in-band or out-of-band network monitoring and management system can be put together. There are no KVM connections to the servers; they are all managed by remote-control applications (included in iLO) via Windows Terminal Services.

Like IBM, HP offers a rack configuration application that produces a pictorial view of the rack and its components. This is a vital application for both vendors, as the number of potential options, devices, slots and ports that need to be tracked can be staggering.

RLX offers control

RLX uses three 208V AC, 10-amp connections from the RLX 600ex chassis to an optional power-distribution unit; these connections feed the 6U chassis’ three power supplies. An optional RLX Control Tower XT management blade module can be installed, and we tested the blade servers with this option. Control Tower XT is the best management interface we’ve seen for Linux, but it’s an almost-$4,000 option and costs an extra $199 per managed node. Rapid provisioning, which is included free in Control Tower XT, is an extra-cost option in both the IBM and HP offerings.

A management LCD is used to initially configure the RLX 600ex chassis, and each RLX 2800i blade starts with an IP address keyed to the slot where it resides. With the Control Tower XT software installed, an HTTP logon is used to start Control Tower XT. Red Hat Linux Advanced Server was shipped on the blades we received at no charge, although a license key must subsequently be added to the installation.

Control Tower XT is the rough equivalent of IBM’s Management Module and Director software: a hardware/software combination that administers the chassis. It tracks faults via SNMP and the Intelligent Platform Management Interface specification. Like HP’s Insight Manager/iLO and IBM’s Director, Control Tower XT is used to administer, manage and provision the 2.8GHz HPC 2800i server blades. RLX blades also can PXE boot, and the process takes about the same time to load an operating-system image as it does on the other blades.

Control Tower XT manages each blade and its chassis characteristics. An initial load of blade-server information is fed into Control Tower XT; an auto-discovery feature finds blades and their IP addresses automatically. Blade servers talk to the Control Tower XT management blade over SSL via a third Ethernet port on each blade server, using a Control Tower Blade Agent that must be manually activated (once) on each blade. The management network must be kept private, because SNMP monitoring requires use of the unsecure “public” community name.

Once devices are discovered or their descriptions manually entered, they must be registered before they can be managed. SAN port management isn’t as useful as the controls offered with the other blade servers tested: SAN connections must be set manually, and both HP’s and IBM’s monitoring of SAN functionality was more complete and better integrated.

Like Insight Manager/iLO and Director, Control Tower XT makes it possible to control blades and components remotely once the devices are registered. Unlike Insight Manager/iLO, Control Tower XT lets users and groups be entered with an administrative class, and we found that Lightweight Directory Access Protocol (LDAP) user and group information can be imported quickly simply by pointing to the LDAP server with the correct credentials.
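
RLX doesn’t expose the import mechanism beyond “point at the LDAP server,” but the general pattern looks like the sketch below, which uses the Python ldap3 library and hypothetical server and base-DN values to pull usernames and groups the way a management console’s import feature would.

    from ldap3 import ALL, Connection, Server

    # Hypothetical directory details; substitute your own server, bind DN and bases.
    server = Server("ldap://ldap.example.net", get_info=ALL)
    conn = Connection(server, user="cn=admin,dc=example,dc=net",
                      password="secret", auto_bind=True)

    # Users: one entry per person, keyed by uid.
    conn.search("ou=people,dc=example,dc=net", "(objectClass=inetOrgPerson)",
                attributes=["uid", "cn"])
    users = [(entry.uid.value, entry.cn.value) for entry in conn.entries]

    # Groups: group name plus member DNs.
    conn.search("ou=groups,dc=example,dc=net", "(objectClass=groupOfNames)",
                attributes=["cn", "member"])
    groups = {entry.cn.value: entry.member.values for entry in conn.entries}

    print(f"Imported {len(users)} users and {len(groups)} groups")
    conn.unbind()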

The swordsmen clash

HP’s offering provides a degree of hardware flexibility that, in our opinion, exceeds that of IBM and RLX. IBM’s BladeCenter has the most useful and flexible management application, IBM Director, for blade deployments ranging from large data centers to remote branches. RLX has strong management capabilities but less hardware flexibility than IBM and HP; we’ve never seen a stronger management package for Linux-based servers, but that strength comes at a significant price.

All three are enterprise-class, and all three require adopting “their way of doing things.”

HS20 Blade Server, BladeCenter Chassis
OVERALL RATING: 4.7
Company: IBM. Cost: $9,386 as tested. Pros: Outstanding management; tightly integrated at all levels. Cons: Requires base Management Module blade; noisy airflow.

BL20p G2 Blade Server, BL30p Blade Server, p-Class Blade Enclosure
OVERALL RATING: 4.48
Company: HP. Cost: $21,432 (4 BL30p blades); $30,628 (4 BL20p blades); $10,610 for blade enclosure, power and other equipment. Pros: Highly flexible options; good overall management; easily installed and serviced. Cons: Slightly disjointed feel; needs sewing up.

HPC 2800i Blade Server, RLX System 600ex Chassis
OVERALL RATING: 4
Company: RLX. Cost: $10,535 as tested. Pros: Best Linux management we’ve seen; very good performance. Con: Components not as tightly integrated overall as the others.
The breakdown                IBM    HP     RLX
Management (40%)             5      4.25   3.75
Performance (25%)            4.5    4.25   4.25
Flexibility/features (20%)   4.5    5      4
Serviceability (15%)         4.5    4.75   4.25
TOTAL SCORE                  4.7    4.48   4
Scoring Key: 5: Exceptional; 4: Very good; 3: Average; 2: Below average; 1: Consistently subpar
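
The totals above are simply weighted averages of the category scores; the short calculation below reproduces the arithmetic.

    # Reproduce the TOTAL SCORE row from the category scores and their weights.
    weights = {"Management": 0.40, "Performance": 0.25,
               "Flexibility/features": 0.20, "Serviceability": 0.15}
    scores = {
        "IBM": {"Management": 5.00, "Performance": 4.50,
                "Flexibility/features": 4.50, "Serviceability": 4.50},
        "HP":  {"Management": 4.25, "Performance": 4.25,
                "Flexibility/features": 5.00, "Serviceability": 4.75},
        "RLX": {"Management": 3.75, "Performance": 4.25,
                "Flexibility/features": 4.00, "Serviceability": 4.25},
    }
    for vendor, per_category in scores.items():
        total = sum(weights[c] * per_category[c] for c in weights)
        print(f"{vendor}: {total:.3f}")   # rounds to 4.7, 4.48 and 4, as in the table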