Blades attack data center

Feature
Feb 24, 2003 | 8 mins
Networking

Blade servers can ease management and optimize space, but might not be ready for high-end processing.

Dwight Gibbs, director of technology acceleration at Capital One in McLean, Va., says the combination of blade server hardware and management software allows him to deploy new Web servers in minutes, and to do automated patch management on 20 servers at once.

Appro Systems, an application service provider specializing in financial lending applications, is using high-density blades to fit the processing power of 20 servers into the space that previously held three rack-mounted servers. This allowed Appro Systems to increase the capacity of its data center from 350 to more than 600 customers, without adding space or power.

And blade server technology allowed Gator.com to add more than 400 new servers without having to lease additional colocation space, for a savings of $24,000 a month.

These companies and others are turning to blades to shave server management costs, trim space requirements, and cut the tangle of cables and wires in the data center.

Early blades appeared in fall 2001 from Egenera and RLX Technologies, and focused on high-density, low-power processing for driving front-end applications such as Web serving. Blade technology earned its stamp of approval when HP, IBM and Dell came out with blades last year. Sun released a blade server earlier this month.

Individual blades have evolved from one- to two-processor systems and have added management features that automate server processes. Today, blades are capable of replacing traditional 2U servers for a variety of applications. And IBM announced plans to ship a four-way Intel blade later this year.

John Madden, senior analyst for Summit Strategies, says blades address a variety of customer issues. “Customers are looking for more flexibility and better use of space,” he says. He adds that improved management features help customers deploy servers quickly, and perform remote management, metering and monitoring.

Longer term, some analysts see blades taking on basic network routing and server load-balancing functions. For example, IBM plans to embed a Layer 4/Layer 7 LAN switch module in its blade chassis.

Having the network and storage connections included in the backplane is significant, says IDC analyst John Humphreys. “The fact that these systems have switches in them . . . replaces a whole tier of switches in your data center.”

Management is Job One

Customers agree that one big advantage of blade servers over traditional rack-mount servers is ease of management. In a blade system, multiple blades plug into a chassis with its own backplane and bus architecture. Power supply, network and storage connections are shared among all the blades.

Customers can perform automated software upgrades, patch management and server setups on multiple servers within the chassis.

Gibbs has used RLX 300ex System and ServerBlades at previous jobs and plans to evaluate blades at his current employer. He says that deploying Web servers with RLX’s Control Tower software takes a matter of minutes, and Control Tower helps him install security patches on numerous Linux servers.

“Five minutes to deploy patches is a tremendous boon for management,” he says, compared with patching each server individually. “I can control a whole rack of servers from one blade . . . and keep a spare pool of blades on standby for doing database replication, launching test servers or adding Web servers. The blade dies, and I just pull it out and pop in another.”
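Control Tower’s actual interface isn’t shown in the story, but the pattern it automates, pushing one patch to many Linux blades at once rather than one server at a time, can be sketched roughly as follows. The host names and patch command here are hypothetical, not from Gibbs’ setup:

```python
# Hypothetical sketch of parallel patch deployment; this is not Control
# Tower's API. Host names and the patch command are invented for illustration.
import subprocess
from concurrent.futures import ThreadPoolExecutor

BLADES = [f"blade{n:02d}.example.net" for n in range(1, 21)]  # 20 blades, one chassis
PATCH_CMD = "sudo yum -y update openssl"  # stand-in for a security patch

def patch(host):
    """Run the patch command on one blade over SSH; return (host, exit code)."""
    result = subprocess.run(["ssh", "-o", "BatchMode=yes", host, PATCH_CMD],
                            capture_output=True, text=True)
    return host, result.returncode

# Patch every blade in the chassis concurrently instead of one at a time.
with ThreadPoolExecutor(max_workers=len(BLADES)) as pool:
    for host, code in pool.map(patch, BLADES):
        print(f"{host}: {'patched' if code == 0 else f'failed ({code})'}")
```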

Humphreys says blade servers, such as IBM’s eServer BladeCenter managed by IBM Director software, offer solid hardware performance and money-saving server management features. “With IBM Director, you have a streamlined way to manage anywhere from 10 to 20 servers in one chassis. Before you were doing that one server at a time,” he says.

Space, the final frontier

Blade servers also help IT manage the use of space in data centers, and troubleshooting is easier because cable clutter is reduced. “If you’ve got 42 1U boxes in a rack and you’re trying to troubleshoot a hardware problem, you’ve got to trace the wires and that can get pretty ugly,” Gibbs says.

A blade chassis offers power and network connections that are shared among all the blades, eliminating the need for additional cabling. In traditional server setups with hundreds of servers, cables clutter the data center, Gibbs says.

On the other hand, easing cable management isn’t a top blade-server draw for IT at Devon Energy. Brad Whitley, Intel systems supervisor for the oil and gas producer in Oklahoma City, says he keeps cables neat by installing ceiling trays.

However, the ability to reduce the amount of equipment by using blades is a benefit, he says. Through acquisitions, the number of servers in his data center has doubled every year, which also means double the number of keyboards, monitors and mice. “That’s extra equipment that you have to keep,” Whitley says. Blades, by contrast, get their power, monitor, keyboard and mouse connections through the shared chassis.

Appro has optimized its rack and data center space since deploying HP’s ProLiant one-processor blade servers last year, says Richard Caronna, senior consultant and former vice president of delivery services for the Baton Rouge, La., company.

Caronna is putting 20 servers in the same space that contained three HP DL320 1U servers. Appro’s data center originally was designed around the bigger HP ProLiant 1600s, with power to handle about 210 customers, he says. “Transitioning to the DL320s got that number to about 350; now we’re at a capacity with blade servers that we can push over 600 customers in our data center.”

Money-saving features

While the hard cost of buying a chassis and the blades to populate it is roughly the same as that of traditional servers, Caronna has seen savings in other areas. For example, with a blade chassis there are two power supplies that all 20 blades share. Comparable 2U server systems require 40 power supplies, two for each server.

Appro avoided spending an additional $200,000 because it didn’t have to add a new uninterruptible power supply system. “The power requirements per server have decreased by at least 50% with blade servers,” Caronna says.

Appro purchased the gigabit backplane option with its HP blade servers. The backplane has four Gigabit Ethernet ports that provide throughput comparable to traditional server setups. “We can plug that up to our switches. You end up with very similar throughput,” Caronna says. “But at the same time, we’ve gone from 40 wires to four.”
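The consolidation math behind Caronna’s figures is simple enough to tally; note that the two-cables-per-server count below is an assumption for illustration, not a number from the article:

```python
# Back-of-the-envelope tally of the consolidation Caronna describes. The
# two-cables-per-server figure is an assumption, not from the article.
BLADES_PER_CHASSIS = 20

rack_power_supplies = BLADES_PER_CHASSIS * 2   # two per traditional server = 40
blade_power_supplies = 2                        # two, shared by all 20 blades

rack_network_cables = BLADES_PER_CHASSIS * 2   # assumed two per server = 40 wires
blade_network_cables = 4                        # four Gigabit uplinks on the backplane

print(f"Power supplies: {rack_power_supplies} -> {blade_power_supplies}")
print(f"Network cables: {rack_network_cables} -> {blade_network_cables}")
```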

Consolidation of equipment with blade servers is key to reducing costs. Where the DL320s required purchasing the base system, along with added memory and hard drive, “Now the blade is a package deal, with everything we need on it,” he says. “It has more memory than we were putting in the servers before and enough hard-drive space.”

Gator.com, of Redwood City, Calif., saved $24,000 per month in colocation costs through its rollout of 22 RLX blade server systems. Gator uses RLX 800i Intel blades and RLX 657 Transmeta blades for Web hosting, and Web and application serving, says Tony Martin, vice president of engineering for the Internet ad-serving provider. The blade rollout allowed IT to add more than 400 servers without having to lease a new cage.

He adds, “Rack space is expensive at colocation facilities. With 2U servers, we filled these up really quickly. You can take out the existing 2U servers and put in two RLX chassis and still have three-quarters of a rack left.”

David Richter, vice president of infrastructure and application support for Harrah’s Entertainment in Las Vegas, plans to roll out blade servers this year to improve CPU utilization on its reservations system, where call volume varies greatly. “We’ll be able to dynamically run applications on any number of servers as demand varies through the day. With the old model you had to have enough boxes, enough horsepower dedicated to the application to handle the peak time. Most of the time you just have spare power sitting there unused.”
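Richter doesn’t detail the mechanism, but the pattern he describes, drawing blades from a spare pool as call volume climbs and releasing them as it falls, might look something like this hypothetical sketch. The pool size, per-blade capacity and load source are all invented for illustration:

```python
# Hypothetical sketch of demand-based blade allocation; not Harrah's actual
# tooling. The pool size, per-blade capacity and load source are invented.
import random
import time

spare_pool = [f"blade{n:02d}" for n in range(1, 11)]  # idle blades on standby
active = [spare_pool.pop(), spare_pool.pop()]          # minimum footprint

CALLS_PER_BLADE = 100  # assumed capacity of one blade

def calls_per_minute():
    """Stand-in for a real load metric from the reservations system."""
    return random.randint(50, 500)

while True:
    load = calls_per_minute()
    needed = max(2, -(-load // CALLS_PER_BLADE))   # ceiling division, floor of two
    while len(active) < needed and spare_pool:      # scale up from the spare pool
        active.append(spare_pool.pop())
        print(f"activated {active[-1]} at {load} calls/min")
    while len(active) > needed:                     # return idle blades to the pool
        spare_pool.append(active.pop())
        print(f"idled {spare_pool[-1]} at {load} calls/min")
    time.sleep(60)  # re-evaluate once a minute
```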

But Richter says blade servers are still early in their life cycle, and aren’t ready to support high-end applications such as Harrah’s Exchange server environment, which has consistent large volumes and 24-7 access needs.

Madden agrees that blade servers aren’t ready today for heavy-duty transaction processing, high-availability applications or applications that require large amounts of storage.

Challenges ahead

Blade servers face several challenges before they conquer the data center. First, there are no standards allowing users to plug one vendor’s blade into another’s chassis.

Performance is an issue. “They just have a lot to prove when it comes to these systems, not only in terms of price, but performance,” Madden says.

Initial costs aren’t any better than those of traditional servers, although there’s a case to be made for blades saving money on the management side.

And blades still have to prove that they can scale up to high-end database applications. “Data centers aren’t moving to an all-blade architecture any time soon,” Madden says.

But blade servers will have a place, Humphreys says, and IDC estimates that 20% of server shipments will go out in blade form factors in 2006.
