
Drama at the computing core

Feb 16, 2004

The stage is set for delivery of on-demand’s grand promises – more efficient and flexible use of IT hardware and software.

From the beginning, IT executives at Boscov’s department store have had a mainframe bias. Today, as they think about evolving this family-owned retail chain’s data center into a more flexible, business-driven computing resource, little has changed: They consider the mainframe more important than ever.

That might come as a surprise to the IT executives who consider the mainframe a dinosaur. But when weighing which core computing platforms are best suited to support the new data center, twists on the old become newly viable options.

“The mainframe will stay, but its role will be substantially different from what it is today,” says Joe Poole, technical director for Boscov’s in Reading, Pa. Mainframe workloads will shift from traditional batch jobs to a more fluid environment. For instance, Boscov’s is merging Linux and the mainframe. The company deployed Linux on its IBM z900 mainframe in 2001 and began turning processes previously run on Windows NT servers into Linux instances. It has consolidated about 40 of its roughly 70 NT servers onto the mainframe. By using middleware such as IBM’s MQSeries, transactions can flow from machine to machine, he says.
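The pattern Poole describes — transactions flowing between machines via queued messages, so neither side needs to know what platform the other runs on — can be sketched in miniature. The snippet below is an illustration only, using Python’s in-process queue as a stand-in for MQSeries; a real MQSeries deployment involves queue managers and channels spanning separate machines.

```python
import queue
import threading

# Stand-in for an MQSeries queue: producers put messages on,
# consumers take them off. The producer (front-end app) and the
# consumer (Linux instance on the mainframe) stay decoupled.
order_queue = queue.Queue()

def front_end_producer(orders):
    """Front-end application drops transactions onto the queue."""
    for order in orders:
        order_queue.put(order)
    order_queue.put(None)  # sentinel: no more work

def mainframe_consumer(results):
    """Back-end process drains the queue and handles each message."""
    while True:
        msg = order_queue.get()
        if msg is None:
            break
        results.append(f"processed {msg}")

results = []
worker = threading.Thread(target=mainframe_consumer, args=(results,))
worker.start()
front_end_producer(["order-1", "order-2", "order-3"])
worker.join()
print(results)
```

Because the queue is the only point of contact, either side can be moved — an NT process rehosted as a Linux instance, say — without the other side changing.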

Letting business processes flow, regardless of hardware and operating system, is behind lofty vendor strategies for pooling computer resources that grow and shrink in response to demand. System vendors are busy promoting their on-demand programs – HP with its Adaptive Enterprise, IBM with eBusiness on Demand and Sun with N1 – but analysts say the concept won’t be reality for many years. As a result, users today should focus on establishing the core computing platforms that will lay the foundation for that eventuality.

For its part, Boscov’s is considering buying mainframe capacity on demand from IBM and virtualizing its Windows servers. These technologies would reduce management headaches while ensuring that communication among all servers and the mainframe continues and that the infrastructure is used efficiently. That matters because Boscov’s expects transaction volume to jump significantly as it brings technologies such as radio frequency identification and wireless to its 39 stores.

Little by little

Beyond open source operating systems, IT executives have numerous other core computing options for moving from the status quo to a new data center that is easier to manage and that can support services-oriented and Web-enabled applications. These include industry-standard 64-bit server platforms, server clusters, blade servers, grid computing and server virtualization.

Analysts and other industry observers suggest that IT executives attack such decisions one at a time, rolling out pilot projects to see what works where and then figuring out how componentized portions of the data center can work together as an integrated whole.

“You have to make this process decision tree that says, for example, ‘Am I going to move away from [symmetric multiprocessing] to scale out, and, if so, where does Linux or clustering fit in, and where do some of the database capabilities running on a clustered environment fit in?’ ” says Vernon Turner, group vice president for global enterprise server solutions at IDC. “So you’re starting to break down your data center into the smallest manageable components. That’s important because in the utility environment you have to be able to bill out in as small increments as possible.”

IT executives at financial publishing firm Bowne & Co. knew they needed to address server inefficiencies. But they decided to start fixing the problem one application at a time.

The company had built up enough capacity to handle spikes in demand from the printing of quarterly and annual financial statements, but that left the servers underutilized for most of the year.

The publisher considered bringing in blade servers, but after carefully analyzing application demands and infrastructure capabilities, determined that a grid architecture likely would be a better choice, says Ruth Harenchar, CIO at Bowne. And so the New York company decided to deploy a grid, starting small.

Working with IBM and grid software maker DataSynapse, Bowne figured out that the statement-processing portion of its proprietary typesetting application would work best in a grid environment. It then determined which servers to use for the grid, based on application load and utilization. “We had to find servers that had a similar configuration, the same operating system – in our opinion, we needed to have a minimal number of variables with the grid to work with in a pilot,” Harenchar says.

Since migrating that application from a Dell PowerEdge 1150 to a grid of two PowerEdge 2650 servers, processing time has dropped by 50%, she says. Next Bowne plans to spread the application across a grid of 10 servers, reducing processing time by another 40%, she adds.
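The core of what Bowne’s grid does — scatter independent statement-processing jobs across nodes and gather the results — can be sketched as follows. This is a hedged illustration, not DataSynapse’s API: worker threads stand in for the grid’s server nodes, and `typeset_statement` is a hypothetical stand-in for the CPU-heavy processing step.

```python
from concurrent.futures import ThreadPoolExecutor

def typeset_statement(statement_id):
    """Hypothetical stand-in for processing one financial statement."""
    return f"statement-{statement_id}:done"

# The grid scheduler's job, in miniature: fan independent jobs out
# across workers (here, 4 threads standing in for server nodes) and
# collect the results in order.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(typeset_statement, range(8)))
print(results)
```

The approach works precisely because the statements are independent of one another — the same property that made this portion of Bowne’s typesetting application a good first candidate for the grid.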

Before the grid, the statement-processing application ran on a server at a very low utilization rate, and when traffic spiked, performance suffered. Harenchar says she is quite happy with the performance improvements from the grid and the more efficient use of her hardware. She plans to expand the use of grid technology within her data center.

The hot technologies

IT executives will embrace these core new data center technologies in the next few years.

Companies that don’t use server virtualization technologies will spend 25% more annually for hardware, software, labor and space for Intel servers and 15% more for RISC servers by 2008, Gartner says.


41,000 blades were sold in the second quarter of 2003, accounting for just 3% of the overall server market, The Yankee Group says. By 2007, more than 2 million blades will be purchased, accounting for more than a quarter of all servers sold, Yankee says.


Worldwide customer revenue for server consolidation will grow from $5.2 billion in 2003 to $8.5 billion in 2006, with the bulk of consolidation happening with Unix servers, IDC says.

Harenchar attributes Bowne’s success with the grid to a clear understanding of what it was trying to achieve. “Having set out our criteria and our objectives, we were able to pick the right application and the right servers, and things went quite smoothly,” she says.

Standards-based approach

When choosing among platforms such as grid computing and blade servers, some analysts say integration and flexibility issues could lead a company to hold off on deploying the tiny servers.

A lack of chassis-design standards, which locks buyers into a specific vendor’s products, plus the heavy power demands of the compact blades, can be troublesome, they say. That lack of standards stands in the way of a truly adaptive infrastructure, but interoperability efforts are underway.

In December a new Distributed Management Task Force group, led by Dell, HP, IBM and Intel, began studying ways to manage heterogeneous servers, regardless of platform. This server management working group plans to deliver its first specifications by the beginning of July.

Standardization is one of the reasons why First Trust, an independent trust company in Denver, scrapped its 32-bit IBM Unix boxes and moved a transaction-processing database onto Itanium-based servers from HP, says Jeff Knight, the firm’s vice president of technology and vendor relations. Use of standards-based 64-bit systems, which handle more memory and processing on each chip, has let First Trust improve performance and save on licensing costs.


The necessary first step

One expert shares advice on migrating to a new data center architecture.

When it comes to evaluating your approach to the new data center architecture, think in terms of consolidation, says Johna Till Johnson, president and chief research officer at Nemertes Research, and keynote speaker for Network World’s New Data Center Technology Tour.


The move to standards-based systems also has streamlined data center operations. “It’s given us a common architecture in development, testing and deployment,” Knight says. “The fact that it’s an industry standard product – the architecture is industry standard, the way the software is moving is industry standard – it really allows us to have a more cohesive data center instead of having to have a specialty product for this business and a specialty product for that business.”

But Knight cautions peers not to jump on new data center technologies before they’re sufficiently proven. “While you always want to lead with ability to deliver great services,” he says, “you want to make sure that there is going to be a world around you that can help you get there and then help you maintain it once you do get there.”