Elastic IT resources transform data centers

Several IT trends converge as data centers evolve to become more adaptable, Gartner says

The enterprise data center of the future will be a highly flexible and adaptable organism, responding quickly to changing needs because of technologies like virtualization, a modular building approach, and an operating system that treats distributed resources as a single computing pool.

The move toward flexibility in all data center processes, discussed extensively by analysts and IT professionals at Gartner's 27th annual data center conference, comes after years of building monolithic data centers that react poorly to change.

"For years we spent a lot of money building out these data centers, and the second something changed it was: 'How are we going to be able to do that?'" says Brad Blake, director of IT at Boston Medical Center. "What we've built up is so specifically built for a particular function, if something changes we have no flexibility."

Rapidly changing business needs and new technologies that require extensive power and cooling are necessitating a makeover of data centers, which represent a significant chunk of an organization's capital costs, Blake notes.

For example, he says, "when blade servers came out that completely screwed up all of our matrices as far as the power we needed per square foot, and the cooling we needed because these things sucked up so much energy, used so much heat."

Virtualization of servers, storage, desktops and the network is the key to flexibility in Blake's mind, because hardware has long been tied too rigidly to specific applications and systems.

But the growing use of virtualization is far from the only trend making data centers more flexible. Gartner expects to see today's blade servers replaced in the next few years with a more flexible type of server that treats memory, processors and I/O cards as shared resources that can be arranged and rearranged as often as necessary.

Instead of relying on vendors to decide what proportion of memory, processing and I/O connections are on each blade, enterprises will be able to buy whatever resources they need in any amount, a far more efficient approach.

For example, an IT shop could combine 32 processors and any number of memory modules to create one large server that appears to an operating system as a single, fixed computing unit. This approach also will increase utilization rates by reducing the resources wasted because blade servers aren't configured optimally for the applications they serve.
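
As a rough illustration of that composable model, the Python sketch below treats processors, memory and I/O cards as a shared pool from which a logical server is assembled on demand and later returned. The class and method names are invented for this example and do not correspond to any vendor's product.

```python
# Hypothetical sketch of a composable server pool: processors, memory modules
# and I/O cards are shared resources that can be grouped into logical servers
# and handed back to the pool when no longer needed.

class ResourcePool:
    def __init__(self, processors, memory_gb, io_cards):
        self.processors = processors   # free CPU sockets
        self.memory_gb = memory_gb     # free memory, in GB
        self.io_cards = io_cards       # free I/O cards

    def compose_server(self, cpus, memory_gb, io):
        """Carve a logical server out of the shared pool."""
        if cpus > self.processors or memory_gb > self.memory_gb or io > self.io_cards:
            raise RuntimeError("not enough free resources in the pool")
        self.processors -= cpus
        self.memory_gb -= memory_gb
        self.io_cards -= io
        return {"cpus": cpus, "memory_gb": memory_gb, "io_cards": io}

    def release_server(self, server):
        """Return a logical server's resources to the pool."""
        self.processors += server["cpus"]
        self.memory_gb += server["memory_gb"]
        self.io_cards += server["io_cards"]


pool = ResourcePool(processors=64, memory_gb=2048, io_cards=32)

# One large server -- for example, 32 processors plus whatever memory the
# workload calls for -- is presented to the operating system as a single unit.
big_server = pool.compose_server(cpus=32, memory_gb=1024, io=8)
print(big_server, "remaining CPUs:", pool.processors)
```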

Data centers will also become more flexible through a modular approach that separates the facility into self-contained pods or zones, each with its own power feed and cooling.

The concept is similar to shipping container-based data centers, but data center zones don't have to be enclosed. By not treating a data center as a homogeneous whole, it is easier to separate equipment into high, medium and low heat densities, and devote expensive cooling only to the areas that really need it.

Additionally, this separation allows zones to be upgraded or repaired without causing other systems to go offline.

"Modularization is a good thing. It gives you the ability to refresh continuously and have higher uptime," Gartner analyst Carl Claunch said.

This approach can involve incremental build-outs, building a few zones and leaving room for more when needed. But you're not wasting power because each zone has its own power feed and cooling supply, and empty space is just that. This is in contrast to long-used design principles, in which power is supplied to every square foot of a data center even if it's not yet needed.

"Historical design principles for data centers were simple — figure out what you have now, estimate growth for 15 to 20 years, then build to suit," Gartner states. "Newly built data centers often opened with huge areas of pristine white floor space, fully powered and backed up by a UPS, water and air cooled, and mostly empty. With the cost of mechanical and electrical equipment, as well as the price of power, this model no longer works."

While the zone approach assumes that each section is self-contained, that doesn't mean the data center of the future will be fragmented. Gartner predicts that corporate data centers will be operated as private "clouds," flexible computing networks which are modeled after public providers such as Google and Amazon yet are built and managed internally for an enterprise's own users.

By 2012, Gartner predicts that private clouds will account for at least 14% of the infrastructure at Fortune 1000 companies, which will benefit from service-oriented, scalable and elastic IT resources.

Private clouds will need a meta operating system to manage all of an enterprise's distributed resources as a single computing pool, Gartner analyst Thomas Bittman said, arguing that the server operating system relied upon so heavily today is undergoing a transition. Virtualization became popular because of the failures of x86 server operating systems, which essentially limit each server to one application and waste tons of horsepower, he says. Now spinning up new virtual machines is easy, and they proliferate quickly.

"The concept of the operating system used to be about managing a box," Bittman said. "Do I really need a million copies of a general purpose operating system?"

IT needs server operating systems with smaller footprints, customized to specific types of applications, Bittman argued. With some functionality stripped out of the general purpose operating system, a meta operating system to manage the whole data center will be necessary.

The meta operating system is still evolving but is similar to VMware's new Virtual Datacenter Operating System. Gartner describes the concept as "a virtualization layer between applications and distributed computing resources … that utilizes distributed computing resources to perform scheduling, loading, initiating, supervising applications and error handling."
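
The sketch below, again hypothetical Python rather than any real product's interface, illustrates the scheduling and supervision role that description assigns to a meta operating system: application workloads are placed across a shared pool of hosts, and workloads on a failed host are restarted elsewhere.

```python
# Hypothetical meta-OS scheduler sketch: place application workloads across a
# pool of hosts and restart any that fail on another host. Names and structure
# are invented for illustration.

class Host:
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity   # free capacity, in arbitrary units
        self.workloads = {}

class MetaScheduler:
    def __init__(self, hosts):
        self.hosts = hosts

    def schedule(self, app, demand):
        """Place an application on the host with the most free capacity."""
        host = max(self.hosts, key=lambda h: h.capacity)
        if host.capacity < demand:
            raise RuntimeError(f"no host can run {app}")
        host.capacity -= demand
        host.workloads[app] = demand
        return host.name

    def handle_failure(self, failed):
        """Error handling: move a failed host's workloads to healthy hosts."""
        self.hosts = [h for h in self.hosts if h is not failed]
        for app, demand in failed.workloads.items():
            print(f"restarted {app} on {self.schedule(app, demand)}")


pool = [Host("host-a", 16), Host("host-b", 16), Host("host-c", 16)]
meta = MetaScheduler(pool)

for app, demand in [("web", 4), ("db", 8), ("batch", 6)]:
    print(f"{app} scheduled on {meta.schedule(app, demand)}")

meta.handle_failure(pool[1])   # simulate the second host failing
```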

All these new concepts and technologies – cloud computing, virtualization, the meta operating system, building in zones and pods, and more customizable server architectures – are helping build toward a future when IT can quickly provide the right level of services to users based on individual needs, and not worry about running out of space or power. The goal, Blake says, is to create data center resources that can be easily manipulated and are ready for growth.

"It's all geared toward providing that flexibility because stuff changes," he says. "This is IT. Every 12 to 16 months there's something new out there new and we have to react."

Learn more about this topic

Gartner's Top 10 disruptive data-center technologies

Number crunching: Stats about energy consumption, virtualization and cloud computing

Private cloud networks are the future of corporate IT

The Google-ization of Bechtel
