Network World - The enterprise data center of the future will be a highly flexible and adaptable organism, responding quickly to changing needs because of technologies like virtualization, a modular building approach, and an operating system that treats distributed resources as a single computing pool.
The move toward flexibility in all data center processes, discussed extensively by analysts and IT professionals at Gartner's 27th annual data center conference, comes after years of building monolithic data centers that react poorly to change.
"For years we spent a lot of money building out these data centers, and the second something changed it was: 'How are we going to be able to do that?'" says Brad Blake, director of IT at Boston Medical Center. "What we've built up is so specifically built for a particular function, if something changes we have no flexibility."
Rapidly changing business needs and new technologies that require extensive power and cooling are necessitating a makeover of data centers, which represent a significant chunk of an organization's capital costs, Blake notes.
For example, he says, "When blade servers came out, that completely screwed up all of our metrics as far as the power we needed per square foot, and the cooling we needed, because these things sucked up so much energy and gave off so much heat."
Virtualization of servers, storage, desktops and the network is the key to flexibility in Blake's mind, because hardware has long been tied too rigidly to specific applications and systems.
But the growing use of virtualization is far from the only trend making data centers more flexible. Gartner expects to see today's blade servers replaced in the next few years with a more flexible type of server that treats memory, processors and I/O cards as shared resources that can be arranged and rearranged as often as necessary.
Instead of relying on vendors to decide what proportion of memory, processing and I/O connections are on each blade, enterprises will be able to buy whatever resources they need in any amount, a far more efficient approach.
For example, an IT shop could combine 32 processors and any number of memory modules to create one large server that appears to an operating system as a single, fixed computing unit. This approach also will increase utilization rates by reducing the resources wasted because blade servers aren't configured optimally for the applications they serve.
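The shared-pool model described above can be sketched in a few lines of code. This is a conceptual illustration only, not any vendor's API; the `ResourcePool` class, its resource names, and the quantities are all hypothetical.

```python
# Conceptual sketch (hypothetical, not a real product API): a shared pool
# of processors, memory modules, and I/O cards from which logical servers
# are composed on demand and released back when no longer needed.

class ResourcePool:
    def __init__(self, processors, memory_modules, io_cards):
        self.free = {"cpu": processors, "mem": memory_modules, "io": io_cards}
        self.servers = []

    def compose_server(self, cpu, mem, io):
        """Carve a logical server out of the shared pool, if resources allow."""
        need = {"cpu": cpu, "mem": mem, "io": io}
        if any(self.free[k] < v for k, v in need.items()):
            return None  # not enough free resources to satisfy the request
        for k, v in need.items():
            self.free[k] -= v
        self.servers.append(need)
        return need

    def release(self, server):
        """Return a logical server's resources to the pool for reuse."""
        self.servers.remove(server)
        for k, v in server.items():
            self.free[k] += v

pool = ResourcePool(processors=64, memory_modules=128, io_cards=16)
big = pool.compose_server(cpu=32, mem=96, io=8)  # one large logical server
print(pool.free)   # {'cpu': 32, 'mem': 32, 'io': 8}
pool.release(big)  # resources go back to the pool when workloads change
```

The point of the sketch is the efficiency claim in the text: because proportions are chosen per request rather than fixed per blade, fewer resources sit stranded in configurations that don't match the applications they serve.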
Data centers will also become more flexible through a modular approach that separates them into self-contained pods or zones, each with its own power feeds and cooling.
The concept is similar to shipping container-based data centers, but data center zones don't have to be enclosed. Treating the data center as a set of zones rather than a homogeneous whole makes it easier to separate equipment into high, medium and low heat densities, and to devote expensive cooling only to the areas that really need it.
Additionally, this separation allows zones to be upgraded or repaired without causing other systems to go offline.
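The zoning idea above amounts to bucketing equipment by heat density and provisioning cooling per zone. A minimal sketch, with the kW-per-rack thresholds and rack names as purely illustrative assumptions (they are not a standard):

```python
# Hedged sketch: group racks into self-contained zones by heat density so
# that expensive cooling is applied only where it is actually needed.
# Thresholds and rack names below are illustrative assumptions.

def classify_zone(kw_per_rack):
    """Bucket a rack into a density tier by its power draw."""
    if kw_per_rack >= 15:
        return "high"
    if kw_per_rack >= 5:
        return "medium"
    return "low"

racks = {"blade-row-1": 22.0, "storage-row": 7.5, "tape-library": 2.0}
zones = {"high": [], "medium": [], "low": []}
for name, kw in racks.items():
    zones[classify_zone(kw)].append(name)

print(zones)
# Cooling is then provisioned per zone, and because each zone has its own
# power and cooling, one zone can be upgraded or repaired in isolation.
```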