The real hurdles in next-generation data center design

Opinion
Apr 06, 2004 | 3 mins
Data Center

* The biggest problems in designing a new data center: power, cooling and cables

One of the great things about working with IT executives is that it keeps you grounded. On a regular basis, you’re forced to emerge from the clouds of professional punditry and wrap your head around the issues that matter when it comes to getting things done in the real world. So let’s talk about what some of these issues might be for next-generation data centers.

Take power and HVAC. An ongoing challenge in designing next-generation data centers is sizing the power appropriately while arranging for adequate cooling and heat dissipation. Sounds prosaic? Maybe. But it’s also a big reason that several of the IT execs I’ve spoken with have elected not to roll out blade servers. Often, the power requirements are simply too great for the data center – typical blade-server deployments draw 10 to 30 kW per rack, while most data centers are designed for 1 to 5 kW per rack, which means blade servers can pretty quickly suck up the available power of a facility designed for conventional gear. (Amusingly, the Blade Server Summit ran into exactly this problem a few weeks back. The trade-show demos managed to bring the Wyndham San Jose hotel to its knees in a rather embarrassing series of blackouts, as reported at https://www.nwfusion.com/news/2004/0311powerthet.html.)
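
If you want to see how quickly the math goes sideways, here’s a back-of-the-envelope sketch in Python. The 1-to-5 kW and 10-to-30 kW per-rack figures come from the discussion above; the room size, rack count and 20 kW blade figure are purely illustrative assumptions:

```python
# Back-of-the-envelope power budgeting (illustrative numbers).

def racks_supported(total_power_kw: float, per_rack_kw: float) -> int:
    """How many racks of a given draw fit inside a fixed power budget."""
    return int(total_power_kw // per_rack_kw)

# Assume a room originally provisioned for 100 conventional racks at 5 kW each.
total_budget_kw = 100 * 5                      # 500 kW of usable power

print(racks_supported(total_budget_kw, 5))     # conventional racks: 100
print(racks_supported(total_budget_kw, 20))    # blade racks at 20 kW each: 25
```

In other words, a room provisioned for 100 conventional racks has the power budget for only about 25 fully loaded blade racks.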

Even if you’ve designed your data center to deliver that much power, there’s the cooling problem. No arguing with the laws of thermodynamics – energy in equals energy out, and all that power density translates into massive heat, which means mega-refrigeration. Several of the IT executives I’ve spoken with say they’ve opted against blade servers due to the cooling requirements.
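
The refrigeration math is just as unforgiving. Every kilowatt you feed a rack has to be pumped back out as heat, and the conversion below uses the standard constants (roughly 3,412 BTU/hr per kW of load, 12,000 BTU/hr per ton of refrigeration); the 5 kW and 20 kW rack loads are the same illustrative figures as above:

```python
# Every watt delivered to the rack comes back out as heat the HVAC plant must remove.
BTU_PER_HR_PER_KW = 3412.14     # 1 kW of electrical load ~ 3,412 BTU/hr of heat
BTU_PER_HR_PER_TON = 12_000     # 1 ton of refrigeration = 12,000 BTU/hr

def cooling_tons(it_load_kw: float) -> float:
    """Tons of refrigeration needed to remove a given IT load."""
    return it_load_kw * BTU_PER_HR_PER_KW / BTU_PER_HR_PER_TON

print(f"{cooling_tons(5):.1f} tons per rack")   # conventional rack: ~1.4 tons
print(f"{cooling_tons(20):.1f} tons per rack")  # blade rack: ~5.7 tons
```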

That’s not all. Cable management can be outrageously complex. Yes, that’s right – cable management. That’s the fancy phrase for “making sure cables are plugged into the right sockets,” which is a lot more complicated than it sounds. A single server might have separate cables for power, multiple network connections, KVM, SAN, and network management – not to mention redundant connections for higher availability. Density exacerbates the problem: not only are there more cables to fiddle with, but there’s also less space in which to route them.
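
A quick count shows how the cables pile up. The per-server connection list follows the paragraph above; the specific counts and server densities are my own illustrative assumptions:

```python
# Rough cable count per rack (illustrative counts and densities).
cables_per_server = {
    "power (redundant)": 2,
    "network (redundant)": 2,
    "KVM": 1,
    "SAN (redundant)": 2,
    "management": 1,
}
per_server = sum(cables_per_server.values())    # 8 cables per server

for servers_per_rack in (10, 42, 84):           # roughly 4U, 1U and blade densities
    print(f"{servers_per_rack:3d} servers/rack -> {servers_per_rack * per_server} cables")
```

At 1U or blade densities you’re quickly into hundreds of cables per rack, with less and less room to dress them.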

Oh, and did we mention battery management? This is a classic one. An IT executive at a government agency reports he has no way of knowing when batteries are exhausted. SNMP management of the power infrastructure would be great – except his organization won’t allow SNMP traffic over the WAN, for security reasons. And he can’t convince the higher-ups to spring for a battery-management service. So he’s effectively reduced to crossing his fingers and hoping he can guess which batteries are about to die.
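
For the record, here’s a rough sketch of the SNMP polling his security policy rules out – reading the standard UPS-MIB (RFC 1628) battery objects with net-snmp’s snmpget. The host name, community string and the choice to wrap it in Python are my own assumptions, not anything his agency actually runs:

```python
# Sketch: poll UPS-MIB (RFC 1628) battery objects via net-snmp's snmpget.
# Host and community string are illustrative; thresholds and alerting are omitted.
import subprocess

UPS_MIB_BATTERY = {
    "upsBatteryStatus": "1.3.6.1.2.1.33.1.2.1.0",              # 1=unknown 2=normal 3=low 4=depleted
    "upsEstimatedMinutesRemaining": "1.3.6.1.2.1.33.1.2.3.0",
    "upsEstimatedChargeRemaining": "1.3.6.1.2.1.33.1.2.4.0",   # percent
}

def poll_ups(host: str, community: str = "public") -> dict:
    """Query each battery OID on one UPS and return the raw values."""
    readings = {}
    for name, oid in UPS_MIB_BATTERY.items():
        out = subprocess.run(
            ["snmpget", "-v2c", "-c", community, "-Oqv", host, oid],
            capture_output=True, text=True, check=True,
        )
        readings[name] = out.stdout.strip()
    return readings

if __name__ == "__main__":
    print(poll_ups("ups-1.example.org"))
```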

All these issues and more came up at the recent Network World New Data Center Tech Tour, where I delivered the keynote. Catch the next two stops, April 27 in Denver and April 29 in San Diego (see https://www.nwfusion.com/events/itseminars.html to register). We’ll be talking about best practices for next-generation data center architecture and design, from the nuts and bolts all the way through strategic (and futuristic) technologies like virtualization and grid computing. Hope to see you there.

Johnson can be reached at johna@nemertes.com.