Why Data Centers Must Fundamentally Change -- Part 2.1

Grounds Up: Focus on the Floor

Raised Floor Data Center

I started writing this section, and when John Peach from the UK commented to me, "Doug, this isn't a blog, it is a dissertation," I felt I needed to shorten it a bit to make it more consumer-friendly.  Well, I didn't want to shorten it too much, in case I lost something important, so it was much easier to break it up into chapters, a relatively easy way out.

Before diving into network details (comparing and contrasting addressing changes, tunneling mechanisms, mobile workloads, new levels of abstraction/indirection, and upcoming standards on topology construction) I wanted to spend a brief moment talking about the fundamental foundation of the data center itself: the floor.

Why do we use raised floors?  Doesn't it seem somewhat non-intuitive to take COLD AIR and force it up?  Doesn't cold air naturally like to go down?  This has always struck me as somewhat odd, but it didn't come into play in a major way for me until I toured a data center specifically designed to support high-density compute workloads.

One of the many things they did to optimize the data center and lower the PUE (power usage effectiveness) was to fill the entire room with cold air, then pull that air through the equipment into a contained hot aisle, almost like a chimney that went up into a false ceiling.  This has several results: a lower PUE and thus a more economical operating cost, no weight issues with heavier storage and compute systems (ever seen a new disk array fall 3 feet as the raised floor collapses under it?), and a much more comfortable operating environment.

This data center was able to achieve a power density north of 1,500 W/sq ft (most seem to be in the 250-300 W/sq ft range today).  This may seem like overkill to you, but when you consider that a fully loaded rack of modular switches or blade servers can consume ~30,000 W, it starts making sense.  If you watch the movie The Right Stuff, set among the test pilots of the late 1940s and 1950s, they talk about a 'demon that lived at Mach 1'; well, in the data center world there is a 'demon that lives at 32 kW.'
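For readers who like to check the arithmetic, these figures tie together with two small formulas: floor power density is simply rack draw divided by allocated floor area, and the airflow needed to carry away a given heat load follows the standard sensible-heat relation for air, CFM ≈ watts × 3.412 / (1.085 × ΔT°F). A minimal sketch; the 20 sq ft of floor per cabinet and the 20°F air-side delta-T are my illustrative assumptions, not figures from the facility described:

```python
def power_density_w_per_sqft(rack_watts: float, sqft_per_rack: float) -> float:
    """Floor power density implied by one rack's draw and the floor
    area allocated to it (cabinet footprint plus its share of aisles)."""
    return rack_watts / sqft_per_rack

def required_cfm(watts: float, delta_t_f: float) -> float:
    """Airflow (cubic feet per minute) needed to carry away `watts` of
    heat at a given air temperature rise, using the standard sensible-heat
    relation for air at sea level: BTU/hr = 1.085 * CFM * delta_T(F),
    with 1 W = 3.412 BTU/hr."""
    return (watts * 3.412) / (1.085 * delta_t_f)

# A ~30,000 W rack on an assumed 20 sq ft of allocated floor space
# implies the 1,500 W/sq ft density quoted above.
print(power_density_w_per_sqft(30_000, 20))   # -> 1500.0

# The 32 kW 'demon': at an assumed 20 F (~11 C) air-side delta-T, a
# single cabinet needs roughly 5,000 CFM of cold air, which is why
# liquid cooling starts to look attractive beyond this point.
print(round(required_cfm(32_000, 20)))
```

Note that halving the delta-T doubles the required airflow, which is one reason the contained hot aisle (and the large delta-T it permits) is so effective.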
Around this number it becomes almost impossible to move the CFM of airflow needed to cool the equipment in the cabinet, so north of here people start looking at more exotic architectures like liquid cooling.

How does this impact the SMB and the enterprise?  While 'the cloud,' in all its murkiness, may not be the best answer for everyone, capturing the economy of scale of a Google, Yahoo, or Microsoft and bringing those economics into the enterprise or SMB may be worth it.  If there is one big lesson to be learned from 'the cloud,' it's that on the facilities side of things there are definite economies of scale, while on the network side of things there are diseconomies of scale, and that is what we will chat about next.

dg


On my last post a gentleman commented that they won't outsource anything to the cloud.  I have no problem with this; to each their own, and some companies' strict privacy regulations and operating models may dictate no shared compute resources with another agency or entity.  I'd rather not fight the Layer-8 battle.  However, think about your facility: do you achieve a PUE of below 1.25?  Can you support a Delta-T of >30°C?  Can you handle a fully loaded cabinet of whatever servers or switches you want to upgrade to next?  Is >30% of your available floor space today going to non-IT equipment (HVAC, UPS, CRAC, etc.)?  Does IT handle the power bill for the data center facility?
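For reference, PUE is just total facility power divided by the power actually delivered to the IT equipment, so the first question above reduces to simple division. A quick sketch; the sample numbers are illustrative, not from any real facility:

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power (IT load plus
    cooling, UPS losses, lighting, etc.) divided by IT power alone.
    1.0 is the theoretical ideal; lower is better."""
    return total_facility_kw / it_load_kw

# Illustrative numbers: a 1,000 kW IT load in a room drawing 2,000 kW
# total gives a PUE of 2.0 (half the power goes to overhead), while
# the same load in a facility drawing 1,250 kW total hits the 1.25
# threshold asked about above.
print(pue(2000, 1000))   # -> 2.0
print(pue(1250, 1000))   # -> 1.25
```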


Copyright © 2010 IDG Communications, Inc.