Where data centers came from, and where they’re going

Opinion
Mar 16, 2004
Data Center

* How the data center has evolved

What constitutes a “next-generation” data center? To understand, it’s worth looking back in time.

Back in the old days – 20 years or so ago – a data center was a “glass house” – a room full of IBM mainframes, DEC minicomputers, and their attached storage subsystems. Users typically connected to these machines by local terminals. By necessity, then, data centers were usually located physically and geographically within corporate and administrative headquarters (where most employees worked). And data centers contained a relatively limited suite of gear: computers, storage systems, a terminal server, and a massive HVAC system.

Then came the client-server/LAN/Internet/distributed-computing revolution.

Through the first half of the 1990s, companies stuck machines on every desktop and linked them via LAN switches and routers to file servers. Physically, these boxes (servers, their storage systems, routers and switches) were stashed wherever IT techs could find room (closets and basements were favorite locations). More often than not, the “data center” of the early ‘90s was a converted utility room.

By the second half of the ’90s, leading-edge enterprises had adopted more structured network architectures. In a typical facility, the big-iron routers and switches were housed in a separate room, along with servers, server clusters, assorted storage gear and the like. Many times the office’s PBX lived there as well. The room was outfitted with a range of power options (including 48-volt power for carrier-grade networking gear), reinforced flooring, and custom-designed HVAC systems.

Then companies took a closer look at what was going on, and realized a few things:

* Average server and storage utilization was running at around 10% to 25%. In other words, companies had four to 10 times as much hardware as they actually needed.

* With the advent of storage-area networks and particularly network-attached storage, storage requirements had been decoupled from server requirements. In other words, you no longer had to buy a server every time you needed more storage.

* Networking gear – switches and routers – was taking up a much greater percentage of the data center “footprint” than previously.

* A lot of the existing equipment wasn’t standardized, and therefore was more expensive to maintain.

And finally, human trends had changed as well. Most employees no longer resided at the corporate headquarters (87% of employees work at remote sites, according to our research at Nemertes, and the trend’s increasing). Therefore, there’s no longer a reason to collocate the data center and the administrative headquarters. Additionally, the people with the skills and knowledge to administer the corporate data center are often no longer the “mainframe guys,” but instead are folks with a background in networking, servers, storage, or some other area of technology.

Enter the next-generation data center. Compared with the far past (10 to 20 years ago) and even the recent past (one to five years ago), data centers are more likely to:

* Not be collocated with administrative headquarters (but to be located somewhere with low real estate costs and highly available bandwidth).

* Include a massive amount of networking, storage, and (voice) communications gear as well as computers.

* Rely far more heavily on standardized architectures (blade servers, clusters) with the goal of reducing overall cost of ownership.

* Emphasize highly redundant/reliable architectures. In the old days, tripping over a cable might have taken down an office. Now, tripping over a cable could take down the entire Asia-Pacific region.

* Emphasize automated management and application provisioning.

* Emphasize technologies that lower operational costs and increase utilization, like virtualization and grid computing.

* And finally, be operated and managed by IT executives who are not mainframe/applications specialists.

How is this all playing out in 2004 and beyond? Stay tuned.

Johna Till Johnson is President and Chief Research Officer of Nemertes Research. She can be reached at johna@nemertes.com.