Tooling up for the new data center
Thanks to x86 server virtualization and its follow-on technologies, the state-of-the-art enterprise data center looks vastly different than it did even a year ago.
And moving from old school to next-generation isn't just about hardware and software – it's a call for a new way of thinking about the data center, as well.
"Some people are so accustomed to one application, one server and a methodology that locks you in to one way of thinking that they're having a hard time fully understanding the new data center," says Bill Fife, director of technology for Wholesale Electric Supply Co., in Houston.
"But now with thin replication and replays and synchronization to disaster recovery sites, and virtual machines being able to move files from data store to data store and having multiple data stores on the server, and adding network adapters, you really have to sit back and think about how you want to run your operations and remember that you have options. You're not tied down to any one path. You can go down one road today and change directions tomorrow," Fife says.
Here are four of the major trends in today's data center:
Trend No. 1: I/O virtualization
At Wholesale Electric Supply, Fife is capitalizing on the ability to virtualize I/O, one of the latest of several significant technology trends shaping the new data center.
I/O virtualization, also known as I/O aggregation, consolidates a server's network and storage interconnections onto either 10-gigabit InfiniBand or 10G Ethernet links. Xsigo Systems' virtual I/O Director uses the former and Cisco's Nexus 5000 and 7000 switches the latter, for example.
"In either case, you connect this pipe and then you can get as many virtual Ethernet and Fibre Channel connections as you want out of it," says Logan Harbaugh, an independent analyst and member of the Network World Lab Alliance. "The architectures are similar, as there's a limit to how much they can vary and still provide some level of functionality."
I/O virtualization simplifies the hardware picture in the data center considerably, reducing the number of connections running to each device while increasing flexibility. Consider VMware's best-practices recommendation that you assign one 1G port per virtual machine (VM). With newer 24-core servers, you could theoretically run at least 24 and maybe as many as 50 VMs on a single piece of hardware, which in turn would mean needing 50 1G ports, Harbaugh says.
Realistically, even if you could get six four-port Ethernet boards, you'd still only be able to support 24 VMs. "The nice thing about I/O virtualization is that everything shares the one InfiniBand or 10G Ethernet connection as lots of 1G pipes," Harbaugh says.
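Harbaugh's port arithmetic is easy to sanity-check. The back-of-the-envelope Python sketch below uses the figures from the article (up to 50 VMs, one 1G port per VM, six four-port boards); the size of the shared uplink and the oversubscription calculation are illustrative assumptions, not vendor specifications.

```python
# Back-of-the-envelope port math for a single virtualization host.
# Figures from the article: up to 50 VMs desired, one 1G port per VM,
# six four-port Ethernet boards. The shared-uplink size is an assumption.

vms_desired = 50
ports_per_vm = 1          # VMware best practice cited above

# Traditional approach: dedicated physical 1G ports.
nic_boards = 6
ports_per_board = 4
physical_ports = nic_boards * ports_per_board
vms_supported = physical_ports // ports_per_vm
print(f"Physical 1G ports available: {physical_ports}")
print(f"VMs supportable at one port each: {vms_supported} of {vms_desired} desired")

# I/O virtualization: carve virtual 1G NICs out of one shared fat pipe.
uplink_gbps = 20          # e.g. dual 10G Ethernet or one InfiniBand link (assumed)
virtual_nics = vms_desired * ports_per_vm
oversubscription = virtual_nics / uplink_gbps
print(f"Virtual 1G NICs on the shared uplink: {virtual_nics}")
print(f"Oversubscription: {oversubscription:.1f}:1, workable if VMs rarely peak together")
```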
At Wholesale Electric, Fife is using Xsigo's virtual I/O Director to decouple processing, storage and I/O. "By doing so we've essentially built our own cloud because we can assign processor, RAM, disk and I/O on an as-needed basis, and then, when they're no longer needed, get rid of it all and do something else," he says. "There are no rigid guidelines within which we have to operate. We can be extremely flexible."
Trend No. 2: Data and storage convergence
Today's data centers typically have distinct data and storage networks, and nobody much likes that situation. "As soon as people can recombine those two networks, that's what they're going to do," says Joel Snyder, senior partner with consulting firm Opus One and another member of the Network World Lab Alliance.
"My belief and, yes, hope is that we'll get rid of pure Fibre Channel and go to Fibre Channel over Ethernet [FCoE] – but I still see people buying a lot of Fibre Channel because they're told it's the way to go, even though our tests actually show that the network often isn't the bottleneck," he says. "What you can do with Fibre Channel you can do with 10G Ethernet and get equivalent or better performance, even if that's not the belief of SAN buyers and vendors."
These are early days for FCoE, but plenty of folks are looking at the technology, says David Newman, president of Network Test, an independent test firm, and a Network World Lab Alliance member. If nothing else, the technology has cost in its favor, he says.
"Besides the capital cost of the equipment, there's the operational expense issue. People who run plain old Ethernet cost less than people who know Fibre Channel," Newman says. "On economic grounds, it'll be cheaper to provision FCoE than running separate infrastructures."
Today, Brocade and Cisco have FCoE-capable switches that fully support the prioritization and other new Ethernet mechanisms for delivering Fibre Channel-like service levels, and other vendors are coming into the fray as well. So building a working, end-to-end FCoE network that handles data and storage is possible today – at least using the same vendor's gear, Newman says. Interoperability across vendors is as yet unproven.
Scott Engel, director of IT infrastructure at Transplace, a third-party logistics provider in Dallas, identifies FCoE as one of the two biggest networking and infrastructure changes coming to the company's data center over the next year. The other is 10G to the servers, he says.
Indeed, Newman says, the real tipping point in the data center will happen over the next 12 to 18 months, when 10G replaces 1G Ethernet on server motherboards. "That'll have all sorts of follow-on effects; enabling data-storage convergence is just one," he says.
Watch for this year to be the first with "appreciable numbers" of 40G switch ports shipping, Newman says. Fatter network pipes will be needed to accommodate the higher-speed server connections.
Trend No. 3: Faster processors, greater consolidation
By now, most enterprises have server consolidation stories to share, spun around a virtualization theme. They tell of impressive physical-to-virtual server ratios, often in the double digits. But consolidation in the data center is just beginning, some say.
The maturity and comfort levels around virtualization are growing, which means enterprises are willing to put more and more VMs on a single system, says Steve Sibley, an IBM Power Systems manager. Within the year, he adds, the Power 750 will support up to 320 VMs on a single server, and the Power 770 and 780 up to 640 VMs, with plans to reach 1,000 VMs.
The ability to support higher numbers of VMs per physical server comes on the back of faster processors, of course. In IBM's case, the company recently introduced the Power7, an eight-core chip that delivers four times the virtualization capability, scalability and performance of its predecessor, Sibley says. The high-end Power7-based Power 780 and 770 servers will come with up to 64 Power7 cores, for example.
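The density figures Sibley cites reduce to straightforward division, as in the sketch below. The Power 750's 32-core maximum is an assumption not stated in the article, and real VM-per-core ratios depend entirely on workload.

```python
# Rough VM-density arithmetic from the figures cited above; these are
# simple ratios, not IBM sizing guidance.

servers = {
    # name: (max cores, max supported VMs per the article)
    "Power 750": (32, 320),       # 32-core maximum is an assumption
    "Power 770/780": (64, 640),
}

for name, (cores, vms) in servers.items():
    print(f"{name}: {vms} VMs / {cores} cores = {vms / cores:.0f} VMs per core")
```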
Intel, too, is readying an eight-core chip, code-named Nehalem-EX. That chip is expected out by mid-year.
"If you start at the chip level, the ability to deliver more performance per processor core but also pack four times as many cores onto a single chip gives a vast amount of new capacity and capability to put more virtual servers onto a single platform without sacrificing performance or capability of the overall system," Sibley says. "That design point is enabling systems or offerings that give clients the ability to consolidate even more than they used to on single platform at much cheaper prices than ever before."
Trend No. 4: Infrastructure optimization
Will your data center strategy one day include a semi tractor-trailer full of hands-off gear parked in some spot selected for optimal cooling and power supply?
Dan Kusnetzky, vice president of research operations at The 451 Group, says he can imagine so – at least as one potential alternative to building out new or extending existing data centers. "Software routes around failures, and maybe you'd replace that truck with a new one every three years or so," he says.
The data center-in-a-box concept is one that bears watching, agrees Doug Oathout, vice president of converged infrastructure at HP. Companies already are using pod- or trailer-style data centers outside their facilities, optimizing server, storage, networking, cooling and power distribution resources for that size of container, he says. "Now we see the performance-optimization trend moving inside the data center."
This is not to say the data center is going to turn into a parking lot full of semis. But enterprises that run out of space, electricity, cooling and capacity today can take the container concept and move that type of asset inside the data center, Oathout says. "We're not talking about the container itself, but the concept, being able to say 'I need eight racks of servers, four racks of storage, a rack and a half of networking, and here's the power and cooling it will consume,' and optimize that way."
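Oathout's pod-sizing exercise amounts to simple capacity arithmetic, sketched below. The rack counts come from his quote; the per-rack power densities and the cooling overhead factor are assumed illustrative values.

```python
# Pod-level capacity estimate; rack counts from the quote above, per-rack
# kW figures and the cooling overhead factor are assumed examples.

pod = {
    # rack type: (rack count, assumed kW per rack)
    "servers":    (8,   12.0),
    "storage":    (4,    8.0),
    "networking": (1.5,  5.0),
}

it_load_kw = sum(count * kw for count, kw in pod.values())
cooling_kw = it_load_kw * 0.4   # assumed cooling overhead for this pod

print(f"Pod IT load:       {it_load_kw:.1f} kW")
print(f"Cooling allowance: {cooling_kw:.1f} kW")
print(f"Provision for pod: {it_load_kw + cooling_kw:.0f} kW, rather than "
      "building the whole hall to ultimate capacity up front")
```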
Piecing together a data center section by section is far less costly than the traditional go-for-broke approach, and delivering power and cooling a section at a time is far more efficient than moving it across a long distance, Oathout says.
"There's so much more waste when you build a data center to the ultimate capacity vs. building it to what it needs to do, so you could almost call this a retrofitting trend," Oathout adds. "I'm going to optimize what I've got, doing it with localized power, cooling and energy for the specific work I want to get down in this environment. Then I take the next step, with multiple pods, instantiations or building blocks within the data center. It's mindboggling how much more efficient that is compared to building a monolithic data center that has mega watts and 100,000 square feet of space yet is incapable of supporting the equipment you need for your next workload."
Schultz is a freelance IT writer in Chicago. You can reach her at bschultz5824@gmail.com.