The 8 key challenges of virtualizing your data center

The benefit of virtualizing x86 servers is clear: break the link between software and hardware and create the foundation for a more dynamic, flexible and efficient data center. With the market for virtualization software expected to grow to more than $1 billion this year, companies are more than kicking the tires on the technology. But the road to a virtual data center isn’t without its twists and turns. The move to a virtual environment must be done carefully and with an understanding of how the new infrastructure will change IT planning and management. What follows is a list of eight virtualization “gotchas” — hurdles that users may face as they deploy virtual environments — that we’ve compiled through discussions with IT professionals, analysts and vendors.

Thinking ahead

In a January report titled "Virtualization considerations: Forewarned is forearmed," Saugatuck Technology analysts lay out issues companies should think about when they're virtualizing servers:
Will the physical site have adequate and appropriate electrical power?
Will the physical site have adequate and appropriately concentrated cooling capacity?
Will the physical site have appropriate security facilities?
Will the physical site have adequate utility backup?
Will the consolidated/virtualized platform provide the availability needed for the workloads it will run?
Will the consolidated/virtualized platform require new support tools and/or staff skills?

1. Forgoing the physical: The idea of moving to a virtual environment is to run more virtual workloads on fewer physical systems, but that doesn’t mean hardware moves down on the list of priorities. If organizations don’t carefully consider what physical resources are necessary to support virtual workloads and monitor the hardware resources accordingly, they may find themselves in trouble. “With virtualization, it’s really a matter of putting the right physical systems behind it,” says David Payne, CTO at Xcedex, a virtualization consulting firm based in Minneapolis. “Some people think they can buy a cheap system from Dell or HP, throw in the hardware, then put virtualization on top of it and have their virtual environment. But many times that’s done based on commodity price, rather than really considering what the virtual workloads are going to be. The companies we’ve worked with that have been most successful have paid a lot of attention to the planning portion and they end up with a really good result, getting high utilization on these systems and a really good consolidation ratio.”
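The planning Payne describes comes down to sizing hosts against measured workload peaks rather than price. A minimal sketch of that arithmetic, with purely hypothetical numbers (the function, headroom fraction and workload figures are illustrative assumptions, not anything from the article):

```python
# Rough sizing sketch with hypothetical numbers -- not a substitute for
# measuring real workloads before buying hardware.

def consolidation_plan(workloads, host_cpu_ghz, host_ram_gb, headroom=0.25):
    """Estimate hosts needed for a set of candidate VM workloads.

    workloads: list of (peak_cpu_ghz, peak_ram_gb) tuples per candidate VM.
    headroom: fraction of each host held back for spikes and failover.
    """
    usable_cpu = host_cpu_ghz * (1 - headroom)
    usable_ram = host_ram_gb * (1 - headroom)
    total_cpu = sum(cpu for cpu, _ in workloads)
    total_ram = sum(ram for _, ram in workloads)
    # Whichever resource runs out first dictates the host count (ceil division).
    hosts = max(-(-total_cpu // usable_cpu), -(-total_ram // usable_ram))
    return int(hosts), len(workloads) / hosts  # hosts needed, consolidation ratio

# 20 small servers, each peaking around 0.5 GHz and 1.5 GB of RAM,
# consolidated onto hosts with 8 GHz of aggregate CPU and 16 GB of RAM:
hosts, ratio = consolidation_plan([(0.5, 1.5)] * 20, host_cpu_ghz=8, host_ram_gb=16)
```

Note that memory, not CPU, is the binding constraint in this example; picking hardware by CPU price alone would undersize it.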

2. Sub-par application performance: While virtualization is becoming increasingly widespread, many applications aren’t yet tuned for virtual environments. For example, Daniel Burtenshaw, senior systems engineer at University Health Care in Salt Lake City, deployed VMware’s ESX Server about a year ago with mostly good results. “Our biggest issues have been with some of our application vendors not being willing to support their applications on virtual servers, as well as limitations with the version of ESX that we are using,” he says. The healthcare organization has a large Citrix environment, but when it moved some of its Citrix servers into the VMware environment, it found that performance didn’t keep up, Burtenshaw says. “Basically, we get a very limited number of users per server, so if we virtualize, a bunch of virtual servers on a host is equivalent to just having one physical host,” he says, adding that his firm is upgrading to VMware’s Virtual Infrastructure 3. “From what we have read — but we have not tested it yet — Virtual Infrastructure 3 is supposed to be optimized better for hosting Citrix, so we should be able to get a more normal user load on the virtual servers.”

3. Sneaky security: Once you deploy a virtual environment, you’re removing the link between hardware and software, which can create confusion when it comes to securing your infrastructure. “The decoupling risks blinding security pros to what is going on behind their network security appliances,” says Allwyn Sequeira, senior vice president of product operations at patching specialist Blue Lane Technologies. “The server environment gets more fluid, more complex and the security pros ultimately lose the stability that hardware offered. Any type of vulnerability scan could be rendered obsolete in minutes.” Dennis Moreau, CTO at security and compliance firm Configuresoft, agrees. Virtualization streamlines provisioning and processes such as patching, but it also adds complications that IT professionals may not be thinking about. “We always had to patch the operating system and the application, and you still have to do that when you virtualize, but now, all of a sudden, you also have to patch the [virtual machine manager] layer where vulnerabilities can exist,” he says. “So the work of maintaining a secure environment and of documenting that for compliance purposes, just by the fact of introducing a virtualization technology layer, gets more complex.”
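The extra layer Moreau describes can be made concrete with a small inventory sketch: every virtual host adds a hypervisor entry to the patch-tracking list on top of each guest OS and application. The data layout and version strings below are purely illustrative assumptions:

```python
# Illustrative sketch of the growing patch surface. Product names and
# versions are hypothetical examples, not a real inventory format.

physical_host = {
    "os": "Windows Server 2003 SP2",
    "apps": ["SQL Server 2005"],
}

virtual_host = {
    "vmm": "ESX Server 3.0.1",  # the new layer virtualization introduces
    "guests": [
        {"os": "Windows Server 2003 SP2", "apps": ["SQL Server 2005"]},
        {"os": "Red Hat EL 4", "apps": ["Apache 2.0"]},
    ],
}

def patch_targets(host):
    """List every component that needs patch and compliance tracking."""
    targets = []
    if "vmm" in host:
        targets.append(host["vmm"])  # hypervisor must be patched too
        for guest in host["guests"]:
            targets.append(guest["os"])
            targets.extend(guest["apps"])
    else:
        targets.append(host["os"])
        targets.extend(host["apps"])
    return targets
```

One physical host yields two patch targets; the same workloads virtualized yield five, because the VMM itself joins the list.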

4. Left in lock-in: The virtualization market is evolving quickly and even VMware is pushing for a standard way to create and manage virtual machines. But standards and interoperability will come slowly. Companies that aren’t careful may find themselves locked in to a certain vendor’s approach, making it difficult and expensive to move among other approaches as technologies mature. “Try to pick products that can be considered somewhat standard and open to the virtualization market, like products where you can import [virtual machines] from other products,” says Ulrich Seif, CTO at National Semiconductor in Santa Clara, Calif. “Too many things can happen in this space in the next couple of years, so don’t corner yourself if you can help it.”

5. VM sprawl: Originally, virtualization was a big hit simply for consolidating physical servers — and thus reducing power demands and heat output. But because of the ease with which virtual machines can be deployed, organizations may find that while they have reduced the number of physical devices, the number of virtual systems to be managed has exploded. “One of the biggest gotchas out there is [virtual machine] sprawl,” says John Humphreys, a program director at IDC. “We see this again and again: customers that before virtualization had 500 servers each with one image on them, for example, after virtualization all of a sudden have 700 images they’re trying to manage.” The best way to avoid that kind of sprawl is to plan virtual machine life cycles, recovering virtual instances that are no longer being used, he says.
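The lifecycle discipline Humphreys recommends can be as simple as flagging images that haven’t been powered on recently so they can be reclaimed. A minimal sketch, assuming a last-used inventory and a 90-day idle threshold (both are illustrative choices, not anything prescribed in the article):

```python
# Sketch of a VM lifecycle sweep: flag images idle past a threshold.
# The inventory format and the 90-day cutoff are assumptions.
from datetime import date, timedelta

def stale_vms(inventory, today, max_idle_days=90):
    """Return names of VMs last used more than max_idle_days ago."""
    cutoff = today - timedelta(days=max_idle_days)
    return [name for name, last_used in inventory.items() if last_used < cutoff]

# Hypothetical inventory: VM name -> date it was last powered on.
inventory = {
    "web-test-01": date(2007, 1, 3),
    "payroll-prod": date(2007, 5, 30),
    "dev-scratch": date(2006, 11, 20),
}
candidates = stale_vms(inventory, today=date(2007, 6, 1))
```

Running such a sweep on a schedule, and actually decommissioning what it flags, is what keeps 500 physical servers from quietly becoming 700 virtual images.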

6. Licensing costs: Just as companies may be haggling with independent software vendors that set license fees based on CPU usage over pricing on multicore servers, they also may find surprises when it comes to licensing in virtual environments. “Software licenses may be a barrier,” says John Enck, a research vice president at Gartner. “You may want to run an application in a large, virtualized server, but the license may be written to apply to the physical processor cores in the machine. So if, for example, you move such an application from a two-way server to a four-way virtualized server, your software license costs may increase — even though the software is only using two processors in the virtual environment.”
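Enck’s example is easy to put in numbers. A sketch of the per-physical-core pricing trap, with a hypothetical per-core price (the price and the helper function are assumptions for illustration):

```python
# Enck's example in numbers: a license metered on physical cores,
# moved from a 2-way server to a 4-way virtualized host.
# The per-core price is hypothetical.

def license_cost(physical_cores, price_per_core):
    """Cost when the license counts physical cores in the machine,
    regardless of how many the application uses inside its VM."""
    return physical_cores * price_per_core

price = 10_000  # hypothetical per-core price
old = license_cost(physical_cores=2, price_per_core=price)  # two-way server
new = license_cost(physical_cores=4, price_per_core=price)  # four-way virtualized host
# The VM may still be capped at two virtual processors, yet the bill doubles.
```

The fix is contractual, not technical: negotiate terms that meter the virtual processors actually assigned, before consolidating.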

7. Stuck on storage: Because many of the candidates for virtualization were on distributed x86 systems, it’s easy to forget how the more centralized architecture of virtual resources can impact things. Storage, for example, should get a close look because in many cases virtual resources will all access a shared storage-area network (SAN). “Some companies may buy a certain type of storage array and they may not consider the workload that the VMware environment is going to put on it and it ends up being that that array just can’t handle it: too much throughput, too much I/O,” Xcedex’s Payne says. “If that array goes down and has an issue on the SAN every single virtual machine is going to be negatively affected, meaning they’re probably going to crash, they’re probably going to get corrupted and it’s going to be a really bad experience.” National Semiconductor’s Seif says storage concerns should be a priority when planning a virtual environment. SAN storage “is essential to reap the benefits of [business continuity/disaster recovery], allowing shifting workloads for optimizing uptime/performance and better scaling of guests to hosts,” he says. “The amount of storage — shifting from operating system, software and data on local server hard drives to SAN capacity — can add up very fast, 40GB per host for us, and without a solid tiered storage strategy, it can eat up very expensive SAN storage very quickly.”
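Seif’s 40GB-per-host figure extrapolates quickly, and his tiered-storage point can be sketched in the same breath. The host count and per-gigabyte costs below are illustrative assumptions; only the 40GB figure comes from the article:

```python
# Extrapolating Seif's 40GB-per-host figure. Host count and per-GB
# costs are hypothetical; only the 40GB figure is from the article.

def san_demand_gb(num_hosts, gb_per_host=40):
    """Local disk capacity that migrates onto the SAN."""
    return num_hosts * gb_per_host

def tiered_cost(total_gb, tier1_fraction, tier1_cost_per_gb, tier2_cost_per_gb):
    """Cost of splitting capacity between a premium and a cheaper tier."""
    tier1_gb = total_gb * tier1_fraction
    return tier1_gb * tier1_cost_per_gb + (total_gb - tier1_gb) * tier2_cost_per_gb

demand = san_demand_gb(num_hosts=200)        # 8,000 GB lands on the SAN
all_premium = tiered_cost(demand, 1.0, 25, 5)  # everything on the expensive tier
tiered = tiered_cost(demand, 0.25, 25, 5)      # 25% premium, the rest cheaper
```

Under these assumed prices, tiering cuts the storage bill to less than half of the all-premium figure, which is the “solid tiered storage strategy” Seif is arguing for.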

8. Virtual roadblocks: With AMD and Intel servers running side by side in many data centers, some companies may think mobile virtual machines can be moved across any x86 hardware, but that’s not the case. “The question people are struggling with is, ‘As I move these [virtual machines] around, one, do I have to have similar hardware,’” says IDC’s Humphreys. Today, VMware virtual machines can’t move between Intel- and AMD-based systems, says Raghu Raghuram, vice president of product and solutions marketing at VMware. “Our VMotion technology allows you to migrate a running application from one physical box to another, but the processors in those boxes have to be the same: so you can move from AMD to AMD or from Xeon to Xeon,” he says. “It’s because of the difference in processor architectures and the behavior of certain instructions. It’s a problem that will get solved over the longer term.”
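The same-vendor restriction Raghuram describes amounts to a compatibility check before migration. A simplified sketch of such a check, parsing the `vendor_id` field from Linux `/proc/cpuinfo` text (the function names are illustrative; real migration tools compare far more than the vendor string, including individual feature flags):

```python
# Simplified version of the vendor check behind the live-migration
# restriction: a move between a "GenuineIntel" and an "AuthenticAMD"
# host is refused. Real tools also compare CPU feature flags.

def cpu_vendor(cpuinfo_text):
    """Extract the vendor_id field from /proc/cpuinfo-style content."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("vendor_id"):
            return line.split(":", 1)[1].strip()
    return None

def can_live_migrate(src_cpuinfo, dst_cpuinfo):
    """Same-vendor check only; a deliberate simplification."""
    return cpu_vendor(src_cpuinfo) == cpu_vendor(dst_cpuinfo)

# Sample /proc/cpuinfo fragments; on a live Linux host you would read
# the file itself.
intel = "processor\t: 0\nvendor_id\t: GenuineIntel\n"
amd = "processor\t: 0\nvendor_id\t: AuthenticAMD\n"
```

In practice this is why shops that want migration flexibility standardize their virtualization host pools on a single processor vendor.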

Learn more about this topic

Virtualization: Xen and the art of hypervisor maintenance (01/15/07)
VMware: beyond the basics (08/21/06)
A virtual breeze (08/21/06)
