Enterprise IT operations must adapt its management, security and staffing approaches to handle virtual environments.
As server virtualization projects gain scale and strategic value, enterprise IT managers must move quickly beyond tactical approaches to achieve best results.
Consider these Gartner forecasts: More than 4 million virtual machines will be installed on x86 servers by 2009, and the number of virtualized desktops could grow from fewer than 5 million in 2007 to 660 million by 2011. The popularity of virtualizing x86 server and desktop resources has many enterprise IT managers reassessing how to update their already virtualized network and storage resources, too.
Virtualization's impact will spread beyond technology changes to operational upheaval. Not only must enterprise IT executives move from a tactical to a strategic mindset, but they also must shift their thinking and their processes from purely physical to virtual.
"Enterprise IT managers are going to have to start thinking virtual first and learn how to make the case for virtualization across IT disciplines," says James Staten, principal analyst at Forrester Research. "This will demand they change processes. Technologies can help, but if managers don't update their best practices to handle virtual environments, nothing will get easier.
Here enterprise IT managers and industry watchers share best practices they say will help companies seamlessly grow from 30 to 3,000 virtual machines without worry.
1. Approach virtualization holistically
Companies considering standardizing best practices for x86-based server virtualization should think about how they plan to incorporate desktop, application, storage and network virtualization in the future.
IT has long suffered from a silo mentality, with technology expertise living in closed clusters. The rapid adoption of virtualization could exacerbate already strained communications among such IT domains as server, network, storage, security and applications.
"This wave of virtualization has started with one-off gains, but to approach the technology strategically, IT managers need to look to the technology as achieving more than one goal across more than one IT group," says Andi Mann, research director at Enterprise Management Associates.
To do that, an organization's virtualization advocates should champion the technology by initiating discussions among various IT groups and approaching vendors with a broad set of requirements that address short- and long-term goals. Vendors with technologies in multiple areas, such as servers and desktops, or with partnerships across IT domains could help IT managers better design their virtualization-adoption road maps. More important, however, is preventing virtualization implementations from creating more problems via poor communications or antiquated organizational charts, industry watchers say.
"With ITIL and other best-practice frameworks, IT has become better at reaching out to other groups, but the speed at which things change in a virtual environment could hinder that progress," says Jasmine Noel, a principal analyst at Ptak, Noel and Associates. "IT's job is to evolve with the technology and adjust its best practices, such as change management, to new technologies like virtualization."
2. Identify and inventory virtual resources
Understanding the resources available at any given time in a virtual environment requires enterprise IT managers to enforce strict processes from a virtual machine's birth through death.
Companies need a way to identify virtual machines and other resources throughout their life cycles, says Pete Lindstrom, research director at Spire Security. The type of virtual-machine tagging he suggests would let IT managers "persistently identify virtual-machine instances over an extended period of time," and help to maintain an up-to-date record of the changes and patches made to the original instance. The process would provide performance and security benefits because IT managers could weed out problematic virtual machines and keep an accurate inventory of approved instances.
"The ability to track virtual machines throughout their life cycles depends on a more persistent identity scheme than is needed in the physical world. IT needs to know which virtual resources it created and which ones seemed to appear over time," Lindstrom explains. "The virtual world is so much more dynamic that IT will need granular identities for virtual machines and [network-access control] policies that trigger when an unknown virtual machine is in the environment. Rogue virtual machines can happen on the client or the hypervisor."
Discovery technology also serves an important role in maintaining an accurate inventory of virtual resources, says Glenn O'Donnell, a Forrester senior analyst. "From a high level, the ITIL processes around managing configuration, change, incidents or problems don't change; but virtualization adds another layer of abstraction and numerous configuration items that need to be incorporated into existing processes," he says.
For instance, using such tools as BMC Software's Topology Discovery, EMC's Application Discovery Manager or mValent's Integrity, an IT manager could perform an ongoing discovery of the environment and track how virtual machines have changed. Manual efforts couldn't keep pace with the configuration changes that would occur because of, say, VMware VMotion or Microsoft Live Migration technologies. "IT has to stay on top of a lot more data in a much more dynamic environment," O'Donnell says. (Compare Network Configuration Management products.)
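The reconciliation those discovery tools perform can be pictured as a diff between successive snapshots of the environment. The sketch below uses invented data structures, not the output of BMC's, EMC's or mValent's products, to show which virtual machines appeared, disappeared or changed hosts between two discovery passes -- the churn that VMotion or Live Migration creates.

```python
# Minimal sketch: diff two discovery snapshots of the virtual environment.
# Snapshots map a VM identifier to the physical host it was found on.
# Purely illustrative -- not the data format of any particular discovery tool.

def diff_snapshots(previous: dict, current: dict) -> dict:
    appeared = {vm: host for vm, host in current.items() if vm not in previous}
    removed = {vm: host for vm, host in previous.items() if vm not in current}
    migrated = {vm: (previous[vm], current[vm])
                for vm in previous.keys() & current.keys()
                if previous[vm] != current[vm]}
    return {"appeared": appeared, "removed": removed, "migrated": migrated}

yesterday = {"vm-web01": "esx-a", "vm-db01": "esx-b"}
today = {"vm-web01": "esx-c", "vm-test99": "esx-a"}   # web01 moved, db01 gone, test99 new
print(diff_snapshots(yesterday, today))
# {'appeared': {'vm-test99': 'esx-a'}, 'removed': {'vm-db01': 'esx-b'},
#  'migrated': {'vm-web01': ('esx-a', 'esx-c')}}
```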
3. Plan for capacity
Virtual machines may be faster to deploy than physical ones, but the task shouldn't be taken lightly. "If you are not careful, you can have a lot of virtual machines that aren't being used," says Ed Ward, senior technical analyst at Hasbro in Pawtucket, R.I. He speaks from the experience of supporting 22 VMware ESX host servers, 330 virtual machines, 100 workstations and 250 physical machines.
To prevent virtual-machine sprawl and to curb spending for licenses and power for unused machines, Ward says he uses VKernel's Capacity Analyzer virtual appliance. It alerts him to all the virtual machines in his environment, even those he thought he had removed.
"There are cases in which you build a virtual machine for test and then for some reason it is not removed but rather it's still out there consuming resources, even though it is serving no purpose," Ward says. "Knowing what we already have and planning our investments based on that helps. We can reassign assets that have outlived their initial purpose."
When they create virtual machines, IT managers also must plan for their deletion. "Assign expiration dates to virtual machines when they are allocated to a business unit or for use with a specific application; and when that date comes, validate the need is no longer there and expire the resource," Forrester's Staten says. "Park a virtual machine for three months and if it is no longer needed, archive and delete. Archiving keeps options open without draining storage resources or having the virtual machine sitting out there consuming compute resources."
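A minimal sketch of Staten's expiration-date advice follows. The parking window, field names and actions are illustrative assumptions, not a feature of any particular virtualization platform.

```python
# Sketch of an expiration sweep over a VM inventory -- illustrative only.
from datetime import date, timedelta

PARK_PERIOD = timedelta(days=90)   # Staten's three-month parking window

def sweep(vms: list, today: date) -> list:
    """Return (vm_name, action) pairs for an expiration-date review."""
    actions = []
    for vm in vms:
        if today < vm["expires"]:
            actions.append((vm["name"], "keep"))                 # still within its approved term
        elif today < vm["expires"] + PARK_PERIOD:
            actions.append((vm["name"], "park"))                 # powered off, owner asked to revalidate
        else:
            actions.append((vm["name"], "archive-and-delete"))   # unneeded after the parking period
    return actions

inventory = [
    {"name": "vm-payroll", "expires": date(2009, 6, 30)},
    {"name": "vm-qa-test", "expires": date(2008, 1, 15)},
]
print(sweep(inventory, today=date(2008, 7, 1)))
# [('vm-payroll', 'keep'), ('vm-qa-test', 'archive-and-delete')]
```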
4. Marry the physical and virtual
IT managers must choose the applications supported by virtual environments wisely, say experts, who warn that few if any IT services will rely only on the virtual infrastructure.
"While some environments could support virtual-only clusters for testing, the more common scenario would have, for instance, two virtual elements and one physical one supporting a single IT service," says Cameron Haight, a Gartner research vice president. "IT still needs to correlate performance metrics and understand the profile of the service that spans the virtual and physical infrastructures. Sometimes people are lulled into a false sense of security thinking the tools will tell them what they need to know or just do [the correlation] for them."
IT managers should push their vendors for reporting tools that not only show what's happening in the virtual realm but also display the physical implications -- and potentially the cause -- of an event. Detailed views of the two environments must be married so IT can correlate why events take place in each realm.
For instance, if utilization on a host server drops from 20% to 10%, it would be helpful to know the change came about because VMware Distributed Resource Scheduler (DRS) moved virtual machines to a new physical server, Haight says. In addition, knowing when and where virtual machines migrate can help prevent a condition dubbed "VMotion sickness" from cropping up in virtual environments. This occurs when virtual machines move repeatedly across servers -- and bring problems they might have from one server to the next, Haight says. Proper reporting tools, for example, could help an administrator understand that a performance problem is traveling with a virtual machine unbeknown to DRS.
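As a rough sketch of the correlation Haight describes, the code below checks whether a recurring performance alert follows a particular virtual machine from host to host -- the signature of a problem traveling with the VM rather than living on one server. The event records and correlation window are invented for illustration; they are not DRS output.

```python
# Sketch: does a performance problem follow a VM across migrations?
# Event records are illustrative; real data would come from monitoring
# and migration logs correlated by timestamp.

migrations = [   # (time, vm, from_host, to_host)
    (10, "vm-app01", "esx-a", "esx-b"),
    (25, "vm-app01", "esx-b", "esx-c"),
]
alerts = [       # (time, host, alert)
    (12, "esx-b", "cpu saturation"),
    (27, "esx-c", "cpu saturation"),
]

def alerts_following_vm(vm: str) -> list:
    """Return alerts raised on a host shortly after the VM landed there."""
    hits = []
    for m_time, m_vm, _src, dst in migrations:
        if m_vm != vm:
            continue
        for a_time, host, alert in alerts:
            if host == dst and 0 <= a_time - m_time <= 5:   # 5-unit correlation window
                hits.append((dst, alert))
    return hits

print(alerts_following_vm("vm-app01"))
# [('esx-b', 'cpu saturation'), ('esx-c', 'cpu saturation')] -- the problem travels with the VM
```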
5. Eliminate virtual blind spots
The fluid environment created by virtualization often includes blind spots. "We monitor all physical traffic, and there is no reason why we wouldn't want to do the same for the virtual traffic. It's a huge risk not knowing what is going on, especially when the number of virtual servers is double what you have for physical boxes," says Nick Portolese, senior manager of data center operations at Nielsen Mobile in San Francisco.
Portolese supports an environment with about 30 VMware ESX servers and 500 to 550 virtual machines. Early on, he realized he wasn't comfortable with the amount of network traffic he could monitor in his virtual environment. Monitoring physical network traffic is a must, but he found the visibility into traffic within the virtual environment was non-existent.
Start-up Altor Networks provided Portolese with what he considered necessary tools to track traffic in the entire environment. Altor's Virtual Network Security Analyzer (VNSA) views traffic at the virtual -- not just the network -- switch layer. That means inter-virtual-machine communications or even virtual desktop chatter won't be lost in transmission, the company says. VNSA provides a comprehensive look at the virtual network and analyzes traffic to give network security managers a picture of the top application talkers, most-used protocols and aspects of virtualization relevant to security. It's a must-have for any virtual environment, Portolese says.
"We didn't have anything to monitor the virtual switch layer, and for me to try to monitor at the virtual port was very difficult. It was impossible to tell which virtual machine is coming from where," Portolese explains. "You will get caught with major egg on your face if you are silly enough to think you don't have to monitor all traffic on the network."
6. Charge back for virtual resources
Companies with chargeback policies should apply the practice to the virtual realm, and those without a set process should institute one before virtualization takes off.
Converting physical resources to virtual ones might seem like a no-brainer to IT folks, who can appreciate the cost savings and administration changes, but business units often worry that having their application on a virtual server might affect performance negatively. Even if a company's structure doesn't support the IT chargeback model, business units might be more willing to get on board with virtualization if they are aware of the related cost savings, Forrester's Staten says.
"IT can provide some transparency to the other departments by showing them what they can gain by accepting a virtual server. This includes lower costs, faster delivery against [service-level agreements], better availability, more-secure disaster recovery and the most important one -- [shorter time to delivery]. It will take six weeks to get the physical server, but a virtual server will be over in more like six hours," Staten says.
In addition, chargeback policies would be an asset to IT groups looking to recoup some of their investment in virtualization. At Hasbro, IT absorbs the cost of the technology while the rest of the company takes advantage of its benefits, Ward says. "The cost of physical machines comes out of the business department's budget, but the cost of virtual machines comes out of the IT budget," he says.
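To make the chargeback idea concrete, here is a minimal sketch that bills a business unit for the virtual machines it consumes based on allocated CPU, memory and storage. The rates and formula are illustrative assumptions, not Hasbro's or any other company's actual model.

```python
# Sketch of a simple virtual-machine chargeback calculation.
# Rates and the billing formula are illustrative assumptions only.
RATES = {"vcpu": 20.0, "gb_ram": 10.0, "gb_disk": 0.50}   # monthly cost per allocated unit

def monthly_charge(vms: list) -> float:
    """Sum the monthly charge for a business unit's virtual machines."""
    total = 0.0
    for vm in vms:
        total += (vm["vcpus"] * RATES["vcpu"]
                  + vm["ram_gb"] * RATES["gb_ram"]
                  + vm["disk_gb"] * RATES["gb_disk"])
    return total

marketing_vms = [
    {"name": "vm-crm", "vcpus": 2, "ram_gb": 4, "disk_gb": 80},
    {"name": "vm-web", "vcpus": 1, "ram_gb": 2, "disk_gb": 40},
]
print(f"Marketing owes ${monthly_charge(marketing_vms):.2f} this month")   # $180.00
```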
7. Capitalize on in-house talent
IT organizations also must prepare their staffs to take on virtualization. Certification programs, such as the VMware Certified Professional (VCP) and Microsoft's Windows Server Virtualization certification, are available, but in-house IT staff must weigh which skills they need and how to train for them. "Certifications are rare, though I do have two VCPs on my staff. Most IT professionals who are able to take the exam and get certified would probably work in consulting," says Robert Jackson, director of infrastructure at Reliance Limited Partnership in Toronto.
With training costing as much as $5,000 per course, IT workers might not get budget approval. Gartner's Haight recommends assembling a group of individuals from the entire IT organization into a center of excellence of sorts. That would enable the sharing of knowledge about virtualization throughout the organization.
"We surveyed IT managers about virtualization skills, and about one-quarter of respondents had a negative perspective about being able to retain those skills in-house," Haight says. "Disseminating the knowledge across a team would make an organization more secure and improve the virtualization implementation overall with fewer duplicated efforts and more streamlined approaches."
In the absence of virtualization expertise, Linux proficiency can help, Hasbro's Ward says. VMware support staff seem to operate most comfortably with that open source operating system, he says.
In general, moving from pilot to production means increasing the staff for the daily care and feeding of a virtual environment, Ward says. "Tools can help, but they can't replace people."