While some progress has been achieved in getting virtual machines to run across different types of hypervisors, more work is still needed to bring them to the level of portability that enterprises are seeking, according to a study released by the Open Data Center Alliance (ODCA).
"There's the greatest intent in the industry for interoperability, but we're still a ways off," said Das Kamhout, technical workgroup advisor for the ODCA and head of Intel's cloud operations. Achieving such interoperability is vital because "IT shops want to be able to move virtual machines between private and public clouds and between different private clouds."
Overall, the study concluded, VM interoperability is still at an early stage. Vendors are modifying their hypervisors to meet the specifications for VM portability, though much work still needs to be done.
The study is one of the first to detail how easily VMs can be moved around in a cloud environment. Enterprises don't want their workloads tied to one vendor's platform, and portability is a good measure of how easily jobs can be moved to other providers.
Over the past few years, the hypervisor makers have converged on a standard for VM portability, called the Open Virtualization Format (OVF). Developed by the Distributed Management Task Force (DMTF), OVF provides the minimum set of hooks a VM would need to run on any hypervisor that supports OVF.
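To give a feel for those hooks: an OVF package describes a VM's hardware requirements in an XML descriptor that any compliant hypervisor can read. The sketch below, using only Python's standard library, pulls the hardware items out of a much-abridged, made-up descriptor; the element names follow the DMTF OVF envelope schema, but the descriptor itself is illustrative, not taken from the study.

```python
# Illustrative sketch: reading hardware requirements from a minimal,
# made-up OVF descriptor. Element names follow the DMTF OVF envelope
# schema; real descriptors carry many more sections (disks, networks, OS).
import xml.etree.ElementTree as ET

NS = {
    "ovf": "http://schemas.dmtf.org/ovf/envelope/1",
    "rasd": "http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/"
            "CIM_ResourceAllocationSettingData",
}

DESCRIPTOR = """\
<Envelope xmlns="http://schemas.dmtf.org/ovf/envelope/1"
          xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">
  <VirtualSystem>
    <VirtualHardwareSection>
      <Item>
        <rasd:ElementName>2 virtual CPUs</rasd:ElementName>
        <rasd:VirtualQuantity>2</rasd:VirtualQuantity>
      </Item>
      <Item>
        <rasd:ElementName>4096 MB of memory</rasd:ElementName>
        <rasd:VirtualQuantity>4096</rasd:VirtualQuantity>
      </Item>
    </VirtualHardwareSection>
  </VirtualSystem>
</Envelope>
"""

def hardware_items(descriptor_xml):
    """Return (name, quantity) pairs for each hardware Item in the descriptor."""
    root = ET.fromstring(descriptor_xml)
    items = []
    for item in root.findall(".//ovf:VirtualHardwareSection/ovf:Item", NS):
        name = item.findtext("rasd:ElementName", namespaces=NS)
        qty = item.findtext("rasd:VirtualQuantity", namespaces=NS)
        items.append((name, qty))
    return items

for name, qty in hardware_items(DESCRIPTOR):
    print(name, qty)
```

Because the descriptor is declarative, a target hypervisor can map each item onto its own resource model, which is exactly the portability the standard is meant to provide.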
The proof-of-concept study looked at how easily a virtual machine could be moved across different hypervisors, namely VMware's ESXi, Microsoft's Hyper-V, and the open source Xen and KVM (kernel-based virtual machine) hypervisors. Each VM contained a copy of either Windows Server 2008, Ubuntu, or CentOS, a community rebuild of Red Hat Enterprise Linux (RHEL).
For this project, the researchers devised a testing method using the definitions of basic interoperability that the ODCA first set out a year ago. They then built a test bed of servers on which VMs created for different hypervisors could be run.
Overall, the tests showed how well VMs created for one kind of hypervisor could work when run on another hypervisor. The results were bucketed into three categories: successful, warning and failed. A successful rating meant the VM worked automatically in its new environment. In the warning category, the VM also worked in its new environment, though it might require some manual intervention. The final category, failed, signified those cases in which the VM would not work in the new environment, at least not without additional tools.
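The three-way scheme described above can be sketched as a small classifier. The category names mirror the study's; the decision logic here is an assumption for illustration, not the researchers' actual test harness.

```python
# Minimal sketch of the study's three-way result classification.
# Category names follow the article; the decision logic is assumed.
from enum import Enum

class Result(Enum):
    SUCCESSFUL = "worked automatically in the new environment"
    WARNING = "worked, but needed manual intervention"
    FAILED = "did not work without additional tools"

def classify(vm_booted: bool, needed_manual_steps: bool) -> Result:
    if not vm_booted:
        return Result.FAILED
    return Result.WARNING if needed_manual_steps else Result.SUCCESSFUL
```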
Running through all the different possible combinations of hypervisors and OSes, the researchers found that 13 test cases resulted in warnings, and 19 test cases failed entirely. Only in two cases did the VM work flawlessly across two different hypervisors. In both of these cases, a VM created with Xen worked without trouble in a Microsoft Hyper-V environment -- in one case running Ubuntu and in the other running Windows Server.
Warnings were triggered by a range of problems. Most were due to the VM's inability to acquire a new IP address in the new environment. A VM reporting changes in memory configuration or CPU speed also resulted in a warning, as did lost functionality, such as the ability to pause and unpause a running VM in its new environment.
"In some situations with warnings, things should be OK, but it could require some manual intervention, and manual intervention is not optimum," Kamhout said.
No one hypervisor handily beat the others in terms of supporting OVF. All had blind spots. "It really seems like a pretty wide variety of capabilities at the hypervisor level," Kamhout said.
Although the study was not designed to compare the guest OSes themselves, it found that Windows Server 2008 worked the most easily across all the different hypervisors, while CentOS required the most additional work. "Windows 2008 was the most forgiving of changes," Kamhout said.
The ODCA, however, is confident that the hypervisor makers will use the study to further refine their products. "This is a baseline test," Kamhout said, adding that "the solution providers have [shown] a strong interest in fixing the gaps pretty quickly."
The Open Data Center Alliance is a consortium of companies interested in better defining long-term data center requirements, and includes members such as BMW, Capgemini, China Unicom, Deutsche Bank, JPMorgan Chase, Lockheed Martin, Marriott, Disney, and UBS. Intel serves as technical advisor to the Alliance.
The researchers will further discuss this work at ODCA's Forecast cloud computing conference, being held in San Francisco on June 17.