VMware edges out Microsoft in virtualization performance test

Hyper-V's bright spot is a set of drivers that help it support Linux VMs

In a virtualized world, VM guest instances must contend for either internal disk or storage-area network resources. Because the hardware is re-presented to guest operating systems through virtualization, the hypervisor layer between the hardware and the guest VMs uses its own disk driver to manage disk activity. Each virtualized guest added to a host further divides the hardware resources among the guest operating system/application instances. However good the native operating system drivers might be, managing the disk traffic of many guests at once is a sophisticated business for a hypervisor, and any latency or inefficiency it introduces shows up as application performance slowdowns.

We ran IOmeter in each VM instance to gauge how well the hypervisor could "breathe" data to disk, using a tougher-than-real-world ratio of 70% writes to 30% reads. We favored writes in our configuration because operating systems don't cache them heavily (so their contents don't evaporate during power outages or hardware resets), whereas read caching can distort measurements.
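For readers who want to approximate that workload, the sketch below is a rough, hypothetical stand-in for the IOmeter access specification we used, not the tool itself: it issues random operations against a scratch file in a 70/30 write/read mix and reports I/Os per second. The file path, 4KB transfer size, file size and run length are illustrative assumptions, not the actual test parameters.

import os
import random
import time

# Minimal sketch of a 70% write / 30% read random-I/O mix, in the spirit of
# an IOmeter access specification. All parameters are illustrative assumptions.
PATH = "/tmp/iotest.dat"   # hypothetical scratch file
BLOCK = 4096               # assumed 4KB transfer size
FILE_BLOCKS = 25600        # 100MB test file
DURATION = 10              # seconds to run

# Pre-fill the file so reads return real data.
with open(PATH, "wb") as f:
    f.write(b"\0" * BLOCK * FILE_BLOCKS)

fd = os.open(PATH, os.O_RDWR)
payload = os.urandom(BLOCK)
ops = 0
deadline = time.time() + DURATION
while time.time() < deadline:
    os.lseek(fd, random.randrange(FILE_BLOCKS) * BLOCK, os.SEEK_SET)
    if random.random() < 0.7:   # 70% of operations are writes...
        os.write(fd, payload)
        os.fsync(fd)            # ...flushed so the OS cache can't hide their cost
    else:                       # ...the other 30% are reads
        os.read(fd, BLOCK)
    ops += 1
os.close(fd)
print(f"{ops / DURATION:.2f} I/Os per second")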

Table of disk I/O results with VMs accessing a single vCPU

We first measured the I/O performance of a native operating system (on both single-CPU and SMP servers) to establish a baseline for the operating system's disk I/O speed as measured by IOmeter. We then ran the same tests on each of our hypervised environments with six VM guests. We wanted to know whether the hypervisor could offer more disk channel availability to VM guests than they could use on their own as native instances.

The good news is that our tests show both hypervisors could pump up the disk channel at rates greater than a single native instance could when we added more guest VM instances. This means hypervisors controlling the disk channel (an HP Smart Array in our case) can do a good job of cramming that channel when the number of VM guests increases.
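As a concrete illustration of that comparison, the sketch below sums the IOmeter rate reported by each of six guests and sets the aggregate against the native baseline measured on the same hardware. The per-guest and baseline figures here are purely hypothetical placeholders, not our measured results.

# Sketch of the baseline comparison; all figures are hypothetical placeholders.
native_baseline_iops = 500.0                               # hypothetical bare-metal result
guest_iops = [120.0, 118.5, 121.2, 119.8, 120.4, 117.9]    # hypothetical per-VM IOmeter results

aggregate = sum(guest_iops)
ratio = aggregate / native_baseline_iops
print(f"Aggregate across {len(guest_iops)} VMs: {aggregate:.2f} I/Os per second")
print(f"Native baseline: {native_baseline_iops:.2f} I/Os per second")
print(f"Hypervisor pushed {ratio:.2f}x the native rate through the disk channel")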

Table of disk I/O results with VMs accessing four vCPUs

In the hosted SLES results where each VM accessed a single vCPU, we again saw that Hyper-V VM guest instances get a formidable boost from the Microsoft Linux IC kit, as SLES Linux VMs ran faster on Hyper-V than on VMware ESX. When we tested to see whether SLES without the Linux IC kit would be slower, we found it was still roughly on par with VMware ESX: the average I/O for an SLES VM on Hyper-V was 83.78 I/Os per second, about 5% faster than VMware's disk throughput with SLES.

However, Hyper-V doesn't fare as well in delivering disk I/O to its own Windows 2008 Server. VMware lapped Microsoft with six Windows 2008 VMs loaded up.

When we measured disk I/O activity in an SMP environment - where each of our six VMs was allocated four vCPUs - we intentionally oversubscribed the server to see if the hypervisors could sustain their disk channel activity when given a heavy volume of disk demand from each guest. Because a hypervisor is an operating system in its own right, it must carefully apportion disk writing time and switch contexts among guests cleanly and efficiently.

In these tests, both hypervisors delivered more I/O than a native operating system running on bare metal, but VMware ESX was the clear winner. When hosting Windows 2008 VMs it registered 1733.63 I/Os per second - roughly twice Hyper-V's 874.29 I/Os per second and about 2.4 times the native figure of 712.97 I/Os per second. ESX also beat out Hyper-V in the hosted SLES environment, by a narrow margin of about 45 I/Os per second; in this configuration Hyper-V no longer has the advantage of the Linux IC kit, which doesn't support SMP hardware.

Overall

VMware's initial lead in the marketplace has given it a performance lead in most of the areas we tested, although Microsoft's prowess is beginning to show in a core area - the performance of consolidated, single-vCPU VMs. Both vendors are likely to improve their performance numbers rapidly, as performance is a point of fierce competition between them. Biting at their heels are offerings from Citrix, Sun and Red Hat, as well as open source projects that are reaching commercial potential. VM performance is certainly an area to keep an eye on.

Henderson and Allen are researchers for ExtremeLabs. They can be reached at thenderson@extremelabs.com.

NW Lab Alliance

Henderson is also a member of the Network World Lab Alliance, a cooperative of the premier reviewers in the network industry each bringing to bear years of practical experience on every review. For more Lab Alliance information, including what it takes to become a member, go to www.networkworld.com/alliance.
