VMware edges out Microsoft in virtualization performance test

Hyper-V's bright spot is a set of drivers that help it support Linux VMs

With the recent release of Microsoft's Hyper-V shaking up the hypervisor market, we decided to conduct a two-part evaluation pitting virtualization vendors against each other on performance as well as on features such as usability, management and migration.


Microsoft and VMware accepted our invitation, but the open source virtualization vendors - Citrix (Xen) and Red Hat (Linux-based hypervisor) - were unable to participate because they were undergoing product revisions. That left us with a head-to-head matchup between Microsoft's Hyper-V and VMware's market-leading ESX.

The findings here focus on hypervisor performance. A second installment coming later this month will take usability, management and migration features into account.

The question of which hypervisor is faster depends on a number of factors. For example, it depends on how virtual machine (VM) guest operating systems are allocated to the available host CPUs and memory. It also depends on numerous product-specific limitations that can restrict performance.

That said, VMware ESX was the overall winner in this virtualization performance contest - where we were limited to running six concurrent VMs because of the combination of our server's processor cores and memory capacity, and the limitations of the hypervisors we tested. ESX pulled down top honors in most of our basic load testing, multi-CPU VM hosting, and disk I/O performance tests.

Microsoft's Hyper-V, however, did well in a few cases, namely when we used a special set of drivers released by Microsoft to boost performance of the only Linux platform Hyper-V officially supports: Novell's SUSE Linux Enterprise Server (SLES).

VM hypervisors are designed to present server hardware resources to multiple guest operating systems. The physical CPUs (also called cores) are represented to guest operating systems as virtual CPUs (vCPUs). But there isn't necessarily a one-core-to-one-vCPU relationship; the exact ratio depends on the underlying hypervisor. In our testing, we let the hypervisor decide how to present CPU resources as vCPUs.

The operating systems "see" the server resources only within the limitations imposed by the hypervisor. For example, a four-core system might be represented to a guest operating system as a single CPU, and that guest then has to live on just that one CPU. In other cases, four physical CPUs may be virtualized as eight vCPUs, a scenario that works when quieter VMs are unlikely to demand peak CPU resources at the same time. Other constraints can be imposed on the VMs as well, such as limits on disk size and network I/O, and even which guest gets to use the single CD/DVD drive inside the server.
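
To make that arithmetic concrete, here is a minimal Python sketch of how a vCPU-to-core overcommit ratio works out. It is a hypothetical illustration, not code from either vendor; the function name and the six-VM, four-vCPU-each figures in the second example are assumptions for illustration only.

```python
# Hypothetical illustration of vCPU overcommitment arithmetic; not vendor code.

def overcommit_ratio(vcpus_presented: int, physical_cores: int) -> float:
    """Ratio of virtual CPUs exposed to guests versus physical cores in the host."""
    return vcpus_presented / physical_cores

# A four-core host presenting eight vCPUs, as in the example above:
print(overcommit_ratio(8, 4))        # 2.0 -- two vCPUs share each physical core

# If each of six concurrent VMs were given the four-vCPU maximum on a
# 16-core server like our test machine (an assumption for illustration only):
print(overcommit_ratio(6 * 4, 16))   # 1.5
```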

One frustrating performance limitation imposed by both Hyper-V and ESX is that no single VM can use more than four vCPUs, no matter the type or version of that guest operating system instance or how many physical cores might actually be available. Furthermore, if you choose to run 32-bit versions of SLES 10 as a guest operating system, you will find that Microsoft lets those guests have only a single vCPU.
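
The sketch below is a purely illustrative Python model of the caps just described; the function, the constant and the string labels are hypothetical, not an actual Hyper-V or ESX interface.

```python
# Illustrative sketch of the vCPU caps described above; not an actual hypervisor API.

HYPERVISOR_VCPU_CAP = 4  # both Hyper-V and ESX limit any single VM to four vCPUs

def allowed_vcpus(requested: int, hypervisor: str, guest_os: str) -> int:
    """Return the vCPU count a guest would actually receive under the stated limits."""
    cap = HYPERVISOR_VCPU_CAP
    # Hyper-V restricts 32-bit SLES 10 guests to a single vCPU.
    if hypervisor == "hyper-v" and guest_os == "sles10-32bit":
        cap = 1
    return min(requested, cap)

print(allowed_vcpus(8, "esx", "windows-server-2008"))   # 4
print(allowed_vcpus(4, "hyper-v", "sles10-32bit"))      # 1
```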

The limits the hypervisor vendors impose on the number of available vCPUs stem from two areas. First, keeping track of VM guests with very large CPU needs involves extensive memory management and a great deal of inter-CPU communication (including processor cache, instruction pipeline and I/O state control), all of which is exceedingly difficult. Second, demand for VM guest hosting has largely been perceived as a server consolidation play, and the servers that need consolidating are often single-CPU machines.

These limitations in hypervisor hardware resource allocations set the stage for how we could take advantage of the 16-CPU HP DL580G5 server in our test bed (see How we did it).

As previously noted, Microsoft officially supports only its own operating systems and Novell's SLES 10 (Service Pack 1 and 2 editions) as guest instances, which is why we tested with only Windows Server 2008 and SLES 10.2 VMs. Other operating systems (Red Hat Linux, Debian Linux and NetBSD) may work, but organizations seeking debugging help or tech support are on their own if they use them.

While we were testing, Microsoft introduced its Hyper-V Linux Interface Connector (Hyper-V LinuxIC) kit, a set of drivers that optimizes CPU, memory, disk and network I/O for SLES guest instances. We did see a performance boost with the kit in place, but only when each guest had a single vCPU; Hyper-V LinuxIC isn't supported in SMP configurations.
