by Tom Henderson and Brendan Allen, Network World Lab Alliance

The virtual winner: VMware’s ESX KOs a roughly built Hyper-V package

Sep 29, 2008

VMware wins due to manageability, stability that comes with maturity

When the dust settled in the lab after two long months of testing Microsoft’s Hyper-V and VMware’s ESX in the areas of performance, compatibility, management, and security, it all boiled down to two issues: experience and religion.


See in-depth performance analysis

How we tested the virtualization management features

The issue of virtual compatibility

Archive of Network World tests


VMware ESX took home our Clear Choice award because, in our performance and qualitative analysis of the hypervisor and the first tier of management tools offered by each vendor, it showed depth and maturity, while Microsoft’s Hyper-V components were both very Windows-focused and very rough.

Performance, as reported earlier this month, heavily favored VMware, although Hyper-V edged out ESX in a few contests.

On the compatibility front, Microsoft Hyper-V’s early lead in the number of supported hardware platforms (based on the widespread support for Windows 2008 Server itself) is completely offset by a dearth of support for non-Windows virtual machine (VM) operating systems. While VMware’s supported hardware list is shorter, its support of a comparatively vast number of operating systems made us cheer (see compatibility story).

Score card

VMware’s Virtual Center management platform is also mature and straightforward in how an administrator can use it to control resident VMs on a VMware host. VMware’s Virtual Infrastructure Client (VIC) is the administrative user interface to the VMware Virtual Center platform.

Microsoft’s System Center-Virtual Machine Manager (SC-VMM) 2008 (we tested a very late beta version which Microsoft guaranteed was feature complete) works with very strong ties to the underlying Active Directory and has an interface that fits right into Microsoft’s System Center scheme, so administrators won’t have to work hard to understand how it works. That said, everything from standard management tasks, such as viewing simple settings for a VM host, to much-touted advanced features, like the ability to migrate ESX VMs to Hyper-V, caused SC-VMM to crash repeatedly during testing.

Microsoft, with its System Center Virtual Machine Manager 2008 software, provides a centralized console for viewing performance parameters of all Hyper-V host servers on the network.

In terms of the security options for these hypervisor environments, we found that both vendors need to beef up their authentication protection schemes and provide a designated, secure store for VM images.

You can certainly dress up either of these virtualization platforms with a plethora of add-ins that cover everything from eye-catching GUIs to fast tracking for priority applications to special interest pork for favored hardware platforms. And these options could effectively be combined to be all things to all people, but we had to select the components we tested to get an even comparison.

Our line in the sand here was to select the basic bundle – comprising both the hypervisor itself and the management tools needed to build, execute, monitor and maintain a production virtual machine infrastructure.

Our test combinations were Microsoft’s Hyper-V using SC-VMM 2008 vs. VMware’s ESX Infrastructure Foundation package. We added only one option to the VMware foundation, VirtualCenter for ESX, which, like SC-VMM, is a starter kit for managing multiple virtualized host platforms. These additional software elements make the two hypervisor platforms equivalent.

Although we only very rarely test non-production software, we chose to use the SC-VMM beta (Build 0991.1) in Hyper-V testing because it was close to public release and Microsoft contended it was feature complete and on target for a September release. Microsoft has since missed that release date and now says it won’t even reach RTM until the end of October. We’ll likely take another look at the shipping code and compare it with what we found in this initial round of testing. What we found was that SC-VMM crashed frequently and hard, and imposed a number of configuration limitations that aren’t supposed to be in the final production product.

The tools of the VM management trade

Because virtualization is usually part of a server consolidation project, rapid VM instance generation, movement, monitoring and trouble assessment can be critical as a single server usually represents many production processes.  

We built dual Hyper-V and ESX servers to gauge how each hypervisor design could handle both hosting new and consolidated virtualized operating system and application instances. We assessed the system’s flexibility in creating new VM guests, tested the primary tools that do the heavy lifting when moving discrete physical servers to virtual servers in a process known as P2V, and reviewed how the provided tools helped in ongoing management of all guests.

In terms of ongoing monitoring capabilities, we took into account the depth of characteristics each product could track and how those were communicated in the form of logs and reports. We also assessed the flexibility of the VM security choices.

VM management tools need to perform at least four basic functions: managing which drivers are to be used, updated or deleted for the corresponding hardware connections to the hypervisor; allocating and building virtual machine spaces for guests; monitoring both ongoing characteristics (CPU, disk space, I/O) and alarming events; and handling the loading, unloading and backing up of discrete VMs.

Microsoft’s SC-VMM assists in controlling Hyper-V guests from remote (non-virtual-server-host) locations. Hyper-V’s GUI rides on Windows (of course) and connects to the SC-VMM 2008 administrative engine running on the same machine as a Microsoft Active Directory domain controller and a version of Microsoft SQL Server. SC-VMM installs an agent on each Hyper-V virtual machine it manages.

VMware’s ESX and its hosted VMs are monitored and manipulated by VirtualCenter, which runs as a background Windows application either on the virtualized server or on another Windows machine connected to it. VirtualCenter requires that SQL Server Express Edition be installed to function properly as a management data store, and that an agent be installed on each ESX server.

Both SC-VMM and VirtualCenter perform the aforementioned management missions to varying degrees of success.

Microsoft, as we mentioned several times in our performance discussion, offers a free Linux Interface Connector, which has three components (CPU/memory, I/O drivers and keyboard/mouse) to speed SUSE Linux 10.1/10.2 VMs.

VMWare's Infrastructure Client component -- backed by the VirtualCenter management engine -- allowed us during testing to easily build and monitor VM guests across multiple server hosts.

ESX also has an optional add-in called VMTools that, like LinuxIC, adds network and block memory drivers and faster graphics translation to VMware ESX guest operating systems (there are versions for both Linux and Windows).

With Hyper-V, when controlled by SC-VMM, the admin can remotely turn a VM guest on or off or have it shut down gracefully. You are also supposed to be able to manage through Active Directory which users can access the virtual machines. You can, of course, limit what they do, such as start/stop machines, pause/resume, make checkpoints, remove machines, act as local admin for machines they created, create new VMs, and more. The feature certainly wasn’t camera-ready when we tested it, as it crashed the SC-VMM application repeatedly. SC-VMM also drives importation of VM images and is supposed to be able to import even ESX virtual machines to Hyper-V, but that didn’t work in our beta code for SC-VMM. On that same cross-platform note, the same functionality in ESX, importation of Hyper-V images, didn’t work either. No points for cannibalizing a competitor’s images were awarded to either vendor.

VMware’s VirtualCenter can do many of the same things mentioned above (turn machines on or off, shut down, reset). We were also able to create template images to use as a base for later images, or clone a VM (while it’s turned off), and if VMotion, an option, is licensed, it’s also possible to migrate between two hosts (using shared storage). We were also able to assign permissions to each VM, setting up different users and groups (via Active Directory/local users) to access that VM or group of VMs.

Another thing you can do with VirtualCenter is set up what is called a resource pool, which lets you divide resources among multiple VMs more easily. For example, say you have six VMs. You would like two of those to use 60% of all resources on that system and the other four to have 40%. You can create two resource pools and assign each VM to one of the two pools. This way you don’t need to worry about assigning resources to each individual VM.
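The arithmetic behind that example can be sketched in a few lines. This is purely an illustration of the pool math described above, not the VirtualCenter interface; the pool names and an equal split within each pool are our assumptions.

```python
def per_vm_share(pool_percent: float, vm_count: int) -> float:
    """Each VM in a pool gets an equal slice of the pool's allocation."""
    return pool_percent / vm_count

# Hypothetical pools from the example: two VMs share 60% of the host,
# four VMs share the remaining 40%.
pools = {"high-priority": (60.0, 2), "standard": (40.0, 4)}

for name, (percent, vms) in pools.items():
    print(f"{name}: {vms} VMs at {per_vm_share(percent, vms):.0f}% of host resources each")
# high-priority: 2 VMs at 30% of host resources each
# standard: 4 VMs at 10% of host resources each
```

Adding a VM to a pool automatically shrinks its siblings' shares, which is exactly the bookkeeping the resource-pool feature saves you from doing per VM.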

Building a virtual host

We used several steps after installation to prepare virtual guest slots on our Hyper-V and ESX hosts. We then populated them to emulate server migration and consolidation processes. Once either hypervisor was installed, we could generate guest instances that served as holding spots for installable operating system/application instances on the physical servers we wanted to migrate to our host servers.

Both Hyper-V and ESX allowed us to install guest instances without the aid of the SC-VMM and VirtualCenter tools, respectively, and then load a pre-made VM instance, install an operating system from CD/DVD, or install from a network source/share. That said, the added management tools can be helpful in this process when in use, serving as a user interface to the hypervisor in question. Both tools eased common VM instance management tasks, such as creating, duplicating, copying, and allocating and re-allocating resources.

For currently existing operating system/application pairings that need to be migrated to a virtual host, each hypervisor tested has a similar procedure to capture a server instance and import it into a virtual guest slot that we’d prepared.

This copying of a current physical server to a target server is known as cloning. There are two primary physical-to-virtual (P2V) cloning methods that both hypervisor products support: the ability to migrate from a disk image, and the ability to clone from a live production server.

Unfortunately, Microsoft’s P2V process couldn’t be tested because this portion of the beta application crashed despite lots of patching, intricate settings tweaks, and calls to advanced technical support. It’s not ready yet.

VMware’s P2V application is an optional extra called VMware Converter, and when we tested it, it worked well in most cases as long as the hard disk controller was supported. It worked best with Windows, where we could produce live clones from Windows XP and Windows Server 2003 images. Cold cloning Linux and Windows Server 2008 VMs required some extra setup steps after the image was copied.

Images of working virtual machines can then be used as the basis of replicas for other VM guests. The images, however, are in known formats and can be mounted as file systems for the purpose of manipulating the content files/folders. Hyper-V uses a cross-Windows file format called VHD, and ESX uses a published format called VMDK.

Some organizations use virtualized images for distribution, and images may need to be customized for purposes of making the image unique (a Windows requirement, generally, for identification), or to load specific software combinations as a payload for a targeted distribution of the virtualized physical hardware instances to other locations.

With both products we found that mounting and editing the images can be simple, but also run the security risks we talk about in detail below.

Migrating images

Migrating VMs from one server host to another happens for a variety of reasons, ranging from load balancing to application aggregation.

Migrations for our direct comparisons here revolve around taking snapshots of existing working VM guests and then moving these images to new target server hypervisor hosts.

VMware offers an optional live-migration tool called VMotion. Our prior experience with ESX VMotion is that it can move images within seconds from one server hypervisor to another. Microsoft recently announced that a similar capability for Hyper-V won’t be available until 2010, which would have been a serious deficiency had we included live migration in our direct comparison.

By using snapshots under Hyper-V, we were able to capture live system state data on either Windows 2008 or Novell’s SUSE Linux Enterprise Server 10.2 VMs.

A loaded machine took seconds for the snapshot to complete. The snapshot feature can be used to roll back or restore a server’s state, but there are implications. For example, as transactional states of applications are frozen, the server becomes unavailable for a short period, so users may find their applications performing badly because they cannot access the server while the snapshot is being taken. Further, using the rendered image of a system state as an instance on another machine may or may not be permitted by operating system and/or application licensing. Microsoft recently changed its policy to allow VM instances (for various versions of Windows) to be migrated from one host to another, but licensing prohibits spontaneous movements of VM instances, whatever their state. That state may also represent application or file states that require maintenance when re-instantiated, and transaction states may have to be verified as well.

VMware’s Virtualized Consolidated Backup (VCB), included in the VMware Infrastructure Foundation edition we tested, adds full and incremental backup to disk or tape of guest hosts. The file system is quiesced during backup to keep things synchronized, possibly temporarily removing the VM guest operating system/applications from availability during the process. VMware says VCB also integrates with backup applications from CommVault, EMC, HP, Symantec, IBM/Tivoli and others, but we did not test that level of integration.

VMware’s ESX uses one of two capture systems to pull VM images: one develops a VM image from a live, running server, and the other takes a shut-down server’s disk and captures its state. We captured several operating systems (see How we did it) and found this to be a simple process that works well and consistently.

Monitoring capabilities

VMs are allocated shared resources when they’re born, and then must live within the confines of those settings. When VM instances use their maximum allocation or are allowed to constantly plug into shared (oversubscribed) resources, administrators need to know so that the help desk doesn’t light up with complaints of apparent application inadequacy.

We used SC-VMM’s instance-monitoring capabilities to watch CPU, memory and disk use (how much and how frequently) and gauged them against VIC’s ability to monitor VM performance attributes. To make a long discussion short, they’re nearly the same: important VM characteristics are monitored in each. VIC comes out on top when it comes to triggering alarms on exceeded thresholds. Thresholds aren’t monitored inside SC-VMM, as this requires other products in the System Center family. VIC, by contrast, allowed us to set thresholds in areas such as CPU utilization, where zero utilization might mean an application had crashed and hitting a ceiling might mean the application was peaking.

Using the VirtualCenter Infrastructure Client, you can set alarms based on conditions we needed to know about, such as when CPU, memory, network or disk usage goes above or below a certain threshold, when the machine state changes or when there is no VM heartbeat. There are three severity colors: green means everything is fine, yellow is a warning and red is severe. Once an alarm was triggered, it was recorded in a log file. We could set how often it would trigger again, either by frequency (in seconds) or tolerance (a certain percentage). We could also set an action to follow when a trigger goes off. These actions include sending an e-mail, sending a notification trap, running a script, powering a VM on or off, suspending a VM and resetting a VM.
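The threshold-and-severity scheme described above can be sketched as follows. This is a minimal illustration of the logic, not the VirtualCenter API; the metric, the 75%/90% thresholds and the log format are our own assumptions for the example.

```python
GREEN, YELLOW, RED = "green", "yellow", "red"

def classify(value: float, warn: float, severe: float) -> str:
    """Map a utilization reading to the three severity colors."""
    if value >= severe:
        return RED
    if value >= warn:
        return YELLOW
    return GREEN

def check_vm(cpu_percent: float, log: list) -> str:
    # Hypothetical thresholds: warn at 75% CPU, go severe at 90%.
    severity = classify(cpu_percent, warn=75.0, severe=90.0)
    if severity != GREEN:
        # VirtualCenter records a triggered alarm in a log and can fire
        # an action (e-mail, SNMP trap, script, VM power operation).
        log.append((cpu_percent, severity))
    return severity

alarm_log = []
for reading in (50.0, 80.0, 95.0):
    print(reading, check_vm(reading, alarm_log))
```

Only the non-green readings land in the log, mirroring how VirtualCenter records triggered alarms rather than every sample.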

While there are no alarm or trigger options built into SC-VMM, a limited set of options allowed us to start specific virtual machines as the server boots up. And when the server shuts down, Hyper-V can both save the state of and turn off the virtual machines.

Security could use some beef

We had issues with both hypervisors’ security in several areas. The first big issue is that the images used to build virtual guests aren’t serialized and/or authenticated on either platform. Should the image storage area be accessible, only file-system time/date/modification metadata will indicate that a virtual machine image has been used or, worse, tampered with.

As neither hypervisor has a native repository, images must be stored in an area chosen by the administrator and would desirably be authenticated through external methods, such as MD5 hashing, rudimentary checksums, or other ways to validate image contents. VMware does embed an ID number in the image contents, but for enumeration, not authentication, purposes. As both ESX and Hyper-V produce images in formats that are easily mountable file systems, hackers with even rudimentary skills and file-system access can tamper with images. This cries out for at least a minimal image-repository scheme, one that records authentication hashes or data, to be included even in a basic bundle.
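The external hashing workaround mentioned above can be sketched in a few lines: record a digest for each image at publish time, then verify before deployment. This is our own illustrative scheme, not a vendor tool; the file names are invented, and we use MD5 only because the text mentions it (a stronger hash such as SHA-256 would be preferable in practice).

```python
import hashlib
import json
from pathlib import Path

def digest(path: Path) -> str:
    """Hash an image file in chunks so large VHD/VMDK files fit in memory."""
    h = hashlib.md5()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def register(images: list, manifest: Path) -> None:
    """Record a digest for every image in a manifest file (the 'repository')."""
    manifest.write_text(json.dumps({p.name: digest(p) for p in images}))

def verify(image: Path, manifest: Path) -> bool:
    """True only if the image still matches its recorded digest."""
    recorded = json.loads(manifest.read_text())
    return recorded.get(image.name) == digest(image)
```

Any edit to a mounted image changes its digest, so `verify` catches the tampering that file-system metadata alone would miss.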

We also found that ESX doesn’t police password strength in its strictly Windows-based VirtualCenter. If passwords are weak, access can be gained through dictionary password attacks.

Hyper-V when managed through SC-VMM is accessed through default or defined Active Directory passwords, which are by default strong and can be made stronger and/or with additional authentication schemes.

Third-party authentication devices are virtually ignored. Controlled access to both hypervisors is lacking, although the Windows 2008 Server that runs underneath Hyper-V has some authentication mechanisms in place. Still, no direct authentication for either Hyper-V or ESX exists.

VMware added a basic firewall to surround itself by default when we installed it. The Windows Firewall components built into Windows Server 2008 ostensibly protect Hyper-V VM guests, but we didn’t assault either product to see if we could crack them. We could fingerprint the VM guests if ports were open to do so, and therein lies an unexplored attack vector.


VMware’s long standing virtual history has given the ESX product ample time to mature to a very stable, usable product.

The dribbleware nature of the release of virtualization products from Microsoft — with Hyper-V, the Linux Interface Connector Kit (LinuxIC) and SC-VMM 2008 arriving six months, eight months and 10 months after Windows 2008 Server editions hit the streets — certainly won’t help with the rapid deployment of Hyper-V into environments where it will earn its chops. Microsoft’s development power is obvious, but the devil will be in the technical details as Microsoft plays catch up in the explosive virtualization marketplace.

Henderson and Allen are researchers for ExtremeLabs, of Indianapolis. Contact them at

Henderson is also a member of the Network World Lab Alliance, a cooperative of the premier reviewers in the network industry each bringing to bear years of practical experience on every review. For more Lab Alliance information, including what it takes to become a member, go to