The virtual winner: VMware's ESX KOs a roughly built Hyper-V package

VMware wins due to manageability and the stability that comes with maturity

When the dust settled in the lab after two long months of testing Microsoft's Hyper-V and VMware's ESX in the areas of performance, compatibility, management, and security, it all boiled down to two issues: experience and religion.

VMware ESX took home our Clear Choice award because, in our performance and qualitative analysis of each vendor's hypervisor and first tier of management tools, it showed depth and maturity, while Microsoft's Hyper-V components were both very Windows-focused and very rough.

Performance, as reported earlier this month, heavily favored VMware, although Hyper-V edged out ESX in a few contests.

On the compatibility front, Hyper-V's early lead in the number of supported hardware platforms (a function of the widespread support for Windows Server 2008 itself) is completely offset by a dearth of support for non-Windows virtual machine (VM) operating systems. While VMware's supported hardware list is shorter, its support for a comparatively vast number of operating systems made us cheer (see compatibility story).


VMware's VirtualCenter management platform is also mature and straightforward in the way an administrator can use it to control the resident VMs on a VMware host. VMware's Virtual Infrastructure Client (VIC) is the administrative user interface to the VirtualCenter platform.

Microsoft's System Center Virtual Machine Manager (SC-VMM) 2008 (we tested a very late beta version that Microsoft guaranteed was feature complete) works with very strong ties to the underlying Active Directory and has an interface that fits right into Microsoft's System Center scheme, so administrators won't have to work hard to understand how it works. That said, everything from standard management tasks, such as viewing simple settings for a VM host, to much-touted advanced features, such as the ability to migrate ESX VMs to Hyper-V, caused SC-VMM to crash repeatedly during testing.

Microsoft, with its System Center Virtual Machine Manager 2008 software, provides a centralized console for viewing performance parameters of all Hyper-V host servers on the network.

In terms of the security options for these hypervisor environments, we found that both vendors need to beef up their authentication protection schemes and provide a designated, secure store for VM images.

You can certainly dress up either of these virtualization platforms with a plethora of add-ins that cover everything from eye-catching GUIs to fast-tracking for priority applications to special-interest pork for favored hardware platforms. These options could effectively be combined to be all things to all people, but we had to select the components we tested to get an even comparison.

Our line in the sand here was to select the basic bundle – comprising both the hypervisor itself and the management tools needed to build, execute, monitor and maintain a production virtual machine infrastructure.

Our test combinations were Microsoft’s Hyper-V using SC-VMM 2008 vs. VMware’s ESX Infrastructure Foundation package. We added only one option to the VMware foundation, VirtualCenter for ESX, which like SC-VMM is a starter kit for managing multiple virtualized host platforms. These additional software elements make the two hypervisor platforms equivalent.

Although we only very rarely test non-production software, we chose to use the SC-VMM beta (Build 0991.1) in our Hyper-V testing because it was close to public release and Microsoft contended it was feature complete and on target for a September release. Microsoft has since missed that release date and is now saying SC-VMM won't even be released to manufacturing until the end of October. We'll likely take another look at the shipping code and compare it with what we found in this initial round of testing. What we found was that SC-VMM crashed frequently and hard, and imposed a lot of configuration limitations that aren't supposed to be in the final production product.

The tools of the VM management trade

Because virtualization is usually part of a server consolidation project, rapid VM instance generation, movement, monitoring and trouble assessment can be critical as a single server usually represents many production processes.  

We built dual Hyper-V and ESX servers to gauge how each hypervisor design could handle both hosting new and consolidated virtualized operating system and application instances. We assessed the system’s flexibility in creating new VM guests, tested the primary tools that do the heavy lifting when moving discrete physical servers to virtual servers in a process known as P2V, and reviewed how the provided tools helped in ongoing management of all guests.

In terms of ongoing monitoring capabilities, we took into account the depth of characteristics each product could track and how those were communicated in the form of logs and reports. We also assessed the flexibility of the VM security choices.

VM management tools need to perform at least four basic functions: managing which drivers are to be used, updated or deleted for the corresponding hardware connections to the hypervisor; allocating and building virtual machine spaces for guests; monitoring both ongoing characteristics (CPU, disk space, I/O) and alarm events; and handling the loading, unloading and backing up of discrete VMs.
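
To make the monitoring piece concrete, here is a minimal Python sketch using the pyVmomi vSphere SDK; this is our illustration rather than the tooling we tested, and the server name and credentials are hypothetical. It lists each VM a VirtualCenter server knows about, along with its current CPU and memory consumption:

    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Hypothetical VirtualCenter server and credentials; depending on the
    # pyVmomi version, an SSL context may also need to be supplied.
    si = SmartConnect(host="vc.example.com", user="administrator", pwd="secret")
    content = si.RetrieveContent()

    # Walk every VM in the inventory, regardless of folder.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        stats = vm.summary.quickStats
        print(vm.summary.config.name,
              stats.overallCpuUsage, "MHz,",   # current CPU consumption
              stats.guestMemoryUsage, "MB")    # active guest memory
    Disconnect(si)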

Microsoft's SC-VMM assists in controlling Hyper-V guests from remote (non-virtual-server-host) locations. Hyper-V's GUI rides on Windows (of course) and connects to the SC-VMM 2008 administrative engine, which runs on the same machine as a Microsoft Active Directory domain controller and a version of Microsoft SQL Server. SC-VMM installs an agent on each Hyper-V virtual machine it manages.

VMware's ESX and its hosted VMs are monitored and manipulated by VirtualCenter, which runs as a background Windows application either on the virtualized server or on another Windows machine connected to it. VirtualCenter requires that SQL Server Express Edition be installed to function properly as its management data store, and an agent must be installed on each ESX server.

Both SC-VMM and VirtualCenter perform the aforementioned management missions to varying degrees of success.

Microsoft, as we mentioned several times in our performance discussion, offers up a free Linux Interface Connector that has three components (CPU/memory, I/O drivers and keyboard/mouse) to speed SUSE Linux 10.1/10.2 VMs.

VMware's Infrastructure Client component -- backed by the VirtualCenter management engine -- allowed us during testing to easily build and monitor VM guests across multiple server hosts.

ESX also has an optional add-in called VMTools that, like LinuxIC, adds network and block device drivers, as well as faster graphics translation, to a VMware ESX guest operating system (there are versions for both Linux and Windows).

With Hyper-V, when controlled by SC-VMM, the admin can remotely turn a VM guest on or off or have it shut down gracefully. You are also supposed to be able to manage user access to VM resources through Active Directory, controlling which users can access which virtual machines. You can, of course, limit what they do, such as start/stop machines, pause/resume, make checkpoints, remove machines, act as local admin for machines they created, create new VMs and more. The feature certainly wasn't camera-ready when we tested it, as it crashed the SC-VMM application repeatedly. SC-VMM also drives the importation of VM images and is supposed to be able to import even ESX virtual machines into Hyper-V, but that didn't work in our SC-VMM beta code. On the same cross-platform note, the equivalent functionality in ESX, importation of Hyper-V images, didn't work either. Neither vendor was awarded points for cannibalizing a competitor's images.
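
For a sense of what sits underneath these power controls on the Microsoft side, here is a short Python sketch, again our own illustration rather than anything SC-VMM ships, that uses Hyper-V's WMI virtualization namespace via the third-party wmi package to enumerate guests and start a hypothetically named one:

    import wmi  # third-party 'wmi' package; run on the Hyper-V host itself

    # Hyper-V publishes its management classes in root\virtualization.
    conn = wmi.WMI(namespace=r"root\virtualization")

    # The Caption filter skips the Msvm_ComputerSystem record for the host.
    for guest in conn.Msvm_ComputerSystem(Caption="Virtual Machine"):
        print(guest.ElementName, guest.EnabledState)
        if guest.ElementName == "guest01":                 # hypothetical name
            guest.RequestStateChange(RequestedState=2)     # 2 = start, 3 = off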

VMware's VirtualCenter can do many of the same things mentioned above (turn machines on and off, shut them down, reset them). We were also able to create template images to be used as a base for creating images later, or clone a VM while it's turned off, and if VMotion, an option, is licensed, it's also possible to migrate a VM between two hosts (using shared storage). We were also able to assign permissions to each VM, setting up different users and groups (via Active Directory or local users) that can access that VM or group of VMs.
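
A hedged sketch of some of those operations through the pyVmomi SDK introduced above (the clone name is a placeholder, and 'vm' is a vim.VirtualMachine object found as in the earlier listing sketch):

    from pyVim.task import WaitForTask
    from pyVmomi import vim

    # Power controls: hard off, back on, or a graceful shutdown
    # (the latter requires VMTools in the guest).
    WaitForTask(vm.PowerOffVM_Task())
    # vm.PowerOnVM_Task()
    # vm.ShutdownGuest()

    # Clone the powered-off VM into its own folder under a placeholder name;
    # setting template=True in the spec produces a template instead.
    spec = vim.vm.CloneSpec(location=vim.vm.RelocateSpec(), powerOn=False)
    WaitForTask(vm.Clone(folder=vm.parent, name="guest01-clone", spec=spec))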

Another thing you can do with VirtualCenter is set up what is called a resource pool, which makes it easier to divide resources among multiple VMs. For example, say you have six VMs, and you would like two of them to use 60% of all resources on that system and the other four to share the remaining 40%. You can create two resource pools and assign each VM to one of the two pools. This way you don't need to worry about assigning resources to each individual VM.
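
A sketch of that 60/40 split in pyVmomi terms; the pool names and the 6000:4000 share values are our illustration (the ratio, not the absolute numbers, is what matters):

    from pyVmomi import vim

    def pool_spec(shares):
        def alloc():
            # Open-ended reservation and limit; weighting comes purely
            # from the custom share values.
            return vim.ResourceAllocationInfo(
                reservation=0, limit=-1, expandableReservation=True,
                shares=vim.SharesInfo(level=vim.SharesInfo.Level.custom,
                                      shares=shares))
        return vim.ResourceConfigSpec(cpuAllocation=alloc(),
                                      memoryAllocation=alloc())

    # 'root' is the host's root vim.ResourcePool from the inventory.
    high = root.CreateResourcePool(name="high-priority", spec=pool_spec(6000))
    low = root.CreateResourcePool(name="low-priority", spec=pool_spec(4000))

    # Drop two hypothetical VM objects into the bigger pool.
    high.MoveIntoResourcePool(list=[vm1, vm2])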

Building a virtual host

We used several steps after installation to prepare virtual guest slots on our Hyper-V and ESX hosts, then populated them to emulate server migration and consolidation processes. Once either hypervisor was installed, we could generate guest instances that served as holding spots for the operating system/application instances on physical servers that we wanted to migrate to our host servers.

Both Hyper-V and ESX allowed us to install guest instances without the aid of the SC-VMM and VirtualCenter tools, respectively, and then either install a pre-made VM instance, install an operating system from CD/DVD, or install from a network source/share. That said, the added management tools can be helpful in this process if in use, serving as a user interface to the hypervisor in question. Both tools eased common VM instance management tasks, such as creating, duplicating, copying, and allocating and re-allocating resources.
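
As an illustration of what a bare guest slot amounts to when created programmatically, here is a minimal pyVmomi sketch (the datastore, guest name and guest OS identifier are placeholders, and 'datacenter' and 'pool' are assumed to have been looked up from the inventory beforehand):

    from pyVim.task import WaitForTask
    from pyVmomi import vim

    # An empty guest definition: one vCPU, 1GB of RAM, no disks or NICs yet.
    cfg = vim.vm.ConfigSpec(
        name="guest01",
        numCPUs=1,
        memoryMB=1024,
        guestId="winNetStandardGuest",   # Windows Server 2003 Standard
        files=vim.vm.FileInfo(vmPathName="[datastore1] guest01"))

    WaitForTask(datacenter.vmFolder.CreateVM_Task(config=cfg, pool=pool))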

For currently existing operating system/application pairings that need to be migrated to a virtual host, each hypervisor tested has a similar procedure to capture a server instance and import it into a virtual guest slot that we’d prepared.

This process of copying a current physical server to a target server is known as cloning. There are two primary physical-to-virtual (P2V) cloning methods that both hypervisor products support: migrating from a disk image, and cloning from a live production server.

Unfortunately, Microsoft's P2V process couldn't be tested because this portion of the beta application crashed despite lots of patching, intricate settings tweaks, and calls to advanced technical support. It’s not ready yet.

VMware's P2V application is an optional extra called VMware Converter, and when we tested it, it worked well in most cases, as long as the hard disk controller was supported. It worked best with Windows, where we could produce live clones from Windows XP and Windows Server 2003 images. Cold cloning Linux and Windows Server 2008 VMs required some extra setup steps after the image was copied.

Images of working virtual machines can then be used as the basis of replicas for other VM guests. The images are in known formats and can be mounted as file systems for the purpose of manipulating the contained files and folders. Hyper-V uses a cross-Windows file format called VHD, and ESX uses a published format called VMDK.
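
Because the formats are published, an image can be attached and inspected outside either vendor's toolchain. A hedged sketch on a Linux workstation using qemu's qemu-nbd utility, which is not something either vendor ships, and with a hypothetical image path:

    import subprocess

    def run(*cmd):
        subprocess.run(cmd, check=True)

    run("modprobe", "nbd", "max_part=8")             # expose image partitions
    run("qemu-nbd", "--connect=/dev/nbd0", "/images/guest01.vmdk")
    run("mount", "-o", "ro", "/dev/nbd0p1", "/mnt/guest01")  # read-only look
    # ... inspect or copy files under /mnt/guest01 ...
    run("umount", "/mnt/guest01")
    run("qemu-nbd", "--disconnect", "/dev/nbd0")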

Some organizations use virtualized images for distribution, and images may need to be customized to make each image unique (generally a Windows requirement for identification) or to load specific software combinations as a payload for targeted distribution of the virtualized physical hardware instances to other locations.

With both products we found that mounting and editing the images can be simple, but doing so also carries the security risks we talk about in detail below.

Migrating images

Migrating VMs from one server host to another happens for a variety of reasons, ranging from load balancing to application aggregation.

Migrations for our direct comparisons here revolve around taking snapshots of existing working VM guests and then moving these images to new target server hypervisor hosts.

VMware offers an optional live-migration tool called VMotion. Our prior experience with VMotion is that it can move images within seconds from one server hypervisor to another. Microsoft recently announced that a similar capability for Hyper-V won't be available until 2010, a serious deficiency had we included live migration in this direct comparison.
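
In pyVmomi terms, a VMotion boils down to a single migrate call against a running VM; the target host object is hypothetical, and shared storage plus a VMotion license are assumed:

    from pyVim.task import WaitForTask
    from pyVmomi import vim

    # Live-migrate 'vm' to 'target_host' without changing datastores.
    WaitForTask(vm.Migrate(
        pool=None, host=target_host,
        priority=vim.VirtualMachine.MovePriority.defaultPriority))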

By using snapshots under Hyper-V, we were able to capture live system-state data on either Windows Server 2008 or Novell's SUSE Linux Enterprise Server 10.2 VMs.
