Virtual Iron offers plausible VM hosting platform
Test shows product to be simple, stable
Virtual Iron’s Enterprise Edition V3 is a virtual machine hosting platform that couples its open source hosting foundation with tools to convert existing servers into VMs and subsequently manage them.
Virtual Iron’s roots lie in the same Xen paravirtualization project used in Citrix’s XenSource Enterprise and in the Xen shipped with Novell’s SUSE Linux Enterprise 10 (in fact, it OEMs the latter). Virtual Iron Enterprise’s own secret sauce comes in the form of tools for physical-to-virtual and virtual-to-virtual server conversion, and a management system that we found does a good job of managing entire VM farms.
On the downside, the current list of compatible hardware platforms for Virtual Iron is, for all practical purposes, confined to a short list of 64-bit Intel and AMD CPUs with hardware virtualization support. The company does offer a 32-bit version, but we did not test it because 32-bit memory limits make 32-bit platforms essentially obsolete for hosting virtual machines.
We ran Virtual Iron on a Dell PowerEdge 1950 server with a dual quad-core 1.86-GHz Intel VT-enabled CPU configuration and reasonable memory (at least 4GB).
Virtual Iron lives on a server that straddles two network segments: a required, isolated segment used for Preboot eXecution Environment (PXE) provisioning of physical hosts, and an accessible segment used to reach the management functions. The Virtual Iron server acts as the PXE boot server both for the physical servers acting as VM hosts and for the VM instances on those hosts. VM images can be stored and managed either on a host’s installed disks or on a managed storage-area network. The PXE boot process (think of it as a DHCP client boot with an executable provisioned payload) uses Trivial FTP (TFTP), which has no password authentication, to satisfy all PXE requests. TFTP is notoriously insecure, but that’s acceptable in this case because the network is required to be physically isolated.
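TFTP’s lack of authentication is visible right in its wire format. A minimal sketch of an RFC 1350 read request (the filename `pxelinux.0` is our illustrative example, not something Virtual Iron documents) shows there is simply no credential field anywhere in the packet:

```python
import struct

def tftp_rrq(filename: str, mode: str = "octet") -> bytes:
    """Build a TFTP read request (RRQ) packet per RFC 1350.

    Layout: 2-byte opcode (1 = RRQ), then the filename and transfer
    mode as NUL-terminated ASCII strings. There is no password or
    authentication field at all -- any client on the segment can
    request any file, which is why the PXE network must be
    physically isolated.
    """
    return (struct.pack("!H", 1)
            + filename.encode("ascii") + b"\x00"
            + mode.encode("ascii") + b"\x00")

# The datagram a PXE client would send to UDP port 69 on the boot server.
pkt = tftp_rrq("pxelinux.0")
```

Everything after the two-byte opcode is plain text, so isolation of the provisioning segment is the only real safeguard.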
Net Result
Vendor | Virtual Iron
Price | Ranges from $499 to $799 per CPU socket (includes PlateSpin LiveConvert software)
Pros | Open source Xen-based; offers solid availability |
Cons | Limited hardware compatibility; weak image control |
Virtual Iron uses a modified OEM SUSE Linux Enterprise Server (SLES) 10 SP1 kernel as its hypervisor hosting application (see discussion in the SLES 10 review). The hypervisor kernel (referred to as “domain0”) was installed on the Dell server host via the PXE boot server. From there, building out Virtual Iron’s guest operating system hosting architecture is a matter of installing operating systems (SUSE Linux, Windows XP, or Windows 2000/2003 Server editions) and the desired applications manually — or through a physical-to-virtual conversion process.
Physical-to-virtual migration uses a Virtual Iron application that gathers the application and operating system files into a virtual image, which is then copied from the source server to the desired VM host. In our experience this process took about two hours; other products we reviewed took less than half that time under the same conditions.
Virtual Iron administration, monitoring and provisioning capabilities come courtesy of a Web-based GUI or a Windows executable, either of which links to the Virtual Iron substrate through the management server. An application called Virtualization Manager performs license control; puts, gets and deletes files on the host; and performs full system backup/restore of the host it’s logged on to.
Virtualization Manager also serves as the ‘business end’ of Virtual Iron, performing access control (if LDAP is enabled or if users are entered into its own user account pool), server discovery and configuration, and network/virtual LAN configuration. It also monitors performance and manages and reports on jobs (rebooting, new user additions and other environmental changes) for the host and guest operating system environments.
The Virtualization Manager found the server we used for testing correctly, but that wasn’t tough to do. We were able to easily control guest VMs, whether Windows 2003 Server, Windows XP or Linux CentOS editions: starting them, stopping them, and checking their CPU utilization, disk use and network traffic speeds. We could also set CPU use ceilings and allocate disk and network resources simply. Objects can be built so that groups of VM characteristics can be monitored and manipulated, and objects can be aggregated for manipulation, to, say, reboot all VMs at a specific date and time. We occasionally managed to lock up the user interface, but it always restarted readily. We set up Virtual Iron through Virtualization Manager using CentOS 4 and Windows 2003 Enterprise Server Edition as guest operating systems. In our testing, crashing one hosted operating system instance had no bearing on the stability of the others.
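The object-aggregation idea can be pictured as a group that fans one operation out to every member VM. This is a generic sketch in our own terms, not Virtualization Manager’s actual API:

```python
from dataclasses import dataclass, field

@dataclass
class VM:
    """Stand-in for a managed guest VM."""
    name: str
    running: bool = False

    def reboot(self):
        # Placeholder for the real stop/start cycle the manager issues.
        self.running = True

@dataclass
class VMGroup:
    """Aggregate VMs so one administrative action applies to all members,
    e.g. a scheduled reboot of every guest on a host."""
    members: list = field(default_factory=list)

    def reboot_all(self):
        for vm in self.members:
            vm.reboot()

group = VMGroup([VM("centos4"), VM("win2003")])
group.reboot_all()  # every member comes back up
```

The benefit is that the administrator manipulates one named object instead of touching each guest individually.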
Virtual Iron has another feature, called LiveCapacity, that moves a hosted VM from one hardware host to another when CPU utilization exceeds a threshold for an administrator-specified period of time; the destination must be an available host meeting administrator-defined criteria. Rules forbid using LiveCapacity to move a working VM, live, from an AMD-hosted Virtual Iron hypervisor to an Intel-based one, so we could not test this feature. We tested VM performance using CentOS as a guest operating system and LMBench3. Performance was equivalent to Citrix/XenSource Enterprise 4: disk I/O on an otherwise inactive host ran at 10.7MBps, slightly faster than Citrix/XenSource 4, while network I/O measurements were essentially identical. Our execution of the LMBench3 fork+execve test — a favored metric that measures complex task initiation in Unix-like operating systems — registered 309.2 microseconds for Virtual Iron vs. 299.8 microseconds for XenEnterprise 4.
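The LiveCapacity decision logic as described amounts to two checks: has CPU load stayed above the threshold long enough, and is there an eligible target host. The sketch below is our own illustration of that rule set (host names, fields and percentages are hypothetical, not Virtual Iron’s interfaces), including the restriction against cross-vendor AMD/Intel live moves:

```python
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    cpu_vendor: str      # "intel" or "amd"
    free_cpu_pct: float  # spare CPU capacity on this host

def should_migrate(samples, threshold_pct, sustain_samples):
    """True only if the last `sustain_samples` CPU readings all
    exceed the administrator-set threshold (a sustained overload,
    not a momentary spike)."""
    recent = samples[-sustain_samples:]
    return len(recent) == sustain_samples and all(s > threshold_pct for s in recent)

def pick_target(current, candidates, needed_pct):
    """Choose a destination host meeting the administrator's criteria.
    Cross-vendor live moves (AMD <-> Intel) are disallowed, mirroring
    Virtual Iron's rule."""
    for h in candidates:
        if h.cpu_vendor == current.cpu_vendor and h.free_cpu_pct >= needed_pct:
            return h
    return None

src = Host("dell1950-a", "intel", 5.0)
targets = [Host("amd-box", "amd", 80.0), Host("dell1950-b", "intel", 60.0)]
dest = None
if should_migrate([92, 95, 97, 96], threshold_pct=90, sustain_samples=3):
    dest = pick_target(src, targets, needed_pct=40.0)
    # dest is dell1950-b: same CPU vendor, enough headroom.
```

Note that the AMD host is skipped even though it has the most free capacity, which is exactly why a mixed-vendor lab cannot exercise the feature.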
Virtual Iron doesn’t have quite the third-party application support found in other VM hosting environments, as few third-party vendors are chasing Virtual Iron compatibility (PlateSpin and Leostream are exceptions; see test). This may change in 2008, as the contest shifts from VMware ESX to the hypervisor-based troika of Virtual Iron, XenEnterprise and the Xen/hypervisor-compatible Windows 2008 Enterprise virtual machine platform. Several manageability items found in other VM hosting platforms are missing from Virtual Iron’s repertoire, such as the ability to articulate user controls (left to the auspices of the guest operating systems) and guest operating system image authentication (leaving open the possibility of image forgery). Additionally, it was slow to make physical-to-virtual images, as well as copies of running images for use as "gold masters." These criticisms aside, we found Virtual Iron’s day-to-day use to be simple and reliable — two very good qualities in a VM hosting scheme.
(Compare physical and virtual server management tools in our revamped Server Management Buyer's Guide.)
Henderson is principal researcher and Rand Dvorak is a researcher for ExtremeLabs in Indianapolis. They can be reached at thenderson@extremelabs.com.
Henderson is also a member of the Network World Lab Alliance, a cooperative of the premier reviewers in the network industry each bringing to bear years of practical experience on every review. For more Lab Alliance information, including what it takes to become a member, go to www.networkworld.com/alliance.
Copyright © 2007 IDG Communications, Inc.