Virtual Iron Xen offers top-notch security, policy controls

Philosophically speaking, Virtual Iron is different from the other hypervisors tested because it sets up a hypervisor server farm that is managed through a direct-control application over a private link. The Virtual Iron 4.4 Enterprise Edition we tested requires a separate physical machine that serves as the management server. That server, in turn, controls what Virtual Iron calls nodes: 64-bit hypervisor servers that host VMs. The VMs running on top of these nodes are still referred to as guests.

Virtual Iron uses a master/slave configuration in which the servers use Preboot Execution Environment (PXE) boot mechanisms for their initial program load and then become substrates for virtualization. This means that Virtual Iron slave servers have two networks: a public one that faces the world and a private one used for communication with the master, the machine where Virtual Iron's management application, the VI-Center console, runs.

Virtual Iron platform support has two considerations, one for the Virtual Iron VI-Center and the other for managed nodes.

The VI-Center must be installed on a machine running RHEL 4 (32- or 64-bit), Windows Server 2003 (32-bit) or SLES 9 (32- or 64-bit), all of which are older versions of these operating systems. To use VI-Center, we also needed Java 1.5.0 installed.

As for the managed nodes, you need at least 2GB of RAM, an Intel-VT or AMD-V processor, either SATA or SCSI drives, and at least two Ethernet ports. A full listing of the hardware supported can be found on Virtual Iron's Web site.
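
Because nodes won't run without hardware-assisted virtualization, it's worth confirming the CPU support before racking a box as a node. The short Python sketch below is a generic Linux check of our own devising, not a Virtual Iron utility; it simply looks for the vmx (Intel VT) or svm (AMD-V) flag in /proc/cpuinfo.

    # Minimal check for the hardware-assist requirement (assumes a Linux shell
    # on the candidate node). Intel VT appears as the "vmx" CPU flag, AMD-V as "svm".
    def has_hardware_virtualization(cpuinfo_path="/proc/cpuinfo"):
        with open(cpuinfo_path) as f:
            for line in f:
                if line.startswith("flags"):
                    flags = line.split(":", 1)[1].split()
                    return "vmx" in flags or "svm" in flags
        return False

    if __name__ == "__main__":
        print("Intel VT / AMD-V present:", has_hardware_virtualization())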

The guest operating systems supported include RHEL 3, 4 and 5 (32- and 64-bit); SLES 9 and 10 (32- and 64-bit); CentOS 4 and 5 (32- and 64-bit); Windows 2000 Server; Windows XP; Windows Server 2003 and 2008; and Windows Vista (32- and 64-bit). All must run fully virtualized because the Virtual Iron hypervisor does not yet support paravirtualization.

As with Citrix's XenServer, Virtual Iron's Java-based management tools are included with the license. Although we didn't run into as many configuration errors as we did in our testing of XenServer, we did have our share of difficulties using Virtual Iron's Java-based GUI.

To get the Virtual Iron installation off the ground, we had to create a data center, basically an object in which the nodes are virtually held and from which they are managed. In turn, the nodes boot via PXE, find the Java-based management server and take directions from it. You also use VI-Center to build and provision the new VMs that will reside on each node.

We attempted to set up shared storage between nodes but couldn't use NFS because it isn't supported, so we moved on to iSCSI connections. To set up iSCSI, we had to create a new network within the GUI and check the iSCSI box, which then takes up another server Ethernet port. Luckily, we could still use that same network link to connect our VMs to the Internet or the LAN, although the company doesn't support or recommend this because mixing LAN traffic and iSCSI storage traffic is likely to congest the port. Using Virtual Iron's recommended configuration, we occasionally lost iSCSI links.
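
When an iSCSI link dropped, the first question was whether the storage portal was reachable at all over the dedicated network. The sketch below is a generic connectivity probe of the sort we could have scripted ourselves, not part of Virtual Iron's tooling; the portal address is hypothetical, and 3260 is simply the default iSCSI target port.

    import socket

    ISCSI_PORT = 3260            # default iSCSI target port
    PORTAL = "192.168.10.50"     # hypothetical portal address on the storage network

    def iscsi_portal_reachable(host, port=ISCSI_PORT, timeout=3.0):
        """Return True if a TCP connection to the iSCSI portal succeeds."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    if __name__ == "__main__":
        print(f"{PORTAL}:{ISCSI_PORT} reachable:", iscsi_portal_reachable(PORTAL))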

We also tested an interesting and unique Wake-on-LAN feature for managing Virtual Iron nodes remotely. It worked quite well and proved useful for remote management tasks.
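
VI-Center drives this feature itself, but the mechanism underneath is the standard Wake-on-LAN magic packet: six 0xFF bytes followed by the target adapter's MAC address repeated 16 times, broadcast over UDP. The Python sketch below illustrates that packet format; the MAC address shown is hypothetical.

    import socket

    def send_magic_packet(mac, broadcast="255.255.255.255", port=9):
        """Broadcast a Wake-on-LAN magic packet: 6 x 0xFF followed by the MAC repeated 16 times."""
        mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
        if len(mac_bytes) != 6:
            raise ValueError("expected a 6-byte MAC address")
        payload = b"\xff" * 6 + mac_bytes * 16
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
            sock.sendto(payload, (broadcast, port))

    # Example with a hypothetical node MAC:
    # send_magic_packet("00:1a:2b:3c:4d:5e")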

Operating and monitoring VM guests

Once we had Virtual Iron 4.4 and VI-Center in place, we were able to move, copy and migrate VMs, but only in sequential operations. Each new cloning job had to wait until the previous one finished: VI-Center locks out other jobs while one is running. We saw this when we were creating storage components, building ISO images and starting VMs; VI-Center messages said, "This may not be combined with other job operations."

Cloning a VM didn't take very long, although the time varied with the size of the VM. We couldn't choose a name for a cloned server at cloning time, however (it just defaults to "Copy of VM . . ."), which seems odd. We had to rename images manually afterward.

Virtual Iron 4.4 supports a Live Migration feature as well. Once we set up our disk channels (iSCSI in our tests; Fibre Channel is also an option, but we didn't test it), we could drag and drop VMs between the nodes to live-migrate a VM from one Virtual Iron host to another. The GUI doesn't make it obvious or easy, but it works.

Snapshots included in the Virtual Iron package generally worked well in testing. We found a bug in the process, however: storing a snapshot and then reloading a VM from that stored snapshot changes the MAC address of the snapshot VM's Ethernet adapter. This sets off a cascade that affects SUSE Linux guests, which key their network configuration on MAC addresses, and it forced us to reconfigure a SLES 10.2 guest's network information. We reported this to Virtual Iron and were told it was a known bug.
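
The symptom is easy to confirm from inside a Linux guest: compare the adapter's current MAC address with the one recorded before the snapshot was taken. The sketch below reads the addresses from sysfs; the expected value is a hypothetical placeholder.

    import glob
    import os

    def current_macs():
        """Map each network interface to its MAC address, read from Linux sysfs."""
        macs = {}
        for path in glob.glob("/sys/class/net/*/address"):
            iface = os.path.basename(os.path.dirname(path))
            with open(path) as f:
                macs[iface] = f.read().strip()
        return macs

    # MAC recorded before the snapshot was taken (hypothetical placeholder value).
    EXPECTED = {"eth0": "00:16:3e:aa:bb:cc"}

    for iface, mac in current_macs().items():
        if iface in EXPECTED and mac != EXPECTED[iface]:
            print(f"{iface}: MAC changed from {EXPECTED[iface]} to {mac}")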

For ongoing VM management, we could use the VI-Center GUI to view dashboard-like information regarding the amount of VM-instance RAM being used, the CPU utilization, and the number of VMs started or stopped. This is similar to the level of monitoring offered by the other hypervisors.

Policies, which act like the other hypervisors' alarms but also offer corrective actions in some cases, are included in the Virtual Iron offering. There are a limited number of built-in policies of three basic types: user policies, reports and system policies. We could edit and customize these, but there doesn't seem to be a way to create a new policy.

Among the user policies are EmailNotifier, which sends you an e-mail when an event happens; and SystemBackup, which backs up the database. This backup policy came in handy a couple of times when the database became corrupted and we had to restore from a backup. The system policies include AutoRecovery (a feature that moves VMs to another node if their primary node goes down) and LiveCapacity (which moves VMs depending on resource usage). With reports, you get detailed information about events, jobs, nodes, or virtual disks or servers. You can customize these reports and save a copy.
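
The EmailNotifier policy handles alerting on its own, so the following is purely illustrative: a sketch of the kind of plain-text alert such a policy sends when an event fires. The SMTP host, addresses and event text are hypothetical.

    import smtplib
    from email.message import EmailMessage

    def notify(event, smtp_host="mail.example.com",
               sender="vi-center@example.com", recipient="admin@example.com"):
        """Send a plain-text alert for a management-server event (host and addresses are hypothetical)."""
        msg = EmailMessage()
        msg["Subject"] = f"Virtual Iron event: {event}"
        msg["From"] = sender
        msg["To"] = recipient
        msg.set_content(f"The management server reported: {event}")
        with smtplib.SMTP(smtp_host) as server:
            server.send_message(msg)

    # notify("AutoRecovery moved guests off an unreachable node")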

Virtual Iron let us log on to the hypervisors with Lightweight Directory Access Protocol (LDAP)-authenticated directory-services credentials; the hypervisor's security therefore is only as strong as the underlying directory service. We also could use administrator-added, Virtual Iron-specific users for tracking purposes, but there wasn't a good reason beyond logging to do so.
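
Virtual Iron's own code path here wasn't visible to us, but the principle is an ordinary LDAP simple bind: if the directory accepts the credentials, the management console does too, which is why the hypervisor inherits the directory's security posture. The sketch below shows that bind using the third-party ldap3 Python library; the directory host and DN are hypothetical.

    from ldap3 import Server, Connection

    def ldap_credentials_valid(user_dn, password, host="ldap://ldap.example.com"):
        """Attempt a simple bind; success means the directory accepted the credentials."""
        server = Server(host)
        conn = Connection(server, user=user_dn, password=password)
        ok = conn.bind()
        conn.unbind()
        return ok

    # Example with a hypothetical DN and directory host:
    # ldap_credentials_valid("cn=vmadmin,ou=ops,dc=example,dc=com", "secret")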

The Virtual Iron 4.4 Enterprise Edition we tested includes a license for Virtual Iron LiveConvert, an OEM version of Novell's PlateSpin P2V tool. To use it, we needed an extra server with Windows Server 2003 (the platform we tested on) or Windows 2000 Server installed that could host Microsoft SQL Server, which LiveConvert uses. In our testing we could convert only Windows XP machines, because Linux and Windows Server 2008 aren't supported yet.
