Network World - In all rounds of hypervisor testing, we used an HP DL580 server as our primary test system. This server was equipped with an HP Smart Array controller, four Intel Xeon CPUs/sockets for a total of 16 cores (each running at 2.93GHz), and 32GB of RAM. Each tested hypervisor was allocated a 146GB logical drive (2 x 73GB SAS drives in RAID 0). We installed each hypervisor separately, then installed the guest OS instances: once for Windows Server 2008 VM testing and once for SLES 10.2 VM testing.
After each guest was installed, we set up each guest VM instance with the benchmarking tools and downloaded the latest updates. We then shut down these guest images and made six copies of each instance using either the GUI or command-line functions provided by each vendor.
We tested performance using two benchmarks: SPECjbb2005 for business application performance, and Intel's Iometer for disk I/O performance.
We ran SPECjbb2005 on each operating system natively to get a baseline measurement. We then ran a script in each VM guest instance to launch SPECjbb2005 concurrently; this way, the runs all started within a second of one another. We used a memory allocation process to give each instance the same amount of allocated user memory space.
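The concurrent-launch step can be sketched as a small shell script. The guest names, the SSH transport, and the SPECjbb2005 invocation below are assumptions for illustration, not the actual test harness.

```shell
#!/bin/sh
# Start the benchmark in all six guests nearly simultaneously by
# backgrounding each remote launch, then wait for every run to exit.
# Guest names and the benchmark command are hypothetical.
GUESTS="vm1 vm2 vm3 vm4 vm5 vm6"
for g in $GUESTS; do
  ssh -o BatchMode=yes "$g" "cd /opt/SPECjbb2005 && ./run.sh" &
done
wait   # all launches happen within about a second of one another
echo "dispatched $(echo "$GUESTS" | wc -w | tr -d ' ') concurrent runs"
```

Backgrounding each remote command before a single `wait` is what keeps the start times tightly clustered.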
We used Iometer in pre-compiled binary form to measure the disk channel/disk subsystem (an internal HP Smart Array). To tax the disk channel, we used a tougher-than-real-world mix of disk channel reads and writes: 30% reads and 70% writes.
First we gathered Iometer results for each operating system running natively on the hardware platform. To test with multiple VMs in place, we connected the server to a machine running Windows XP and the Iometer console. Each dynamo (the Iometer worker code) in each VM instance under test was connected to that console. We then ran Iometer through two test sequences: the first comprised six VMs, each using one vCPU; the second comprised six VMs, each using four vCPUs.
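Wiring the per-guest workers to the central console looks roughly like this; the `-i` and `-m` flags follow Intel's documented dynamo usage, but every address below is a made-up example.

```shell
# Hypothetical sketch: the command each guest runs to attach its dynamo
# worker (the Iometer worker process) to the central console.
# All IP addresses are made-up examples.
IOMETER_CONSOLE=192.168.1.10          # Windows XP box running Iometer.exe
for VM_IP in 192.168.1.21 192.168.1.22 192.168.1.23 \
             192.168.1.24 192.168.1.25 192.168.1.26; do
  # -i points the worker at the console host; -m names the worker itself.
  echo "in guest $VM_IP: dynamo -i $IOMETER_CONSOLE -m $VM_IP"
done
```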
For all results reported, individual VM instances recorded consistent numbers (within 3% of each other on the same hypervisor platform with the same guest operating system). The only exception was Citrix XenServer's VMs, which we explain in detail in the performance results article.
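A consistency check like the one described (each VM's score within 3% of its peers) can be sketched with awk; the six scores below are placeholders, not measured results.

```shell
# Flag any per-VM score deviating more than 3% from the mean of the group.
# The scores are placeholder values for illustration only.
scores="41250 40980 41100 41500 40870 41300"
echo "$scores" | awk '{
  n = NF; sum = 0
  for (i = 1; i <= n; i++) sum += $i
  mean = sum / n
  for (i = 1; i <= n; i++) {
    d = ($i - mean) / mean
    if (d < 0) d = -d
    printf "%s: %.2f%% from mean%s\n", $i, d * 100, (d > 0.03 ? " OUTLIER" : "")
  }
}'
```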
For qualitative analysis of each hypervisor, we used the same host platform, an HP DL580 G5 (four-socket, 16-core Intel Xeon CPUs) server.
We tested the importation, migration and setup of virtual machines using both native hypervisor management applications and
add-on tools provided by each vendor.
We used an IBM x3550 (two-socket, eight-core Intel VT-enabled Xeon CPUs) and Dell 1950s (two-socket, eight-core Intel VT-enabled CPUs) with local SAS storage as base platforms to test both single-kernel and SMP-kernel migrations as well as cloning applications.
We tested for iSCSI connectivity, as well as remote mounts using NFS.
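On Linux guests and hosts, those storage checks correspond to standard client commands (open-iscsi and the NFS client); the portal address, target IQN, and export path below are made-up examples, not the lab's actual configuration.

```shell
# iSCSI: discover targets on a portal, then log in (open-iscsi tools).
# All addresses, IQNs, and paths are hypothetical.
iscsiadm -m discovery -t sendtargets -p 192.168.1.50
iscsiadm -m node -T iqn.2008-01.com.example:vmstore -p 192.168.1.50 --login
# NFS: remote-mount an export to hold VM storage.
mkdir -p /mnt/vmstore
mount -t nfs 192.168.1.51:/export/vmstore /mnt/vmstore
```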
We then monitored virtual machines as they started up, operated continuously and shut down, using all tools provided by the participating vendors. We looked at reporting, alarms/alarm messaging (if available), how hypervisors could be accessed for security purposes, and innate firewalling. We also checked administrative policies and authentication methods.