Abiquo: We installed their virtual appliance from ISO in two VMs on an ESXi server. On the first VM, we selected the Abiquo Server, Abiquo Remote Services, and Abiquo V2V Conversion Services for installation.
On the second VM, we installed the iSCSI functionality. Note that most of the installation was done with Abiquo support in a kind of guided setup. Once the VMs were ready for use, we set up DHCP on the first VM over ssh, re-branded the Web GUI with files Abiquo provided (also over ssh), and began building our infrastructure by logging into the Web interface.
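The DHCP step is an ordinary ISC dhcpd configuration applied over ssh on the first VM; a minimal sketch might look like the following (the subnet, range, and router addresses are placeholders, not the values we actually used):

```shell
# Hypothetical sketch: enabling DHCP on the Abiquo Server VM (CentOS-style).
# All addresses below are placeholders.
cat > /etc/dhcpd.conf <<'EOF'
subnet 192.168.10.0 netmask 255.255.255.0 {
    range 192.168.10.100 192.168.10.200;   # pool handed out to new VMs
    option routers 192.168.10.1;
    option domain-name-servers 192.168.10.1;
}
EOF

service dhcpd start    # start the daemon now
chkconfig dhcpd on     # persist across reboots
```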
We added a XenServer host and two ESXi hosts through the GUI, then pulled some sample virtual appliances from Abiquo's public repository to test: a m0n0wall firewall and a few versions of Ubuntu (server and desktop). Next, we tested capturing some VMs we had previously set up on the XenServer. We also tried the remote viewing of our running instances (which requires Java), and we shared, rebooted, stopped, and started instances. Finally, we tested V2V conversion from XenServer to VMware. Almost everything we tested was available through the Flash-based Web GUI.
Symplify: We installed the CentOS 5.5 VM on an ESXi server and installed the SimpleLink rpm, credentials, and related files in order to connect with the Identity Router. We tested the admin portal to see how it worked, then added some user stores (our local Active Directory users and Google Apps users) and some apps: business-level Google Apps (via SAML) and our own WordPress site (via HTTP-Federation). We then logged into the portal and verified that we could successfully use our Web apps (we could) and that the policies worked (they did). We also deleted user credentials to see how fast Symplify updated, and found that it updates almost instantaneously.
HP CloudSystem Matrix: We tested CloudSystem Matrix on an HP blade server located in HP's Austin operations center. We connected via IE8 (Safari didn't work, due to VPN issues) to a system HP had set up for us, including numerous blades, a switch, and storage pool infrastructure. We set up the visual map of the infrastructure, then proceeded to deploy our designed infrastructure. We found a limitation of only one boot volume per storage pool, and had to reconfigure our SAN-based storage pools.
We set a goal of bringing up two ESXi servers. We ran into UI process problems and had to reconfigure using VMware's own controls. Some of the errors we encountered were self-inflicted, because the operation of the UI is non-obvious and the application doesn't check dependencies well. We then launched instances of Windows Server 2008 R2 and Red Hat Enterprise Linux 5, and successfully checked their status through the Matrix UI.
We also exercised the UI extensively and tried to learn its navigation, which we found more difficult than that of the other products we've tested. The UI was occasionally very slow to update, even though we were connected to HP's data centers over fast broadband links.
Turnkey Linux Backup and Migration: We used the Turnkey Linux site to register, create a profile, and download the WordPress/TKLBAM-enabled ISO appliance image, then deployed it in our data center, which consists of a host for the appliance and an Extreme Networks 10Gbps switch, with storage provided by a Dell Compellent iSCSI SAN.
We made an alternate backup of our site, then followed the instructions to bring up the WordPress/TKLBAM-enabled appliance and restored our WordPress files successfully, as described. We monitored the backups and performed restores to ensure the process went smoothly.
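TKLBAM is driven from the command line inside the appliance, and the backup/restore cycle we followed boils down to a handful of commands; a sketch (the API key and backup ID shown are placeholders for the values the TurnKey Hub assigns):

```shell
# Link this appliance to a TurnKey Hub account
# (HUB_APIKEY is a placeholder for the key shown in the Hub profile)
tklbam-init HUB_APIKEY

# Take a backup of the WordPress appliance (files, database, package state)
tklbam-backup

# List backup records, then restore one by its ID (1 is illustrative)
tklbam-list
tklbam-restore 1
```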
PuppetLabs mCollective: We downloaded the mCollective toolkit and ran the client portions on a clean installation of Ubuntu 11.04 in a VM, accessed from its host, a MacBook Pro (Mac OS X 10.5). We used our account on Amazon's EC2 cloud to build a CentOS Linux instance, then loaded it with the mCollective apps, along with PuppetLabs' Facter application.
We then used the EC2 control panel to deploy an initial instance and tested it, along with the command set that STOMP uses via its Ruby gem. We then used mc commands to spin up instances, testing their states and the mc filter queries about the instances. We installed and spun up/down Apache httpd instances to see how fast the reaction time would be; the response was very fast. We put a randomized spin-up/spin-down sequence into a shell script and ran it to watch the output of the commands we used.
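Our randomized spin-up/spin-down script was only a few lines of shell; the sketch below reproduces its shape in dry-run form. It echoes mcollective-style commands rather than executing them, and the iteration count and httpd service name are illustrative, not taken from our actual script:

```shell
#!/bin/bash
# Dry-run sketch of a randomized spin-up/spin-down loop.
# Echoes each command instead of executing it; ITERATIONS and the
# service name are illustrative placeholders.

ITERATIONS=10

pick_action() {
    # Choose "start" or "stop" pseudo-randomly
    if [ $((RANDOM % 2)) -eq 1 ]; then
        echo start
    else
        echo stop
    fi
}

run_sequence() {
    for i in $(seq 1 "$ITERATIONS"); do
        action=$(pick_action)
        # The real script executed this command; here we just log it
        echo "mco rpc service $action service=httpd"
        # sleep 5   # the real script paused between operations
    done
}

run_sequence
```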
We then spun up as many as 40 instances, which represent the data cited in the review. Later, we sent other packages to the instances and had them run and report their states, then forced the instances to update themselves, checking progress through the sequences.
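The state checks and filter queries described above map onto standard mcollective client commands. Depending on the mcollective version these are either separate mc-* scripts or subcommands of the mco umbrella client; a few representative ones in mco form (the fact value is illustrative):

```shell
# Discover which instances respond
mco ping

# Filter instances by a Facter fact (value is illustrative)
mco find --with-fact operatingsystem=CentOS

# Report a fact's value across all instances
mco facts operatingsystem

# Check httpd state on the instances (requires the service agent)
mco rpc service status service=httpd
```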