If you're thinking about operating a private cloud, you'll need management software to help create a virtualized pool of compute resources, provide access to end users, and handle security, resource allocation, tracking and billing.
We tested five private cloud management products -- Novell's Cloud Manager, Eucalyptus Enterprise, OpenNebula, Citrix Lab Manager, and Cloud.com's CloudStack -- to see if the current generation of tools is up to the task. We found that Novell's Cloud Manager was the only product that had all of the features we were looking for. Therefore, Cloud Manager is our Clear Choice Test winner. We were frustrated by some of the other products, and a couple are not yet ready for prime time.
As with any discussion of cloud computing, the first step is to provide definitions. In this test, we're building and delivering infrastructure-as-a-service (IaaS) inside the corporate firewall.
And while it's certainly possible to run e-mail and line-of-business applications in a private cloud, our testing focused on scenarios where developers or end users are able to select a finite set of resources (hardware, licenses, applications) for non-persistent, short life-cycle jobs.
We designed our test to see how approachable the private cloud management application was to both IT administrators and users — especially if the users weren't technical systems personnel or developers.
We also looked at each management program's ability to assemble a variety of resources, support heterogeneous virtual machines, do all of this securely, and report on what it has done and what it all costs.
Here are the individual product reviews:
Novell Cloud Manager 1.0
Novell's Cloud Manager (CM) controls internal assets in much the same way that public cloud service providers do, but with most of the rough edges removed and the rest highly automated.
Cloud Manager allows private cloud builders to identify hardware assets, bring together resource pools on virtualized servers, package applications, then bill and track usage through Active Directory and LDAP security models.
As with all of the products tested, there's considerable preparation work needed to allocate hardware and software resources, group them into identifiable components, then permit them to be accessed and tracked through the life cycle of the production phase.
When finally built, Novell Cloud Manager had the most mature way of managing, provisioning and accounting for cloud resources, with the added benefit of having resources that could be readily manipulated by end users.
There are two principal control components: the Cloud Manager Application Server, and the Cloud Manager Orchestration Server, which we installed into a VMware 4.1 environment using one SUSE 11 virtual machine for each service.
Initial provisioning also required building virtual machines to serve as a library for access through CM Orchestration Server (CMOS) to get work done. CMOS contains components (from Novell's acquisition of PlateSpin) that build customized VM instances.
And we installed a CMOS agent onto VMware's vCenter to connect the Novell bits with VMware. Novell Cloud Manager also works with bare metal hypervisors Xen and Hyper-V, but we didn't test these.
Once configured (not a tough process), Cloud Manager allowed us to expose cloud resources and put limits on them. Users authenticate through Active Directory or LDAP directory services, and use preconfigured templates to cost, deploy, and 'life-cycle' cloud components.
The components are virtual machines, optionally configured with pre-installed, pre-configured applications and with specific VM characteristics such as vCPUs, storage, memory and IP addresses. Settings can be locked down, or can allow changes such as storage size/location or memory increases.
Templates must be available to users on an NFS share mounted by vCenter as storage. We tried both Windows Server and SUSE Linux Enterprise Server VMs and had no difficulties launching them and their payloads.
Novell Cloud Manager's billing/accounting components add a Managed Services Provider (MSP) flavor and set it apart from the other packages we reviewed. Each workload can show how much it will cost per month based on rates set up by the administrator; rates can cover resources such as storage (per gigabyte), vCPUs, memory (per megabyte) and network bandwidth.
For example, we could set $3 per vCPU per month. (Everything is billed monthly, not hourly.) Various business reports can also be generated in CSV, PDF or Excel formats. Deployed resources can then be tracked and billed just as though users were buying and deploying public cloud resources from an MSP or public cloud vendor.
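As a back-of-the-envelope sketch of how these per-resource rates roll up (the rates and workload sizes below are hypothetical examples, not Novell defaults), a workload's monthly charge is simply each metered resource multiplied by its administrator-set rate:

```shell
# Hypothetical administrator-set rates (Cloud Manager bills monthly, not hourly).
VCPU_RATE=3        # dollars per vCPU per month
MEM_RATE_MB=1      # cents per MB of memory per month
STORAGE_RATE_GB=10 # cents per GB of storage per month

# Hypothetical workload configuration.
VCPUS=2
MEM_MB=2048
STORAGE_GB=40

# Total monthly cost, computed in cents to avoid floating point.
COST_CENTS=$(( VCPUS * VCPU_RATE * 100 + MEM_MB * MEM_RATE_MB + STORAGE_GB * STORAGE_RATE_GB ))
echo "Estimated monthly cost: \$$(( COST_CENTS / 100 )).$(printf '%02d' $(( COST_CENTS % 100 )))"
```

Cloud Manager shows the user an equivalent figure before the workload is deployed, which is what makes the up-front costing useful.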
CM offered us the most complete picture of private cloud management. The documentation was usable, and required reading, since you're essentially building a cloud bank from scratch, then offering it for paid-for production services.
Citrix Lab Manager 3.9 with self-service portal
The Citrix self-service portal isn't a freestanding product; it's part of Citrix XenServer Platinum Edition coupled with Citrix Lab Manager, a resource manager and control system specifically for XenServer VMs.
XenServer has the capacity to turn Lab Manager components into a cloud provisioning system via roles defined within the self-service portal.
Unlike Novell's Cloud Manager, the self-service portal (SSP) is XenServer-specific, but works with Linux and Windows virtual machines—which can be easily loaded with applications and environmental specs (CPU, memory, etc.) for use in this library-like system.
SSP tracks a lot of information in reports but doesn't provide the components necessary to bill, which must be handled outside of the package.
XenServer 5.6 Platinum Edition is required, and it's licensed per server no matter how many cores the server contains. We installed XenServer and set up a licensing server to assign the Platinum license. We then imported a Lab Manager router VM template (which needs to be located on shared storage) and installed a PostgreSQL database and Lab Manager on it. A few configuration steps later, we brought it all up, although we experienced a few headaches with the Linux licensing virtual appliance.
Cloud VMs to be used with the SSP were then created from ISO operating system images stored on our server. Once these were created, we needed to install the guest agents for Lab Manager, and both Windows and SLES VMs worked fine.
Inside the SSP, we assigned templates from examples that can be used to define what roles admins and users will have, and what VMs and resources they can deploy.
Lab Manager and SSP can connect to Active Directory or LDAP (we used AD) for user authentication, and user roles inside Lab Manager are highly detailed. Users can be given quotas and limits on memory and disk space. Oddly, the Lab Manager virtual router counts toward these limits.
In testing SSP, we launched various Windows and Linux VMs successfully and simply under user roles. The VMs could contain preconfigured applications, or just be 'raw' operating system instances. SSP gets points for ease of use; even enlightened civilians can approach it successfully.
Inside of Lab Manager are billing reports that are quite detailed and can include lots of information on the job name, start time, end time, RAM used, storage used, etc. These can be used to create billing information, but there is no cost associated with these fields and, as mentioned, costing/billing is outside of the scope of the SSP.
Cloud.com's CloudStack 2.1.3
More Spartan and less flexible is Cloud.com's CloudStack. CloudStack uses a management server app that can run in a VM or on a physical box running Red Hat Enterprise Linux or CentOS 5.4+ 64-bit editions. Although utilitarian, CloudStack serves as a library or repository of VM images that can be deployed in a cloud or cloud-like configuration.
There's a lot of preparation needed before deployment. Once you create a VM, you can't change its number of CPUs, CPU allocation, memory or disk size; the VM configuration is locked. This means a large number of possible VMs must be well thought through to cover potential configuration requirements, and built prior to deployment.
We tested CloudStack with Citrix XenServer. Although the Web site says KVM and VMware are supported, we could find no documentation for this and were subsequently told that support will come in the 2.2 release, which is going into beta around now. Storage for CloudStack works with NFS and iSCSI, 100GB minimum. And at least one NFS share is required for secondary storage. Then either iSCSI or another NFS storage system is needed for primary storage.
We had some issues with CloudStack. As we mentioned, once you create a VM it's very difficult to alter it. If you want something different, you'll have to delete it and create it again, which we found annoying. It's possible to go into the MySQL database that keeps track of Cloud.com configurations, but Cloud.com doesn't supply a schema for the database, and the updates are both manual and tricky. Support for database access will be released soon, according to a Cloud.com spokesperson.
We tried to create a new VM instance twice, but failed both times because there weren't enough IP addresses available. The user interface told us we had three VMs available for a certain account, but only one was actually there.
The defaults in the user interface are often outside a 'normal' range. For example, a "destroyed" VM isn't deleted for 24 hours by default. Even though the VM is destroyed, its IP address isn't returned to the pool, so if addresses are scarce you can't add another VM until the destroyed one is expunged. This could be a problem if you have a lot of users churning through VM cloud resources. We recommend carefully considering the defaults prior to deployment.
Another limitation is that we couldn't create any VMs on the XenServer node before adding the VM to Cloud.com's repository, as there is no "discovery process" for existing/current VMs. If a VM was added to the XenServer node beforehand, CloudStack would not initialize properly and we couldn't continue configuring CloudStack.
We also found we couldn't have the XenServer license server on the node; the license server must be on another machine or VM not on the node. This and other quizzical behavior wasn't annotated in the documentation, which could occasionally be wrong or misleading.
After the tedium of setup, CloudStack supported Windows and Linux VMs without incident. CloudStack respects LDAP security, but we were on our own to modify the default Web user interface that accomplishes this. There are built-in users (not members of a directory service) that can be authenticated to if desired.
CloudStack usage is tracked per user, although this isn't shown anywhere in the Web user interface; it's only available through the listUsageRecords API call. (It is possible to modify the Web user interface, as was the case with LDAP security.) Clever programmers could probably build billing templates from the data.
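For the record, listUsageRecords is an ordinary HTTP query against the management server; CloudStack requires each API call to be signed with an HMAC-SHA1 of the sorted, lower-cased query string using the account's secret key. A minimal sketch of building such a request follows (the host name, credentials and dates are placeholders, and URL-encoding of the signature is omitted for brevity):

```shell
# Hypothetical API credentials for a CloudStack user account.
APIKEY="myAPIKey"
SECRET="mySecretKey"

# Parameters must be sorted alphabetically before signing.
QUERY="apikey=$APIKEY&command=listUsageRecords&enddate=2010-11-30&startdate=2010-11-01"

# Sign: lower-case the query string, HMAC-SHA1 it with the secret key, base64-encode.
SIG=$(printf '%s' "$QUERY" | tr 'A-Z' 'a-z' | openssl dgst -sha1 -hmac "$SECRET" -binary | openssl base64)

# The finished request URL; fetching it returns the per-user usage records as XML.
echo "http://cloudstack.example.com:8080/client/api?${QUERY}&signature=${SIG}"
```

A billing script could fetch this URL on a schedule and join the records against a rate table, which is exactly the work CloudStack leaves to the administrator.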
Eucalyptus Enterprise Edition 2.0.1
We've reviewed Eucalyptus before as the cloud-management underpinning of Ubuntu Enterprise Cloud (UEC) and looked forward to seeing what the "Enterprise" version might look like. The open source Eucalyptus core underpins both versions and is known to be solid, but due to documentation errors, we almost gave up on it twice.
The Enterprise Edition (EE) management components are similar to those used in UEC: Cloud Controller, Walrus (for Amazon S3-like storage), a Storage Controller (for block-type rather than file-type storage) and a Cluster Controller.
The components are installed on a RHEL 5.4+, CentOS 5.4+ or openSUSE 11.2+ machine on bare metal rather than in a VM (although this wasn't documented until the middle of our review).
Installing EE is a bear, due to poor documentation. While Eucalyptus typically installs EE for its clientele, we asked to do it ourselves. In the process, we discovered that support for Windows Server editions was troublesome and not well documented.
In Eucalyptus' defense, the company says its staff has no difficulty with installation, and while we believe them, that makes the availability of cloud components highly dependent on the availability of Eucalyptus support personnel.
During installation, we had to resort to numerous workarounds, occasionally referring to the open source Eucalyptus documentation rather than EE's to fix things. After being informed midstream that we needed a discrete physical server for the management components, we had extra work to do.
Then we created images and instances using the same steps we used in creating UEC instances. These included bundling the instances, uploading them and registering the instances. Current VMware instances must go through a conversion step, and Windows instances must go through a several-phase conversion process; one of the phases can lead to Windows Server instances having the wrong boot-time hardware configuration, which blue-screened our instances until we obtained support to discern the workaround.
There are no links to LDAP or Active Directory for user authentication, so user and administrative logons are confined to EE's own security model, which we found barely passable.
Once installation was complete, the converted and registered instances worked correctly. EE's Web user interface is a bit primitive, although it tracks deployed instances well.
The only way to deploy VMs is with the euca-tools on the command line or other Amazon-compatible tools. Users deploy and un-deploy instances with commands such as euca-run-instances, euca-reboot-instances and euca-terminate-instances; this won't appeal to civilians, but developers and admin types will have no problem with it.
The only things users or admins can see in the Web interface are the images available to them and the credentials to use with the command-line tools (SSH keys, secret keys, query ID, X.509 certificates and so on). Getting information on running VM instances requires a tool called euca-describe-instances, and there are a ton of other command-line tools available. Still, there is no fancy Web user interface where users can view their managed instances, shut them down or reboot them.
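A typical user session with these tools looks something like the following sketch (the image and instance IDs are placeholders, and a keypair is assumed to have been created already):

```shell
# Launch an instance from a registered image, using an existing keypair
# and one of the administrator-defined VM types.
euca-run-instances emi-12345678 -k mykey -t m1.small

# List instances to find the new instance's ID, state and IP address.
euca-describe-instances

# Tear the instance down when the job is finished.
euca-terminate-instances i-87654321
```

The commands mirror Amazon's EC2 tooling, which is the point: scripts written for EC2 need little adaptation.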
There is a section of the EE Web user interface that can be used to build reports on things like system events, and resources that have been used/deployed. The reports can be exported into a variety of formats, such as PDF, CSV, Excel or HTML.
With the resource usage reports, you can see storage or instance usage, including how many volumes were used, how many hours they were used and how many instances created. This could be used for billing, although there is no billing built-in.
By the end, the documentation deficiencies had us crazed, and we spent far too much time debugging this package. We recommend enlisting Eucalyptus support if you find EE's features interesting.
OpenNebula
The OpenNebula Project provides a cloud management toolkit that's specific to Linux VM instances, and is a component of a larger group, the Open Cloud Community. OpenNebula is fully open source, rather than "open core" like Eucalyptus Enterprise.
OpenNebula is especially suited for developer needs and non-persistent (job-control-focused) cloud VM use. It can work for enlightened civilians but requires an administrator with moderately strong Unix/Solaris/Linux skills to set up and deploy for private cloud users. OpenNebula has options for public and hybrid clouds too, but we narrowed our focus to private clouds for this review.
OpenNebula runs on Ubuntu 10.04 or CentOS 5.4+ (most other Linux distributions also work, but the express install is only available for those two). Installing is strictly script-based (rather than GUI-based) and requires numerous text configuration files, which are easily crafted from examples.
OpenNebula VM images must be created as KVM or Xen images before being installed via scripts. There is also a VMware driver, but it requires that the libvirt API be installed from source and takes a lot of extra legwork to get running.
There is a sample VM to download for testing out your environment. We were able to get it up and running easily. Once an image was created, we had to manually copy it to where we wanted our images stored. Then we created a configuration file for the VM's image and created a network description.
After the desired network configuration was added with the onevnet command, we launched an instance of that image with the onevm command via a logged-in user. We were able to get a Linux server working perfectly without any issues. We also tried using a Windows Server 2008 VM that was working in KVM, but unfortunately we weren't able to get it launched with OpenNebula. There is ostensible support for Windows, but no real guidance.
Supported add-ons include LDAP authentication (which requires the 'net-ldap' Ruby gem), accounting (which requires the 'sequel' Ruby gem), VMware drivers and the OpenNebula express installer (which we used for installation).
We used Ubuntu 10.04 in a VM on ESX 4.1 as the front end and another machine with Ubuntu 10.04 as the node. The express installer sets up an NFS share on the front end for the nodes to share images.
Everything is done with command-line tools at the moment; there is no Web interface for interacting with OpenNebula. Here are some of the main commands:
* oneimage (adds/lists/deletes an image or ISO in a repository and sets attributes for those images)
* onevm (creates/deletes/starts/stops VMs and performs other miscellaneous VM-related functions)
* oneacct (gets billing/accounting data on hosts/VMs/users)
* onecluster (lists/creates/deletes clusters)
* onehost (adds/deletes/syncs hosts)
* oneuser (creates/deletes/lists users)
* onevnet (creates/modifies/deletes virtual networks)
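Pulling a few of those commands together, a minimal launch sequence looks something like this sketch (the template file names are hypothetical; each template is a short text file describing the network or VM):

```shell
# Define a virtual network from a template file (bridge, address leases, etc.),
# then launch a VM whose template references an image path, memory and that network.
onevnet create small_net.template
onevm create ttylinux.template

# Check the VM's state as it moves from pending to running,
# and later pull per-user accounting data for billing.
onevm list
oneacct
```

Everything an admin scripts around OpenNebula boils down to sequences like this, which is why it suits developers better than civilians.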
OpenNebula supports several types of authentication schemes, including basic user name and password backed by an SQLite or MySQL database, and SSH key management.
There is also a new LDAP plugin, although we couldn't get it working correctly with Active Directory. (One of the problems with OpenNebula's documentation is the lack of troubleshooting tips.)
There is an add-on for OpenNebula that installs the command oneacct, which allows you to look up information on how long instances ran, who ran them, what host they are on and other details. This information can be used for billing (although there is nothing premade for this purpose, this is left up to the admin).
OpenNebula's modularity makes it a candidate for future combination with other FOSS applications into fully open source private cloud configurations, although lots of pieces are still missing. While OpenNebula's documentation isn't stellar, it was workable, and we liked that things worked the first time, as expected.
OpenNebula contains useful tools, and its strength is as a core set of tools that work together to make cloud resources available to developers and systems personnel rather than to civilian users.
Overall, building a private cloud with the tools we reviewed demanded a lot of upfront configuration work: matching the configuration to the supported hypervisor family, then finding ways to aggregate resource pools and make them easily available through instance life cycles.
Novell's Cloud Manager did it well and knows how to charge for it. Citrix's self-service portal builds on Lab Manager, but the portal components are part of a fatter per-server license cost and are captive to XenServer. Cloud.com's CloudStack will soon offer wider compatibility than those two products, but it's a work in progress. We liked OpenNebula for developers and advanced systems personnel. That leaves Eucalyptus Enterprise Edition, which Eucalyptus itself can install and control, but EE vexed us.
Henderson is principal researcher and Allen is a researcher for ExtremeLabs in Indianapolis. They can be reached at firstname.lastname@example.org.