Cloud management tools are as varied as cloud uses. For this test, we chose five tools that each attack cloud management from a different perspective.
We looked at Symplified for identity management targeted exclusively at SaaS-based apps, Puppet Labs for virtual machine deployment, HP for building and managing private clouds, Abiquo for IaaS platform management and TurnKey Linux for low-cost cloud backup.
Symplified Identity Management and SinglePoint
Symplified Identity Manager (SIM) gives administrators a way to manage Web-based application identities and passwords. It does this through an "identity router" called SinglePoint, and in turn manages identity for users of SaaS applications.
The SaaS applications covered include LinkedIn, Google Apps (the business version), Salesforce and many more. Almost any Web app that has a login screen can be included, using HTTP federation.
With SIM and SinglePoint, all of the authentication plumbing happens "behind the scenes," hidden from users. Administratively, we found SIM and SinglePoint a little tough to set up, but very usable once configured.
SIM develops an identity vault that stores passwords and identities for selected websites. These identities can be linked to local in-house user stores such as LDAP or Active Directory via the included SimpleLink connector.
The identities and passwords are stored in a centralized vault that is encrypted with AES128, using a rotating encryption key. The vault is stored on the Identity Router, which can be installed locally or hosted by Symplified (ours was hosted).
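To make the rotating-key idea concrete, here is a minimal sketch of a credential vault in which each entry records the key version that encrypted it, so a rotation can introduce a fresh key and re-encrypt every entry. This is our own illustration, not Symplified's implementation: Python's standard library has no AES, so a SHA-256-derived XOR keystream stands in for AES-128 purely to show the rotation mechanics.

```python
import hashlib
import os

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy stand-in for AES-128: derive a keystream from the key with
    # SHA-256 in counter mode and XOR it over the data. Illustrative
    # only; a real vault would use a vetted AES implementation.
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

class Vault:
    """Credential vault encrypted under a rotating key (hypothetical model)."""
    def __init__(self):
        self.keys = {0: os.urandom(16)}   # key version -> 128-bit key
        self.current = 0
        self.entries = {}                 # site -> (key version, ciphertext)

    def store(self, site: str, secret: bytes):
        self.entries[site] = (
            self.current, keystream_xor(self.keys[self.current], secret))

    def fetch(self, site: str) -> bytes:
        version, blob = self.entries[site]
        return keystream_xor(self.keys[version], blob)

    def rotate(self):
        # Introduce a fresh key and re-encrypt every entry under it.
        self.current += 1
        self.keys[self.current] = os.urandom(16)
        for site in list(self.entries):
            self.store(site, self.fetch(site))

vault = Vault()
vault.store("linkedin.com", b"s3cret")
vault.rotate()
print(vault.fetch("linkedin.com"))  # b's3cret'
```

The key version stored alongside each ciphertext is what lets old entries remain readable mid-rotation.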
The identity router becomes a middleman to connect the user to the apps. Single sign-on (SSO), access control and centralized auditing are some of the benefits of SinglePoint.
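The HTTP-federation side of this middleman role amounts to looking up the user's stored credential for the target app and replaying it into that app's own login form. The sketch below shows the idea; the vault contents and the form field names (`j_username`, `j_password`) are invented for illustration and are not Symplified's actual internals.

```python
from urllib.parse import urlencode

# Hypothetical vault: (SSO user, app) -> the app-specific credential.
vault = {("alice", "crm.example.com"): {"user": "alice.w", "password": "s3cret"}}

def build_login_post(sso_user: str, app: str, form_fields: dict) -> str:
    """Map the vault entry onto the target app's own login-form field names."""
    cred = vault[(sso_user, app)]
    return urlencode({form_fields["user"]: cred["user"],
                      form_fields["password"]: cred["password"]})

body = build_login_post("alice", "crm.example.com",
                        {"user": "j_username", "password": "j_password"})
print(body)  # j_username=alice.w&j_password=s3cret
```

The user only ever authenticates to the router; the per-app password never has to be known or typed.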
Setup and configuration
SIM needs a virtual machine (VM) to connect your credential store (such as Active Directory or LDAP) to the Symplified cloud-hosted proxy authentication system. The VM instance runs CentOS 5+ or Red Hat Enterprise Linux. We used CentOS and installed only an SSH server on it.
After that, we installed the SimpleLink RPM (Red Hat Package Manager) kit. Symplified usually helps customers with this portion of the install; we tried doing it ourselves. After a setup call, we got help linking our Active Directory to Symplified's cloud platform. There is a local Web interface for uploading the credentials. The SimpleLink server then connects our infrastructure with its Identity Router(s); behind the scenes, SimpleLink uses OpenVPN to secure the channels.
SinglePoint Studio is the cloud-based admin Web portal where everything is set up and configured. SinglePoint Studio is a Flash-based app and is responsive, although the fact that it uses Flash will give some organizations security concerns. The portal allowed us to add user stores or entries of logon IDs and passwords. We could create application groups and links to the applications themselves. HTTP Federation or SAML type apps can be discovered, but it's also possible to manually configure HTTP-based apps that log users on.
Within the portal's app groups selection, we could create policies to allow certain users/groups access to various apps based on attributes that are retrieved from the various user stores.
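The attribute-based checks described above can be sketched in a few lines: a policy lists acceptable values per attribute, and a user reaches the app group only if every attribute matches. The attribute names and the "CRM" policy here are invented for illustration; they are not SinglePoint's actual schema.

```python
def allowed(user_attrs: dict, policy: dict) -> bool:
    """Grant access only if every policy attribute matches a permitted value."""
    return all(user_attrs.get(key) in permitted for key, permitted in policy.items())

# Hypothetical policy: only full-time sales or support staff reach the CRM group.
crm_policy = {"department": {"sales", "support"}, "employment": {"full-time"}}

print(allowed({"department": "sales", "employment": "full-time"}, crm_policy))    # True
print(allowed({"department": "finance", "employment": "full-time"}, crm_policy))  # False
```

The attributes themselves would be retrieved from the linked user stores (Active Directory, LDAP) at evaluation time.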
There's a "My Dashboard" section that displays an overview of Identity Router sessions, loads, file system, CPU usage, system memory and configuration info such as how many user stores, app groups, applications, policies and Web servers have been created.
Perhaps the only operational criticism that we have of the process is that there is no interstitial message to remind us to publish configurations when they're changed. If we were to forget, and exit without publishing, nothing would be saved.
Overall, SIM is a nice, lightweight but highly effective method of dealing with many internal users needing single sign-on with multiple popular cloud-based SAML/HTTP applications. It's flexible, and has the grace not to be annoying in an otherwise annoying process.
Abiquo
The Abiquo platform is a unifying management application that's compatible with VMware, Xen, Hyper-V, Red Hat and KVM-based products.
Abiquo is a multi-tenant application, and can remold resources in fascinating ways. We tested Abiquo using what it calls "proof of concept modeling." This method has its limitations for testing, but we were able to get a good feel of how Abiquo works.
An Abiquo engineer guided us through the installation, as the company does for all of its clients. Multiple services need to be installed, including Abiquo Server, Abiquo Remote Services, Abiquo V2V Conversion Services, DHCP and an NFS server.
We could put all of these services on a single ESXi host, installing each under its own VM. Abiquo is pretty easy to use once all the prerequisites are in place.
Our installation used CentOS. All we had to do was select the options we wanted to install and fill in some values. The server VMs were easy to set up and configure, and the installation forms are understandable and useful.
We could also brand our portal, which lets providers bundle services together for specific customer groups. All the branding required was replacing a few files and restarting the server.
Inside the GUI are infrastructure views for admins, which show resources in terms of VMs, vCPUs, storage and other infrastructure characteristics. Admins can add "bare metal" physical hypervisors to a "rack" and configure each one, and can also view networks, storage tiers and allocation rules.
Abiquo's Virtual Datacenters are among the platform's most compelling features. We could see virtual data centers created with supplied or our own virtual appliances, along with network and volume information. We could add, delete and edit virtual appliances, which lends itself to "off the rack" data center provisioning, and we could set resource limits for each virtual data center.
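The per-virtual-data-center limits work like hard caps on aggregate consumption. Here is a small sketch of that idea in the spirit of Abiquo's caps; the field names and the numbers are our own, not Abiquo's API.

```python
from dataclasses import dataclass

@dataclass
class Limits:
    vcpus: int
    ram_mb: int
    storage_gb: int

class VirtualDatacenter:
    """Hypothetical model of a virtual data center with hard resource caps."""
    def __init__(self, limits: Limits):
        self.limits = limits
        self.used = Limits(0, 0, 0)

    def deploy(self, vcpus: int, ram_mb: int, storage_gb: int) -> bool:
        # Refuse the virtual appliance if it would exceed any cap.
        if (self.used.vcpus + vcpus > self.limits.vcpus or
                self.used.ram_mb + ram_mb > self.limits.ram_mb or
                self.used.storage_gb + storage_gb > self.limits.storage_gb):
            return False
        self.used.vcpus += vcpus
        self.used.ram_mb += ram_mb
        self.used.storage_gb += storage_gb
        return True

vdc = VirtualDatacenter(Limits(vcpus=8, ram_mb=16384, storage_gb=500))
print(vdc.deploy(4, 8192, 200))  # True
print(vdc.deploy(8, 4096, 100))  # False; it would exceed the vCPU cap
```

Caps like these are what make multi-tenant "off the rack" provisioning safe: one enterprise cannot consume another's share.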
In turn, an Apps Library is built that lists all the virtual images that have been downloaded from remote repositories or uploaded from local files.
A tab in the GUI lists the users for each "enterprise," which can be used to separate users into different groups and roles. The events tab lists all the events that happen, with severities similar to Unix logs (Info, Warning, Normal, Major, Critical), all color-coded for our viewing pleasure.
Interestingly, Abiquo divides VMs into managed/persistent vs. non-persistent; the non-persistent VMs evaporate upon shutdown and return their resources to the available pools.
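That evaporate-and-repopulate behavior can be modeled in a few lines: non-persistent VMs release their allocation back to the shared pool on shutdown, while persistent ones keep theirs. This is a toy model of the distinction, with names of our own invention.

```python
class ResourcePool:
    """Toy pool illustrating persistent vs. non-persistent VM shutdown."""
    def __init__(self, vcpus: int):
        self.free_vcpus = vcpus
        self.vms = {}  # name -> (vcpus, persistent flag)

    def start(self, name: str, vcpus: int, persistent: bool = False):
        assert vcpus <= self.free_vcpus, "pool exhausted"
        self.free_vcpus -= vcpus
        self.vms[name] = (vcpus, persistent)

    def shutdown(self, name: str):
        vcpus, persistent = self.vms[name]
        if not persistent:
            del self.vms[name]        # non-persistent: the VM evaporates...
            self.free_vcpus += vcpus  # ...and repopulates the pool

pool = ResourcePool(vcpus=8)
pool.start("batch-worker", 4)                  # non-persistent
pool.start("database", 4, persistent=True)
pool.shutdown("batch-worker")
print(pool.free_vcpus)  # 4
```

Persistent VMs survive shutdown with their allocation intact, which is why only the non-persistent ones refill the pool.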
Abiquo's data center infrastructure is egalitarian, yet fairly easy to deploy and to manage, both for internal use and for customers or business units.
HP CloudSystem Matrix
We tested HP's CloudSystem Matrix 6.3, a private-facing IaaS management tool. There's also CloudSystem Enterprise, which controls internal IaaS, PaaS and SaaS, and a Service Provider version.
Matrix is a sophisticated and complicated combination of HP blade servers and management software. Its breadth is staggering, but the system's complexity can also make it difficult to use. Matrix manages a wide variety of hardware, software and virtual machinery (chiefly VMware) as an IaaS control plane. Its components consist of several servers, including a blade server, software controls, server storage and software. The package isn't just for HP systems, as CloudSystem Matrix can discover a long list of hardware and infrastructure by IP address range, although this wasn't tested.
Matrix, which we tested on HP blades, has a cloud-in-a-box feel. There are a number of software parts and pieces that go together and are managed through a Web-based administrative portal. The portal includes links to all the different application pieces.
The portal is quite daunting, as there are so many menus, submenus and options on each screen; it begs for a huge display or two monitors. The CloudSystem's operations feel stitched together; one component often loads another, and it wasn't always clear which part was running. Nonetheless, CloudSystem's breadth manages a wide variety of infrastructure.
The pivotal piece of Matrix is an app called Insight Orchestration. Matrix has a discovery application that works on existing infrastructure, identifying assets and arranging them. These are added to a clever drag-and-drop tool that builds a visual, icon-based representation of discovered or inserted infrastructure.
Templates are then used to drag and drop objects like bare metal or virtual disks, servers, network and VLANs into a map. We could then connect the objects together, inserting details about a connection as we went through the design process.
Once the template is done, it's launched and a visual representation of deployment progress can be viewed, along with actions that might need admin approval during deployment steps. Users are then added, and we could connect to Active Directory to link users to the application.
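The deployment flow just described, steps running in order with some pausing for admin approval, can be sketched as a simple pipeline. The template steps and the `needs_approval` flag below are our own invention, loosely modeled on the Matrix workflow rather than taken from its API.

```python
def deploy(template: list, approve) -> list:
    """Run each template step; steps flagged needs_approval wait on approve()."""
    progress = []
    for step in template:
        if step.get("needs_approval") and not approve(step["name"]):
            progress.append((step["name"], "held"))  # deployment pauses here
            break
        progress.append((step["name"], "done"))
    return progress

# Hypothetical template for a small deployment.
template = [
    {"name": "carve VLAN"},
    {"name": "attach virtual disks"},
    {"name": "power on servers", "needs_approval": True},
]

print(deploy(template, approve=lambda name: True))   # all three steps done
print(deploy(template, approve=lambda name: False))  # last step held
```

In the real product the approval is an admin clicking through the portal; the callback here just stands in for that interaction.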
We could also create asset pools of machines, dividing them up into objects. The more advanced versions of Matrix allow pooled/grouped assets to be branded.
Cloud apps could also be pooled in this way, so as to allow users to choose off-the-shelf configurations relating to specific or general tasks.
We had the ability to look at Cloud Maps, which were a strong visual interpretation of the cloud resources that we'd configured and deployed. We could then flip to the Capacity Advisor if we wanted to perform what-if analysis for different scenarios.
The user interface offers many options and looks procedurally simple, but we found it cumbersome, unintuitive and full of gotchas. Our mission was to deploy two ESXi servers, and during that process our molehill turned into a mountain. After trial, error and HP support, we were able to get the VMs running.
We also had to do a lot of manual work inside of VMware to perform associations to the CloudSystem for our ESXi servers. The documents, while somewhat useful, didn't prepare us for the daunting experience that we had.
CloudSystem Matrix is complex, but it has the capacity to manage and potentially "remarket" a variety of infrastructure assets.
Puppet Labs MCollective
We first saw MCollective in our review of Ubuntu 11.04 Server and Cloud editions. What intrigued us was its ability to rapidly provision not only instances of operating systems, but also applications. It's aimed at developers, and is currently limited to Linux instances.
Despite the fact that the Marionette Collective/MCollective ("mc") tools are CLI-based, they achieve astounding speed, communicating with potentially thousands of instances as fast as the wire can move the messages, no matter where the instances are located. The mc tools are middleware that use a multicast-like push messaging system to reach controlled nodes. There is no artistic drag-and-drop rack configuration, and there are no library-like Web pages from which one can "check out" an instance of a desired application. If CloudSystem Matrix and Abiquo 1.7 are sky-management generals-of-the-armies, MCollective is the battalion commander, bereft of the niceties, pomp and circumstance.
Inside each instance that mc controls are two mc agent daemons, installed from RPMs. The daemons are based on Ruby code and handle inter-process communications and package management. The client has similar components. The "collective," therefore, consists of nodes, which in turn have servers running in them: agents that act as messengers, speaking to the middleware in the client. The collective is a living, dynamic thing, but as an object it is totally bereft of security.
This means that communications must be performed over VPN links and SSH, and applications like Apache or a LAMP installation must have their own security components enabled outside of what mc manages. Fortunately, much of this can be done via mc — but application security and link security for the collective object are two different things, we found.
The MCollective can spin up applications with frightening speed. We deployed a single instance, provisioning it with mc; we then had 40 instances done in approximately 29 seconds.
We then instructed mc to install Apache into the instances, start the instances and tell us that it was done. Total time was approximately 31 seconds. Stopping all 40 Apache server instances took approximately seven seconds. Killing the instances via shutdown with verification took approximately 12 seconds.
The middleware keeps track of basic connectivity facts regarding deployed instances, but there is no database; it's a stateless, push-based messaging design with metadata intelligence inside the messages that makes the collective do work. The commands we used are easily combined into scripts and batches. Had we the budget, the number of instances we could spin up within a minute could number in the thousands. Having them do work, send messages or store results, then shut down (and stop the cost cycles) can be stunningly quick.
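The stateless push/fan-out idea can be sketched independently of MCollective's Ruby internals: one command is broadcast to many nodes at once, results stream back, and no central database of node state is kept. The sketch below is stdlib-only and simulates the nodes in threads; the node count and the fake "work" delay are our own stand-ins, not a reproduction of mc's messaging bus.

```python
import concurrent.futures
import time

def node(node_id: int, command: str) -> tuple:
    """Simulated managed node: receive one pushed command, report back."""
    time.sleep(0.01)  # stand-in for the node actually doing the work
    return node_id, f"{command}: ok"

start = time.time()
# Push the same command to all 40 simulated nodes concurrently.
with concurrent.futures.ThreadPoolExecutor(max_workers=40) as pool:
    results = list(pool.map(lambda i: node(i, "start apache"), range(40)))
elapsed = time.time() - start

print(len(results), "nodes answered")
```

Because every node works in parallel and no state is consulted centrally, total time tracks the slowest node rather than the node count, which is why 40 (or 4,000) instances finish in nearly the same wall-clock time.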