In the beginning, there was one block of bytes that was called "the application." Perhaps you toggled it into the front panel of the computer, perhaps you handed someone a deck of punch cards, or perhaps you dragged a single EXE file into your favorite folder. But there was one thing that was the application -- one tangible, immutable, unopenable thing -- and it was relatively easy to move or duplicate.
Those days are ancient history now. Today's applications are multilayered, multitiered, service-enabled, and often as dependent on a certain server cluster as on a specific operating system version. Naturally, even as "the application" becomes more and more complex, we want to make it more and more portable. It's no longer enough to migrate the virtual machines from server to server. Now we want to move the app from one data center to another, or from the data center to Amazon EC2.
Enter two companies that are changing the game and creating one of the largest abstraction layers I've ever seen, all in the hopes of conquering the myriad layers of dependencies and making it possible to move server farms from here to there without pulling out your hair. Ravello Systems and CloudVelocity are creating the tools for juggling virtual machines from the data center to the cloud and back again.
The two companies confused me at first with their use of the word "application." Ravello and CloudVelocity use the word to refer to something different from a pile of bytes or a collection of files (although that's what it ultimately is). They're not even using the word to describe a working group of virtual machines.
When Ravello or CloudVelocity says "application," it means a cluster of what we used to call computers and all the information inside of them. After all, what we call a computer when we purchase it from a cloud company is just another program wrapped around your software. Together, these programs make a bigger program, and we might as well call it an application because that's closer to the reality in the cloud.
Once you have the Ravello or CloudVelocity system up and running, you can go to a website, click a few times, and clone a big collection of virtual machines all at once. Bingo -- then you can keep them running or shut them down or spin up 30 clones, all to handle failovers or extra loads or troubleshooting or testing, whatever you want. CloudVelocity focuses on migrations and disaster recovery, while Ravello leverages the public cloud for the development and testing of applications that will be deployed on-premises.
The Ravello and CloudVelocity systems -- both hosted services and both focused on Linux-based application stacks, at least for now -- take a different approach to juggling all the machines. While some folks are building elaborate rules for installing all the software in the new compute instances using tools like Puppet or Chef, Ravello and CloudVelocity just make clones, then clone them again.
CloudVelocity's One Hybrid Cloud

CloudVelocity's website offers two basic operations: watching and cloning. The company is able to do this because it has built a little tool that digs deeply into the operating system and siphons the performance statistics -- CPU load, as well as memory and bandwidth usage -- to CloudVelocity's servers. The graphs are similar to the basic results you get from the average monitoring program.
But CloudVelocity also builds a virtual machine that's an Amazon-based clone of your current machine with all the data, ready to go. It does this by copying all the files that make up your current instance, right down to the OS kernel, to an Amazon EBS volume. Then it creates a new AMI (Amazon Machine Image) with those files. The Dashboard tab lists every soldier in your clone army, while the Cloning tab lets you create a new copy in seconds.
The GUI makes it easy to specify which hosts will get new, public IP addresses -- just drag them to the "public" side of the firewall. Hosts on the "private" side of the firewall will keep their existing internal IPs.
The cloning process lets you duplicate multiple machines in one click. If you have a MySQL server, a MySQL mirror, three application servers filled with business logic, and a load balancer, you can group them together as one "application." One push of the clone button creates six new machines.
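That notion of grouping machines into one "application" that clones as a unit can be illustrated with a minimal sketch. This is not CloudVelocity's actual implementation -- the `Machine` and `Application` classes and all the names here are hypothetical, made up for the example -- but it captures the idea of six machines duplicating with one operation:

```python
from dataclasses import dataclass, field, replace
from typing import List

@dataclass(frozen=True)
class Machine:
    name: str
    role: str  # e.g. "mysql", "app", "load-balancer"

@dataclass
class Application:
    """A named group of machines that clones as a single unit."""
    name: str
    machines: List[Machine] = field(default_factory=list)

    def clone(self, suffix: str) -> "Application":
        # One "push of the clone button": duplicate every machine in the group.
        return Application(
            name=f"{self.name}-{suffix}",
            machines=[replace(m, name=f"{m.name}-{suffix}") for m in self.machines],
        )

app = Application("webshop", [
    Machine("db-primary", "mysql"),
    Machine("db-mirror", "mysql"),
    Machine("app-1", "app"),
    Machine("app-2", "app"),
    Machine("app-3", "app"),
    Machine("lb", "load-balancer"),
])

staging = app.clone("staging")
print(len(staging.machines))  # 6 -- one clone operation, six new machines
```

The point of the grouping is that nobody has to remember which six machines belong together; the "application" is the unit of duplication.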
CloudVelocity automatically creates the Amazon VPC (Virtual Private Cloud) and the necessary security groups based on the services in your application. If you want to change those security rules, you can do so directly through the CloudVelocity GUI.
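Deriving security groups from the services in the application amounts to mapping each machine's role to the inbound ports that role needs. Here is a rough, hypothetical sketch of that idea -- the port table and function are my own invention, not CloudVelocity's code:

```python
# Hypothetical mapping from a service role to the inbound TCP ports it needs.
SERVICE_PORTS = {
    "load-balancer": [80, 443],
    "app": [8080],
    "mysql": [3306],
}

def security_rules(roles):
    """Build one rule set per role, opening only that service's ports."""
    return {
        role: [{"protocol": "tcp", "port": p} for p in SERVICE_PORTS[role]]
        for role in roles
    }

rules = security_rules(["load-balancer", "mysql"])
# The load balancer gets 80 and 443; the database gets only 3306.
```

In the real product you would then edit these generated rules through the GUI, but the starting point comes from the roles, not from hand-written firewall configs.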
Thereafter, if you like, CloudVelocity can keep your Amazon clones continuously updated with any changes to your on-premises machines (but not vice versa). You can also define subsets of machines and replicate or synchronize them as needed. CloudVelocity offers a fairly elaborate way to group together your virtual machine collection and keep the files in Amazon EBS in sync with their on-premises counterparts.
Thus, CloudVelocity also offers a way to provide disaster recovery in the cloud. The company's literature suggests this can help with failures in both the cloud and local machines. You might want to keep the main servers in-house but have CloudVelocity's clone army ready to go in case your in-house machines fail. Or you might want the Amazon clones ready for action in case your Rackspace machines go offline. After all, CloudVelocity can clone machines on Rackspace as easily as those in your data center.
CloudVelocity's cloning tool is powerful, but it's not omniscient. It won't track changes in memory, so your clone won't be an exact replica. This probably doesn't matter in practice, because well-behaved software should be writing anything valuable to disk anyway.
CloudVelocity integrates with MySQL to synchronize database transactions to the cloud site, but if you have your own way to create live backups of your MySQL instance or use a different database, CloudVelocity can kick off your scripts and ship the backups as required.
CloudVelocity supports the major Linux distros, including Red Hat, CentOS, Ubuntu, and Amazon. Windows is a work in progress. Of course, thanks to the magic of bitrot, my experience wasn't perfect. I tried cloning a CentOS 6.4 machine from Joyent only to have the machine lock up. Joyent uses a version of CentOS with a kernel that's been modified just enough to mess up everything. When I switched over to a Rackspace machine, it all went swimmingly.
CloudVelocity's handy discovery tool will locate all of the servers in your application. Then you determine which will be copied to Amazon and whether they will be synched with the originals for disaster recovery.
Ravello's Cloud Application Hypervisor

Whereas CloudVelocity moves clones to the cloud for development, production, or disaster recovery purposes, Ravello taps the cloud specifically for the development and testing of applications that will be deployed not in the cloud, but in-house.
Ravello's Cloud Application Hypervisor allows you to run an exact replica of your multitier production application -- including the VMware or KVM virtual machines -- on Amazon, Rackspace, or HP Cloud. You can test in the cloud, iron out the bugs, capture all the changes, then bring the new version of your application back into your data center.
Setup takes a different path than with CloudVelocity. Instead of offering a tool that burrows into your machines and copies them to the cloud, Ravello gives you a sandbox where you can upload your own VMs or start with Ravello's prebuilt virtual machine images, and draw up "blueprints" of your application.
This construction is done in a graphical Web interface that shows all of the machines as blocks. You drag and drop them onto a canvas, then sketch out the connections among them. Ravello offers a few of the standard Linux distros, and some machines are already configured for standard jobs like running a Web server. If you're uploading your own VMs, both Linux and Windows are supported.
I found it pretty easy to set up a few machines, populate them with the software I was using, and save them as a blueprint. Then I could stamp out new versions. With the push of a button, I could spin up the whole works on Amazon.
One feature lets you preconfigure the firewall settings for each machine by opening or closing the ports for your software. When the machines are spun up, Ravello reaches in and opens and closes the ports to match what you want. This is a great feature because I find that half of what I do with a new machine is fiddle with the firewall. The distro's defaults never match what I need exactly.
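The firewall-fiddling step boils down to a diff: compare the ports the distro left open against the ports the blueprint wants, then open and close accordingly. A tiny hypothetical sketch of that reconciliation (my own illustration, not Ravello's code):

```python
def firewall_changes(current_open, desired_open):
    """Compute which ports to open and which to close so a freshly
    spun-up machine matches the blueprint instead of distro defaults."""
    current, desired = set(current_open), set(desired_open)
    return {
        "open": sorted(desired - current),
        "close": sorted(current - desired),
    }

# A fresh distro might ship with only SSH open; the blueprint wants a web server.
changes = firewall_changes(current_open=[22], desired_open=[22, 80, 443])
# {'open': [80, 443], 'close': []}
```

Doing this reconciliation automatically at spin-up time is what saves the half-hour of manual firewall tweaking per machine.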
The tool subtly encourages better security practices. As with CloudVelocity, you can connect the machines so that they use the internal IP addresses, keeping the second-tier machines like the database machine away from the public Internet. If you twiddle with the DNS correctly, the Web server will always be able to find the matching database machine in all the replicas.
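The DNS trick works because each replica resolves the same service name within its own private namespace. A minimal, hypothetical model of that lookup (names and record format are invented for the example):

```python
def resolve(hostname, replica_id, records):
    """Resolve a service name like 'db.internal' inside one replica's
    private namespace, so each web server finds its own database."""
    return records[(replica_id, hostname)]

# Two replicas of the same application, each with its own private database IP.
records = {
    ("replica-1", "db.internal"): "10.0.1.10",
    ("replica-2", "db.internal"): "10.0.2.10",
}

# The same name yields a different private address in each clone.
resolve("db.internal", "replica-1", records)  # '10.0.1.10'
resolve("db.internal", "replica-2", records)  # '10.0.2.10'
```

Because the application's config files only ever mention the name, not the address, the clones work without any per-replica editing.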
But Ravello's real magic is a hypervisor for hypervisors that allows you to run VMware and KVM virtual machines inside the cloud machines. It's not just repackaging the contents of your servers in AMIs, but dropping the machines themselves onto the cloud.
You can make changes to these VMs, test them, and save them back to Ravello as a blueprint. When you're done, you can download them from Ravello and run them back on-premises.
To the cloud and back again

If you look at systems like Ravello and CloudVelocity from the perspective of, say, the programmers who toggled their software into the front panel of a computer, the layers and layers of hypervisors and operating systems and libraries and virtual machines are pure madness. The stack must be a million calls high.
But if you think like someone who just wants to get the server running before 5 p.m. on Friday, then it makes perfect sense. Everything is packaged together and everything more or less runs.
I was pleasantly surprised by how easy both of these systems were to use. Sure, it made sense for our great-grandparents to install software or configure the machine, but it's simpler to toss the VMs around as solid units.
Ravello lets you upload your own virtual machines or map out a new application using pre-built instances from its library. You can "publish" the results to Amazon, HP Cloud, or Rackspace.
In all, I preferred the simplicity of CloudVelocity's constant duplication. The synchronization makes it possible to clone your machines quickly and easily. The clone is ready to go.
But the cost in bandwidth must be pretty high if you're constantly making changes to the file system. Plus, there must be some questions about security risks if you're storing anything valuable on the machine. The cloning tool is a high-powered backdoor even if it's doing exactly what you want.
If you prefer to build your application from scratch and don't want or need the constant copying, Ravello is a simpler solution, though it depends on what you want your clones to be able to do right off the bat. If they're going to serve relatively static data, a blueprint will suffice. But if they're supposed to jump right in and be current with dynamic data, then the CloudVelocity model makes sense.
CloudVelocity is putting much of the focus on recovery from site failures, and this will probably be the biggest market, at least initially. Still, the convenience of having an automatic duplicator for your infrastructure is bound to be a seductive tool that even developers may become addicted to using.
Ravello's tool will be most useful for the developer who's building and testing an evolving system. The ability to move the actual enterprise machines into the cloud and back again is invaluable. Developers can easily re-create problems or bugs encountered in the production system because they're working with an exact replica.
The tool could also be useful for failover, but only when the server blueprint is all that's necessary. If the server doesn't need to track the latest changes, the new clone can take the place of the old. This could be useful during periods of high load when the stock server can step up and do the work.
I imagine that both companies will gradually converge on a tool that meets all of these needs -- development, testing, production, disaster recovery -- in both private and public clouds. Their simplicity solves one of the constant headaches for anyone trying to run a collection of machines. Tossing around applications filled with machines may not be the most elegant or theoretical solution to the problem, but it's simple and effective.
The ultimate message from both Ravello and CloudVelocity is that the idea of the operating system and any kind of software modularization has failed. No one has the time or the energy to handle the permissions, the libraries, the packages, or any of the other details of keeping a machine running. The simplest solution is to think of the entire cluster of machines as one atomic unit that no one can open or touch.
This article, "Review: Dueling hybrid cloud wizards," was originally published at InfoWorld.com.