It’s easy to get all “cloud first” when you’re talking about new, greenfield applications. But how do you get the core business applications running in your data center – so-called brownfield apps – easily and efficiently migrated to the cloud? That’s the problem startup CloudVelox set out to solve, with the larger mission of helping CIOs build “boundaryless” hybrid data centers. IDG Chief Content Officer John Gallant spoke with CloudVelox CEO Raj Dhingra about how the company has automated the migration of complex, traditional applications to Amazon Web Services (and Microsoft Azure in the near future). Dhingra explained how companies are using CloudVelox’s One Hybrid Cloud platform to not only migrate apps, but to build cloud-based disaster recovery capabilities and simplify a variety of test/dev chores.
You have a long and very successful background in the tech industry. Talk about this opportunity and what CloudVelox set out to do?
The company was founded in late 2010, beginning of 2011, at a time when the public cloud was gaining interest in the minds of enterprise customers. There were quite a few companies trying to help enterprises and developers build new applications for the cloud. Vendors called them greenfield applications. We all heard companies that were innovators and early adopters talk about cloud first, mobile first, building customer-facing applications that could take advantage of the cloud.
What nobody seemed to have focused on was: what about my existing data center applications? Let’s call them our brownfield applications. How can I take advantage of the cloud for the existing brownfield applications that I run in the data center? There are anywhere from 15 million to 80 million VMs running in data centers around the world. What about them? CloudVelox focused on brownfield applications: how enterprises can migrate and run those applications in the cloud, and how that can be automated.
The secret to cloud computing is automation. You request a service, maybe it’s infrastructure-as-a-service, then you are able to quickly spin it up and pay as you go. In a similar way, how could you automate taking an application that’s running in your data center and then run that in the cloud without having to do a lot of manual script-oriented effort? That was the vision.
Let’s go solve this problem. It’s a tough problem to solve. There are many things that need to occur for this automation to be useful and valuable, to allow enterprise CIOs to think about a boundaryless data center. What that means is: how can I manage my virtual data center as if it’s one data center, whether I actually own the data center, run some of my workloads in a hosting facility, or run this application on Amazon Web Services? Think about all of that as a seamless set of resources and applications and, more importantly, be able to actually move that workload from any location or data center to another.
When we talk to analysts or customers, there’s the sense that people don’t move those brownfield applications either because they’re worried about the security of the data or because they don’t see a huge cost advantage in making that change. Are you saying that it’s really just too difficult and you’ve overcome that obstacle?
In the past, many of the concerns have been around security or maybe about performance. We’ve seen a progression over the last few years where many companies of different sizes started to operate in a hybrid IT model. I might take some of my existing workloads and refactor them, modernize them so they can take advantage of native cloud services. But it is not that straightforward to take a brownfield application and make it run in the cloud.
There are a variety of issues that come up. Traditional tools have been more about manual processes: taking a VM and doing an image conversion so it can run, let’s say, in AWS or Azure. There’s a lot of configuration required because the way the application was running in your data center required a certain type of server, so much memory, a certain type of storage infrastructure. You may have set up your network in a certain way: how your subnets work, how your IP addresses are used, physical IP addresses. Maybe you set up security groups and you locked down some ports and opened up some ports. If you’re going to re-host that in the cloud, all of that needs to be replicated, keeping in mind the matching services in the cloud. Your data center may be running VMware, but AWS is not. How is this virtualized instance in EC2 going to use the right kind of storage, which is EBS on AWS? How will my network design map into a virtual private cloud on AWS? How do the security characteristics in my data center match the security groups on the public cloud?
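To make that mapping concrete, here is a minimal, hypothetical sketch in Python using the AWS boto3 SDK. The server profile, sizing thresholds, VPC ID and port list are illustrative assumptions, not CloudVelox’s actual logic; it simply shows the kind of translation – CPU and memory to an instance type, firewall rules to a security group – that otherwise has to be done by hand.

```python
# Illustrative sketch only -- not CloudVelox's implementation.
# It shows the kind of translation a migration has to perform:
# an on-prem server profile becomes an EC2 instance type choice,
# and the data center's open ports become a security group.
import boto3

onprem_profile = {                      # hypothetical inventory record
    "name": "erp-app-01",
    "vcpus": 4,
    "memory_gib": 16,
    "disk_gib": 500,
    "open_tcp_ports": [443, 1521],      # e.g. HTTPS and an Oracle listener
}

def pick_instance_type(vcpus, memory_gib):
    """Very rough CPU/RAM-to-instance-family mapping (illustrative thresholds)."""
    if vcpus <= 2 and memory_gib <= 8:
        return "m5.large"
    if vcpus <= 4 and memory_gib <= 16:
        return "m5.xlarge"
    return "m5.2xlarge"

ec2 = boto3.client("ec2", region_name="us-west-2")

# Recreate the on-prem firewall rules as an AWS security group.
sg = ec2.create_security_group(
    GroupName=f"{onprem_profile['name']}-sg",
    Description="Ports replicated from on-prem firewall rules",
    VpcId="vpc-0123456789abcdef0",      # placeholder VPC ID
)
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[
        {"IpProtocol": "tcp", "FromPort": port, "ToPort": port,
         "IpRanges": [{"CidrIp": "10.0.0.0/16"}]}
        for port in onprem_profile["open_tcp_ports"]
    ],
)

print(pick_instance_type(onprem_profile["vcpus"], onprem_profile["memory_gib"]),
      "with a", onprem_profile["disk_gib"], "GiB gp3 EBS volume, security group",
      sg["GroupId"])
```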
If you do this manually it’s overwhelming, it’s time consuming, it’s error prone, and many times people find it doesn’t work. The key was to address these barriers by automating away the complexity of learning what I have in my data center and the complexity of learning what’s in the cloud. The cloud is a very fast-moving set of services, depending on the cloud provider. Do I have people trained to do this mapping and recomposing with matching services? We had to take a holistic approach: understanding the infrastructure, understanding the application, looking at the data, looking at the database and all the apps that compose the particular workload.
What applications is your system appropriate for and what are some applications that it’s not so appropriate for?
Before I answer that I’m going to step back and make a point. What I’ve seen is there is a great need for education. People are trying to learn about the cloud and what works and what doesn’t. One of the things that we have focused on is providing some very good content around how to do these things. What’s important and what’s not? Your question maps exactly to a 10-installment blog series we started a few months ago. We’ve written about why it makes sense to go to the cloud and the three paths you can use to go to the cloud - re-host, re-platform, re-factor.
The third blog post was: How do you select what are good candidates for the cloud? And the latest one that’s just been published is: What are bad candidates for cloud migration? For example, if you look at what might be good candidates for the cloud, what criteria do you have to use? First and foremost, is the application’s operating system environment supported in the cloud, at least for re-hosting purposes? If your application is running Windows or Linux, it’s going to be a good candidate. If it’s running a proprietary operating system, then it will need some modernization or some re-platforming. For example, if it’s running Solaris or IBM AIX, then it needs some work before it’s going to run in the cloud.
The second criterion is: is your application running on proprietary, custom hardware that is not available in the cloud? If it’s running on some ASIC-based appliance that uses proprietary silicon, that hardware environment is not going to be available in the cloud either. You need to virtualize the application before you can move it there. Third, does your application have any dependency on another application or service that’s running in the data center? Maybe it uses AD (Active Directory). You can move it to the cloud, but then you need to set up a VPN or some sort of network connection.
AWS offers what’s called Direct Connect, where you set up a high-speed link between your data center and AWS and the cloud becomes an extension of your data center. Fourth, are you concerned about data security and data sovereignty? If you are a global company, you’ll need to maintain some of your data in the right location. These are some of the factors that affect how you think about the right applications. Typically – Windows-based apps, Linux-based apps, collaboration apps, ERP applications, Oracle, SAP, SQL – we’ve seen customers take many of these applications and re-host them in the cloud.
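As a rough illustration of that screening logic – purely hypothetical, not the vendor’s assessment tool – the checklist Dhingra describes could be sketched like this:

```python
# Hypothetical screening checklist based on the criteria above --
# illustrative only, not CloudVelox's assessment logic.
SUPPORTED_OS = {"windows", "linux"}       # re-hostable as-is
NEEDS_REPLATFORM_OS = {"solaris", "aix"}  # need work before moving

def assess_candidate(app):
    """Return flags explaining whether an app is an easy re-host candidate."""
    flags = []
    os_name = app["os"].lower()
    if os_name in NEEDS_REPLATFORM_OS:
        flags.append("proprietary OS: re-platform or modernize first")
    elif os_name not in SUPPORTED_OS:
        flags.append("OS not supported in the cloud")
    if app.get("custom_hardware"):
        flags.append("runs on proprietary/ASIC hardware: virtualize first")
    if app.get("onprem_dependencies"):
        flags.append("data center dependency (e.g. Active Directory): "
                     "needs a VPN or Direct Connect link")
    if app.get("data_sovereignty"):
        flags.append("data sovereignty: choose a region in the required jurisdiction")
    return flags or ["good re-host candidate"]

print(assess_candidate({"os": "Linux", "onprem_dependencies": True}))
print(assess_candidate({"os": "Solaris", "custom_hardware": True}))
```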
What clouds do you work with?
Currently, we have been helping customers with migrating or protecting their data in Amazon Web Services, and we plan to deliver Azure support. The source could be your data center running VMware, Hyper-V, Xen or KVM. For the destination, we’ve commercialized support for Amazon Web Services. The second most popular cloud we are hearing about in 2016 - and this was not the case as much in 2015 for brownfield applications - is Azure. We are commercializing support for that by the beginning of 2017.
The third cloud, when we talk to enterprises, seems to vary. For some customers it might be Google, maybe it’s OpenStack, maybe it’s an IBM cloud. We haven’t found a third cloud to be very popular. A very large percentage of enterprises today are operating in one cloud, maybe two. I have spoken to one customer that’s actually running in four different clouds, but that’s more the anomaly today.
It would seem that one of the strong use cases of this is creating a test and dev environment for a particular application in the cloud so if you want to make changes to it you’re doing it there instead of on the live application. Are people doing that?
Yes, they are. There are three main use cases that we’ve seen for the cloud. The first one we’ll call cloud migration, and I’ll dive into it in a little bit more detail in just a minute. The second one, also gaining popularity, is what I call cloud recovery. An enterprise doing traditional disaster recovery for business continuity purposes had a primary data center and a secondary data center. That secondary data center needed to be acquired, with CapEx and OpEx impact, and it needed to be maintained. That’s very expensive and carries a high operational burden. We’re helping those customers replace their secondary data center with the cloud. You continue to run your applications in your primary data center, but we can help you fail over to the cloud. More importantly, we are able to recover the application in a matter of hours rather than days, without significant effort.
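As a rough idea of what failing over to the cloud can look like mechanically – a hypothetical boto3 sketch with placeholder AMI IDs and tags, not CloudVelox’s actual recovery mechanism – you restore the most recently replicated snapshot of each protected volume and boot a recovery instance from it:

```python
# Hypothetical failover sketch -- illustrative only, not CloudVelox's mechanism.
# On a DR event: find the newest replicated snapshot of a protected volume,
# turn it back into an EBS volume, and boot a recovery instance in AWS.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

def latest_snapshot(source_volume_tag):
    """Newest snapshot tagged as a replica of the given on-prem volume."""
    snaps = ec2.describe_snapshots(
        OwnerIds=["self"],
        Filters=[{"Name": "tag:source-volume", "Values": [source_volume_tag]}],
    )["Snapshots"]
    return max(snaps, key=lambda s: s["StartTime"])

def fail_over(source_volume_tag, recovery_ami_id, az="us-west-2a"):
    """Restore the data volume and launch a recovery instance (placeholder IDs)."""
    snap = latest_snapshot(source_volume_tag)
    volume = ec2.create_volume(
        SnapshotId=snap["SnapshotId"], AvailabilityZone=az, VolumeType="gp3"
    )
    instances = ec2.run_instances(
        ImageId=recovery_ami_id, InstanceType="m5.xlarge",
        MinCount=1, MaxCount=1, Placement={"AvailabilityZone": az},
    )
    # In practice you would wait for both resources to become available,
    # then attach the volume to the instance; waiters omitted for brevity.
    return instances["Instances"][0]["InstanceId"], volume["VolumeId"]
```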
The third use case is what we call cloud Dev/Test, and that’s exactly the type of example you mentioned. Maybe my development team wants to test the scalability of my application going from 100 users to 1,000 users to 10,000 users. Trying to set up an environment and acquire the type of infrastructure to be able to test for 10,000 users would not just cost a lot of money, it would take a lot of time. Then you may not need it again. So the ability to provision and scale that, and only pay for how much time you use it, is a very good example of cloning your application workload from your data center to the cloud.
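A minimal sketch of that pay-for-what-you-use pattern, assuming the application has already been cloned into an AMI (the image ID and instance count below are placeholders): launch a temporary fleet for the load test, then terminate it so the cost stops when the test does.

```python
# Minimal illustration of a pay-as-you-go test fleet -- placeholder IDs,
# not product code. Launch clones for the load test, then terminate them.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

def launch_test_fleet(ami_id, count, instance_type="m5.large"):
    """Launch `count` clones of the application image for load testing."""
    resp = ec2.run_instances(
        ImageId=ami_id, InstanceType=instance_type,
        MinCount=count, MaxCount=count,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "purpose", "Value": "devtest-clone"}],
        }],
    )
    return [i["InstanceId"] for i in resp["Instances"]]

def teardown(instance_ids):
    """Terminate the fleet so you only pay for the test window."""
    ec2.terminate_instances(InstanceIds=instance_ids)

fleet = launch_test_fleet("ami-0123456789abcdef0", count=10)
# ... run the 10,000-user load test against the clones ...
teardown(fleet)
```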
We recently saw a customer who was running Oracle in their production environment and wanted to go from Oracle 11 to Oracle 12. However, they did not want to bring down their production system to do the test, and cloning it in their own data center would have taken a great deal of expense and effort. What we did was essentially replicate their entire Oracle workload into AWS. They then tested the Oracle 11 to 12 upgrade, learned what worked and what didn’t, validated the upgrade and applied it to the production system. That’s a very good example of doing something without incurring a great deal of CapEx or the effort required to create the environment.
Could you share one great customer example that really shows how people are using this?
One is a company called Exar. It’s a manufacturing company and they were faced with a few issues at the same time. One was that they wanted to run some of their applications in AWS and needed to find a way to get them there with a small IT team - fewer than 15 people, I think. Second, they have a primary data center here in the Bay Area, but the secondary data center was in Sacramento and its aging hardware was coming up for refresh.
It would have cost a great deal of money to buy new hardware and, of course, the CFO was asking why we needed to spend this money on a secondary data center. Why don’t we look into the cloud? The third issue was actually the one I just referred to, the Oracle 11 to Oracle 12 upgrade. They used our software to achieve all three goals, all three use cases. They were able to migrate some of their applications to run in AWS; that was a re-hosting scenario. The second was to take the Oracle applications they were running in their primary data center and protect them, meaning cloud-based recovery in AWS. The third, related to the 11-to-12 upgrade, was cloning for Dev/Test, and they were able to achieve that goal as well.
We’ve seen customers in manufacturing who are very focused on cost savings and being able to do more with less because they operate in a very global and competitive environment, as well as tech companies who want to take advantage of the cloud, media companies in a similar way, government agencies and retail and pharmaceutical. These are the verticals where the notion of cloud migration and cloud recovery is gaining a lot of adoption.
I want to explore your One Hybrid Cloud platform, specifically two key aspects that I think people would be really interested in. One is security and the other is manageability.