Piston Cloud has made the tough private cloud decisions for you

Experts at the company have built big private clouds and know what to put in, what to leave out. One hint: Don't use blade servers

Joshua McKenty, co-founder and chief executive officer of Piston Cloud, which he calls The Enterprise OpenStack Company, was in on the ground floor of OpenStack's creation, working on the Anso Labs team at NASA to build a compute cloud on top of the open source platform Eucalyptus. The team eventually gave up on that approach and wrote Nova, which NASA uses today to power its Nebula cloud environment and which was ultimately contributed to the OpenStack project NASA formed with Rackspace. McKenty left NASA after Anso was acquired by Rackspace in 2010 and formed Piston Cloud in 2011 with co-founders Gretchen Curtis (also of NASA) and Christopher MacGown of Rackspace. Network World Editor in Chief John Dix recently caught up with McKenty for a deep dive on why OpenStack matters and where Piston Cloud fits in.

When OpenStack launched and vendors started joining in, most of the development focus was on what service providers needed to operate at scale, not on what enterprises needed in terms of security, regulatory compliance, ease of use and performance. So we kicked off Piston Cloud with a focus on making an OpenStack distribution geared specifically toward the enterprise and solving some of the really hard security problems. Our first product is Piston Enterprise OS, essentially a very opinionated distribution of OpenStack that makes it easy to build a private cloud environment that meets regulatory requirements.

Opinionated?

OpenStack supports six different hypervisors and five network models and three different ways you can configure the storage backend. So there are a vast number of configurations of OpenStack that don't work at all. And there are a number of features that are only available given specific configurations.

Consider live migration, a feature everybody wants. How do I move a running VM from one server to another? It works really well with OpenStack but only if you are using the right hypervisor on the right shared storage backend with the right network configuration and a little bit of sophisticated understanding of your underlying hardware configuration. Look at Red Hat. Linux itself supports a number of different hypervisors. Red Hat supports one. So the distribution is the opinionated version of the software that is fit for a specific use case.
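To make that dependency concrete: in OpenStack's compute API, live migration is a single administrative action on a server, but it only succeeds when the hypervisor, shared-storage and network pieces McKenty describes are lined up underneath it. Here is a minimal, illustrative sketch of triggering that action; the endpoint, token, server ID and target host are placeholders, not anything specific to Piston's product.

```python
import requests

# Illustrative only: NOVA_ENDPOINT, TOKEN and SERVER_ID are placeholders.
NOVA_ENDPOINT = "http://nova.example.com:8774/v2/<tenant_id>"  # hypothetical compute endpoint
TOKEN = "<keystone-auth-token>"                                # obtained from Keystone beforehand
SERVER_ID = "3f5a0e6c-0000-0000-0000-000000000000"             # the running VM to move

# The Nova compute API exposes live migration as an admin "action" on a server.
resp = requests.post(
    f"{NOVA_ENDPOINT}/servers/{SERVER_ID}/action",
    headers={"X-Auth-Token": TOKEN, "Content-Type": "application/json"},
    json={
        "os-migrateLive": {
            "host": "compute-02",      # target hypervisor host
            "block_migration": False,  # False assumes shared storage, the case McKenty describes
            "disk_over_commit": False,
        }
    },
)
resp.raise_for_status()  # Nova returns 202 Accepted once the migration has started
```

The API call itself is one request; everything that makes it actually work is in the opinionated choices behind it.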

We only support one hypervisor. We only support one network model. We only support one method of storage. And we support that really, really well. So we can guarantee benchmarks on performance given a certain set of hardware because we're only supporting a configuration we know can achieve the optimal performance for a given use case.

These are the same decisions I had to make when I was running a cloud for NASA and the White House. The White House was running a Greenplum database, which has enormous requirements for disk I/O. To meet those requirements I was forced to make a whole set of decisions: how do we configure the JBOD, how do we configure the RAID controllers, what should the stripe width be, and then we had to test that in hundreds of permutations. Which file system? Test that. Which directory structure? Test that. The result of all of those tests and benchmarks is a strong set of opinions about the right way to do it for an enterprise cloud.


Why does the world need OpenStack?

We're moving out of the information age and into the data age. The pioneers in the cloud infrastructure space really are the Googles and the Facebooks and the Twitters, only because they had no choice. It became the thing that made their business viable. When you're making a fraction of a penny per query, you need every query to happen as cheaply as possible. And enterprises are starting to make this transition into the data age as well, striving for those kinds of efficiencies. So there's the trend of doing what you have been doing but for less money, but there's also the pressure to be able to do entirely new things. There are things that are possible with massive amounts of compute or storage resources that have never been possible before. There are insights that can be gleaned from data once you have the capability to store and analyze that data. The challenges of doing it without cloud are enormous. Actually, there's no way to manage infrastructure at scale without it ending up looking like cloud. OpenStack is the next step in the evolution of computing.

Where do you guys fit in?

We are The Enterprise OpenStack Company. We are intensely focused on just one thing, and that's making OpenStack suitable for enterprise. Rackspace says they are the OpenStack company, but at heart they are actually a hosting service provider and fanatical support company. Fundamentally, they will probably do the best damn job of anyone supporting people running OpenStack. And they will sell that support uniformly across SMBs, midmarket and large enterprise. But they're not a software company. And at the end of the day they have built a company and a workforce focused on selling services and support.

We are very single-minded and only do one thing: Make OpenStack software for enterprise. But we believe we do it better than anyone else because we are extremely focused and because we happen to be experts in this area. I was the technical architect of Nebula, the project at NASA that eventually became OpenStack Compute. Half of our engineering team also came out of NASA, where they worked on some seriously complex security problems.

My co-founder Gretchen Curtis worked with me at NASA and helped write the Federal Cloud Computing Strategy with Vivek Kundra's team at OMB, and my other co-founder, Christopher MacGown, worked on some of the earliest implementations of OpenStack storage at Rackspace.

We continue to be core contributors to the open source project, and I sit on the project policy board. We haven't really done anything that adds functionality to the cloud experience. All we've done is write code that makes OpenStack really easy to deploy and manage for enterprise users, because that's what we know best. And we are extremely proud of the high-availability, security and configuration pieces that we've crafted. These components are very opinionated and specifically focused on an enterprise private cloud environment; they're not for everyone. For example, they're not really suitable for public cloud service providers.

We are also, I believe, the only OpenStack distribution that is only an OpenStack distribution. Midokura has a distribution but they are also doing networking software, which is their core competence. They did a distribution largely to jump-start selling their network software.

Is there a danger of too many distributions surfacing and segmenting the market?

I don't think so. I think the major players are all announced. Canonical has a distribution that I think will appeal to people who buy from Canonical, and they do a great job on the free side, so that will be popular. StackOps has a certain European flavor. They've really focused on a developer-friendly downloadable: you can boot up a cloud from a disk image, which is cool, but it's not how you would deploy a production environment. They're really using it as a gateway to sell their professional services.

I don't think Red Hat's going to come out with a distribution. So I don't think there will be a lot of pure-play distribution companies. There will be folks like Nebula doing an appliance, and there will be others doing an OpenStack API on top of whatever their product is, which is interesting for some use cases, but not the same as having a distribution.

As far as fragmentation goes, even if there are 100 distributions, like in the early days of Linux, as long as we're all compatible with OpenStack, as long as we're all interoperable, we won't end up with a fragmented ecosystem. And I think that's really the important part. Linux did a pretty good job of this. You can still take dev packages from Debian and install them on Ubuntu, and 99% of the time it works.

So the strong standards in place for OpenStack will really help. Bear in mind I'm the chair of the Faithful Implementation Test (FIT) working group. The goal of that group is to define the tests a distribution or a product has to pass in order to be called OpenStack, whether it's powered by OpenStack or built on OpenStack or OpenStack compatible or compatible with OpenStack storage. You have to be not just interoperable; you actually have to be running almost 100% of the same implementation code.

For a while I thought Citrix was our most viable competitor. I was really excited because they're a big company and they had a big commitment to OpenStack. They've sort of changed focus a bit with the Cloud.com acquisition, but they still have some really smart folks focused on OpenStack and Project Olympus. So when they bring that to market, the best thing that can happen is that Project Olympus is an enormous success and is 100% compatible with OpenStack, because then you've got a distribution of OpenStack on Xen supported by Citrix, and the distribution of OpenStack on KVM supported by Piston Cloud.

People can say, "I'm a Citrix user and I like NetScaler and I like XenDesktop and I like these other Citrix products, and I'm going to buy Project Olympus from Citrix and know that I have OpenStack in house and it integrates well with all my Citrix stuff."

Or they can say, "We're a Red Hat shop and we like KVM and security is really important to us so we're going to take Piston Enterprise OS and we know that it's interoperable if we decide to use Project Olympus over in the marketing group."

You've mentioned compliance and security a few times; what makes you folks uniquely able to address that?

We've been really clear to say we're never going to be the cheapest and we're never going to have the most options; what we're focused on is what we know we're good at. We did this at NASA and for the White House. We understand regulatory compliance. We understand what happens when you need to audit virtualized infrastructure.

If you have a server in a data center as part of your cloud environment, with half a dozen virtual machines on it, and one of them is running WordPress blogs for your marketing group, and there is an exploit and WordPress gets compromised and something heinous is put up, your security team will come to you as the cloud operator and say, "You've got 30 seconds to convince us that that virtual machine is isolated from everything else on that box and that you can suspend it and provide us with audit capabilities for that entire environment, or we're going to unplug the physical hardware and walk out of here with it, taking the other five virtual machines down as well." [Also see: "30,000 WordPress blogs infected to distribute rogue antivirus software"]

So in that environment, understanding regulatory compliance and understanding how to work with enterprise IT, and specifically enterprise security, is mission critical, because who cares about the WordPress blog? What you really care about are the five other VMs on that box running some mission-critical application. So either you need to make sure the other VMs were also WordPress blogs, meaning you segmented your cloud environment into zones, or you need to be able to provide forensic support so you can say, "Here's how you can control and manage and audit and suspend and resume and clone and snapshot that VM, and yes, we can give you adequate assurance, and in fact we support the CloudAudit API, so you can assess the state of the host without having to kill the rest of these VMs."
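For readers who want to see what that isolate-and-preserve workflow looks like against a stock OpenStack compute API, here is a minimal sketch. The endpoint, token and server ID are placeholders, and the exact procedure a security team follows will vary; the point is simply that snapshotting and suspending one guest leaves the other VMs on the host untouched.

```python
import requests

# Placeholders only; nothing here comes from Piston's product or the article.
NOVA_ENDPOINT = "http://nova.example.com:8774/v2/<tenant_id>"   # hypothetical compute endpoint
HEADERS = {"X-Auth-Token": "<keystone-auth-token>", "Content-Type": "application/json"}
COMPROMISED_VM = "a1b2c3d4-0000-0000-0000-000000000000"          # the WordPress guest

def server_action(server_id: str, body: dict) -> requests.Response:
    """POST an action to one server; the other VMs on the same host are untouched."""
    resp = requests.post(f"{NOVA_ENDPOINT}/servers/{server_id}/action",
                         headers=HEADERS, json=body)
    resp.raise_for_status()
    return resp

# 1. Snapshot the compromised guest so the security team gets an exact disk image to
#    examine offline (snapshotting first, while the instance is still ACTIVE, avoids
#    restrictions some Nova versions place on snapshotting suspended instances).
server_action(COMPROMISED_VM, {"createImage": {"name": "wordpress-forensic-snapshot"}})

# 2. Suspend the guest, freezing it in place without touching the physical host.
server_action(COMPROMISED_VM, {"suspend": None})
```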

Piston Cloud was the first to build a reference implementation of CloudAudit, a new IETF draft standard that addresses exactly this. In any regulated IT environment you've got to be audited, usually on a quarterly basis. The auditor has hundreds of controls they're required to look at, and proving those controls has never been simple in a virtualized environment.
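The idea behind CloudAudit is that audit evidence is published as ordinary HTTP resources under a well-known URL namespace, so an auditor can fetch it without the operator pulling machines apart. The sketch below illustrates that pattern; the host name and the control path are purely hypothetical, and the real namespace layout is defined by the CloudAudit (A6) draft and the provider's own published assertions.

```python
import requests

# Hedged sketch: host and control identifier below are invented placeholders.
CLOUD_HOST = "https://cloud.example.com"                 # hypothetical provider endpoint
NAMESPACE = ".well-known/cloudaudit"                     # base namespace used by the draft
CONTROL = "org.example.compliance/pci-dss/1.1.1"         # illustrative control identifier

# Each control maps to a URL; fetching it returns the supporting audit artifact.
resp = requests.get(f"{CLOUD_HOST}/{NAMESPACE}/{CONTROL}")
resp.raise_for_status()
print(resp.headers.get("Content-Type"), len(resp.content))
```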

You announced the general availability of Piston Enterprise OS in January. What's next for Piston Cloud?

Now that we've launched, we're gathering feedback from early customers and making sure OpenStack itself is moving in the direction it needs to go to support their needs. We're also working on the next version of Piston Enterprise OS, which will be significantly better because of the work we've been putting into Essex, the next OpenStack release. There are some cool features that we cut from GA that will be in the next release. Most of those are around multi-data-center support. How do you do live migration, not just between a couple of racks or within one data center, but across data centers? That's also been something OpenStack itself has been focused on. How do we do multi-backend store replication for Keystone? There are a bunch of these fundamental things we need to work on inside OpenStack before we can actually productize them.

One thing that has been really interesting is hearing the feedback from early customers on what they're using the product for. In some cases it's nothing at all like what we thought.

In what regard?
