OpenStack’s director: Why open source cloud should be the core of your data center

Amazon, Microsoft and VMware rule the cloud. Here’s where OpenStack’s opportunity is


OpenStack Executive Director Jonathan Bryce at OpenStack’s Tokyo Summit in 2015.

Credit: OpenStack Flickr

Six years ago, engineers from Rackspace and NASA met over two days in Austin, Texas, for the very first OpenStack Summit. Now, OpenStack is returning to its roots.

As it does so, OpenStack has cemented itself as the dominant open source IaaS platform. At the same time, though, proprietary offerings from vendors such as Amazon Web Services, Microsoft Azure and VMware still reign in the broader market.


OpenStack Foundation Executive Director Jonathan Bryce sees an opportunity for the project to become a platform for a next generation of tools for building modern applications. He says the industry needs an open source cloud now more than ever.

Brandon Butler: Overall, where do you see OpenStack fitting in with the cloud market right now?

Jonathan Bryce: If I were going to pick an overall theme that we’ve seen from users, it’s the incredible enthusiasm and engagement that has driven practical improvements in the experience of deploying, operating and managing OpenStack clouds. We’ve also seen real growth in the variety of workloads running in OpenStack environments. In the past we had a lot of organizations deploying OpenStack, but the workloads were mainly development, test, maybe some mobile applications.

At the Summit in Austin we have SAP speaking, we have Oracle speaking – true enterprise companies and their customers talking about moving traditional enterprise workloads onto their OpenStack environments. At the same time we’re going to have demos of Internet of Things workloads and Network Function Virtualization (NFV) workloads running on OpenStack. This engagement from users is driving real practical improvements to the system.

What do you see as the true value proposition of OpenStack compared to the quite mature proprietary cloud offerings in the market?

We live in a really exciting time in this industry. There are so many new technologies coming out and so much happening and everyone is trying to figure out the right mix of what to run in their own data center, what to run in the public cloud, what to run in containers, etc. The interesting thing about OpenStack is it plays a role in all of those environments.

We’ve seen a pretty big uptick in the last six months of public cloud service providers running OpenStack at scale, but really focusing on verticals and regional specialization. It’s been a big trend in Europe. Deutsche Telekom launched their major public cloud initiative powered by OpenStack; City Networks has created a European public cloud based on OpenStack for European financial services businesses, and it meets the EU’s security and data-protection regulations. The largest insurance company in Sweden has moved onto it.

At the same time, we have SAP speaking at the conference about running OpenStack internally and customers using OpenStack to run SAP workloads. We see all of this at play, and I think the flexibility of OpenStack puts us in a lot of those conversations. I have not seen anything that makes me believe that the number of tools and components that companies use will shrink dramatically; data centers are getting more diverse, and they’re looking for systems that embrace that diversity.

So how does OpenStack enable that diversity?

The overall strategy of OpenStack is to be an integration engine for all those technologies that matter. That includes virtual machines, which a lot of people traditionally think of OpenStack for. But we have a lot of users deploying OpenStack to manage bare metal; they’re using OpenStack to manage servers for big data workloads or running containers to get maximum performance.

In fact, right now there’s a lot of excitement about containers. There’s Docker and the associated services, there’s the Open Container Initiative and frameworks like Kubernetes and Mesos. But what sometimes gets left out of the equation there is that these technologies expect to have a Linux server to run on. So the question is where does that server come from? How does it get compute and network? How do you automate the management of that underlying infrastructure?

We’ve seen existing enterprise users – like Time Warner Cable – lay OpenStack down as the foundation of their basic infrastructure. That gives them the ability to automatically provision and manage those Linux servers that these higher level container frameworks run on. It also has the benefit of tying into all of the work they’ve put into securing and monitoring their network with OpenStack.

One of the big opportunities going forward is to take some of the services we have in OpenStack, like the Ironic bare-metal service and the Neutron networking service, and change the way people think about OpenStack. Take those two, integrate them with Kubernetes and you’ve got a high-performing, fully automated, bare-metal container stack. A lot of these popular services right now provide velocity and opportunity at the developer level, but they don’t go all the way down to the bottom of the data center. That’s where the opportunity is.
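To make the layering concrete, here is a minimal sketch of how the "underlying infrastructure" step looks in practice: building the request body for Nova's create-server API call to stand up a host that a container framework like Kubernetes would then run on. The image, flavor and network IDs below are hypothetical placeholders (in a real deployment they would come from Glance, Nova and Neutron), and the cloud-init payload is illustrative only.

```python
import base64

def container_host_request(name, image_id, flavor_id, network_id, cloud_init=""):
    """Build the JSON body for Nova's create-server call (POST /servers).

    All IDs here are placeholders; a bare-metal deployment would pass a
    flavor that Nova schedules onto an Ironic-managed node.
    """
    return {
        "server": {
            "name": name,
            "imageRef": image_id,
            "flavorRef": flavor_id,
            # Neutron supplies the network the new host attaches to.
            "networks": [{"uuid": network_id}],
            # cloud-init payload (base64-encoded, per the API) that could,
            # for example, install a kubelet and join the cluster.
            "user_data": base64.b64encode(cloud_init.encode()).decode(),
        }
    }

body = container_host_request(
    "k8s-node-0",
    image_id="ubuntu-16.04",        # placeholder Glance image ID
    flavor_id="baremetal.general",  # placeholder Ironic-backed flavor
    network_id="provider-net",      # placeholder Neutron network ID
    cloud_init="#!/bin/sh\n# hypothetical: join the Kubernetes cluster here\n",
)
```

The point of the sketch is the division of labor: OpenStack provisions and networks the Linux server, and the container framework only ever sees a ready host.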

What would you say to the argument that almost all workloads will go to the public cloud eventually, so any investment in private cloud is basically futile?

A lot of the OpenStack environments we have absolutely make use of public cloud, but their OpenStack private clouds are growing as well. It’s not that companies like Wal-Mart, PayPal and American Express are standing up OpenStack as a stopgap while they move everything to the public cloud. They’re using them both and their usage of cloud is growing overall. Hyperscale public cloud will absolutely be a piece of the future of the cloud market, but really the future of the cloud market is the overall IT market of several trillion dollars. It’s just not realistic to think all of that will move to the public cloud.

Let’s talk about the current status of the project. Is there a focus of the latest release of OpenStack, named Mitaka?

The releases are huge now, with hundreds of features, but manageability, scalability and end-user experience have been a focus. Some of the advancements I’m most excited about are on the manageability and scalability side. OpenStack is very flexible, which is part of its power, but that flexibility can make it difficult to set up at first. The Nova compute project and the Keystone identity management project – two of the most widely deployed components of OpenStack – both focused significantly on improving configuration and initial deployment. They removed configuration options, defined more defaults and combined steps, which results in a much simpler process for setting up those two critical components of OpenStack.
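As a rough illustration of what that pared-down configuration looks like, a Mitaka-era nova.conf can be reduced to little more than connection strings and credentials, with most other options left at their defaults. The hostnames and passwords below are placeholders, and exact option names should be checked against the release’s install guide:

```ini
[DEFAULT]
# Message queue; most other [DEFAULT] options now have usable defaults.
transport_url = rabbit://openstack:RABBIT_PASS@controller

[api_database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api

[database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova

[keystone_authtoken]
auth_url = http://controller:35357
username = nova
password = NOVA_PASS
```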

What do you see as the biggest improvements still needed in OpenStack?

We’ve really had a focus on the user experience for the cloud developer and application developer. But as the number of projects and capabilities has grown, I think there are still some inconsistencies across the services. There are efforts in the community to ensure we’re offering a standard across the project. We’ve done a good job delivering a robust set of infrastructure services, now we need to make sure it’s a really great experience, not only for the cloud operators, but for the end users too.

One of the ongoing criticisms of the project is that it is complicated to deploy and manage. There’s a broad array of vendors that seem willing to help with that. How do you ensure that improvements to the project promote simplicity? Don’t vendors have an incentive to keep the project complicated so they can sell services to manage that complexity?

One of the trends that has been most exciting for me personally is how actively engaged operators and users are getting directly in the community. That’s why we build the software, so having them contribute directly is really powerful. If you look at some of the top contributors in the Mitaka cycle, there are users from the telecom industry, enterprise users like Yahoo and Wal-Mart. That’s one of the things we try to encourage at the foundation, because they’re the ones who are going to keep it grounded and headed in the right direction. I think there’s a lot we can continue to simplify, but there are also things about cloud in general that are complicated just due to how powerful the technology is. You don’t want to oversimplify it because you would lose the benefit of being able to integrate with all of these systems that OpenStack can pull together.

Four years ago in the early days of OpenStack there was a debate about which open source cloud platform would win out among OpenStack, Eucalyptus and CloudStack. Hewlett Packard Enterprise bought Eucalyptus and Citrix recently sold off its stake in CloudStack. OpenStack seems to have won. What was it about OpenStack that allowed it to thrive?

Mark Collier (Chief Operating Officer of OpenStack Foundation): I believe that one of the most important cultural things about OpenStack is that users started it. Rackspace wanted to use the software to provide a service, but it wasn’t trying to sell software; NASA wanted to solve a technical problem of building a cloud for scientists – they weren’t trying to sell software. That really helped the ecosystem grow, because lots of companies saw that this was a level playing field; it wasn’t one dominant company trying to get the lion’s share of the opportunity.

It really was a user-driven thing. That didn’t mean there wasn’t opportunity for vendors; it actually meant there was more opportunity, because everyone had a fair shot. We eventually set up a foundation, and the governance model reflects that to this day. Those are the kinds of beginnings that can light a fire of growth in a community and lead a project to become an industry standard, because every company, whether it’s a user or a vendor, feels like it has a chance to be a part of it. Some of the other projects you mentioned took a more traditional, vendor-driven software model, which I think is a lot harder to balance in an open source project where you’re just trying to solve technical problems.
