A few months ago at the OpenStack Summit in Austin, Texas, Don Rippert, IBM’s general manager of cloud strategy, challenged the various players involved in the OpenStack initiative to demonstrate that OpenStack distributions are, in fact, interoperable—between each other and across on-premises, public cloud and hybrid cloud deployments.
The Interop Challenge was well founded, since one of the major criticisms of OpenStack has been that there is very little consistency between distributions, and as a result, users need to choose their “flavor” of OpenStack and stick to it.
So, it is interesting to hear that IBM will announce the results of that challenge from back in April: experiments in running unmodified applications on any OpenStack cloud. The headline finding is that 17 individual OpenStack cloud vendors successfully completed the interoperability challenge. Names such as IBM (of course), but also AT&T, Canonical, Cisco, Deutsche Telekom, DreamHost, Fujitsu, Huawei, HPE, Intel, Linaro, Mirantis, OSIC, OVH, Rackspace, Red Hat, SUSE and VMware have successfully proven interoperability.
This is no small feat. OpenStack was created back in 2010, and since then, thousands of individual developers, dozens and dozens of different companies and, perhaps most important, many discrete and often opposing commercial drivers have come into play. In this environment of complexity, building a product that is consistent across vendors is incredibly hard. Add to that the fact that the project's focus has swung all over the place over the years (the year of OpenStack PaaS! The year of OpenStack for telco! The year of OpenStack for the enterprise!), and you have a pretty impressive result.
Of course, the foundation spins that as a good thing, saying OpenStack is ever evolving and ever improving. But with this evolution comes complexity. As IBM stated in its announcement: “The potential of OpenStack was limited by the lack of proof of interoperability among various OpenStack environments.”
Rippert focused on this issue:
“For successful open-source projects, customers need three things: innovation, integration and interoperability. Until now, we had innovation and integration, but interoperability was a gaping hole. We at IBM understand that there will never be one single cloud, and customers need to be able to use the combination of cloud solutions that best meets their needs. To enable them to do that in a painless, seamless way, we needed to be truly interoperable with other providers. Today at this deadline and culmination of the Interop Challenge, we’ve achieved that interoperability.”
In terms of how it was actually structured, the Interop Challenge used the deployment and execution of an enterprise workload with automated deployment tools, demonstrating OpenStack's capabilities as a cloud infrastructure that supports enterprise applications.
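To make the idea of “automated deployment of an enterprise workload” concrete, here is a minimal sketch of an Ansible playbook that provisions a server on an OpenStack cloud. The `os_server` module is Ansible's standard OpenStack compute module, but the image, flavor, network and keypair names below are hypothetical placeholders, and this is not necessarily the exact tooling or workload the challenge used:

```yaml
# Minimal sketch: launch one VM on an OpenStack cloud with Ansible.
# Image, flavor, network and key names are hypothetical placeholders;
# credentials come from the standard clouds.yaml / OS_* environment.
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Launch an application server on the target OpenStack cloud
      os_server:
        name: interop-demo-app
        image: ubuntu-16.04        # placeholder image name
        flavor: m1.medium          # placeholder flavor name
        network: private           # placeholder network name
        key_name: demo-key         # placeholder keypair name
        state: present
```

Because a playbook like this speaks only the standard OpenStack APIs, the same file should, in principle, run unmodified against any of the participating clouds, which is exactly the property the challenge set out to demonstrate.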
It also needs to be pointed out that this is not the first interoperability effort within the OpenStack community; previous initiatives focused on API tests (e.g., OpenStack RefStack) and governance (the OpenStack DefCore committee). Those efforts arguably moved the community closer to interoperability, but their success was limited because they were unable to achieve true scenario-level interoperability.
The announcement of the successful challenge itself shows a seemingly unprecedented level of consistency, with half a dozen or more vendors quoted in the release. This in itself is a significant achievement given the aforementioned commercial drivers that everyone faces.
This challenge, and the success it has seen, is a good thing. Of course, some questions remain about the complexity of the reference application, and interoperability is, of course, a continuing story. But giving credit where it's due, this marks a significant milestone for the OpenStack project.
This article is published as part of the IDG Contributor Network.