What has OpenStack done for me lately? The next 5 issues to address

By focusing on these critical areas, the OpenStack community can extend fully-open private and hybrid cloud infrastructure to everyone


This contributed piece has been edited and approved by Network World editors

OpenStack has been on a roll, seeing increased adoption across the business world, highlighted by major deployments from leading organizations like Verizon, BBVA, and NASA Jet Propulsion Laboratory, as well as continued growth in the contributing community. But what’s next?

While it’s nice to see the success of OpenStack in the enterprise, the community cannot rest on its proverbial laurels. Here’s what the OpenStack community and ecosystem need to accomplish next:

* Containers, containers and ... containers.  OpenStack isn’t the hottest open source technology on the block anymore; that title now belongs to Linux containers. Containers are an application packaging technology that allows for greater workload flexibility and portability, and support for containerized applications will be key to OpenStack moving forward, especially as enterprise interest intersects both Linux containers and OpenStack.

OpenStack services -- like Neutron for networking and Cinder for block storage -- can already be abstracted and made available via Docker- (container runtime) and Kubernetes- (container orchestration) based platforms. This is important as container-based infrastructure grows, but it’s not as critical as the second need: running containers themselves on OpenStack.

While containerized applications are, at their core, applications, they have a different set of needs than the traditional virtual machine-based cloud applications one would usually see on an OpenStack-based cloud. To make OpenStack more container friendly, we need to better expose the underlying plumbing of OpenStack -- the networking, storage and management pieces comprising the framework -- to container technologies. We’re seeing that happen, highlighted by projects like Kuryr and, more recently, KubeVirt, but these efforts need to be expanded and delivered as supported products for the future.
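One concrete way that plumbing already gets exposed is Kubernetes claiming Cinder-backed block storage through a StorageClass (Kubernetes shipped an in-tree `kubernetes.io/cinder` provisioner for this). A minimal sketch of the manifest a workload would submit -- the class name is an assumption for illustration:

```python
# Sketch: a Kubernetes PersistentVolumeClaim that consumes OpenStack
# Cinder storage via a StorageClass. The class name "cinder-standard"
# is hypothetical; a cluster admin would define it against the Cinder
# provisioner. Built as a plain dict so the sketch is self-contained.
import json

def cinder_pvc(name, size_gb, storage_class="cinder-standard"):
    """Return a PVC manifest requesting Cinder-backed block storage."""
    return {
        "apiVersion": "v1",
        "kind": "PersistentVolumeClaim",
        "metadata": {"name": name},
        "spec": {
            "accessModes": ["ReadWriteOnce"],
            "storageClassName": storage_class,
            "resources": {"requests": {"storage": f"{size_gb}Gi"}},
        },
    }

print(json.dumps(cinder_pvc("app-data", 20), indent=2))
```

The point is that the container platform never talks to Cinder directly; it asks for storage in its own terms, and the OpenStack plumbing fulfills the request underneath.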

* Playing nice with public cloud. OpenStack is already known for providing a powerful private cloud platform, but moving forward, OpenStack will have to serve as a bridge between private cloud infrastructure and public clouds.

This is easier said than done. OpenStack APIs need to manipulate resources across both OpenStack and the public cloud, allowing OpenStack to function as a single control point for services within and outside an organization. This means OpenStack provides the powerful, flexible infrastructure for private clouds while also serving as the cloud management platform for public instances.

Projects like OpenStack Omni are driving toward delivering on this promise, but the community at large needs to understand that OpenStack’s future isn’t necessarily to supplant the public cloud, but rather to augment it. Acting as a hybrid gatekeeper, OpenStack gives organizations more control over how they consume resources, both public and private, and allows for greater adoption of a DevOps-style culture and software as infrastructure, both keys to the future of IT.
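What a "single control point" looks like in practice can be sketched simply: merge per-cloud inventories into one view. In a real deployment each list would come from the OpenStack SDK (e.g. `openstack.connect(cloud="private").compute.servers()`), with the public side reached through OpenStack-compatible APIs as Omni proposes; here static sample data keeps the sketch self-contained, and all cloud and server names are hypothetical.

```python
# Sketch: one operator view over private and public capacity.
def merged_inventory(clouds):
    """Flatten per-cloud server lists into one view, tagged by origin."""
    view = []
    for cloud_name, servers in clouds.items():
        for server in servers:
            view.append({"cloud": cloud_name, **server})
    return sorted(view, key=lambda s: (s["cloud"], s["name"]))

# Hypothetical inventories: a private OpenStack cloud and a public cloud
# fronted by OpenStack-compatible APIs (the OpenStack Omni model).
clouds = {
    "private-osp": [{"name": "db01", "status": "ACTIVE"}],
    "public-aws": [{"name": "web01", "status": "ACTIVE"},
                   {"name": "web02", "status": "SHUTOFF"}],
}

for entry in merged_inventory(clouds):
    print(f'{entry["cloud"]:<12} {entry["name"]:<8} {entry["status"]}')
```

Once both sides answer the same API, everything downstream -- quotas, placement, chargeback -- can treat public capacity as just another region.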

* Opening the door to SDX. By and large, one of OpenStack’s greatest successes is how the framework integrates with software-defined technologies. OpenStack itself delivers a software-defined integration layer across compute, storage and networking. KVM (Kernel-based Virtual Machine) has become the de facto hypervisor for OpenStack, and projects like Manila and Ceph provide common ground for software-defined storage solutions. The last leg of the stool is software-defined networking (SDN), something that remains critical to OpenStack’s future.

Software-defined networking certainly exists as a space outside of OpenStack, with players like VMware, Cisco, Nokia/Nuage, Big Switch and Juniper (just to name a few) offering a host of solutions, alongside community-driven projects like OpenDaylight. This varied space is something the OpenStack community needs to address in 2017, specifically by providing a seamless “better together” experience with as many SDN providers and choices as possible.

This need for expanded choice also plays into the fact that OpenStack must tie in more tightly with SDN in general. As OpenStack-based clouds expand, it’s expected that SDN deployments will expand with them, given the close relationship between the two technologies. Without tighter integration, growing enterprises will face two separate “islands” to manage: one around OpenStack and one around SDN.

* Sunrise on Day 2. The “Day 1” operations of OpenStack -- the installation, configuration and deployment -- are largely settled. While complexities remain, OpenStack’s setup and initial deployments are documented and codified. With that hurdle cleared, the community now needs to set its collective sights on the Day 2 set of management requirements, such as scaling, monitoring and troubleshooting, as well as compliance and optimization, especially when it comes to specific deployments outside of “vanilla” settings.

Look at it this way: All of the components of an OpenStack implementation, from network modules to storage to integrated APIs, generate extensive amounts of data in the form of logs, systems checks and other, harder-to-classify information, all of which requires some level of analysis to be useful. As deployments expand to encompass tens or hundreds of thousands of nodes, this data analysis becomes critical. Otherwise, how will an IT department understand what their cloud is doing? Or where they’re wasting resources? Or whether they have a vulnerability?

What is needed now is an effort from the OpenStack community to standardize how this data is aggregated and consumed. Beyond being better able to pinpoint and solve problems in OpenStack infrastructure, a standardized data analysis model would also help enterprise IT teams to find the best place for a given workload in their infrastructure, helping to eliminate wasted resources, and leading to improved stability.
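Even without a community-wide standard, the shape of such aggregation is straightforward. The sketch below assumes log lines in the default oslo.log layout used by most OpenStack services ("timestamp pid LEVEL logger message"); the sample lines and field positions are illustrative, not a guaranteed format.

```python
# Sketch: rolling OpenStack service logs up into (service, level) counts
# so operators can spot hotspots across nodes. Sample lines mimic the
# oslo.log default format; real input would be streamed from collectors.
from collections import Counter

SAMPLE_LOGS = [
    "2017-03-01 09:12:01.123 4821 ERROR nova.compute.manager Instance failed to spawn",
    "2017-03-01 09:12:02.456 4821 WARNING nova.compute.manager Volume attach slow",
    "2017-03-01 09:12:03.789 1290 ERROR neutron.agent.l3 Router sync failed",
    "2017-03-01 09:12:04.012 1290 INFO neutron.agent.l3 Router sync retried",
]

def summarize(lines):
    """Count (service, level) pairs across a batch of log lines."""
    counts = Counter()
    for line in lines:
        fields = line.split()
        level, logger = fields[3], fields[4]
        service = logger.split(".")[0]   # nova, neutron, ...
        counts[(service, level)] += 1
    return counts

summary = summarize(SAMPLE_LOGS)
for (service, level), n in sorted(summary.items()):
    print(f"{service:<8} {level:<8} {n}")
```

A standardized model would mean this parsing step is written once, not reinvented per deployment, and the resulting counts could feed placement and capacity decisions directly.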

* Stable at any size. While scale is critical to an even broader set of OpenStack deployments (and the expansion of existing environments), it can’t exist in a vacuum. Along with scalability, OpenStack needs to deliver stability, especially as deployments grow not only in size, but in importance to the business.

From a scalability standpoint, the OpenStack community must focus on adding more control. While OpenStack certainly can scale, not every service inherent to a deployment needs to scale at the same time -- consumption should dictate what needs to grow with the broader infrastructure.

The community is starting to deliver this functionality with “composable services,” and soon “composable upgrades,” which make extensive use of automation technologies like Ansible to help users design scalability based on need. That said, granular control of scaling is an area that continues to require attention from the OpenStack ecosystem. It’s closely related to being able to easily identify which components are causing a bottleneck and fixing these problems on the fly.
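The consumption-dictates-growth idea above can be sketched as a simple per-service decision. The thresholds, service names and metrics here are assumptions for illustration; in practice utilization would come from monitoring, and the resulting plan would feed an automation tool such as Ansible rather than being acted on directly.

```python
# Sketch: consumption-driven scaling decisions for composable services.
SCALE_UP_AT = 0.80    # utilization above which a service gains a replica
SCALE_DOWN_AT = 0.30  # utilization below which a service can shrink

def scaling_plan(utilization, replicas):
    """Return per-service replica targets based on observed utilization."""
    plan = {}
    for service, load in utilization.items():
        current = replicas[service]
        if load > SCALE_UP_AT:
            plan[service] = current + 1
        elif load < SCALE_DOWN_AT and current > 1:
            plan[service] = current - 1
        else:
            plan[service] = current   # leave it alone
    return plan

# Hypothetical snapshot: only the overloaded API service grows.
utilization = {"nova-api": 0.91, "glance-api": 0.45, "cinder-volume": 0.12}
replicas = {"nova-api": 3, "glance-api": 2, "cinder-volume": 2}
print(scaling_plan(utilization, replicas))
# → {'nova-api': 4, 'glance-api': 2, 'cinder-volume': 1}
```

The design point is that each service scales independently on its own signal, rather than the whole control plane growing in lockstep.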

Another key piece of the stability equation is enhancing security and compliance for mission-critical workloads running on OpenStack infrastructure. The pace of innovation within the community has, at times, led to changing APIs and even protocols between releases, which has made it difficult for risk-averse industries to adopt the technology outside of R&D environments. But if the community can focus in 2017 and stick to a common set of foundational code, much of the risk attached to OpenStack adoption will fall away.

By focusing on these five critical areas for the future, the OpenStack community can extend fully-open private and hybrid cloud infrastructure to everyone, even those organizations hesitant to add a seemingly complex, rapidly evolving technology to their operations.


Copyright © 2017 IDG Communications, Inc.