At the recent OpenStack Summit in Austin, Texas, infrastructure company CoreOS demonstrated Stackanetes, a new initiative designed to make it easier for organizations to run OpenStack itself as an application on top of Kubernetes.
Kubernetes is, of course, the open source container management project born out of the internal systems that Google uses to manage its own infrastructure.
Stackanetes came from CoreOS's focus on delivering what it calls GIFEE (Google's Infrastructure for Everyone). The idea is that currently only massive organizations like Google have the ability to run these highly efficient platforms. CoreOS wants to democratize that ability.
Stackanetes, then, is an initiative that allows organizations to "Kubernetize" their OpenStack deployments and thereby bring consistency to their infrastructure. The thesis is that organizations will already be using Kubernetes for their cloud-native applications, so being able to use Kubernetes for their more traditionally architected applications, built for virtualized environments, becomes an attractive option.
The notion of a common platform for both virtualized and cloud-native applications has some merits, but there are arguments on both sides. Via email I quizzed Wei Lien Dang, head of product at CoreOS, on some issues related to Stackanetes.
Ben: First, looking at CoreOS's reasons for doing this and what it means for the business itself. There are many who suggest that CoreOS has a serious need to narrow its focus. Some might say that Stackanetes is yet another distraction. How do you think about the various priorities in the context of yet another new initiative?
Wei: "We at CoreOS believe in bringing GIFEE and helping businesses around the world to be successful on their journey with virtualization, distributed systems, Kubernetes and containers.
The movement around containers and distributed systems is one of the largest shifts in infrastructure platforms since cloud itself. Such a change creates a lot of confusion, particularly around platforms that are seen to be similar in nature. And we want to bring the worlds of VMs and containers together to lessen confusion and increase adoption and success. To that end, we are developing Stackanetes to bring the benefits of OpenStack alongside the power and automation of Kubernetes. We believe that once it is easier to deploy and manage OpenStack, we’ll see rapid acceleration in adoption, quality and development of the project.
Stackanetes is very much a part of the GIFEE focus. In fact, one can argue that Stackanetes brings even more of a focus to the CoreOS solutions portfolio in delivering a single platform that can manage VMs and containers. Enterprises and customers have expressed significant pain points with OpenStack lifecycle management and more, and Stackanetes is a specific solution that alleviates these pains.
Also, remember OpenStack is just software. CoreOS is working with partners like Intel and the OpenStack community to make it easier to take advantage of Kubernetes."
Ben: It seems that there are two somewhat conflicting approaches. One is cloud-native applications sitting in containers alongside more traditional legacy virtualized apps. In that scenario, the two types of applications are separate and there is no need to focus on reconciling the way they are both managed. The other is Kubernetes covering both virtualized applications running on OpenStack and cloud-native ones. What do you think about these two very different approaches?
Wei: "Both scenarios you described are valid, and we are engaged with customers and partners to enable Kubernetes on virtualization and infrastructure as a service (IaaS).
By running OpenStack as an application on Kubernetes, we can pull together the entire data center into a single platform that has been proven by hyperscale giants. The power to manage and deploy OpenStack becomes as simple as any application running on Kubernetes, providing enterprises with a path to get the benefits of both containers and virtualization-based IaaS.
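In practice, "OpenStack as an application on Kubernetes" means describing each OpenStack control-plane service as an ordinary Kubernetes workload. The sketch below is purely illustrative, not taken from the actual Stackanetes manifests: the image name, label values and replica count are assumptions, with Keystone (OpenStack's identity service) chosen as the example.

```yaml
# Hypothetical sketch: the Keystone identity service run as a
# plain Kubernetes Deployment. Image and values are illustrative,
# not the real Stackanetes configuration.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: keystone
  labels:
    app: keystone
spec:
  replicas: 2                # Kubernetes keeps two copies running
  selector:
    matchLabels:
      app: keystone
  template:
    metadata:
      labels:
        app: keystone
    spec:
      containers:
      - name: keystone
        image: example.org/openstack/keystone:latest  # placeholder image
        ports:
        - containerPort: 5000   # Keystone public API
```

Once a service is expressed this way, deploying or moving it is the same `kubectl` workflow used for any other application, which is the consistency argument Wei is making.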
Regarding cloud-native apps in containers in VMs alongside legacy virtualized applications, one thing that isn’t ideal is that legacy VM-based infrastructure is less agile and requires more maintenance.
With the approach of Kubernetes covering both via OpenStack as an application, companies will be able to scale their operations teams and the infrastructure in the way Google does. That’s a huge win for companies of any size.
Common misconceptions have been 'containers and VMs don’t work together' or 'legacy apps can’t work with containers.' That is not true, of course. The benefits of container-enabled infrastructure are that developers can focus on developing their apps, and infrastructure can focus on core competencies thanks to the higher level of abstraction."
Ben: More generally, what are your thoughts on the seeming ever-increasing complexity at the orchestration and general infrastructure layer and the need (or otherwise) to simplify?
Wei: "Any movement around new software starts with confusion, so vendors and community members need to work together to bring education to all. Look at the thousands of people at OpenStack Summit, which shows the growing community and involvement, and it will only get bigger.
With this era of computing we are in, the infrastructure layer is in a renaissance, and it is actually being simplified thanks to commodity hardware that is available to consume today. The end result is any complexity of orchestration should be abstracted from end users because the infrastructure is trusted to be managed in a modern way and is “invisible”—essentially plumbing that works well. Kubernetes abstracts away the complexity of orchestration. Kubernetes and containers are here to simplify the infrastructure layer, and now we are at the brink of enterprise companies strategizing their modern infrastructure to take advantage of this.
Open source is the way to enable change and adoption, especially in infrastructure. We solve all the hardest problems out in the open to help drive us all toward the common goal of running GIFEE—hyperscale infrastructure that focuses on securely and reliably deploying and managing distributed applications. As a universal scheduler built by the experts that created Google’s infrastructure along with a thriving community, Kubernetes has opened up a world of possibilities by treating the data center as one object.
On the infrastructure side of things in particular, there are only a handful of ways of doing things. What we are seeing is a shift in enterprises looking for consistency, efficiency and more cost-effective ways to run their businesses. This next era of GIFEE is here because businesses are seeking the benefits that come from running hyperscale infrastructure."
Ben: Diving into specific value propositions, what do you feel is the biggest value proposition for Stackanetes—supporting both types of application, or the fact that it enables the formerly difficult-to-manage OpenStack to become somewhat self-healing?
Wei: "The biggest value proposition of Stackanetes will be the benefit of consistent deployments across environments, thanks to running OpenStack in containers on a shared container management platform. Bringing OpenStack and Kubernetes together means dynamic management of the OpenStack cluster itself, self-healing of the OpenStack control plane components and painless upgrades with Kubernetes. Stackanetes also helps solve the pain points of OpenStack lifecycle management."
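The self-healing and painless-upgrade behavior Wei describes comes from standard Kubernetes mechanics rather than anything OpenStack-specific. A hedged sketch of the relevant pieces of such a Deployment spec, using the Nova API service as an example (the probe path, image tag and timings are illustrative assumptions):

```yaml
# Hypothetical Deployment fragment showing the Kubernetes features
# behind "self-healing" and "painless upgrades". Values are
# illustrative, not from the actual Stackanetes manifests.
spec:
  replicas: 3
  strategy:
    type: RollingUpdate        # swap pods gradually on upgrade
    rollingUpdate:
      maxUnavailable: 1        # never take more than one replica down
  template:
    spec:
      containers:
      - name: nova-api
        image: example.org/openstack/nova-api:v2   # bump the tag to upgrade
        livenessProbe:          # restart the container if it stops responding
          httpGet:
            path: /healthcheck  # assumed health endpoint
            port: 8774          # Nova API port
          initialDelaySeconds: 30
          periodSeconds: 10
```

Failed containers are restarted by the liveness probe, crashed nodes have their pods rescheduled elsewhere, and an upgrade is just a change to the image tag rolled out one replica at a time.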
It's hard when things aren't black and white. Much of what Wei says makes sense. As one OpenStack insider said to me at the summit: if you boil it down, since Kubernetes is an application platform, and OpenStack can be thought of as an app, why not? This moves the complexity of running OpenStack (which has to exist somewhere) onto a common deployment platform. It makes OpenStack more readily manageable, an important requirement for increasing OpenStack adoption.
But others disagree, suggesting that the virtual machine is the best foundational element, with legacy applications running directly on VMs and cloud-native applications running within containers sitting on top of VMs. That camp argues that it doesn't make sense to artificially insert Kubernetes underneath existing legacy applications. As the saying goes, it seems to be turtles all the way down.
One thing is for sure: This debate won't be resolved anytime soon. Much water will pass under the bridge before the argument over which layer is the base layer par excellence is settled.
This article is published as part of the IDG Contributor Network.