Elastic cloud apps are great, but how do we protect the containers that power them?

As we think about deploying containerized applications in the cloud, we first need to be confident that they are sufficiently secure and protected.

How do we protect containerized apps?

Increasingly, organizations are recognizing—and taking advantage of—the benefits of cloud-based apps.

The compute, storage and I/O cloud infrastructure is dynamic, allowing new virtual resources to be created and made available to the application at a moment’s notice. Also, each cloud application is decomposed into a number of containerized functional units that can be added, deleted or changed as needed.

This latter point is the buzz of our industry—containerization.

As we march into a world of dynamic containerized applications, however, we need to keep in mind that there are subtle differences between them and their static virtual machine (VM) predecessors.

For one thing, a static VM-based application is an isolated workload in a virtual machine, whereas a container is just one of many components of a partitioned workload constructed by chaining all the components together. This difference has profound implications for protecting application communications.


Also, a static VM-based application is reached via the network address of its VM, whereas each container in a containerized application is its own network-addressable entity. This is important because protecting applications is now moving from protecting just the VM to protecting each container and the communication links that interconnect them. While this may sound like a concern, it can actually be a key attribute of containerized applications, offering a new opportunity to protect and secure what we deploy in and across public and private clouds.

Securing containerized apps

As we think about deploying applications in the cloud, we first need to be confident that each container, and the image it is built from, comes from a known, trusted source. Fortunately, this is becoming less of a concern, as image authors now have the ability to sign the images from which containers are built.
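The trust check described above boils down to comparing a pulled image against a digest the author published. Here is a minimal sketch in Python, assuming a hypothetical `TRUSTED_DIGESTS` record; in a real deployment this would be a cryptographically signed manifest served by a registry's content-trust service, not a hard-coded dictionary:

```python
import hashlib
import hmac

# Hypothetical record of digests published by a trusted image author.
# In practice this would be a signed manifest from the registry,
# verified against the author's public key.
TRUSTED_DIGESTS = {
    "webapp:1.4": "sha256:" + hashlib.sha256(b"webapp-1.4-layer-data").hexdigest(),
}

def verify_image(tag: str, image_bytes: bytes) -> bool:
    """Return True only if the pulled image bytes match the trusted digest."""
    expected = TRUSTED_DIGESTS.get(tag)
    if expected is None:
        return False  # unknown image: refuse to run it
    actual = "sha256:" + hashlib.sha256(image_bytes).hexdigest()
    # Constant-time comparison avoids leaking digest prefixes via timing.
    return hmac.compare_digest(expected, actual)

print(verify_image("webapp:1.4", b"webapp-1.4-layer-data"))  # matches
print(verify_image("webapp:1.4", b"tampered-layer-data"))    # rejected
```

The key property is the default: an image with no trusted digest on record is rejected, not run.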

While containers themselves are becoming more trustworthy, communication between containers is much less so. For instance, restricting access to one container from another is a tough policy to enforce, and it affects containers regardless of whether they are deployed within VMs or directly over Linux. Another major concern comes from the fact that a container’s network daemon runs in user space and is therefore discoverable and vulnerable to user space attacks.
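To make the access-restriction problem concrete, one common shape for such a policy is a default-deny allow-list over (source, destination, port) links. This is only an illustrative sketch with invented container names, not a description of any particular container platform's enforcement mechanism:

```python
# Hypothetical allow-list: which container may open a connection to
# which other container, and on what port. Anything not listed is denied.
ALLOWED_LINKS = {
    ("frontend", "api", 8080),
    ("api", "db", 5432),
}

def is_allowed(src: str, dst: str, port: int) -> bool:
    """Default-deny check consulted before a cross-container connection."""
    return (src, dst, port) in ALLOWED_LINKS

print(is_allowed("frontend", "api", 8080))  # permitted link
print(is_allowed("frontend", "db", 5432))   # denied: frontend must go through api
```

The hard part, as the article notes, is not expressing such a rule but enforcing it consistently wherever the containers land.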

To address the issues of access control and the risks of container user space attacks, stronger user space protection is required. One approach may be to develop a memory architecture that gives selected user space container memory the same type of protection afforded to kernel space memory; Intel’s Software Guard Extensions (SGX) is one example of this trend.

Another inter-container communication issue with a significant impact on container network security comes from the sheer number of containers deployed in a cloud-native application. Instead of protecting the network periphery of a single VM, we now have to deal with the network boundary of every one of a potentially large number of container instances.

Further, in a containerized application, a container may be deployed in a public cloud, a private cloud or an enterprise data center. This requires that container-specific network policies be insulated from the infrastructure they run on regardless of where they are deployed, and it necessitates a way to automate network configuration and policies across hybrid clouds and on-premises data centers. I call these new hybrid cloud applications “vertical cloud deployments” to highlight the fact that diverse cloud and network policies and configurations, as well as security control, must now be well insulated and controlled to achieve application elasticity.

For inter-container communications, we have much work to do to take us from where we are today to where we need to be as far as automated configuration, central network and security policy control, and continuous DevOps are concerned.

The new world of micro-workloads

Much like VM hypervisors have perfected VM workload scheduling, resource management and migration, we now have to shift our focus to the next frontier—the new “micro-workload world” powered by containers and microservices. In this new environment, a microservice or container is a basic unit that is network visible and accessible. Each instance needs to be authenticated via a “virtual secure boot.” Further, each can define how it is to be access-controlled, by whom, and which actions it can trigger. Welcome to the new world.

As we tackle these new problems, we can borrow a chapter from what we learned developing software-defined networking (SDN). SDN can be credited with giving birth to what we now understand as container and VM networking. It provided an open framework promoting the separation of the control plane and data plane, in which software-defined features in an open control plane drive and define what the network executes.

Likewise, the issues we are dealing with for containerized micro-workloads can be better addressed if we break out the control part of the problem and make it open, so that control and management policies for these micro-workloads can be in the hands of those who create and manage them. At the same time, the containers and microservices themselves become smart enough to enforce centrally controlled access and network security policies.

This points to a vision where container interconnections are application-aware such that application-specific access control rules can be configured, enforced or dynamically changed based on a newly enacted policy that could prevent the spread of a real-time threat.
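The dynamic-policy idea above can be sketched as a link policy object that containers consult on every connection and that a central control plane can update at runtime, for instance to quarantine a compromised container. The class and container names here are invented for illustration, assuming a hypothetical control plane that calls `revoke()` when a threat is flagged:

```python
class LinkPolicy:
    """Minimal sketch of a centrally controlled, dynamically changeable
    link policy. Containers check permits() before accepting a connection;
    a hypothetical control plane calls revoke() in response to a threat."""

    def __init__(self, allowed):
        self.allowed = set(allowed)

    def permits(self, src: str, dst: str) -> bool:
        return (src, dst) in self.allowed

    def revoke(self, src: str, dst: str) -> None:
        # e.g., the control plane has flagged `src` as compromised
        self.allowed.discard((src, dst))

policy = LinkPolicy([("frontend", "api"), ("api", "db")])
print(policy.permits("frontend", "api"))  # allowed initially
policy.revoke("frontend", "api")          # threat detected in frontend
print(policy.permits("frontend", "api"))  # link now severed
```

Because enforcement lives at each link rather than at a single perimeter, revoking one tuple immediately cuts off the compromised component without touching the rest of the application.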

In a data center environment where millions of containers might be actively deployed, the ability to prevent the spread of a threat is arguably more important than detection itself. The ability to policy-control the communication links between containers could also be used to enable the notion of a load-balancing container group, in which the number of downstream container instances linked to by an upstream container is scaled based on load.
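The load-balancing-group idea reduces to a simple sizing decision: given observed load and an assumed per-container capacity, how many downstream instances should the upstream container link to? A minimal sketch, with the capacity figure and replica bounds as invented parameters:

```python
import math

def desired_replicas(current_rps: float, rps_per_container: float,
                     min_replicas: int = 1, max_replicas: int = 20) -> int:
    """How many downstream container instances an upstream load-balancing
    container should link to for the observed request rate, clamped to
    a hypothetical [min_replicas, max_replicas] range."""
    needed = math.ceil(current_rps / rps_per_container)
    return max(min_replicas, min(max_replicas, needed))

print(desired_replicas(950, 100))   # 10: round up to cover the load
print(desired_replicas(30, 100))    # 1: floor at min_replicas
print(desired_replicas(5000, 100))  # 20: capped at max_replicas
```

Because the links themselves are policy-controlled, adding or removing a downstream instance is just an update to the upstream container's link set.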

In short, as workloads are divided into container-created micro-workloads, infrastructure and application services need to move to become micro-workload aware. In that context, access control, link protection, container identity management and virtual network support for container linking all have to evolve.

This article is published as part of the IDG Contributor Network.
