Are inter-container communications the Achilles’ heel of latency-sensitive cloud apps?

For the cloud to become truly 'enterprise-hardened,' application performance challenges must be addressed


Containerization exploits the idea that cloud applications should be developed on a microservices architecture and be decoupled from their underlying infrastructure.

That is not a new concept; software componentization dates back to Service-Oriented Architectures (SOA) and the client-server paradigm. Decoupling applications from their underlying infrastructure aligns with today’s vision that efficient data centers should provide an on-demand resource pool, offering instances of various software-definable resource types spawned as needed. As demand for an application grows and it requires additional resources, its services can span multiple servers (a cluster) within a data center or across a globally distributed infrastructure.

Componentizing an application has proven to offer real benefits in scalability and development speed. But nothing is free: as more containers are linked together, the inter-container communication overhead grows.

And it is not just the complexity of the network between containers that is the problem; excessive data copying can be another. Most performance-sensitive applications are smart enough to avoid transferring large data sets from one container to another. But for the data that is transferred, the latency induced by chaining containers can be alarmingly high. If not given proper consideration, this could be the Achilles’ heel of latency-sensitive cloud applications.
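To put a number on that latency, one simple approach is to measure round-trip time across a single container-to-container hop. The sketch below is a minimal example, not drawn from any particular deployment: it assumes a hypothetical TCP echo service running in a neighboring container at svc-hop1:9000, and the hostname, port, payload size and sample count are all illustrative.

```python
# Minimal inter-container latency probe (sketch).
# Assumes a hypothetical TCP echo service at svc-hop1:9000; the hostname,
# port, payload size and sample count are illustrative only.
import socket
import statistics
import time

HOST, PORT = "svc-hop1", 9000   # hypothetical downstream container
PAYLOAD = b"x" * 1024           # 1 KB payload
SAMPLES = 100

def measure_rtt():
    rtts = []
    with socket.create_connection((HOST, PORT)) as sock:
        for _ in range(SAMPLES):
            start = time.perf_counter()
            sock.sendall(PAYLOAD)
            received = 0
            while received < len(PAYLOAD):        # wait for the full echo
                chunk = sock.recv(4096)
                if not chunk:
                    raise ConnectionError("peer closed the connection")
                received += len(chunk)
            rtts.append((time.perf_counter() - start) * 1e6)  # microseconds
    return rtts

if __name__ == "__main__":
    samples = sorted(measure_rtt())
    print(f"median RTT: {statistics.median(samples):.1f} us, "
          f"p99: {samples[int(0.99 * len(samples))]:.1f} us")
```

Run from inside the upstream container against each hop in a service chain, and the per-hop numbers add up quickly.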

Problems caused by the kernel virtual switch

To make matters worse, Linux uses a virtual switch, implemented as a kernel software facility, to manage inter-container communication. Virtual switching is CPU-intensive and competes for the same CPU cycles needed by the application it serves. It can also trigger excessive inter-core data copying on the multicore servers that dominate today’s data centers. Finally, even when the inter-container communication spans two different servers, the kernel virtual switch is still involved, because container networking is based on virtual networking principles that decouple it from the underlying physical network.
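For readers who want to see where those cycles go, the sketch below reproduces, in simplified form, the kind of kernel plumbing a container runtime typically sets up on a single host: a veth pair attached to a Linux bridge, so every packet between the two endpoints is switched in the kernel. It is an illustration only; it requires root, the interface names are made up, and real runtimes add network namespaces, NAT and more.

```python
# Sketch of the in-kernel plumbing container runtimes typically create:
# a veth pair whose host end is attached to a Linux bridge, so all
# inter-container traffic is switched in the kernel. Requires root;
# interface names are invented for illustration.
import subprocess

def sh(cmd):
    print("+", cmd)
    subprocess.run(cmd.split(), check=True)

# Create a bridge acting as the host's virtual switch.
sh("ip link add name demo-br0 type bridge")
sh("ip link set demo-br0 up")

# Create a veth pair: one end stays on the host, the other would be
# moved into the container's network namespace by the runtime.
sh("ip link add demo-veth0 type veth peer name demo-veth1")
sh("ip link set demo-veth0 master demo-br0")
sh("ip link set demo-veth0 up")
sh("ip link set demo-veth1 up")

# Every frame crossing demo-veth0/demo-veth1 is now processed by the
# kernel bridge code, consuming host CPU cycles per packet.
```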

While latency concerns clearly stand out, they are not the only problem cloud applications might encounter due to container chaining. The link between two containers may be exposed to a different network or cloud environment that warrants a different configuration for access control, policy control, link protection or container authentication, depending on whether the containers are deployed in a public cloud, a private cloud or an on-premises data center. These issues are largely unaddressed by container networking today.
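To make “a different configuration depending on where the link runs” a little more concrete, here is a small sketch of per-environment link policy. The environment names and policy fields are hypothetical and exist only to show the shape of the problem, not any real product’s configuration model.

```python
# Hypothetical per-environment policy for an inter-container link (sketch).
# Environment names and policy fields are illustrative only.
from dataclasses import dataclass

@dataclass
class LinkPolicy:
    mutual_tls: bool          # authenticate both containers on the link
    encrypt_in_transit: bool  # protect the link itself
    allowed_peers: tuple      # simple access-control list

POLICIES = {
    "public-cloud":  LinkPolicy(True,  True,  ("frontend", "api")),
    "private-cloud": LinkPolicy(True,  False, ("frontend", "api", "batch")),
    "on-premises":   LinkPolicy(False, False, ("frontend", "api", "batch")),
}

def policy_for(environment: str) -> LinkPolicy:
    """Return the link policy for the environment this link traverses."""
    return POLICIES[environment]

print(policy_for("public-cloud"))
```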

Don’t get me wrong: I’m a big proponent of containers, and I believe we can dress Achilles’ wound, making his heel as invulnerable as the rest of his body. But there is still serious work to do, and opportunities for innovation, before containers are truly “enterprise-hardened.” Our industry has made amazing progress thanks to the wonderful work of those who have contributed to today’s cloud-native ecosystem. But as we bring enterprise applications to the cloud, a new set of challenges is exposed, and before we can fully exploit the promise of the enterprise-hardened cloud, those challenges must be addressed.

In a way, we can think of containerization as a trend toward micro-segmentation of our data center networks. No longer do we have to build network services around the physical network to protect it; we can build them to protect our application services instead. That shift will give birth to a whole new generation of application-specific innovations. Instead of building a big fat router, for example, inter-container routing can become intelligent enough to find the downstream container best able to provide the service with the best QoS. Network security will still be essential, but protecting applications and containers directly will become far more important.
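As a thought experiment, that routing decision could start as something as simple as probing candidate replicas and picking the one currently offering the lowest latency. The sketch below assumes a hypothetical list of replica endpoints; it illustrates the idea of application-aware selection, not a production load balancer, which would also weigh load, policy and locality.

```python
# Sketch of application-aware downstream selection: probe each candidate
# replica of a service and route to the one with the lowest observed latency.
# Endpoints are hypothetical; a real fabric would cache measurements rather
# than probe per request.
import socket
import time

CANDIDATES = [("replica-a", 9000), ("replica-b", 9000), ("replica-c", 9000)]

def probe(host, port, timeout=0.25):
    """Return TCP connect latency in milliseconds, or None if unreachable."""
    start = time.perf_counter()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return (time.perf_counter() - start) * 1000
    except OSError:
        return None

def best_downstream():
    scored = [(latency, endpoint) for endpoint in CANDIDATES
              if (latency := probe(*endpoint)) is not None]
    return min(scored)[1] if scored else None

print("routing to:", best_downstream())
```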

What we need is a software-based container fabric intelligent enough to address the excess latency introduced by container networking while preserving the benefits of the decoupling it brings. Inter-container communication can move onto an application-aware, smart interconnect that automatically adapts to application workload scenarios and to the underlying networks.

The notion of an application-aware fabric or bus has long existed; the Enterprise Service Bus (ESB) has been used by enterprises for decades. What we need is a container incarnation of that concept: a low-latency, application-aware, secure path between microservices. That, in my mind, is the dressing Achilles needs for his wounded heel.

I look forward to hearing your thoughts.
