Who doesn’t love the fundamental promise of containers? Simple development, segmented applications, rolling changes, etc. They are certainly a blessing to both developers and operations. But if not thoughtfully designed, container virtual networking could be the curse that plagues us for years.
Let’s start with a little perspective. The rise and wide deployment of virtual machines and containers coincides with mainstream data center networking evolving from a hierarchical layer 2/3 design to a flatter layer 2 interconnect. Since cloud infrastructure is inherently multi-tenant, virtual LANs have traditionally been used to isolate applications and tenants sharing a common infrastructure. But as containerized applications explode in number, the 12-bit VLAN ID, which caps the number of VLANs at 4,096, becomes grossly inadequate for very large cloud computing environments.
This limitation is why we are moving to Virtual Extensible LANs (VXLANs), which add a 24-bit segment ID (the VXLAN Network Identifier, or VNI) and raise the number of available segments to roughly 16 million. As with VLANs, only VMs or containers within the same VXLAN can communicate with each other. Tunneling of VXLAN traffic becomes necessary when two VXLAN segments are logically interconnected across an external layer 2 or layer 3 network, whether to facilitate cross-server VM migration or to enable inter-VM and inter-container communication. Of course, VXLANs are not the only way to interconnect VMs and containers. Project Calico, for instance, advocates a layer 3 approach that promises easier coexistence with our IP-centric internet.
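To make the size difference concrete, here is a minimal sketch in Python that builds the 8-byte VXLAN header defined in RFC 7348 and compares the 12-bit VLAN ID space with the 24-bit VNI space (the helper function name is our own, purely illustrative):

```python
import struct

def pack_vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header per RFC 7348.

    Byte 0 carries the flags (0x08 = 'VNI present'), bytes 1-3 and
    byte 7 are reserved, and bytes 4-6 hold the 24-bit VNI.
    """
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    # First 32-bit word: flags in the top byte, reserved bits zero.
    # Second 32-bit word: VNI shifted into the top 24 bits.
    return struct.pack("!II", 0x08 << 24, vni << 8)

# A 12-bit VLAN ID allows 4,096 segments; a 24-bit VNI allows ~16 million.
print(2**12)                          # 4096
print(2**24)                          # 16777216
print(len(pack_vxlan_header(5000)))   # 8 (bytes)
```

The 4,096-fold jump in segment count is the whole story here: the extra 12 bits of identifier are what let a multi-tenant cloud assign every tenant, or even every application, its own isolated segment.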
Regardless of the approach, container and VM networking both rest on the same virtual networking principles: workloads operate with their own private IP and MAC addresses, require management of their own address tables, and reach public networks through purpose-built network gateways. But contrary to the original intent of containers as a lightweight alternative to VMs, the most common practice today is to run containers on top of VMs. This defeats much of the purpose of containers and stacks up multiple virtual networking layers that increase virtualization overhead, all in the name of separating cloud application logic from the infrastructure.
Of course, virtual networking is a necessity, but we seem to be at a crossroads where various approaches are being applied to different environments and specific workloads, adding significant, though unintended, burden to network and data center operators. Further, the added complexity greatly compromises both security and performance, the very reasons virtualization was conceived.
3 virtualization challenges to overcome
We are now confronted with three new challenges in our divided virtualization landscape. First, diverse approaches to virtual networking add both management and execution overheads to our infrastructure. Second, we are faced with multiple approaches to manage and synchronize various IP and MAC tables, address management, and route table updates. And finally, we are saddled with different ways to provision access control rules to VMs, containers and bare metal servers.
On virtualization overhead, SR-IOV makes network flow cut-through achievable regardless of the virtualization technology in use, provided tunneling protocols like VXLAN are supported in cut-through mode. While SR-IOV-based environments like Amazon’s AWS support this fairly seamlessly, traditional TCP Offload Engines (TOEs) are likely to encounter problems because they are inherently unaware of virtual networking endpoint requirements.
The second challenge, managing IP/MAC tables and addresses, can be a real nightmare in a mixed virtualization environment. Fortunately, on the IP side, native Linux iptables and the Open vSwitch framework can at least serve as common denominators for the various frameworks that employ them. For container virtual networking, however, agreeing on a common networking framework for all containers remains one of the most pressing prerequisites for mainstream deployment.
Finally, the issue of access control policy management and enforcement is probably the toughest. The diversity among access control frameworks makes central policy control very hard to manage, let alone enforce. There is no question that a broad-based normalization framework for converging access control policies is needed. Further, it is important to note that containers represent both a migration from VM-centric static workloads to microservice-based sub-workloads and a functional need to provision and enforce network access control rules down at the container level.
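What such normalization might look like can be sketched in a few lines. Both rule formats below are invented for illustration; the point is only that heterogeneous rules must be mapped into one canonical form before a central engine can compare or enforce them:

```python
# Hypothetical sketch: normalize access control rules from two made-up
# source formats into one canonical representation, so a central policy
# engine can reason about all of them uniformly.
from typing import NamedTuple

class Rule(NamedTuple):
    action: str   # "allow" or "deny"
    src: str      # source CIDR
    dst: str      # destination CIDR
    port: int     # destination port

def from_vm_style(rule: dict) -> Rule:
    # e.g. {"permit": True, "from": "10.0.0.0/24",
    #       "to": "10.0.1.0/24", "dport": 443}
    return Rule("allow" if rule["permit"] else "deny",
                rule["from"], rule["to"], rule["dport"])

def from_container_style(rule: str) -> Rule:
    # e.g. "allow 10.0.0.0/24 10.0.1.0/24 443"
    action, src, dst, port = rule.split()
    return Rule(action, src, dst, int(port))

a = from_vm_style({"permit": True, "from": "10.0.0.0/24",
                   "to": "10.0.1.0/24", "dport": 443})
b = from_container_style("allow 10.0.0.0/24 10.0.1.0/24 443")
print(a == b)   # True: both normalize to the same canonical rule
```

Once rules live in a single canonical form, per-container enforcement becomes a matter of attaching normalized rules to container-level endpoints rather than translating between frameworks at enforcement time.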
Containers are certainly a blessing and will likely be the foundation of a new 10-year cycle in cloud-native application deployment. But if not properly and thoughtfully addressed, container networking will be the curse that keeps us from achieving the full potential of the new environments we are counting on.
This article is published as part of the IDG Contributor Network.