Network engineers are from Mars, application engineers are from Venus

We need to build a bridge between these groups and rise above the current set of solutions, or we will experience catastrophic infrastructure failures.


Application and network engineers see the world differently. Unfortunately, these differences often result in resentment, with each party keeping score. Recently, application engineers have encroached on networking in a much bigger way. Sadly, if technical history repeats itself, we will revisit many long-ago problems as application engineers rediscover the wisdom held by network engineers.

There are many areas of network engineering and application engineering where there is no overlap or contention. However, the number of overlapping areas is increasing as the roles of network and application engineers expand and evolve.

[Venn diagram: the overlap between network engineering and application engineering. Source: 128 Technology]

Application engineers will try to do anything they can with code. I’ve spoken to many network engineers who struggle to support multicast. When I ask why they are using multicast, they nearly always say, “The application engineers chose it, because it's in the Unix Network Programming book.” The Berkeley socket programming interface permits the use of multicast, and application engineers then layer lost-packet recovery techniques on top to deliver files and real-time media over unicast and multicast. The Berkeley socket interface does not easily support VLANs, so VLANs have always been the sole property of the network engineer. Linux kernel networking has become much more capable in recent years, allowing engineers to use Berkeley Packet Filters (BPF) and OpenFlow (Open vSwitch) along with traditional IP filters to gain new layers of network programmability. OpenStack Neutron plug-ins provide dynamic endpoint reachability through APIs. The overlapping areas are increasing, and the general programmability and “how-to” being exposed by the public clouds is appealing to application engineers.
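To make the multicast point concrete, here is a minimal sketch of the Berkeley socket calls an application engineer would reach for, written in Python against the standard library. The group address 239.1.1.1 and port 5000 are placeholders, not values from any particular deployment.

    import socket
    import struct

    GROUP, PORT = "239.1.1.1", 5000  # placeholder multicast group and port

    # An ordinary UDP socket, created through the Berkeley socket API.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))

    # IP_ADD_MEMBERSHIP asks the kernel to join the group; the kernel in turn
    # signals the network via IGMP. This one call is all the application sees
    # of the multicast plumbing the network engineer has to support.
    mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

    data, sender = sock.recvfrom(65535)
    print(f"received {len(data)} bytes from {sender}")

Nothing in that code acknowledges PIM, rendezvous points, or any of the multicast routing the network team must operate; from the application side, multicast is one setsockopt call.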

Application engineers do not have the accumulated wisdom of network engineers. They are comfortable with IPsec tunnels that provide point-to-point connections instead of routed network connections. Just look at the mess many companies are now trying to unravel as they ask network engineers to help manage the thousands of VPCs that application engineers acquired haphazardly.

With SDN technologies making rapid inroads into the networking space, many claim the primary advantage will be innovation and programmability, both of which will usher in steady improvements. Sadly, the easy path to solving problems has not been innovation at the routing layer, but going over the top of, or around, the current network. Current network standards change glacially because they are controlled by the major suppliers, so the only viable strategy for innovation has been non-standards-based, over-the-top approaches.

However, most traditional network engineers simply do not have the skills to become programmers overnight. Yet our industry needs the wisdom and knowledge of the past to avoid the catastrophic events that will otherwise likely occur. Examples of increased chaos in networking include:

  • Over 50 SD-WAN companies and solutions, none of which interoperate with one another. All but 128 Technology use tunnels with proprietary structures. The good solutions interoperate with network protocols (BGP, OSPF), but apply policies that route packets over tunnels in violation of routing-table design (hub-and-spoke SD-WAN).
  • The concept of “virtual networking” where overlay networks operate independently of underlay networks. There are calls for standards to connect the overlay and underlay to improve the routing science in the overlay.
  • The intermixing of IPv6 and IPv4 through the use of Carrier Grade NATs and mixed-address tunnels.
  • Dynamic DNS acting as a network load balancer instead of relying on network routing technology (see the resolver sketch after this list).
  • Proprietary IP routing technologies being developed by Google, Facebook, AWS that will not interoperate with any standards-based networking.
  • Hosting public addresses inside data centers at the end of a tunnel, in ways public routing tables would not support, essentially violating the concepts of BGP.
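The dynamic-DNS point above is easy to demonstrate. Here is a minimal sketch in Python, using only the standard library and the placeholder name example.com: a single name resolves to a set of addresses, and changing that set (or its order) steers clients without touching a routing table.

    import socket

    # Resolve a service name; a DNS-based load balancer answers with whichever
    # addresses it currently wants client traffic sent to.
    infos = socket.getaddrinfo("example.com", 443, proto=socket.IPPROTO_TCP)
    addresses = [info[4][0] for info in infos]

    # Traffic distribution happens here, in name resolution, not in routing.
    for addr in addresses:
        print(addr)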

Furthermore, solutions to many of the Internet's security and scale problems have been standardized but rarely deployed. BGPsec, DNSSEC and many other standards exist, yet uptake has been poor.

We do need a revolution in the industry. We need to go back to our roots and ask why IP won out over SNA, DECnet, VINES, ARCnet, ATM, Frame Relay and X.25. Looking back, IP was a networking technique that was open, scalable and able to act as a superset of all known networking solutions, becoming an “internetworking protocol.” Now our global networks are fragmented. IPv6 doesn’t share routing information with IPv4. Major technology companies are using proprietary systems. Entropy has set in, and internetworking is on the decline. The new trend in the industry is to use point-to-point tunnels to create connectivity. We need to pray to the gods of technology that we can stop all this anti-networking and get back to performing internetworking. We need to securely internetwork:

  • Virtual Private Clouds (V4 & V6)
  • IPv4 Internet
  • IPv6 Internet
  • Mobile Networks (V4 & V6)
  • Corporate Networks (V4 & V6)
  • Private Data Centers (V4 & V6)
  • Wide Area Networks

I think the application guys can show us the way, if we but listen. The application guys long ago left bit-wise solutions behind and moved to textual, big-data solutions. Isn’t this what Facebook, Google and Amazon have done? Why can’t Internet addressing and routing problems be solved by new addressing that is semantic in nature, addressing that works across all address spaces and scales indefinitely? In fact, the Domain Name System, controlled by the application guys, has arguably already done this. Dynamic DNS is providing real-time routing intelligence. Many public websites use a single physical address for a large number of application websites, and use the application-controlled SNI field in the TLS Client Hello to establish routing and security associations for a service. Content distribution networks also use the DNS infrastructure to insert themselves into applications. DNS and SNI sniffing are primary means of securing networks, providing the only true application recognition that works. The application guys are already performing semantic-based networking, and the networking guys need to embrace these concepts. Why can’t network routing tables route to named services?
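As a small illustration of the SNI point, here is a sketch in Python (standard library only, with example.com as a placeholder) of how a client announces the service name it wants during TLS negotiation. Because the Client Hello travels in the clear, any device on the path can read that name and make a routing or policy decision before a single encrypted byte flows.

    import socket
    import ssl

    context = ssl.create_default_context()

    with socket.create_connection(("example.com", 443)) as raw_sock:
        # server_hostname populates the SNI extension of the TLS Client Hello.
        # This is the semantic, application-controlled name that SNI-sniffing
        # devices use to recognize and steer the service.
        with context.wrap_socket(raw_sock, server_hostname="example.com") as tls:
            print(tls.version(), tls.getpeercert()["subject"])

The same single IP address can front many such names; the name, not the address, is what identifies the service.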

The networking guys can provide the necessary wisdom to prevent the rediscovery of all the problems that have already been overcome. Building scalable, loop-free networks is not easy. Enforcing protocols and standards to prevent networks from failing catastrophically is still necessary. And most of all, demanding interoperability and security that are “baked in” can only come from a body of professionals who believe networking and interoperability are the pillars of any network. This core principle simply is not part of the application world.

Network engineers are from Mars, application engineers are from Venus. But we need to build a bridge between these groups and rise above the current set of solutions, or we will experience catastrophic infrastructure failures. The bulk of the investment must come from networking professionals, who need to change from CCIE CLI jocks into semantic networking leaders. The onus falls on those who have resisted change for so long. It’s time for a second internetworking revolution.

This article is published as part of the IDG Contributor Network.
