13 debts of tunnel networks

Opinion
Jul 16, 2018

Tunnels in networking may solve some problems, but they create a mammoth set of long-term technical debts that will ultimately have to be paid in full.


Tunnels for networking are not good. We saw a real-life example with the twelve Thai boys who were stuck at the end of a tunnel with a very narrow section under water preventing passage. The tunnel offered them only one way out, and that particular path was not passable. The same thing happens in networks. We’re thankful for the heroic rescue of those brave boys, but networks don’t always fare as well.

You will hear others speak about how a tunnel-based virtual network is the next amazing trend in networking. In fact, an analyst recently told me tunnels are great. And they are, when used for the purpose for which they were intended. But using tunnels to get aggregates of packets to go where they wouldn’t go otherwise is dangerous, and it will lead to the accumulation of technical debts.

As described below, in many of these new cases, tunnels are used for aggregates of users, flows and applications. Using tunnels this way, we are taking on large amounts of technical debt, and I predict there will be a day of reckoning.

1st debt: Routed as an aggregate – only one pathway to use

Secure tunnels look like a single long-lived network flow to core routers. Routers and switches will “hash” that single tunnel flow onto a single path. Not knowing what else is on this path, or how its conditions change over time, the tunnel’s performance will be tied to one path for very long periods. If that path degrades, you will not have the ability to route around it to better paths.
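
To make this concrete, here is a minimal Python sketch of equal-cost multipath (ECMP) hashing. The hash function and field names are simplified stand-ins for a router’s real logic; the point is that ten distinct inner flows spread across several paths natively, but collapse onto one path once they share a tunnel’s outer header:

    import hashlib

    def ecmp_path(src, dst, proto, sport, dport, n_paths=4):
        # Toy ECMP: hash the 5-tuple and pick one of n equal-cost paths.
        key = f"{src},{dst},{proto},{sport},{dport}".encode()
        return int(hashlib.md5(key).hexdigest(), 16) % n_paths

    # Ten distinct application flows, as a core router would see them natively.
    inner_flows = [("10.0.0.1", "10.0.1.1", "tcp", 40000 + i, 443) for i in range(10)]
    print("paths used natively: ", {ecmp_path(*f) for f in inner_flows})

    # The same ten flows wrapped in one IPsec tunnel: the core only sees the
    # outer header (tunnel endpoints, ESP), so every flow lands on one path.
    outer = ("203.0.113.1", "198.51.100.1", "esp", 0, 0)
    print("paths used tunneled:", {ecmp_path(*outer) for _ in inner_flows})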

2nd debt: Tunnels have no flow control – poor for mixed media

Branch offices and data centers can send and receive data much faster than the network can carry it. To slow applications down, routers drop packets. Applications understand that when a packet is dropped, they need to slow down. Tunnel protocols have no flow control, window size controls, or retransmission; a tunnel requires that the application inside provide those capabilities for its own benefit. Thus, if a tunnel carried a single application, it would work perfectly: when the network wanted to slow the application down, it would simply drop a tunnel packet, and that one application would slow down. When tunnels carry aggregates of applications with mixed media (voice/video and web traffic), the results can be very negative and highly unpredictable.

SD-WAN companies will tell you they “order and provide QoS at entry” to the tunnel. This simply will not work. Consider a tunnel with 100 unique sessions in it. When a 101st session is added and goes through its ramp-up, waiting for the network to slow it down, the network will likely drop packets from the existing sessions instead. When the core routers drop a packet from the aggregate flow to slow it down, that packet will likely not belong to the session that actually needs to slow down. So we slow down the wrong flow and let the new flow accelerate. If each of the unique sessions were not in the tunnel, the core routers would treat them individually and drop the correct packets on the correct flows.

Media flows are consistent and predictable, but when they are mixed in with bursty web traffic, the chance of a media packet being dropped increases dramatically. SD-WAN vendors know this and offer packet duplication or forward error correction for media as a solution. Forward error correction consumes additional bandwidth, adding to our technical debt.
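
A back-of-the-envelope sketch of why the drop usually lands on the wrong session. The numbers are purely illustrative and assume every session has one packet in flight when the core drops a packet from the aggregate tunnel flow:

    import random

    random.seed(1)
    ESTABLISHED, NEW = 100, 1     # 100 steady sessions, 1 new session ramping up
    TRIALS = 10_000

    hit_new = 0
    for _ in range(TRIALS):
        # The core router picks a packet from the aggregate at random to drop.
        victim = random.randrange(ESTABLISHED + NEW)
        if victim >= ESTABLISHED:  # the drop happened to hit the new session
            hit_new += 1

    print(f"drop hit the session that should slow down: {hit_new / TRIALS:.1%}")
    print(f"drop hit an innocent established session:   {1 - hit_new / TRIALS:.1%}")

In other words, under these assumptions roughly 99 times out of 100 the congestion signal lands on a flow that was already behaving.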

3rd debt: Tunnels waste 30% of the network capacity

Tunnels add a significant amount of bandwidth per packet sent. It is generally accepted that with standard Internet traffic mixes, the additional bandwidth is on the order of 30 percent. Take your total transport costs and multiply them by 1.3. This is a substantial long-term cost of using tunnels.
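
The exact figure depends heavily on the traffic mix. The rough sketch below assumes about 58 bytes of per-packet overhead for an IPsec ESP tunnel (the real number varies with cipher, padding and any extra GRE or VXLAN layer) and shows why small-packet traffic such as voice pays the most:

    # Assumed per-packet bytes added by an IPsec ESP tunnel (tunnel mode):
    # new outer IPv4 header (20) + ESP header and IV (~24) + trailer and ICV (~14).
    OVERHEAD = 20 + 24 + 14   # ~58 bytes per packet; illustrative only

    for size in (80, 200, 576, 1400):   # representative packet sizes in bytes
        print(f"{size:5d}-byte packet -> +{OVERHEAD / size:.0%} extra bandwidth")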

4th debt: Decrease in effective packet size

Many modern protocols figure out the maximum size packet they can send. When a tunnel is used, the maximum useful packet payload is reduced. When you need to move a large file or transfer a lot of data, it will take more packets, and more time, to transfer the same amount of data.
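
As a hedged illustration, assume a 1,500-byte path MTU and the same ~58 bytes of tunnel overhead as above. The usable TCP payload per packet shrinks, so the same transfer needs more packets:

    MTU = 1500                 # path MTU in bytes
    IP_TCP_HEADERS = 40        # IPv4 + TCP headers, no options
    TUNNEL_OVERHEAD = 58       # assumed tunnel overhead; illustrative only

    mss_native = MTU - IP_TCP_HEADERS                      # 1460 bytes of payload
    mss_tunneled = MTU - TUNNEL_OVERHEAD - IP_TCP_HEADERS  # 1402 bytes of payload

    file_size = 100 * 1024 * 1024                  # a 100 MB transfer
    packets_native = -(-file_size // mss_native)   # ceiling division
    packets_tunneled = -(-file_size // mss_tunneled)
    print(f"native:   {packets_native} packets")
    print(f"tunneled: {packets_tunneled} packets "
          f"(+{packets_tunneled / packets_native - 1:.1%} more)")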

5th debt: Increased packet fragmentation

Upon entry to a tunnel, if a packet is larger than can be supported, it is split in two. This is called fragmentation. The pieces are sent as two or more packets and then reassembled on the other side, which consumes extra CPU and memory at both ends. There are also many complications when a router drops a fragment.
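
A small sketch of the mechanics, again assuming ~58 bytes of tunnel overhead on a 1,500-byte link (real fragmentation also rounds offsets to 8-byte boundaries, which is ignored here): a full-size packet no longer fits once encapsulated, so it becomes two packets on the wire, each carrying its own overhead:

    LINK_MTU = 1500
    TUNNEL_OVERHEAD = 58       # assumed tunnel overhead; illustrative only
    INNER_PACKET = 1500        # a full-size packet arriving at the tunnel entrance

    encapsulated = INNER_PACKET + TUNNEL_OVERHEAD         # 1558 bytes: too big to send
    max_inner_per_fragment = LINK_MTU - TUNNEL_OVERHEAD   # 1442 inner bytes fit per fragment

    fragments = -(-INNER_PACKET // max_inner_per_fragment)   # ceiling division -> 2
    wire_bytes = INNER_PACKET + fragments * TUNNEL_OVERHEAD
    print(f"{INNER_PACKET}-byte packet -> {fragments} fragments, {wire_bytes} bytes on the wire")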

6th debt: Tunnel setup and re-establishment delay

When a tunnel is established, the negotiation of security keys takes real time. Normally this isn’t a problem, but when a tunnel connection is dropped or needs to be moved, the delay may cause all of the applications inside to reset.

7th debt: Network routing features disabled

The tunnel obfuscates all of the internal flows, providing very little, if any, useful information to the core of the network. This is in fact by design, but it prevents the core network from providing any routing or security capabilities. For example, if a carrier offered differentiated services, it would not be possible for the tunnel owner to take advantage of them. If there were a denial-of-service attack inside the tunnel, the core network would not be able to assist in stopping it.

8th debt: Network tools not useful

Networking engineers frequently use tools to figure out why the network isn’t working. Many of these tools will not work correctly in the presence of a tunnel or may provide very misleading answers. Network probes are typically unable to get data from the insides of a tunnel, invalidating their use.

9th debt: Aggregate-use tunnels violate fundamental security rules

Thou shalt not bridge networks! Tunnels join address spaces between networks in a bi-directional way, essentially creating an open door between two networks. Additional provisioning steps and controls are needed to manage the security risks when using tunnels.

10th debt: Re-encryption penalty

All modern software applications use encryption. Encryption is virtually free now, and one would be remiss not to use it. IPsec tunnels also use encryption. Thus, for most tunneled modern applications, encryption is being done twice, wasting CPU and bandwidth with no advantage.

11th debt: Best current practice requires two tunnels

Because the networks that carry tunnels fail, most users of tunnels require two tunnels for each communication path. This is also standard practice for networking with AWS and Azure. Each tunnel has overhead to keep it ready for use, which adds to our accumulated debt. Having two tunnels does, however, create some routing problems. Best current practice is to run BGP over the tunnels to prevent routing loops, and the cost of running BGP inside the tunnels adds to our accumulated debt.

12th debt: Tunnels do not support network segmentation

IPsec tunnels do not support VLANs. Customers seeking secure segmentation are often forced to use a separate tunnel for each segment, or MP-BGP and VRFs, to separate user groups from one another. Even with a modest number of branches this quickly becomes untenable.
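
A quick, illustrative count of why per-segment tunnels explode: a full mesh of N sites already needs N*(N-1)/2 tunnels, and a separate tunnel per segment multiplies that again:

    def full_mesh_tunnels(sites: int, segments: int) -> int:
        # One tunnel per pair of sites, per network segment.
        return sites * (sites - 1) // 2 * segments

    for sites in (10, 50, 100):
        for segments in (1, 4, 8):
            print(f"{sites:3d} sites x {segments} segments -> "
                  f"{full_mesh_tunnels(sites, segments):6d} tunnels")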

13th debt: Hub-and-spoke debt

To avoid the complexity of managing an n-squared set of tunnels between all sites, best current practice suggests a data-center hub with branch spokes. This works well for everything but real-time media going from branch to branch. The incremental latency and wasted bandwidth add to our technical debt.
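
A hedged sketch of the trade-off: hub-and-spoke cuts the tunnel count from N*(N-1)/2 to N-1, but branch-to-branch media now takes two tunnel legs through the hub. The latency figures below are illustrative assumptions; real values depend entirely on geography:

    SITES = 50
    mesh_tunnels = SITES * (SITES - 1) // 2   # one tunnel per branch pair
    hub_tunnels = SITES - 1                   # each branch to the data-center hub
    print(f"full mesh: {mesh_tunnels} tunnels, hub-and-spoke: {hub_tunnels} tunnels")

    # Illustrative one-way latencies in milliseconds.
    branch_to_branch_direct = 20
    branch_to_hub = 25
    via_hub = 2 * branch_to_hub               # branch -> hub -> branch
    print(f"branch-to-branch voice: {branch_to_branch_direct} ms direct vs {via_hub} ms via the hub")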

When the number of tunnels gets large and the technical debt has piled up high, seek out solutions to your networking problems that don’t use tunnels. Networking professionals need to find ways to get packets to go where they wouldn’t go otherwise, without tunnels. What is needed is innovation at the routing layer.


Patrick MeLampy is a co-founder and Chief Operating Officer at 128 Technology, a company that is attempting to "Fix the Internet."

Prior to 128 Technology, MeLampy was Vice President of Product Development for Oracle Communications Network Session Delivery products. Before Oracle, MeLampy was CTO and founder of Acme Packet, a company acquired by Oracle in February 2013 for $2.1 billion.

MeLampy has an MBA from Boston University, and an engineering degree from the University of Pittsburgh. He has 28 years of experience and has been awarded 35 patents in the telecommunications field.

The opinions expressed in this blog are those of Patrick MeLampy and do not necessarily represent those of IDG Communications, Inc., its parent, subsidiary or affiliated companies.