We continue our discussion of what can be done to address the impact of packet loss on application performance over the WAN. We've now covered three of the six possibilities listed in the first column of this series. Today we'll cover two more techniques that are particularly valuable for enterprises running TCP applications across very long distances, such as when crossing oceans: enabling end stations to react more quickly to loss, and avoiding much of the WAN-based loss in the first place.
We saw previously that drastically reducing the number of packets that traverse the WAN, using application-specific technologies like replicated file service, local web caching, or CIFS proxies, can greatly improve application performance in the face of WAN packet loss, as can data deduplication. Last time, we saw that Forward Error Correction (FEC) can also be of value under certain circumstances. The same is true of the TCP termination technology found in almost all WAN Optimization appliances, when it is combined with a transport technique other than standard TCP (Transmission Control Protocol) for communicating between the two locations over a WAN known to be private/dedicated.
Having a dedicated, private network between each pair of enterprise locations is usually impractical: it is not cost-effective, and it is difficult to manage. The traditional approach to minimizing WAN packet loss, of course, has been to use Frame Relay and now Multiprotocol Label Switching (MPLS). Where MPLS bandwidth is too expensive, or where reliable access to public cloud-based services is critical, newer service offerings are available that leverage the ubiquity and low cost of public Internet links for connectivity while delivering reliability and predictable application performance in the face of WAN packet loss.
Borrowing and expanding upon techniques made popular in the Content Delivery Networking (CDN)/Application Delivery Networking (ADN) space, offerings like WAN Optimization-as-a-Service, or Network-as-a-Service, do exactly this. Such a solution works by deploying globally distributed Points of Presence (POPs) close to end-user locations and using a multi-segment approach to TCP optimization.
A multi-segment approach, in addition to centralizing the complexity and management of higher-level WAN Optimization technologies at colocation-based POPs, enables full use of available bandwidth and more predictable performance for TCP applications running across long-distance networks subject to packet loss. It addresses the congestion-based loss problems faced by public Internet connections in multiple, complementary ways.
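To make the idea concrete, here is a minimal, simplified sketch (in Python) of what multi-segment TCP termination looks like: an edge POP accepts a nearby client's TCP connection locally and relays the byte stream over a separate, independently managed connection toward a core POP. The host names and port numbers are hypothetical, and a real service adds encryption, connection pooling, and higher-level optimizations; this only illustrates the split-connection structure.

```python
# Sketch of multi-segment TCP termination at an edge POP (not a vendor
# implementation): the client's TCP session ends locally, and the bytes are
# relayed over a separately managed connection toward a core POP.
# "core-pop.example.net" and the ports are hypothetical.

import socket
import threading

EDGE_LISTEN = ("0.0.0.0", 9000)               # where nearby clients connect
CORE_POP = ("core-pop.example.net", 9001)     # next segment toward the far site

def pump(src, dst):
    """Copy bytes one way until the source side closes."""
    while True:
        data = src.recv(65536)
        if not data:
            break
        dst.sendall(data)
    try:
        dst.shutdown(socket.SHUT_WR)
    except OSError:
        pass

def handle(client):
    # Loss on the first mile is now recovered over the short client<->edge RTT,
    # not the full end-to-end RTT.
    core = socket.create_connection(CORE_POP)
    threading.Thread(target=pump, args=(client, core), daemon=True).start()
    pump(core, client)

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(EDGE_LISTEN)
server.listen()
while True:
    conn, _ = server.accept()
    threading.Thread(target=handle, args=(conn,), daemon=True).start()
```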
With globally distributed POPs close to end-user locations and using IP connectivity from Tier 1 ISPs, such a solution minimizes the amount of first- and last-mile congestion – and so packet loss – experienced in the part of the network where, at least for domestic networks, most loss occurs.
Because TCP termination and other TCP optimization functions are performed per segment, from edge to core and between core locations, rather than solely between distant appliances as traditional WAN Optimization Controller (WOC) appliance solutions do, the TCP connections between enterprise locations and the colo-based POPs are optimized to use available bandwidth and to retransmit quickly when first- or last-mile packet loss does occur. This fast reaction when loss occurs is the key to minimizing the impact of packet loss on the WAN and delivering consistent, predictable performance.
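A rough way to see why per-segment recovery matters is the well-known Mathis approximation for TCP throughput under loss, throughput ≈ (MSS/RTT) × (1.22/√p): achievable throughput scales with the RTT of the connection over which a loss is detected and recovered. The RTT and loss figures below are illustrative assumptions, not measurements from any particular service.

```python
# Back-of-the-envelope comparison using the Mathis et al. approximation:
# the same loss rate hurts far less when it is recovered over a short
# per-segment RTT than over a long end-to-end RTT.

from math import sqrt

MSS_BITS = 1460 * 8     # bits per TCP segment
C = 1.22                # constant from the Mathis model

def tcp_throughput_bps(rtt_s, loss_rate):
    return (MSS_BITS / rtt_s) * (C / sqrt(loss_rate))

# Single end-to-end TCP session across an ocean: 200 ms RTT, 0.5% loss
print(tcp_throughput_bps(0.200, 0.005) / 1e6, "Mbit/s end-to-end")

# Same loss confined to a 20 ms last-mile segment, recovered locally at the POP
print(tcp_throughput_bps(0.020, 0.005) / 1e6, "Mbit/s on the lossy edge segment")
```

With these assumed numbers the segmented case sustains roughly ten times the throughput of the end-to-end case, which is the intuition behind terminating TCP close to where the loss actually happens.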
In addition, having a dedicated, reliable, low-loss core network connecting the colo-based POPs eliminates most of the "middle mile" congestion problems that Internet-based connectivity frequently faces. By contrast, IPsec VPN connections between locations across the Internet – and to a lesser extent, international MPLS connections across oceans – frequently suffer long latencies and congestion. This is due to the economics of WANs in general and the Internet in particular, the nature of BGP, and "hot potato" routing, which causes an ISP to hand off traffic destined for another network as quickly as possible, even when that means the resulting latency and loss are far higher than they need to be. This is the primary reason the public Internet does not deliver the predictable performance that a private WAN service like MPLS does.
Within the core, connections between POPs are optimized for high-bandwidth, high-latency transfers using the TCP optimization techniques noted last time. Because the core network can be guaranteed to have sufficient bandwidth and to be low-loss and low-jitter, maximum throughput can be sustained for large transfers, even in the face of moderate amounts of packet loss that would kill the performance of an ordinary end-to-end TCP session.
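As a rough illustration of the kind of per-connection tuning this implies, the sketch below sizes a core-segment socket's buffers to the bandwidth-delay product and, on Linux, opts into a loss-tolerant congestion controller. The 1 Gbit/s bandwidth, 150 ms RTT, choice of the "bbr" controller, and peer host name are all assumptions for illustration; the internals of any actual service differ.

```python
# Sketch of tuning a core-segment connection for a long, fat pipe
# (high bandwidth x high latency), assuming a Linux host.

import socket

LINK_BPS = 1_000_000_000                 # assumed 1 Gbit/s between POPs
RTT_S = 0.150                            # assumed 150 ms trans-oceanic RTT
BDP_BYTES = int(LINK_BPS / 8 * RTT_S)    # bandwidth-delay product, ~18.75 MB

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Size the socket buffers to (at least) the BDP so a single connection can keep
# the whole pipe full; the kernel may cap these at its configured maxima.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, BDP_BYTES)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, BDP_BYTES)

# Prefer a congestion controller that tolerates moderate random loss on the
# core segment (Linux-only; requires the chosen controller to be available).
if hasattr(socket, "TCP_CONGESTION"):
    try:
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, b"bbr")
    except OSError:
        pass  # controller not available; fall back to the system default

sock.connect(("core-pop.example.net", 9001))   # hypothetical peer POP
```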
Such a colo-based POP approach is unique in enabling fast reaction to packet loss. In terms of avoiding WAN-based loss in the first place, the dedicated core makes it comparable to MPLS for domestic networks, and in some instances it can actually be better than MPLS at minimizing loss on intercontinental connections, in addition to being superior at reacting to whatever loss does occur.
Next time, we'll conclude our look into techniques that address the performance problems caused by WAN packet loss.
A twenty-five-year data networking veteran, Andy founded Talari Networks, a pioneer in WAN Virtualization technology, and served as its first CEO; he now leads product management at Aryaka Networks. Andy is the author of an upcoming book on Next-generation Enterprise WANs.