Addressing WAN latency issues in application performance

Different networking and non-networking technologies needed to fully address all the factors impacting application performance over the WAN

Over the last few months in this column, we've described the various technologies that make up the Next-generation Enterprise WAN (NEW) architecture, and some of the high-level benefits of each. We also spent two columns on the factors that most impact application performance over the WAN. I'd like to spend the next few columns discussing which of these technologies address which of those issues, and which address them best. We'll cover not just the "biggest, baddest" newer networking technologies – WAN Optimization, WAN Virtualization and Network-as-a-Service – and other key components of the NEW architecture, like colocation and synchronized, replicated file service, but also older technologies, either specialized or simply taken for granted.

As a quick synopsis of the prior columns on the scourges of application performance over the WAN: the first contention is that WAN-specific application performance is driven entirely by three factors: latency, packet loss and bandwidth. Latency breaks down into a (mostly) "fixed" component, determined by the speed of light and the number of route miles a packet must travel, and a variable component (jitter), caused by queuing congestion at routers along a packet's path.
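To make the "fixed" component concrete, here is a rough back-of-the-envelope calculation. It assumes the common rule of thumb that light in fiber travels at about two-thirds the speed of light in a vacuum, roughly 124 route miles per millisecond; real paths add switching and serialization delay on top of this best case.

```python
# Rough propagation-delay estimate for the "fixed" latency component.
# Assumes light in fiber covers ~124 route miles per millisecond (a common
# rule of thumb); actual links add switching and serialization delay.

FIBER_MILES_PER_MS = 124.0  # one-way route miles covered per millisecond

def propagation_rtt_ms(route_miles: float) -> float:
    """Best-case round-trip time in ms for a given one-way route distance."""
    one_way_ms = route_miles / FIBER_MILES_PER_MS
    return 2 * one_way_ms

# New York to London is very roughly 3,500 route miles of cable,
# so the best-case RTT is on the order of 56 ms:
print(round(propagation_rtt_ms(3500), 1))
```

Note how quickly this adds up: no amount of bandwidth reduces that round-trip floor, which is why the techniques below focus on shortening or avoiding the path rather than fattening it.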

Additional factors affecting application performance involve the nature of how TCP (Transmission Control Protocol) works: the bandwidth-delay product, how TCP does congestion control/congestion avoidance, and the "chattiness" of certain applications or protocols – most notably Microsoft's CIFS file-service protocol, and HTTP. The analysis will get a bit complicated because these additional factors are greatly affected by latency and loss, but we'll try to point out the interplay as we go along.
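The bandwidth-delay product and chattiness interact with latency in a way that is easy to quantify. The sketch below, with illustrative numbers of my own choosing, shows why a long RTT can throttle a TCP flow no matter how big the pipe is, and why a chatty protocol pays the RTT over and over:

```python
# Bandwidth-delay product (BDP): how much data must be "in flight" to fill
# a link. A TCP sender limited to a window of W bytes can achieve at most
# W / RTT of throughput, regardless of link capacity.

def bdp_bytes(bandwidth_bps: float, rtt_s: float) -> float:
    """Bytes that must be unacknowledged in flight to keep the link full."""
    return bandwidth_bps * rtt_s / 8

def max_tcp_throughput_bps(window_bytes: int, rtt_s: float) -> float:
    """Ceiling on throughput imposed by the sender's window and the RTT."""
    return window_bytes * 8 / rtt_s

# A 100 Mbps link with an 80 ms RTT needs ~1 MB in flight to stay full...
print(int(bdp_bytes(100e6, 0.080)))             # 1,000,000 bytes
# ...but a classic 64 KB TCP window caps the flow at roughly 6.5 Mbps:
print(int(max_tcp_throughput_bps(65535, 0.080)))
# Chattiness: a protocol exchange needing 100 round trips costs 100 * RTT,
# i.e. 8 full seconds at 80 ms RTT, before any bandwidth limit is reached.
print(100 * 0.080)
```

These aren't exotic edge cases: 64 KB is the maximum TCP window without window scaling, and CIFS exchanges involving hundreds of round trips are well documented, which is exactly why latency dominates perceived performance for such protocols.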

Note that I assume throughout that, as with most things in life and in business, cost matters. If cost were no consideration, we could design a WAN with very high-bandwidth point-to-point links between all locations. While this would not solve every application performance problem – notably, the "fixed" component of latency – it would in fact solve most of them. But the truth is that even for the largest, most profitable enterprises, costs do matter.

We'll start with latency, covering each component of latency separately. First, the "fixed" component.

While the speed of light is indeed immutable (sci-fi aside...), there are a number of techniques to address the "fixed" component of latency.

One expensive possibility, just mentioned, is to buy a direct, dedicated point-to-point link between each pair of locations where having the lowest possible latency is critical. For financial institutions running real-time trading applications, this is probably the way to go. It might also be the best answer for moving virtualized server loads across a metropolitan area to deliver the highest possible levels of redundancy and availability. But besides the expense, this method clearly doesn't scale.

The next two techniques involve application-layer solutions. Replicated file service avoids WAN latency in accessing files, delivering actual LAN-speed performance – not just "LAN-like" performance – because all client access to the data is in fact done locally. This requires application-layer management and the provisioning of servers and storage (virtual or otherwise) to make it possible, but as we saw in our last column, the costs involved are shrinking every month.

Virtual desktop technology, as first popularized by Citrix, is an application-layer method of addressing the latency involved in client-server connectivity: the client and the server sit in essentially the same location (data center or LAN), eliminating WAN latency from that part of the equation. However, in the process, the end-user interaction with the "remote desktop" – i.e. the GUI, mouse clicks and keyboard entries – must then be handled properly across the WAN. VDI (Virtual Desktop Infrastructure) is therefore both a solution to WAN latency issues and a problem that itself must be addressed.

Another technique to address "fixed" latency is caching – specifically, "static" caching of objects. Local web caches do this, as do Content Delivery Networks (CDNs) such as Akamai's. [Note that dynamic caching and similar techniques that WAN Optimization offers, which require at least one round trip across the WAN, really address bandwidth and application chattiness far more than they do speed-of-light "fixed" latency; we'll cover these soon enough.]
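The mechanism behind static caching is simple enough to sketch in a few lines. This is a minimal, illustrative time-to-live cache, not any particular product's implementation; `fetch_from_origin` is a hypothetical stand-in for whatever actually retrieves the object across the WAN:

```python
import time

# Minimal sketch of "static" object caching: serve a locally stored copy
# until its time-to-live (TTL) expires, avoiding the WAN round trip
# entirely on a hit. Real caches also honor origin-supplied freshness
# headers, handle eviction under memory pressure, etc.

class StaticCache:
    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store = {}  # url -> (cached object, expiry timestamp)

    def get(self, url, fetch_from_origin):
        entry = self._store.get(url)
        now = time.monotonic()
        if entry and now < entry[1]:
            return entry[0]            # cache hit: zero WAN latency
        obj = fetch_from_origin(url)   # cache miss: one trip across the WAN
        self._store[url] = (obj, now + self.ttl)
        return obj
```

The point of the sketch is the asymmetry: only the first request for an object pays the "fixed" latency; every subsequent request within the TTL is served at local speed.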

A networking approach to "fixed" latency that can be appropriate for many international connections – especially those across oceans, or those using Internet connections rather than private ones – is Network-as-a-Service. Connections between locations across the Internet, whether using IPsec, SSL VPNs or anything else, frequently have very long latencies even in the best case of lightly loaded networks. This is due to the economics of the Internet, the nature of BGP, and "hot potato" routing, which causes ISPs to have traffic exit their networks as quickly as possible whenever the final destination is not on their own network – even though that can mean the route miles traveled are far longer than they need to be. A Network-as-a-Service solution with a dedicated core and colocation-based Points of Presence (PoPs) close to end-user locations avoids these routing issues and can deliver lower latencies (the results can be even more impressive in dealing with network congestion, but we'll get to that next time). Even high-cost MPLS connections sometimes – albeit less frequently – traverse far more route miles than necessary when connecting North American sites to countries like Australia, Israel or China, and so have greater "fixed" latency than a well-designed Network-as-a-Service solution will.

Finally, if you are using either Network-as-a-Service or WAN Virtualization as part of your WAN design, an approach to reducing WAN latency that can be far less expensive, more scalable and more general than buying a mesh of point-to-point links is to leverage colocation facilities. By deploying applications – probably leveraging server virtualization – at centralized colo facilities "closer" to your end-user locations, you reduce the fixed latency between the application and the users accessing it.

There are of course tradeoffs here. A database application that must run in a single location cannot be "close" to all users worldwide, and even an application that can beneficially run separately in three to 10 locations worldwide (depending on the size of the enterprise, the number of remote locations supported, and how much latency reduction is desired) incurs additional computing and management costs in each location. But for those applications most impacted by latency, these NEW architecture technologies can be extremely cost-effective ways to address WAN performance problems that previously had no practical solution. And as we'll see when we consider the effects of packet loss and the variable, congestion-based component of latency in upcoming columns, this combination of techniques is perhaps ideally suited to solving the general-case problem, and to enabling high-performance, reliable access to public cloud services and SaaS.

As you can see, there are a number of techniques that can address the "fixed" component of WAN latency in the quest to improve application performance. Next time we'll move on to looking at the even larger number of techniques that address the much thornier problem of jitter, the variable, congestion-based component of latency.

A twenty-five year data networking veteran, Andy founded Talari Networks, a pioneer in WAN Virtualization technology, and served as its first CEO, and is now leading product management at Aryaka Networks. Andy is the author of an upcoming book on Next-generation Enterprise WANs.


Copyright © 2012 IDG Communications, Inc.