What can be done about WAN packet loss and its impact on WAN application performance?

Multiple approaches are possible, and drastically reducing the number of packets that need to traverse the WAN is one good one.

Last time, we delved into the reasons that packet loss has such an enormous impact on application performance over the WAN in the first place. This time we'll begin to look at the ways that various WAN technologies and techniques – both those that are part of the Next-generation Enterprise WAN (NEW) architecture and others – address the problems that packet loss on the WAN causes.

Of the factors that most affect WAN application performance, packet loss is, by the design of TCP (Transmission Control Protocol), deliberately the scourge: TCP treats loss as a signal of congestion and backs off sharply, as its key means of ensuring that network bandwidth is used efficiently and fairly "on average" without congesting the network to the point where the whole packet transmission system collapses.
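
To put numbers on that, the well-known Mathis et al. approximation bounds the steady-state throughput of a single standard TCP connection at roughly (MSS / RTT) x (1.22 / sqrt(p)), where p is the packet loss rate. A minimal sketch in Python (the path and loss figures below are illustrative, not measurements):

    import math

    def tcp_throughput_bps(mss_bytes, rtt_s, loss_rate):
        # Mathis et al. approximation for a single Reno-style TCP flow:
        # throughput <= (MSS / RTT) * (C / sqrt(p)), with C ~ sqrt(3/2)
        return (mss_bytes * 8 / rtt_s) * (1.22 / math.sqrt(loss_rate))

    # 1460-byte segments on an 80 ms cross-country path
    for p in (0.0001, 0.001, 0.01):
        mbps = tcp_throughput_bps(1460, 0.080, p) / 1e6
        print(f"loss {p:.2%}: ceiling ~{mbps:.1f} Mbps")

Note that the ceiling depends only on segment size, round-trip time and loss rate: at 1% loss on that path, a single flow tops out under 2 Mbps no matter how big the purchased circuit is.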

What can be done about packet loss? Well, at a standards-compliant end station, pretty much nothing. But for an intelligent device in the middle of the network, and especially one at a key WAN edge location, there are many possibilities. I can think of at least six different approaches to minimizing the impact of WAN packet loss on application performance:

    - Drastically reduce the number of WAN packets transmitted.

    - React differently to loss (given good knowledge of the network in between).

    - Mitigate the effects of the loss and hide it from the end station.

    - Enable the end stations to react more quickly to loss.

    - Avoid much of the loss in the first place.

    - Avoid the additional loss that often follows after a burst of loss.

[Note that I'm largely excluding from this conversation packet loss caused at your own WAN edge device because you don't have enough first-mile WAN bandwidth and have multiple applications and/or users competing for that limited bandwidth. As we covered in an earlier column, having more bandwidth is a good idea and in many cases will improve application performance, but the packet loss I refer to here occurs somewhere in the middle of the WAN, or inbound at your last-mile edge, independent of how much data you are offering to the WAN.]

This time, we'll cover the techniques that drastically reduce the number of WAN packets transmitted, leaving the other approaches for future columns.

Application-layer solutions are the first, most obvious approach here. A replicated file service avoids WAN packet loss in accessing files entirely, delivering full LAN-speed performance, because all client access to the data is done locally.

Similarly, "static" caching of objects via a local web (HTTP) object cache completely avoids WAN access for those objects, and thus any impact from packet loss.
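
As a rough illustration of the idea (the names and structure here are hypothetical, not any particular product's), a static object cache is little more than a URL-keyed lookup that touches the WAN only on a miss:

    # Hypothetical sketch of a local static web-object cache: WAN traffic
    # (and thus exposure to WAN packet loss) is incurred only on a miss.
    # Freshness checks and validation are omitted for brevity.
    cache = {}

    def get_object(url, fetch_from_origin):
        if url in cache:              # hit: served locally, zero WAN packets
            return cache[url]
        obj = fetch_from_origin(url)  # miss: at least one WAN round trip
        cache[url] = obj
        return obj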

Beyond these, drastically reducing the number of packets transmitted is an area where WAN Optimization offerings do a great job. Since we're talking about reducing the number of packets transmitted, you might think first of memory-based compression, one of the techniques almost every WAN Optimization solution offers. Memory-based compression can reduce the time it takes to do the first-time transmission of data – a factor of two for compressible data is typical – but it doesn't do proportionately better in the face of packet loss than when there is little or no loss. Reducing the amount of data sent by 50% doesn't help much when the bottleneck is packet loss and its effect on a window-based protocol like TCP, because loss caps the throughput of a connection regardless of how many bytes are offered. So while memory-based compression certainly doesn't hurt here, it's not really the answer when the problem is WAN packet loss.
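
A back-of-the-envelope calculation, reusing the illustrative throughput ceiling from the sketch above, shows why: compression halves the bytes, and therefore the transfer time, but that is the same 2x it delivers on a loss-free link, and the ceiling itself is untouched.

    import math

    def mathis_bps(mss_bytes=1460, rtt_s=0.080, loss_rate=0.01):
        return (mss_bytes * 8 / rtt_s) * (1.22 / math.sqrt(loss_rate))

    data_bits = 100e6 * 8                  # a 100 MB transfer
    ceiling = mathis_bps()                 # ~1.8 Mbps at 1% loss

    print(f"uncompressed: {data_bits / ceiling:.0f} s")        # ~449 s
    print(f"2:1 compressed: {data_bits / 2 / ceiling:.0f} s")  # ~225 s: better,
    # but nowhere near LAN-like; the loss-imposed ceiling still dominates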

There are two other technologies found in most WAN Optimization products that do have a large performance impact in the face of packet loss: data deduplication and a CIFS-specific application proxy.

Data deduplication essentially does "dynamic" caching of data locally, and while it still requires at least one round trip across the WAN, it involves far fewer round-trip transactions than sending data that is not already stored locally. Besides saving bandwidth and speeding up data transfers in the more typical case of little to no packet loss, the application speed-up is proportionately greater still in the face of any meaningful amount of packet loss. And data deduplication is usually applicable to any application, not just file access.
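
In spirit (a toy sketch under simplifying assumptions, not any vendor's algorithm), deduplication fingerprints chunks of the byte stream and ships a short reference whenever the far-side appliance already holds the chunk:

    import hashlib

    CHUNK = 8 * 1024  # fixed-size chunks for simplicity; real products use
                      # content-defined chunking and persistent stores

    def dedup_send(data, peer_chunks):
        """peer_chunks: set of fingerprints the far-side appliance holds.
        Returns the messages to put on the wire."""
        wire = []
        for i in range(0, len(data), CHUNK):
            chunk = data[i:i + CHUNK]
            fp = hashlib.sha256(chunk).digest()
            if fp in peer_chunks:
                wire.append(("ref", fp))          # ~32 bytes instead of 8 KB
            else:
                wire.append(("data", fp, chunk))  # first sighting: send it all
                peer_chunks.add(fp)               # the peer will store it
        return wire

On a second transfer of mostly unchanged data, nearly every 8 KB chunk goes across as a ~32-byte reference, so the packet count – and with it the number of loss events TCP must recover from – collapses.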

For the very chatty Microsoft CIFS protocol, data deduplication is usually combined with an application-specific proxy that reduces round-trip requests still further. By essentially doing local CIFS termination, a CIFS proxy provides much faster access to files on a remotely located file server even on first access. The combination of data deduplication and a CIFS proxy can improve application performance 10x to 40x even when there is no packet loss; in the face of packet loss, the additional benefit can be another 2x to 10x, for a combined improvement of anywhere from 20x to 400x or more. For files that have been previously accessed across the WAN, this is essentially full LAN-speed performance, versus the very slow, often unusable performance of accessing large files across a lossy WAN completely unaided.
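
The arithmetic behind those multipliers is straightforward. A chatty protocol pays one round-trip time per request/response exchange, and each loss-triggered retransmission timeout adds more; local termination attacks both terms. All figures below are illustrative assumptions, not measurements:

    rtt = 0.080              # 80 ms WAN round-trip time
    round_trips = 1000       # opening and reading a file over CIFS can take hundreds
    loss_events = 20         # e.g. ~1% loss across a few thousand packets
    rto_penalty = 1.0        # assumed extra second per loss-triggered timeout

    unaided = round_trips * rtt + loss_events * rto_penalty  # ~100 s
    proxied = 5 * rtt        # local CIFS termination: a handful of WAN round trips
    print(f"unaided: {unaided:.0f} s, proxied: {proxied:.1f} s "
          f"-> ~{unaided / proxied:.0f}x")

Even with these rough numbers, the result (~250x here) lands squarely in the 20x-to-400x range described above.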

While we'll talk about other techniques to address WAN packet loss in upcoming columns, deploying WAN Optimization or a replicated file service (or both) is a critical component of delivering good performance when accessing large data files in the face of WAN packet loss.

A twenty-five-year data networking veteran, Andy founded Talari Networks, a pioneer in WAN Virtualization technology, served as its first CEO, and now leads product management at Aryaka Networks. Andy is the author of an upcoming book on Next-generation Enterprise WANs.

