How to address WAN jitter issues for real-time applications

Traditional WAN Optimization techniques don’t help much, so other solutions are needed.

We continue to cover the broad topic of which of the various technologies – including those that are part of the Next-generation Enterprise WAN (NEW) architecture, like WAN Optimization, WAN Virtualization and Network-as-a-Service, and other, older technologies as well – best address the different issues impacting application performance over the WAN.

Last time we covered those techniques that address the variable queuing congestion-based component of WAN latency, also known as jitter, as it affects TCP-based interactive applications or other data transfer applications. Today, we address the smaller number of techniques for dealing with jitter for real-time applications like VoIP or videoconferencing.

With TCP applications, high amounts of jitter cause poor performance, but the applications still work, however frustrating the slowness may become. High jitter can make real-time applications unusable, as meaningful two-way communication becomes impossible. As noted last time, the buffers in a typical WAN router hold between 100 and 200 milliseconds of traffic, and congestion on the WAN can sometimes add 400 milliseconds or more of delay, especially over long distances. Real-time applications almost always have jitter buffers to absorb small to moderate amounts of jitter without users noticing, but these buffers are usually only on the order of 60 to 100 milliseconds deep, so jitter beyond 100 milliseconds can effectively ruin real-time communications: packets delivered that late are effectively lost.
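To make the "late packets are lost packets" point concrete, here is a minimal sketch of the receiver-side arithmetic a jitter buffer performs. The function name and structure are invented for illustration; real VoIP stacks do this per RTP timestamp and often resize the buffer adaptively.

```python
# Minimal jitter-buffer sketch (illustrative; names are invented).
# The receiver schedules each packet's playout at send time plus the
# buffer depth. Anything arriving after its playout slot is discarded,
# which is why jitter beyond the buffer depth effectively loses packets.

BUFFER_MS = 100  # typical jitter buffers are roughly 60-100 ms deep

def classify_packet(send_ms: int, arrival_ms: int, buffer_ms: int = BUFFER_MS) -> str:
    """Return 'play' if the packet arrives in time for its playout slot,
    'late' (effectively lost) otherwise."""
    playout_deadline = send_ms + buffer_ms
    return "play" if arrival_ms <= playout_deadline else "late"

# 80 ms of one-way delay fits inside a 100 ms buffer...
print(classify_packet(send_ms=0, arrival_ms=80))    # play
# ...but a 400 ms congestion spike does not.
print(classify_packet(send_ms=0, arrival_ms=400))   # late
```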

As noted last time, it is essential to implement QoS properly on your WAN so that the performance of your real-time and interactive applications isn't hurt by your own other applications' use of limited last-mile bandwidth. You also need sufficient bandwidth at each WAN link to support the application, of course. For VoIP this is not usually an issue, but for videoconferencing it can be; high-definition video typically requires 1 to 1.5 Mbps of bandwidth, and not every branch WAN link has that much upstream bandwidth available, or has it only if video is given strict priority over all other applications, which is often not acceptable.
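The QoS idea here can be sketched as a strict-priority egress queue on the congested last-mile link: real-time traffic always drains first, so your own bulk transfers can't add queuing delay (jitter) in front of voice packets. This is a toy model under assumed class names and priorities, not the configuration syntax of any particular router.

```python
import heapq

# Hypothetical strict-priority egress queue for a congested last-mile
# link. Voice (priority 0) always drains before video (1), which drains
# before bulk data (2), so file transfers queue behind real-time traffic
# rather than in front of it. Class names and values are illustrative.

PRIORITY = {"voice": 0, "video": 1, "bulk": 2}

class EgressQueue:
    def __init__(self):
        self._heap = []
        self._seq = 0  # FIFO tie-break within a traffic class

    def enqueue(self, traffic_class: str, packet: str) -> None:
        heapq.heappush(self._heap, (PRIORITY[traffic_class], self._seq, packet))
        self._seq += 1

    def dequeue(self) -> str:
        return heapq.heappop(self._heap)[2]

q = EgressQueue()
q.enqueue("bulk", "backup-chunk-1")   # arrived first...
q.enqueue("voice", "rtp-frame-1")
q.enqueue("video", "h264-slice-1")
print(q.dequeue())  # rtp-frame-1 -- voice drains first despite arriving later
```

The caveat in the text applies to exactly this scheme: giving video strict priority means it can starve everything else on a thin upstream link, which is why it is often not acceptable in practice.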

The traditional solution to WAN jitter problems is to buy MPLS to connect all of your locations together and implement the provider's QoS. This expensive approach does solve the problem for domestic connections, and if the MPLS provider offers connectivity to your overseas locations, it should work for them as well. If your provider doesn't offer direct connectivity to your overseas locations, or your budget can't cover it, however, then you need to find an alternative. And because MPLS is a very expensive solution that offers relatively little bandwidth for other, more bandwidth-intensive applications, even domestically it's not an option for many enterprises.

So what alternatives do you have? Unlike for other kinds of applications, if your WAN connectivity is Internet-based rather than MPLS or some other single-vendor private WAN, the options are actually somewhat limited. Network-as-a-Service and WAN Virtualization are pretty much the only choices out there that address WAN jitter in a meaningful way. WAN Optimization appliance solutions, while they provide value in numerous other ways, really cannot do anything about high jitter caused by congestion on the WAN.

As we saw last time, Network-as-a-Service can address congestion-based performance problems in international connections, particularly those across oceans. A Network-as-a-Service solution with a dedicated core network and colocation-based Points of Presence (PoPs) close to end-user locations addresses the issue of peering point-based congestion that occurs frequently in the Internet by bypassing the public Internet altogether as the means of connecting the PoPs, and thus delivering stable, low-jitter connectivity between locations across the globe. If you have good last-mile connections to your provider's PoPs, it can be a great solution to congestion-based latency in the Internet "middle mile," at a fraction of the cost of MPLS.

The other alternative, WAN Virtualization, addresses congestion-based jitter directly as it occurs. WAN Virtualization continuously measures one-way latency across all of the possible paths between any two locations. When it detects significant jitter on a path, it quickly moves traffic off that path onto a better-performing one, limiting use of the now slower, congested path to applications, like file transfers, that consume bandwidth but are not otherwise sensitive to higher latency. Some WAN Virtualization solutions can even replicate real-time application traffic across multiple paths, eliminating the effects of high jitter or packet loss and delivering "platinum" quality voice even over Internet connections exhibiting heavy congestion-based packet loss and jitter. WAN Virtualization can be used to augment or replace MPLS connectivity, and could be the way for some enterprises to afford sufficient bandwidth to deploy next-generation applications, VoIP and videoconferencing on a single converged enterprise WAN.
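The two WAN Virtualization behaviors described above, steering flows by measured per-path jitter and replicating real-time packets with receiver-side deduplication, can be sketched roughly as follows. All thresholds, names, and the flat jitter measurements are illustrative assumptions, not any vendor's actual algorithm.

```python
# Illustrative sketch of WAN Virtualization path handling (assumed
# threshold and names, not a real product's logic).

JITTER_THRESHOLD_MS = 30.0  # move real-time traffic off paths above this

def best_path(paths: dict) -> str:
    """paths maps path name -> recent one-way jitter measurement in ms."""
    return min(paths, key=paths.get)

def steer(flow_kind: str, paths: dict) -> str:
    """Real-time flows get the lowest-jitter path; bulk transfers are
    steered onto a congested path, which costs latency but not
    throughput, keeping the clean path clear for real-time traffic."""
    if flow_kind == "realtime":
        return best_path(paths)
    congested = [p for p, j in paths.items() if j > JITTER_THRESHOLD_MS]
    return congested[0] if congested else best_path(paths)

class Deduplicator:
    """Receiver side of packet replication across multiple paths:
    accept the first copy of each sequence number, drop duplicates."""
    def __init__(self):
        self.seen = set()

    def accept(self, seq: int) -> bool:
        if seq in self.seen:
            return False
        self.seen.add(seq)
        return True

paths = {"mpls": 5.0, "broadband": 85.0}
print(steer("realtime", paths))  # mpls
print(steer("bulk", paths))      # broadband
```

With replication, a voice packet sent down both paths arrives twice when both are healthy; the deduplicator plays the first copy and discards the second, so a congestion spike on either single path never reaches the jitter buffer.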

Next time, we'll look at the techniques for dealing with limited bandwidth on the WAN.

A twenty-five-year data networking veteran, Andy founded Talari Networks, a pioneer in WAN Virtualization technology, serving as its first CEO, and now leads product management at Aryaka Networks. Andy is the author of an upcoming book on Next-generation Enterprise WANs.
