
Why the ‘trombone’ effect is problematic for Enterprise Internet access

Mar 04, 2013
Cisco Systems | Cloud Computing | MPLS

Increased latency, often compounded by congestion, leads to sluggish, unpredictable application performance for branch-based users and can undermine cloud computing efforts.

Last time, we began looking at the “trombone” effect: what it is and why it exists. Here, we’ll delve further into why the “trombone” effect is a problem for enterprise WAN design going forward.

One of our theses here is that the enterprise WAN is going to need a lot more Internet bandwidth going forward. It’s pretty much impossible to have too much Internet bandwidth. Yet at most remote locations, enterprises have precious little of it, and Internet access there is slow and inefficient.

The “trombone” effect is a big reason why Internet access from a branch is slow. It results from the hub-and-spoke architecture of a typical enterprise WAN, where access to the Internet is done only from headquarters or a tiny number of data centers. So traffic between a branch user and an Internet-based site is backhauled over the corporate MPLS WAN, through the data center, then “tromboned” through to its Internet destination, then back to that data center, and finally is sent back over the corporate WAN to the original site.

Obvious problem number one with this type of network design is the huge increase in latency experienced by users at remote locations accessing sites on the Internet. The “tromboning” can add 30 to 80 milliseconds of access latency for U.S. branch users, even when the private MPLS links being used are not congested. Depending on the internal WAN design, the additional latency can be even greater for international locations.
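The latency penalty above is simple arithmetic: every packet traverses the branch-to-data-center leg in addition to the data-center-to-Internet leg, in both directions. A back-of-the-envelope sketch (the one-way figures below are illustrative assumptions, not measurements from the article) makes the overhead concrete:

```python
# Back-of-the-envelope model of "trombone" latency overhead.
# All one-way delay figures are hypothetical examples; real values
# depend on geography, carrier routing, and link load.

def trombone_rtt_ms(branch_to_dc_ms, dc_to_site_ms):
    """Round-trip time (ms) when branch traffic is backhauled through
    the data center before reaching an Internet site."""
    # Each direction traverses both legs, so the RTT doubles their sum.
    return 2 * (branch_to_dc_ms + dc_to_site_ms)

def direct_rtt_ms(branch_to_site_ms):
    """Round-trip time (ms) if the branch had direct Internet access."""
    return 2 * branch_to_site_ms

# Example: branch is 20 ms from the data center, the Internet site is
# 30 ms from the data center, but only 35 ms from the branch directly.
backhauled = trombone_rtt_ms(20, 30)   # 100 ms
direct = direct_rtt_ms(35)             # 70 ms
added_latency = backhauled - direct    # 30 ms of pure trombone overhead
print(backhauled, direct, added_latency)
```

With international branches, the branch-to-data-center leg alone can run to hundreds of milliseconds, which is why the penalty grows so sharply for overseas locations.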

Problem number two is that those internal MPLS links frequently will be congested. Because MPLS is so expensive, the links are often not very big (1.5 Mbps to 4 Mbps is typical) and so can easily become congested when two or more users are accessing anything – intranet servers or public Internet-based resources – simultaneously. Since even those enterprises with good QoS policies will rarely prioritize traffic to or from the Internet, congestion on the internal links can add 100 to 200 milliseconds of latency, along with packet loss, making any Internet-based application feel very sluggish.
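To see how quickly a thin link saturates, consider simple serialization arithmetic. The sketch below uses a hypothetical 2 MB web page and an idealized fair-sharing model (real traffic is burstier, and queuing delay makes things worse than this suggests):

```python
# Rough illustration of why thin MPLS links congest easily.
# The 2 MB object size and equal-sharing assumption are hypothetical
# simplifications, not figures from the article.

def transfer_seconds(object_mb, link_mbps, concurrent_users):
    """Time to fetch an object when the link is shared equally
    among concurrent users."""
    object_megabits = object_mb * 8          # megabytes -> megabits
    share_mbps = link_mbps / concurrent_users
    return object_megabits / share_mbps

# A 2 MB page over a 1.5 Mbps T1, alone vs. with one other active user:
alone = transfer_seconds(2, 1.5, 1)    # about 10.7 seconds
shared = transfer_seconds(2, 1.5, 2)   # about 21.3 seconds
print(round(alone, 1), round(shared, 1))
```

Even one extra active user doubles the wait, and nothing in this model yet accounts for the intranet traffic competing for the same link.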

For casual web surfing, this might not be much of a problem, but when using SaaS or public-cloud based services for important or mission-critical applications, this can easily be the difference between an acceptable application experience and an unusable one. And while it is technically possible to prioritize certain Internet access traffic over others within the private WAN, it is very difficult, and sometimes expensive, to do in practice.

It should also be apparent that this solution doesn’t scale. Even where the added latency of the corporate “trombone” is tolerable under light load, both general Internet access (from BYOD and other sources) and mission-critical cloud computing traffic will keep growing and competing with existing intranet traffic – and the cost of MPLS bandwidth means growing those MPLS links to match the demand is simply not an option.

And for the part of the traffic that is just “casual” access, the reliability and predictability of the private WAN part of the trombone is “wasted,” as you’re paying for high price/bit access for one part of the connection, yet at the mercy of the public Internet for the much greater part of each packet’s route to and from the end user.

Delivering relatively predictable, reliable access over the public Internet is no easy task. It’s hard enough to do this from a headquarters or data center location in a large North American city using high-speed TDM or Ethernet Internet access links. In that case, you can usually get things to work “pretty well most of the time.” If your Internet connection comes from a provider at the core of the Internet, and the site you’re accessing also is well connected to the core in the U.S., and not a whole continent away, you can usually do somewhat better than that – but only for the users located at headquarters or where the data center is located.

But even when all of those conditions are met, branch users and those located across oceans, especially those with very thin access pipes, will necessarily experience far worse performance accessing applications or services on the Internet under such a design.

Predictable, high-performance network access will be necessary for almost all public or hybrid cloud computing efforts to succeed. Unfortunately, “trombone” designs for Internet access are likely to undermine such efforts almost as soon as they begin.

Next time, we’ll look at techniques for how to avoid the “trombone” effect, and see how the Next-generation Enterprise WAN (NEW) architecture addresses enterprise Internet access, eliminating almost all of the disadvantages the older design possesses while retaining all of the benefits of that design in terms of security and ease of management.

A twenty-five-year data networking veteran, Andy founded Talari Networks, a pioneer in WAN Virtualization technology, served as its first CEO, and now leads product management at Aryaka Networks. Andy is the author of an upcoming book on Next-generation Enterprise WANs.