
Inbound QoS -- control at your front door

By Miles Kelly, senior director of product marketing, Riverbed Technology, special to Network World
December 12, 2012 05:58 PM ET

Network World - This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter's approach.

Network congestion has compelled organizations to deploy traffic shaping and quality of service (QoS) appliances just before the WAN router to control outbound traffic. But in today's complex environments, organizations are rethinking how to manage the onslaught of data flowing across the network, and the focus of congestion control has increasingly shifted to traffic flowing inbound from the many data sources.

While outbound QoS has been sufficient for controlling corporate network traffic in the past, two significant trends are driving the need for a shift to inbound QoS:

1) Any-to-any networks and applications: The adoption of mesh network topologies has enabled routing of business application traffic directly from one branch office to another. The applications that traverse these paths commonly include VoIP, desktop videoconferencing, and unified communications tools such as Microsoft Lync. When a user in one branch office calls a user in another, the VoIP call is routed point-to-point without being backhauled through the corporate data center. As a result, this traffic competes with application data coming from the data center as well as from other sites. In this case, outbound QoS at any single sending location has little if any effect, because no one site sees all the flows converging on the receiver. The only place where all sources of incoming traffic can be effectively controlled is the receiving location itself.


2) SaaS: Many organizations have adopted SaaS applications and public cloud services, along with lower-cost direct Internet connections that let branch offices reach SaaS resources without backhauling through the data center. Applications accessed over the Internet compete for bandwidth with recreational traffic at the branch office, so business applications may struggle to get the resources they need. Data coming in from the corporate data center over the private WAN only exacerbates the problem.

To ensure critical applications perform predictably, it is essential to restrain less important traffic and make room on the network for vital data to get through. Placing devices at third-party websites to apply outbound QoS is generally not an option, which makes the point at which traffic enters the corporate network the only place bandwidth usage can be adequately controlled.
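The idea of restraining less important traffic to protect vital data can be sketched numerically. The toy allocator below (a sketch only; the class names, 50 Mbit/s link rate, guarantees, and demands are all illustrative assumptions, not a product configuration) gives each class its guaranteed minimum, then shares idle capacity among classes that still want more:

```python
# Hedged sketch: dividing inbound link capacity into guaranteed minimums,
# with idle capacity redistributed to busy classes. All numbers below are
# illustrative assumptions.
LINK_MBPS = 50

guarantees = {"voip": 10, "saas": 20, "recreational": 5}   # min Mbit/s per class
demand     = {"voip": 6,  "saas": 35, "recreational": 40}  # offered load, Mbit/s

def allocate(link, guarantees, demand):
    """Give each class min(guarantee, demand), then split leftover link
    capacity equally among classes that still want more, capped by demand."""
    alloc = {c: min(guarantees[c], demand[c]) for c in guarantees}
    spare = link - sum(alloc.values())
    wanting = {c: demand[c] - alloc[c] for c in alloc if demand[c] > alloc[c]}
    while spare > 1e-9 and wanting:
        share = spare / len(wanting)
        spare = 0.0
        for c in list(wanting):
            take = min(share, wanting[c])
            alloc[c] += take
            wanting[c] -= take
            spare += share - take          # unclaimed share goes back in the pool
            if wanting[c] <= 1e-9:
                del wanting[c]
    return alloc

print(allocate(LINK_MBPS, guarantees, demand))
# → {'voip': 6, 'saas': 29.5, 'recreational': 14.5}
```

Note that VoIP takes only what it needs (6 Mbit/s), while the remaining capacity is split between the busy classes without letting recreational traffic crowd out the SaaS applications.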

How is controlling inbound different?

At first glance it may appear that QoS should function the same whether it is deployed on the outbound or the inbound direction. However, there are two subtle but important differences that come into play when controlling application traffic with inbound QoS:

  1. Inbound QoS happens after traffic has traversed the WAN
  2. Inbound QoS happens after traffic has gone through a bottleneck

For inbound QoS to function, the inbound traffic control solution must become the point at which traffic is queued -- essentially, the bottleneck. If traffic is rate-limited by an upstream router before it arrives on-site, the QoS solution at the receiving location is rendered ineffective. That upstream bottleneck is commonly an unmanaged first-in, first-out (FIFO) queue that gives no consideration to the receiving organization's business priorities, and it will degrade performance especially for latency-sensitive applications such as VoIP.
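The cost of that unmanaged FIFO can be shown with a toy model. The sketch below (the 10 Mbit/s bottleneck rate, packet sizes, and two-class mix are all illustrative assumptions) compares how long a VoIP packet waits behind a burst of bulk transfers in an unmanaged FIFO versus a priority queue at a controlled bottleneck:

```python
# Hedged sketch: queueing delay for a VoIP packet behind a bulk burst.
# Link rate, packet sizes, and traffic mix are illustrative assumptions.
LINK_BYTES_PER_MS = 1250.0  # a 10 Mbit/s bottleneck link

# Backlog at the bottleneck: eight 1500-byte bulk packets already queued,
# then one 200-byte VoIP packet arriving behind them.
backlog = [("bulk", 1500)] * 8 + [("voip", 200)]

def queueing_delay(order):
    """Serve packets in the given order; return the delay (ms) before each
    class's first packet starts transmitting."""
    delay, served = {}, 0.0
    for cls, size in order:
        delay.setdefault(cls, served / LINK_BYTES_PER_MS)
        served += size
    return delay

fifo = queueing_delay(backlog)                                  # unmanaged FIFO
prio = queueing_delay(sorted(backlog, key=lambda p: p[0] != "voip"))  # VoIP first

print(f"VoIP wait, FIFO: {fifo['voip']:.1f} ms")      # 9.6 ms stuck behind bulk
print(f"VoIP wait, priority: {prio['voip']:.1f} ms")  # 0.0 ms, served first
```

In the FIFO case the VoIP packet sits behind 12,000 bytes of bulk data and waits 9.6 ms at this one hop; jitter of that magnitude, accumulated per burst, is exactly what makes an unmanaged upstream queue damaging to voice quality. A QoS device that owns the bottleneck can instead serve the latency-sensitive class first.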
