CCVP: Quality of Service (QoS) Basics – Part 1

Based on your feedback about the topics you’d like to see addressed in this blog, this is the first in a series of entries covering Cisco’s Quality of Service (QoS) course and the corresponding exam. In this posting, let’s identify the major topic areas in the QoS course and exam.

Introduction to QoS

- Understand the need for QoS in today’s converged networks (i.e. networks that combine voice, data, and video).

- Identify the three broad QoS models (i.e. Best Effort, Integrated Services, and Differentiated Services).

- Understand the math behind Differentiated Services Code Point (DSCP) QoS markings (see the worked example after this list).

- Distinguish between different approaches for QoS configuration (i.e. Command Line Interface (CLI), Modular QoS CLI (MQC), AutoQoS VoIP, AutoQoS Enterprise, QoS Policy Manager (QPM)).
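
To make the DSCP math concrete, here is a quick worked example using standard DiffServ code points (the arithmetic is the same on any platform):

DSCP occupies the six high-order bits of the IP header’s ToS byte.
EF (Expedited Forwarding): binary 101110 = 32 + 8 + 4 + 2 = decimal 46
AFxy (Assured Forwarding): decimal value = (8 x class x) + (2 x drop precedence y)
AF41: (8 x 4) + (2 x 1) = decimal 34 = binary 100010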

MQC

- This, in my opinion, is the most important topic in the entire QoS course. MQC is a three-step process for configuring a series of class-based QoS mechanisms: (1) classify traffic with a class-map, (2) define a policy for those classes with a policy-map, and (3) apply that policy to an interface with the service-policy command.
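
Here is a rough sketch of those three steps (the class name, policy name, interface, and bandwidth value are placeholders I chose for illustration, and exact syntax can vary by IOS release):

! Step 1: Classify traffic into a class (here, anything already marked DSCP EF)
class-map match-any VOICE
 match ip dscp ef
!
! Step 2: Define the policy (what to do for each class)
policy-map WAN-EDGE
 class VOICE
  bandwidth 128
 class class-default
  fair-queue
!
! Step 3: Apply the policy to an interface in the outbound direction
interface Serial0/0
 service-policy output WAN-EDGE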

AutoQoS

- AutoQoS VoIP (available for some routers and switches) and AutoQoS Enterprise (available for some routers) are great starting points for QoS configuration. Using just one or two commands, you can apply an appropriate QoS configuration to a router or switch interface.
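
As a minimal sketch (the interface numbers and bandwidth value are placeholders, and the options available depend on your platform and software version):

! Router WAN interface (AutoQoS VoIP uses the bandwidth statement to size its policy)
interface Serial0/0
 bandwidth 768
 auto qos voip trust
!
! Catalyst switch port connected to a Cisco IP Phone
interface FastEthernet0/5
 auto qos voip cisco-phone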

Classification and Marking

- Cisco recommends that you classify and mark traffic as close to the source as possible. In other words, you can use classification mechanisms (e.g. access lists or Network-Based Application Recognition (NBAR)) to identify traffic types and place them into defined classes of traffic. Those classes can then be marked (e.g. by altering bits in a packet’s header to identify that packet’s relative level of priority). Then, the next router or switch in the packet’s path can very quickly and efficiently look at that marking and make a forwarding or dropping decision based on it.

- In addition to using a router to apply markings, many Cisco Catalyst switches are also capable of performing marking. In fact, many such switches support remarking, which involves taking a Layer 2 marking (i.e. a Class of Service (CoS) marking) and converting that marking into a corresponding Layer 3 marking (e.g. a DSCP marking).
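
Here is a minimal classification-and-marking sketch for a router near the traffic source, assuming NBAR is available to recognize voice media (the class, policy, and interface names are illustrative):

! Recognize voice media (RTP audio) with NBAR
class-map match-any VOICE-MEDIA
 match protocol rtp audio
!
! Mark that class with DSCP EF as it enters the router
policy-map MARK-INBOUND
 class VOICE-MEDIA
  set ip dscp ef
!
interface FastEthernet0/0
 service-policy input MARK-INBOUND

Devices farther along the path can then simply match on DSCP EF instead of repeating the deeper inspection.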

Queuing

- Imagine a router with two interfaces, a Fast Ethernet interface connected to a LAN and a T1 interface connected to a WAN. Obviously, there is a big speed mismatch between these two interfaces. Traffic could be coming into the router at a rate approaching 100 Mbps, while traffic can only leave the router at a rate of 1.544 Mbps. Does the router simply drop the excess traffic? Not necessarily. The router will allocate a chunk of memory (often called a buffer or queue) to store excess packets until the bandwidth demand declines to the point where packets can be removed from the queue and sent out of the WAN interface. The algorithm that determines how queued packets are emptied from the queue is called a queuing algorithm. Cisco’s flagship queuing algorithm for a router is called Low Latency Queuing (LLQ), while the primary queuing algorithm used on a Cisco Catalyst switch is Weighted Round Robin (WRR).

- LLQ can provide a minimum bandwidth guarantee to multiple traffic classes. However, LLQ can also place one or more traffic classes (such as voice traffic) into a priority queue. Traffic in the priority queue gets to go first, out ahead of the other traffic types, but only to a certain point. What I mean is that this priority traffic will not starve out the other traffic types, because the LLQ configuration not only places latency-sensitive traffic in a priority queue but also limits the amount of bandwidth that priority traffic can use.
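
A minimal LLQ sketch, assuming the VOICE and CRITICAL-DATA class-maps have already been defined (the bandwidth numbers are placeholders):

policy-map LLQ-WAN
 ! Voice goes into the priority queue but is policed to 128 kbps during congestion
 class VOICE
  priority 128
 ! Critical data gets a minimum bandwidth guarantee of 256 kbps
 class CRITICAL-DATA
  bandwidth 256
 class class-default
  fair-queue
!
interface Serial0/1
 service-policy output LLQ-WAN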

Congestion Avoidance

- If a queue fills to capacity, newly arriving traffic can be dropped (due to the queue being full). This behavior is called tail drop. Tail drop not only discards traffic without regard to its priority marking, but it can also lead to another nasty symptom called TCP global synchronization, in which multiple TCP flows simultaneously go into TCP slow start (which shrinks their window sizes). The bottom line is that TCP global synchronization leads to very inefficient use of bandwidth. To help prevent this behavior, a QoS mechanism called Weighted Random Early Detection (WRED) can notice that the queue depth is increasing and begin dropping packets (based on each packet’s Layer 3 priority marking), thus preventing the queue from ever filling to capacity.
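
Within MQC, DSCP-based WRED is enabled per class. A minimal sketch, again assuming the CRITICAL-DATA class-map already exists (the bandwidth value is a placeholder):

policy-map WAN-EDGE
 class CRITICAL-DATA
  bandwidth 256
  ! Drop probability is keyed to each packet's DSCP as the queue depth grows
  random-detect dscp-based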

Traffic Conditioners

- While QoS mechanisms are often thought of as tools used to guarantee a minimum amount of bandwidth for various traffic types, traffic conditioners set a “speed limit” on specific traffic types (e.g. music downloads from the Internet).

- There are two primary categories of traffic conditioners: policing and shaping. Both mechanisms can specify a bandwidth limit, called a Committed Information Rate (CIR). However, by default, policing drops packets sent in excess of the CIR, while shaping delays the excess packets in a queue and sends them once bandwidth becomes available.
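
A minimal sketch showing both conditioners (the rates are in bits per second, and the SCAVENGER class-map is a hypothetical class assumed to be defined elsewhere):

policy-map CONDITION-WAN
 ! Policing: drop traffic in this class that exceeds 128 kbps
 class SCAVENGER
  police 128000 conform-action transmit exceed-action drop
 ! Shaping: buffer everything else down to an average rate of 512 kbps
 class class-default
  shape average 512000
!
interface Serial0/0
 service-policy output CONDITION-WAN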

Link Efficiency Mechanisms

- Link efficiency mechanisms attempt to make the most efficient use of limited WAN bandwidth. The two primary categories of link efficiency mechanisms are compression and Link Fragmentation and Interleaving (LFI).

- Compression can take the form of payload compression or header compression. In the voice world, voice media packets use the Real-time Transport Protocol (RTP), which rides on top of UDP. Interestingly, the combined IP, UDP, and RTP header information totals 40 bytes, while the default voice payload for the G.729 codec is only 20 bytes. The header is twice the size of the payload! Fortunately, RTP Header Compression (cRTP) can logically “compress” this header information from 40 bytes down to only 2 or 4 bytes (4 bytes if UDP checksums are being used). As a result, using cRTP can in some cases more than double the call-carrying capacity of a link (see the sketches after this list).

- LFI can be useful on slower-speed links (i.e. less than 768 kbps) that simultaneously carry voice and data traffic. To illustrate the need for LFI, imagine a 1,500-byte data frame exiting a serial interface running at 56 kbps. The serialization delay, that is, the time it takes to clock the frame out onto the wire, is approximately 214 ms, and a voice frame stuck behind that data frame has to wait that long before it even starts out onto the wire. This kind of delay can destroy voice quality. With LFI, a mechanism (e.g. Multilink PPP, FRF.12, or FRF.11 Annex C) can chop up the big data frame, and then, just as if you were shuffling a deck of cards, the voice frames are interleaved among the now-fragmented data frames. As a result, the voice frames get out of the interface sooner (see the sketch after this list).
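
To put some numbers on the cRTP discussion above: a G.729 stream sends 50 packets per second, so 20 bytes of payload plus 40 bytes of IP/UDP/RTP header is 60 bytes per packet, or about 24 kbps before Layer 2 overhead; with the header compressed to 2 bytes, the same stream needs only about 8.8 kbps. A minimal sketch of enabling cRTP on a point-to-point serial link (the interface number is a placeholder, and both ends of the link need it configured):

interface Serial0/0
 ! Compress the 40-byte IP/UDP/RTP header on RTP flows crossing this link
 ip rtp header-compression

And here is a minimal Multilink PPP LFI sketch (the address, interface numbers, and 10 ms fragment-delay target are placeholders, and the exact syntax varies by IOS release):

interface Multilink1
 ip address 192.168.1.1 255.255.255.252
 ppp multilink
 ! Fragment large frames so no fragment takes more than roughly 10 ms to serialize
 ppp multilink fragment delay 10
 ! Interleave small voice frames between the data fragments
 ppp multilink interleave
!
interface Serial0/0
 encapsulation ppp
 ppp multilink
 ppp multilink group 1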

Summary

Now that you’ve been exposed to some of the fundamental elements of the QoS course and exam, I’ll follow up in the next few blog entries with configuration examples. By the way, remember that Cisco now includes simulation questions in many of its exams, so you should be prepared to actually perform QoS configurations in the exam environment.

By the way, I’m going to be out of the office for the next several days (going on a Disney cruise with my family), so I won’t be posting again until Dec. 22. In addition to this entry addressing QoS fundamentals, I wanted to point you toward some extra study resources that I’ve created. First, Cisco Press often makes free chapters available from some of their books. The following links provide free content from a couple of Cisco Press titles I’ve written addressing QoS.

Excerpt from CCVP QOS Quick Reference Sheets (Digital Short Cut)

IP Telephony Flash Cards Chapter (WRED)

You can also get a free sample of a practice exam I authored and a free QoS configuration video by visiting the QoS area of the www.voipcertprep.com web site.

I hope you’ve found this initial QoS discussion valuable, and I look forward to dissecting some QoS configs when we visit again (Dec. 22).

Take care,

Kevin 
