Chapter 1: Introduction to Cisco Wide Area Application Services (WAAS)

Cisco Press

Some applications and protocols have since added semantics that help to minimize the bandwidth inefficiencies of applications operating in WAN environments. For instance, today's web browsers have built-in client-side caching capabilities. Objects transferred over the WAN from Internet sites and intranet applications carry metadata in the protocol header that allows the client browser to determine whether the object should be cached. By employing a client-side cache in such applications, the repeated transmission of an object can be avoided when the same user requests it again through the same application. Although this improves performance for that particular user, the cached copy goes completely unused when a different user attempts to access the same object, because the application cache is wholly contained on each individual client and not shared across multiple users. Application-level caching is isolated not only to the user who cached the object, but also to the application within that user's workstation: while the user's browser has a particular file cached, a different application on the same workstation has no means of leveraging that cached object. Some applications also require software upgrades before they can provide caching functionality at all.

Although the previous two sections focused primarily on latency and bandwidth utilization as application layer performance challenges, the network infrastructure itself also impacts application layer performance. The next section, "Network Infrastructure," focuses on the infrastructure aspects that affect end-to-end performance and discusses how these challenges have a direct impact on L4–7 performance.

Network Infrastructure

The network itself also creates a tremendous number of application performance barriers. In many cases, the challenges found in L4–7 are exacerbated by challenges that are manifest in the network infrastructure itself. For instance, the impact of application layer latency is amplified when network infrastructure latency is high, and the impact of application layer bandwidth inefficiencies is amplified when the amount of available bandwidth in the network is not sufficient. Packet loss has an adverse, generally indirect, effect on application performance, as transport protocols react to loss events to normalize connection throughput around the available network capacity. This section focuses specifically on the issues present in the network infrastructure that negatively impact application performance, and examines how these issues compound the L4–7 challenges discussed previously. These issues include bandwidth constraints, network latency, and loss and congestion.

Bandwidth Constraints

Network bandwidth itself can constrain application performance. LAN bandwidth has evolved over the years from Fast Ethernet (100 Mbps) to Gigabit Ethernet (1 Gbps) to 10-Gigabit Ethernet (10 Gbps), and eventually 100-Gigabit Ethernet (100 Gbps) will begin to be deployed. Generally speaking, bandwidth capacity on the LAN is not a limitation from an application performance perspective. WAN bandwidth, on the other hand, is not increasing as rapidly as LAN bandwidth, and the price per megabit is significantly higher than it is on the LAN. This is largely because WAN bandwidth is commonly provided as a service from a carrier or service provider, and the connections must traverse a "cloud" of network locations to connect two geographically distant networks. Most carriers have done substantial research into what levels of oversubscription in the core network are tolerable to their customers, the exception being dedicated circuits where the bandwidth is guaranteed.

Nevertheless, WAN bandwidth is far more costly than LAN bandwidth, and the most common WAN circuits found today are an order of magnitude smaller in bandwidth than what can be deployed in a LAN. The most common WAN link in today's remote office and branch office environments is the T1 (1.544 Mbps), roughly 1/64 the capacity of a Fast Ethernet connection, a LAN technology that is itself being phased out in favor of Gigabit Ethernet.

When examining application performance in WAN environments, it is important to note the bandwidth disparity that exists between LAN and WAN environments, as the WAN is what connects the many geographically distributed locations. Such a disparity makes environments where nodes reside on separate LANs joined by a WAN susceptible to a tremendous amount of oversubscription: the amount of bandwidth available for service is far smaller than the bandwidth capacity of either LAN segment connecting the devices that are attempting to communicate. This problem is exacerbated by the fact that tens, hundreds, or in some cases thousands of nodes commonly compete for this precious WAN bandwidth.

Figure 1-3 provides an example of the oversubscription found in a simple WAN environment with two locations, each with multiple nodes attached to the LAN via Fast Ethernet (100 Mbps), contending for available bandwidth on a T1. In this example, the location with the server is also connected to the WAN via a T1, and the potential for exceeding 500:1 oversubscription is realized.

Figure 1-3 Network Oversubscription in a WAN Environment
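
The arithmetic behind this ratio is straightforward. The following sketch (Python) computes the oversubscription ratio between aggregate LAN attachment bandwidth and the WAN circuit; the node count is an illustrative assumption rather than a figure taken from Figure 1-3.

# Oversubscription: potential LAN demand versus available WAN capacity.
# The node count below is an illustrative assumption, not taken from Figure 1-3.
LAN_PORT_MBPS = 100.0    # Fast Ethernet attachment per node
T1_MBPS = 1.544          # WAN circuit bandwidth

def oversubscription_ratio(nodes, lan_port_mbps=LAN_PORT_MBPS, wan_mbps=T1_MBPS):
    """Ratio of aggregate LAN access bandwidth to WAN bandwidth."""
    return (nodes * lan_port_mbps) / wan_mbps

# Eight Fast Ethernet-attached nodes sharing a single T1 already exceed 500:1.
print(round(oversubscription_ratio(8)))    # ~518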

When oversubscription is encountered, traffic that is competing for available WAN bandwidth must be queued to the extent allowed by the intermediary network devices. The queuing and scheduling disciplines applied can be dictated by a configured policy for control and bandwidth allocation (such as quality of service, or QoS) on the intermediary network elements. In any case, if queues become exhausted, packets must be dropped, as there is no memory available in the oversubscribed network device to store the data for service. Loss of packets will likely impact the application's ability to achieve higher levels of throughput and, in the case of a connection-oriented transport protocol, likely cause the communicating nodes to adjust their rate of transmission to a level that allows them to use only their fair share of the available bandwidth.

As an example, consider a user transmitting a file by way of the File Transfer Protocol (FTP). The user is attached to a Fast Ethernet LAN, as is the server, but a T1 WAN separates the two locations. The maximum achievable throughput is limited by the T1, as it is the slowest link in the path of communication. Thus, the application throughput (assuming 100 percent efficiency and no packet loss) would be limited to roughly 1.544 Mbps (megabits per second), or 193 kBps (kilobytes per second). Given that some packet loss is inevitable and no transport protocol is 100 percent efficient, the user would more likely see approximately 90 percent of line rate in terms of application throughput, or roughly 1.39 Mbps (174 kBps).

Taking the example one step further, if two users were performing the same test (FTP transfer over a T1), the router queues (assuming no QoS policy favoring one user over the other) would quickly become exhausted as the connections began discovering available bandwidth. As packets begin to get dropped by the router, the transport protocol would react to the loss and adjust throughput accordingly. The net result is that both nodes would rapidly converge to a point where they were sharing the bandwidth fairly, and connection throughput would oscillate around this point of convergence (roughly 50 percent of 1.39 Mbps, or 695 kbps, which equals 86.8 kBps). This example is simplistic in that it assumes there is no packet loss or latency found in the WAN. The impact of transport protocols will be examined as part of the discussions on network latency, loss, and congestion.
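
The numbers in this example can be reproduced with a few lines of arithmetic. The sketch below (Python) treats the 90 percent efficiency figure as the rough allowance used above, not a measured value.

T1_MBPS = 1.544        # bottleneck WAN link
EFFICIENCY = 0.90      # rough allowance for loss and protocol overhead (assumption)

def per_flow_kBps(link_mbps, efficiency=1.0, flows=1):
    """Per-flow application throughput in kilobytes per second."""
    mbps = link_mbps * efficiency / flows
    return mbps * 1000 / 8    # megabits/s -> kilobytes/s

print(round(per_flow_kBps(T1_MBPS)))              # ~193 kBps, ideal single flow
print(round(per_flow_kBps(T1_MBPS, EFFICIENCY)))  # ~174 kBps, single flow at 90 percent of line rate
print(per_flow_kBps(T1_MBPS, EFFICIENCY, flows=2))  # ~86.8 kBps each when two flows share the link fairly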

Network Latency

The example at the end of the previous section did not take into account network latency. Network latency is the amount of time taken for data to traverse a network in between two communicating devices. Network latency is considered the "silent killer" of application performance, as most network administrators have simply tried (and failed) to circumvent application performance problems by adding bandwidth to the network. Put simply, network latency can have a significant effect on the amount of network capacity that can be consumed by two communicating nodes.

In a campus LAN, latency is generally under 1 ms, meaning the amount of time for data transmitted by a node to be received by the recipient is less than 1 ms. This number may of course increase based on how geographically dispersed the campus LAN is, and also on what levels of utilization and oversubscription are encountered. In a WAN, latency is generally measured in tens or hundreds of milliseconds, much higher than what is found in the LAN. Latency is caused by the propagation delay of light or electrons, which is generally about 66 percent of the speed of light (roughly 2 × 10^8 meters per second). Although this seems extremely fast on the surface, when stretched over a great distance the latency can be quite noticeable. For instance, in a network that spans 3000 miles (about 4.8 million meters), the distance between New York and San Francisco, it would take roughly 24.1 ms in one direction for a packet to traverse the network from one end to the other. This of course assumes no serialization delays, loss, or congestion in the network, and that the most direct route is chosen through the network with little to no deviation in distance. It would therefore take at least 48.2 ms for a transmitting node to receive an acknowledgment for a segment that was sent, assuming no time was required for the recipient to process that the data was received.
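
The propagation figures above can be checked with a short calculation. The sketch below (Python) assumes only the straight-line distance and the roughly 2 × 10^8 m/s propagation speed cited above; like the example, it ignores serialization delay, queuing, and routing detours.

PROPAGATION_MPS = 2e8    # ~66 percent of the speed of light in fiber/copper

def one_way_delay_ms(distance_m, speed_mps=PROPAGATION_MPS):
    """Best-case one-way propagation delay in milliseconds."""
    return distance_m / speed_mps * 1000

nyc_to_sf_m = 3000 * 1609.34    # ~3000 miles, about 4.8 million meters
print(round(one_way_delay_ms(nyc_to_sf_m), 1))      # ~24 ms one way
print(round(2 * one_way_delay_ms(nyc_to_sf_m), 1))  # ~48 ms at minimum before an ACK can return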

Figure 1-4 shows how latency in its simplest form can impact the performance of a telephone conversation, which is analogous to two nodes communicating over an internetwork with 1 second of one-way latency.

Figure 1-4 Challenges of Network Latency

The reason network latency has an impact on application performance is twofold. First, network latency introduces delays that impact the mechanisms that control rate of transmission. For instance, connection-oriented, guaranteed-delivery transport protocols such as TCP use a sliding-window mechanism to track what transmitted data has been successfully received by a peer and how much additional data can be sent. As data is received, acknowledgments are generated, which not only notify the sender that the data was received, but also relieve window capacity so that more data can be transmitted if available. Transport protocol control messages are exchanged between nodes on the network, so any latency found in the network also impacts the rate at which these control messages can be exchanged. Overall, this impacts the rate at which data can be drained from a sender's transmission buffer into the network. This has a cascading effect, which causes the second impact on application performance for applications that rely on transport protocols susceptible to performance barriers caused by latency. This second impact is discussed later in this section.
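
As a rough illustration of the mechanism (a minimal model for this discussion, not a TCP implementation), the sketch below shows how sending consumes window capacity and how an acknowledgment, which arrives only after a network round trip, relieves it.

class SlidingWindow:
    """Minimal model of a sender's sliding window (illustrative only)."""
    def __init__(self, window_bytes):
        self.window = window_bytes
        self.in_flight = 0

    def can_send(self, nbytes):
        return self.in_flight + nbytes <= self.window

    def send(self, nbytes):
        assert self.can_send(nbytes)
        self.in_flight += nbytes    # data leaves the buffer and consumes window capacity

    def ack(self, nbytes):
        self.in_flight -= nbytes    # an acknowledgment relieves window capacity

w = SlidingWindow(64 * 1024)
w.send(64 * 1024)
print(w.can_send(1))   # False: nothing more can be sent until an ACK arrives,
                       # and that ACK is delayed by at least one network round trip
w.ack(64 * 1024)
print(w.can_send(1))   # True: window capacity is available again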

Latency not only delays the receipt of data and the subsequent receipt of the acknowledgment for that data, but also can be so large that it actually renders a node unable to leverage all of the available bandwidth capacity. This problem is encountered when the capacity of the network, which is the amount of data that can be in flight at any one given time, is greater than the sliding-window capacity of the sender. For instance, a DS3 (45 Mbps, or roughly 5.63 MBps) with 100 ms of latency can have up to 563 KB (5.63 MBps × 0.1) of data in flight and traversing the link at any point in time (assuming the link is 100 percent utilized). This "network capacity" is called the bandwidth delay product (BDP), and is calculated by multiplying the network bandwidth (after conversion to bytes) by the amount of latency. Given that many computers today have only a small amount of memory allocated for each TCP connection (64 KB, unless window scaling is used), if the network BDP exceeds 64 KB, the transmitting node will not be able to successfully "fill the pipe." This is primarily due to the fact that the window is not relieved quickly enough because of the latency, and the buffer is not big enough to keep the link full. This also assumes that the recipient has large enough buffers on the distant end to allow the sender to continue transmission without delay.
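
The following sketch (Python) reproduces the DS3 calculation and shows the resulting throughput ceiling when a fixed 64-KB window is the constraint. For the ceiling calculation it treats the 100 ms figure as the round-trip time, which is an assumption made for illustration.

def bdp_bytes(bandwidth_bps, delay_s):
    """Bandwidth delay product: bytes that can be in flight on the path."""
    return bandwidth_bps / 8 * delay_s

def window_limited_throughput_bps(window_bytes, rtt_s):
    """Throughput ceiling when the send window, not the link, is the constraint."""
    return window_bytes * 8 / rtt_s

DS3_BPS = 45e6
DELAY_S = 0.1    # 100 ms

print(bdp_bytes(DS3_BPS, DELAY_S))                              # ~562,500 bytes (~563 KB) in flight
print(window_limited_throughput_bps(64 * 1024, DELAY_S) / 1e6)  # ~5.2 Mbps from a 64-KB window, far below 45 Mbps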

Figure 1-5 shows an example of how latency and small buffers render the transmitter unable to fully capitalize on the available bandwidth capacity.

Figure 1-5 Latency and Small Transmission Buffers

The second impact on application performance is related to application-specific messages that must be exchanged over latency-sensitive transport protocols. Most applications today are very robust and require that a series of messages be exchanged between nodes before any real "work" is done. In many cases, these control messages are exchanged in a serial fashion, where each builds upon the last until ultimately small pieces of usable data are exchanged. This send-and-wait behavior is also known as "application ping-pong," because many messages must be exchanged in sequence and in order before any actual usable data is exchanged. In many cases, these same applications exchange only a small amount of data, and each small piece of data is followed by yet another series of control messages leading up to the next small piece of data.
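
A simple calculation shows why this behavior is so costly over a WAN. The sketch below uses illustrative values (the number of serialized exchanges and the round-trip times are assumptions, not figures from the text) to compare the same chatty exchange over LAN and WAN latencies.

def serialized_exchange_time_s(exchanges, rtt_s, bytes_per_step=0, bandwidth_bps=1.544e6):
    """Total time for a send-and-wait ('ping-pong') exchange: each step costs a full
    round trip plus the (usually negligible) transfer time of its small payload."""
    per_step_transfer_s = bytes_per_step * 8 / bandwidth_bps
    return exchanges * (rtt_s + per_step_transfer_s)

# Illustrative assumption: 500 serialized request/response pairs.
print(serialized_exchange_time_s(500, rtt_s=0.080))   # 40.0 s over an 80-ms RTT WAN
print(serialized_exchange_time_s(500, rtt_s=0.001))   # 0.5 s over a 1-ms RTT LAN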

As this section has shown, latency has an impact on the transmitting node's transport protocol and its ability to effectively utilize available WAN capacity. Furthermore, applications that exhibit "ping-pong" behavior are impacted even further due to the latency encountered when exchanging application layer messages over the impacted transport protocol. The next section examines the impact of packet loss and congestion on throughput and application performance.

Loss and Congestion

Packet loss and congestion also have a negative impact on application throughput. Although packet loss can be caused by anything from signal degradation to faulty hardware, it is most commonly the result of either of the following two scenarios:

  • Internal oversubscription of allocated connection memory within a transmitting node

  • Oversubscribed intermediary network device queues
