Chapter 1: Introduction to Cisco Wide Area Application Services (WAAS)

Cisco Press


Each chunk that is identified is assigned a 5-byte signature, which serves as the point of reference on each Cisco WAAS device for that particular chunk of data. As DRE encodes data, any chunk that is found in the DRE compression history is considered redundant, and the signature is transmitted instead of the chunk. For instance, if a 32-KB chunk were found to be redundant and replaced with its 5-byte signature, an effective compression ratio of more than 6,000:1 (32,768 bytes reduced to a 5-byte reference) would be realized for that particular chunk of data. Any chunk that is not found in the DRE compression history is added to the local compression history for later use; in this case, both the chunk and the signature are transmitted so that the peer can update its DRE compression history.
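
To make the mechanism concrete, the following Python sketch shows the general shape of this encoding logic. It is not the Cisco DRE implementation: the fixed chunk size, the truncated SHA-1 digest used as the 5-byte signature, the MD5 message validity signature, and the in-memory dictionary standing in for the compression history are all simplifying assumptions made only for illustration.

  import hashlib

  CHUNK_SIZE = 256  # real DRE uses variable-size, content-based chunk boundaries

  def signature(chunk):
      """Return a short fixed-length reference for a chunk (truncated SHA-1, an assumption)."""
      return hashlib.sha1(chunk).digest()[:5]

  def encode(message, history):
      """Replace chunks already present in the compression history with their signatures."""
      encoded = []
      for i in range(0, len(message), CHUNK_SIZE):
          chunk = message[i:i + CHUNK_SIZE]
          sig = signature(chunk)
          if sig in history:                            # redundant: send the signature only
              encoded.append(("sig", sig))              # (a real implementation must also
          else:                                         # guard against signature collisions)
              history[sig] = chunk                      # nonredundant: learn it locally and
              encoded.append(("sig+data", sig, chunk))  # send signature plus data so the peer
      validity = hashlib.md5(message).digest()          # can update its history too
      return encoded, validity                          # validity signature covers the original block

In this toy model a redundant 256-byte chunk is reduced to a 5-byte reference, roughly 50:1; the far larger chunks tracked by DRE are what make ratios in the thousands possible.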

Figure 1-8 illustrates the encoding process.

Figure 1-8 Data Redundancy Elimination Encoding

After the encoding process is complete, the encoding WAE transmits the encoded message with the message validity signature that was calculated for the original block of data. Aside from the message validity signature, the encoded message contains signatures for data patterns that are recognized as redundant, and signatures and data for data patterns that are identified as nonredundant.

Message Validation

DRE uses two means of verifying that encoded messages can be properly rebuilt and match the original data being transmitted. As the decoding WAAS device (the one closest to the recipient) receives an encoded message, it parses the message to separate signatures that were sent without an associated chunk of data (redundant data that should already exist in the compression history) from signatures that were sent with an accompanying chunk of data (nonredundant data that should be added to the compression history).

As the decoding WAE processes an encoded message, each signature identifying redundant data is used to search the DRE compression history and, if found, is replaced with the appropriate chunk of data. If the signature and associated chunk of data are not found, a synchronous nonacknowledgment is sent to the encoding WAE to request that both the signature and the chunk of data be re-sent. This allows the decoding WAE to rebuild the message with the missing chunk while also updating its local compression history. For chunks of data that arrive with an accompanying signature, the local compression history is updated, and the signature is removed from the message so that only the data remains.

Once the decoding WAAS device has rebuilt the original message from the encoded data and the chunks drawn from the compression history, it generates a new message validity signature over the rebuilt message and compares it against the original message validity signature generated by the encoding WAAS device. If the two signatures match, the decoding WAAS device knows that the message has been rebuilt correctly, and the message is returned to the TCP proxy for transmission to the recipient. If the two signatures do not match, the decoding WAAS device sends a synchronous nonacknowledgment for the entire message, requesting that the encoding WAAS device re-send all of the signatures and data chunks associated with the message that failed decoding. This allows the decoding WAAS device to update its compression history and transmit the message as intended.
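
Continuing the illustrative (non-Cisco) encoder sketched earlier, the following sketch shows the decode-and-validate flow. The MD5 validity check and the exception standing in for the synchronous nonacknowledgment are assumptions chosen only to make the control flow visible.

  import hashlib

  class MissingChunk(Exception):
      """Stands in for the synchronous nonacknowledgment sent back to the encoding WAE."""

  def decode(encoded, history, expected_validity_sig):
      """Rebuild the original message and verify it against the message validity signature."""
      parts = []
      for item in encoded:
          if item[0] == "sig":                  # redundant: look up the chunk locally
              sig = item[1]
              if sig not in history:
                  raise MissingChunk(sig)       # ask the encoder to re-send signature + data
              parts.append(history[sig])
          else:                                 # nonredundant: learn it, then keep only the data
              _, sig, chunk = item
              history[sig] = chunk
              parts.append(chunk)
      rebuilt = b"".join(parts)
      if hashlib.md5(rebuilt).digest() != expected_validity_sig:
          raise MissingChunk("validity check failed; request the full message again")
      return rebuilt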

Persistent LZ Compression

Cisco WAAS can also employ Persistent LZ Compression, or PLZ, as an optimization based on configured policy. PLZ is a lossless compression algorithm that uses an extended compression history to achieve higher levels of compression than standard LZ variants can achieve. PLZ is helpful for data that has not been identified as redundant by DRE, and can even provide additional compression for DRE-encoded messages, as the DRE signatures are compressible. PLZ is similar in operation to DRE in that it uses a sliding window to analyze data patterns for redundancy, but the compression history is based in memory only and is far smaller than that found in DRE.
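
PLZ itself is internal to WAAS, but the underlying idea of seeding an LZ-style compressor with previously seen bytes can be illustrated with zlib's preset-dictionary feature. The "history" and "message" bytes below are purely hypothetical.

  import zlib

  # Hypothetical compression history of bytes seen earlier on this link.
  history = b"GET /reports/quarterly-summary HTTP/1.1\r\nHost: fileserver.example.com\r\n"

  message = b"GET /reports/quarterly-summary-v2 HTTP/1.1\r\nHost: fileserver.example.com\r\n"

  # Compressing with the history as a preset dictionary typically yields a smaller
  # result than compressing the message in isolation.
  co = zlib.compressobj(zdict=history)
  with_history = co.compress(message) + co.flush()

  plain = zlib.compress(message)
  print(len(with_history), "bytes with history vs", len(plain), "bytes without")

  # The decompressor must hold the same history to rebuild the message.
  do = zlib.decompressobj(zdict=history)
  assert do.decompress(with_history) == message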

Transport Flow Optimization

Cisco WAAS TFO is a series of optimizations applied to connections that are configured for optimization. By employing TFO, communicating nodes are shielded from performance-limiting WAN conditions such as packet loss and latency. Furthermore, TFO allows nodes to use available network capacity more efficiently and minimizes the impact of retransmission. TFO provides the following suite of optimizations:

  • Large initial windows: Large initial windows, defined in RFC 3390, allow TFO to mitigate the latency associated with connection setup by increasing the initial congestion window. This allows the connection to identify the bandwidth ceiling more quickly during slow-start and enter congestion avoidance sooner.

  • Selective acknowledgment (SACK) and extensions: SACK, found in RFCs 2018 and 2883, allows a recipient node to explicitly notify the transmitting node what ranges of data have been received within the current window. With SACK, if a block of data goes unacknowledged, the transmitting node need only retransmit the block of data that was not acknowledged. SACK helps minimize the bandwidth consumed upon retransmission of a lost segment.

  • Window scaling: Window scaling, found in RFC 1323, allows communicating nodes to have an enlarged window. This allows for larger amounts of data to be outstanding and unacknowledged in the network at any given time, which allows end nodes to better utilize available WAN bandwidth.

  • Large buffers: Large TCP buffers on the WAAS device provide the memory capacity necessary to keep high-BDP WAN connections full of data. This helps mitigate the negative impact of high-bandwidth networks that also have high latency.

  • Advanced congestion avoidance: Cisco WAAS employs an advanced congestion avoidance algorithm that provides bandwidth scalability (filling the pipe, in conjunction with window scaling and large buffers) without compromising cross-connection fairness. Unlike standard TCP implementations that use linear congestion avoidance, TFO leverages the history of packet loss for each connection to dynamically adjust the rate of congestion window increase when loss is not being encountered. TFO also uses a less conservative backoff algorithm when packet loss is encountered (decreasing the congestion window by 12.5 percent rather than 50 percent), which allows the connection to retain higher throughput in the presence of packet loss. Cisco WAAS TFO is based on Binary Increase Congestion (BIC) TCP. A simplified numeric comparison of these behaviors follows this list.
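
The sketch below contrasts the two loss responses numerically and also shows why window scaling and large buffers matter on high-bandwidth, high-latency paths. The link speed, RTT, loss schedule, and additive-increase model are simplified stand-ins chosen for illustration; this is not the BIC algorithm WAAS actually uses.

  # Bandwidth-delay product: the window needed to keep a 45-Mbps, 100-ms RTT path full.
  link_bps, rtt_s = 45_000_000, 0.100
  bdp_bytes = int(link_bps / 8 * rtt_s)
  print("BDP:", bdp_bytes, "bytes")  # about 560 KB, well above the 64-KB limit of an unscaled window

  def congestion_avoidance(backoff, rtts=60, loss_every=20, cwnd=100.0):
      """Toy model: additive increase of 1 segment per RTT, multiplicative backoff on loss."""
      sent = 0.0
      for rtt in range(1, rtts + 1):
          sent += cwnd
          cwnd = cwnd * backoff if rtt % loss_every == 0 else cwnd + 1
      return sent

  reno_like = congestion_avoidance(backoff=0.5)    # halve the window on loss
  tfo_like = congestion_avoidance(backoff=0.875)   # back off by 12.5 percent on loss
  print("Relative throughput, TFO-like vs Reno-like: %.2fx" % (tfo_like / reno_like))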

Figure 1-9 shows a comparison between typical TCP implementations and TFO. Notice how TFO discovers the available network capacity more quickly and begins using it. When congestion is encountered, TFO adjusts its throughput more intelligently, accommodating other connections while preserving bandwidth scalability.

Figure 1-9 Comparison of TCP Reno and Cisco WAAS TFO

Whereas this section focused on the WAN optimization components of Cisco WAAS, the next section focuses on the application acceleration components of Cisco WAAS.

Application Acceleration

Application acceleration refers to employing optimizations directly against applications or the application protocols that they use. Whereas WAN optimization refers to techniques employed generally against a network layer or transport layer protocol (Cisco WAAS employs them against the transport layer), application acceleration is employed at higher layers. The optimizations found in application acceleration are in many ways common across applications and application protocols, but because they must be specific to each application or application protocol, these optimizations may be implemented differently.

Ensuring application correctness (don't break the application), data integrity (don't corrupt the data), and data coherency (don't serve stale data) is of paramount importance in any application acceleration solution. With WAN optimization components, ensuring these items is generally easy, as the optimizations employed are done against a lower layer with well-defined semantics for operation. With application acceleration, however, ensuring these items is more difficult, as applications and application protocols are more diverse, complex, and finicky with respect to how they must be handled.

Table 1-2 lists the high-level application acceleration techniques that can be found within Cisco WAAS. Note that this list is not all-inclusive, and focuses on the techniques that are commonly applied to accelerated applications, but others certainly exist.

Table 1-2 Cisco WAAS Application Acceleration Techniques

Object caching: When safe, Cisco WAAS stores copies of previously accessed objects (files and other content) for reuse by subsequent users. This occurs only when the application state permits caching, and cached objects are served to users only if application state requirements are met and the object has been validated against the origin server as unchanged. Caching mitigates latency (objects are served locally), saves WAN bandwidth (the object does not have to be transferred over the WAN), minimizes server workload (the server does not have to serve the object again), and improves application performance.

Local response handling: By employing stateful optimization, Cisco WAAS can respond locally to certain message types on behalf of the server. This occurs only when the application state permits such behavior, and helps minimize perceived latency because fewer messages are required to traverse the WAN. As with object caching, this reduces the workload on the server while also improving application performance.

Prepositioning: Prepositioning allows an administrator to specify content that should be proactively copied to a remote Cisco WAAS object cache. This improves first-user performance by making a "cache hit" more likely, and can also be used to populate the DRE compression history. Populating the DRE compression history is helpful in environments where the prepositioned object may be written back from the remote location with changes applied, which is common in software development and CAD/CAM environments.

Read-ahead: When safe, Cisco WAAS increases read request sizes on behalf of users, or initiates subsequent read requests on their behalf, so that the origin server transmits data ahead of the user request. The data reaches the edge device sooner, which in turn means the requesting user is served more quickly. Read-ahead is helpful in cache-miss scenarios, or in cases where the object is not fully cached, and it minimizes the WAN latency penalty by prefetching information.

Write-behind: When safe, Cisco WAAS locally acknowledges write requests from a user application. This allows Cisco WAAS to streamline the transfer of data over the WAN, minimizing the impact of WAN latency. (A sketch of read-ahead and write-behind follows this table.)

Multiplexing: Multiplexing refers to a group of optimizations that can be applied independently of one another or in tandem, including fast connection setup, TCP connection reuse, and message parallelization. Multiplexing helps overcome the WAN latency associated with TCP connections and application layer messages, thereby improving performance.
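
As a rough illustration of the read-ahead and write-behind entries in Table 1-2, the sketch below models an edge-side file proxy. The fixed read-ahead multiplier, the in-memory flush queue, and the origin interface are hypothetical simplifications, not the WAAS implementation.

  import queue
  import threading

  class EdgeFileProxy:
      """Toy edge proxy: prefetch on reads, acknowledge writes locally, flush asynchronously."""

      READ_AHEAD_FACTOR = 4  # hypothetical: request 4x the user's read size from the origin

      def __init__(self, origin):
          self.origin = origin      # any object exposing read(offset, size) and write(offset, data)
          self.prefetched = {}      # offset -> bytes fetched ahead of the user's request
          self.flush_queue = queue.Queue()
          threading.Thread(target=self._flusher, daemon=True).start()

      def read(self, offset, size):
          if offset in self.prefetched:
              data = self.prefetched.pop(offset)     # read-ahead hit: no WAN round trip needed
          else:
              data = self.origin.read(offset, size * self.READ_AHEAD_FACTOR)
          if len(data) > size:
              self.prefetched[offset + size] = data[size:]  # keep the extra bytes for the next read
          return data[:size]

      def write(self, offset, data):
          self.flush_queue.put((offset, data))  # write-behind: queue for background transfer
          return len(data)                      # local acknowledgment returned to the application

      def _flusher(self):
          while True:
              offset, data = self.flush_queue.get()
              self.origin.write(offset, data)   # streamed to the origin server over the WAN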

The application of each of these optimizations is determined dynamically for each connection or user session. Because Cisco WAAS is strategically placed in between two communicating nodes, it is in a unique position not only to examine application messages being exchanged to determine what the state of the connection or session is, but also to leverage state messages being exchanged between communicating nodes to determine what level of optimization can safely be applied.

As of Cisco WAAS v4.0.13, Cisco WAAS employs these optimizations against the CIFS protocol and certain MS-RPC operations. WAAS also provides a local print services infrastructure for the remote office, which helps keep print traffic off of the WAN if the local file and print server have been consolidated. Releases beyond v4.0.13 will add additional application protocols to this list.

The following sections provide an example of each of the application acceleration techniques provided by Cisco WAAS. It is important to note that Cisco WAAS employs application layer acceleration capabilities only when safe to do so. The determination on "safety" is made based on state information and metadata exchanged between the two communicating nodes. In any circumstance where it is not safe to perform an optimization, Cisco WAAS dynamically adjusts its level of acceleration to ensure compliance with protocol semantics, data integrity, and data coherency.

Object and Metadata Caching

Object and metadata caching are techniques employed by Cisco WAAS to allow an edge device to retain a history of previously accessed objects and their metadata. Unlike DRE, which maintains a history of previously seen data on the network (with no correlation to the upper-layer application), object and metadata caching are specific to the application being used, and the cache is built with pieces of an object or the entire object, along with its associated metadata. With caching, if a user attempts to access an object, directory listing, or file attributes that are stored in the cache, such as a file previously accessed from a particular file server, the file can be safely served from the edge device, assuming the user has successfully completed authorization and authentication and the object has been validated (verified that it has not changed). Caching requires that the origin server notify the client that caching is permitted through the use of opportunistic locks or other state propagation mechanisms.
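
To make the validation step concrete, here is a minimal sketch of an edge cache that serves an object only after confirming with the origin server that it has not changed. The change-token idea stands in for protocol-specific mechanisms such as opportunistic locks, and all names are hypothetical.

  class EdgeObjectCache:
      """Toy cache keyed by path, validated against the origin before every cache hit."""

      def __init__(self, origin):
          self.origin = origin   # any object exposing change_token(path) and fetch(path)
          self.entries = {}      # path -> (change token, object bytes, metadata)

      def get(self, path):
          token = self.origin.change_token(path)      # inexpensive metadata check over the WAN
          if path in self.entries and self.entries[path][0] == token:
              _, data, meta = self.entries[path]
              return data, meta                       # validated cache hit: served from the edge
          data, meta = self.origin.fetch(path)        # cache miss or stale object: full transfer
          self.entries[path] = (token, data, meta)
          return data, meta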

Object caching provides numerous benefits, including:
