Chapter 1: Introduction to Cisco Wide Area Application Services (WAAS)

Cisco Press

  • LAN-like access to cached objects: Objects that can be safely served out of cache are served at LAN speeds by the WAE adjacent to the requester.

  • WAN bandwidth savings: Object caching minimizes the transfer of redundant objects over the network, thereby minimizing overall WAN bandwidth consumption.

  • Server offload: Object caching minimizes the amount of workload that must be managed by the server being accessed. By safely offloading work from the server, IT organizations may be in a position to minimize the number of servers necessary to support an application.

Figure 1-10 shows an example of object caching and a cache hit as compared to a cache miss.

Figure 1-10 Examining Cache Hit and Cache Miss Scenarios

As shown in Figure 1-10, when a cache hit occurs, the object is transferred over the LAN adjacent to the requesting node, which minimizes WAN bandwidth consumption and improves performance. When a cache miss occurs, the object is fetched from the origin server in an optimized fashion and, if applicable, the data read from the origin server is used to populate the cache to improve performance for subsequent users. The cost borne by the first user, whose request must traverse the WAN to populate the cache, is often referred to as the "first-user penalty" of caching.
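
The hit-or-miss logic just described can be modeled in a few lines of Python. This is a hypothetical sketch for illustration only, not the WAAS implementation; the class name, object keys, and fetch function are invented.

    # Illustrative model of edge object caching (not the WAAS implementation).
    # A hit is served locally at LAN speed; a miss triggers an optimized WAN
    # fetch from the origin server and populates the cache for later users.

    class EdgeObjectCache:
        def __init__(self, fetch_from_origin):
            self._store = {}                          # object key -> cached bytes
            self._fetch_from_origin = fetch_from_origin

        def get(self, key):
            if key in self._store:                    # cache hit: no WAN transfer
                return self._store[key]
            data = self._fetch_from_origin(key)       # cache miss: first-user penalty
            self._store[key] = data                   # build the cache for subsequent users
            return data

    # Usage: the first request pays the WAN fetch; later requests are served at the edge.
    cache = EdgeObjectCache(lambda key: b"...object bytes fetched over the WAN...")
    cache.get("/share/design.dwg")    # miss: fetched from the origin and cached
    cache.get("/share/design.dwg")    # hit: served from the local cache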

Prepositioning

Prepositioning is a function by which an administrator can specify which objects should be proactively placed in the cache of a specific edge device or group of edge devices. By using prepositioning, an administrator can ensure high-performance access to an object for the first requesting user (assuming caching is safe for the user's session), eliminating the first-user penalty. Prepositioning is helpful in environments where large object transfers are necessary. For instance, CAD/CAM, medical imaging, software distribution, and software development all require the movement of large files, and prepositioning can help improve performance for remote users while also offloading the WAN and servers in the data center. Prepositioning can also be used as a means of prepopulating the DRE compression history.
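
Conceptually, prepositioning amounts to walking an administrator-defined policy and warming the targeted edge caches before any user asks for the content. The Python below is a hypothetical sketch under that assumption; the policy fields, device names, and paths are invented for illustration and do not reflect the WAAS configuration model.

    # Hypothetical prepositioning job (illustration only). Objects listed in the
    # policy are pulled into each target edge cache ahead of user demand, so the
    # first requesting user avoids the first-user penalty. The WAN transfer can
    # occur off-hours, and it also seeds the compression history.

    PREPOSITION_POLICY = {
        "edge_devices": ["branch-wae-01", "branch-wae-02"],           # assumed names
        "objects": ["/cad/assembly.prt", "/imaging/study-0042.dcm"],  # assumed paths
    }

    def run_preposition_job(policy, edge_caches, fetch_from_origin):
        """Proactively place each listed object into the cache of each target device."""
        for device in policy["edge_devices"]:
            for key in policy["objects"]:
                edge_caches.setdefault(device, {})[key] = fetch_from_origin(key)

    edge_caches = {}
    run_preposition_job(PREPOSITION_POLICY, edge_caches,
                        lambda key: b"...object bytes fetched over the WAN...")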

Read-Ahead

Read-ahead is a technique that is useful both in application scenarios where caching can be applied and in scenarios where caching cannot be applied. With read-ahead, a Cisco WAAS device may, when applicable, increase the size of the application-layer read request on behalf of the user, or generate additional read requests on behalf of the user. The goal of read-ahead is twofold:

  • When used in a cache-miss scenario, provide near-LAN response times to overcome the first-user penalty. Read-ahead, in this scenario, allows the WAE to begin immediate and aggressive population of the edge cache.

  • When used in a scenario where caching is not permitted, aggressively fetch data on behalf of the user to mitigate network latency. Read-ahead, in this scenario, is not used to populate a cache with the object, but rather to proactively fetch data that a user may request. Data prefetched in this manner is cached only briefly, to satisfy immediate read requests for blocks of data that have already been read ahead.
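
As a rough illustration of the second behavior, each origin read can be enlarged and the surplus blocks briefly retained to answer the client's next sequential requests locally. The Python below is a hypothetical sketch; the multiplier and function names are assumptions, not the actual WAAS logic.

    # Illustrative read-ahead model (not the WAAS implementation). A client read
    # triggers one larger read from the origin; the extra blocks are held briefly
    # so immediately following sequential reads avoid another WAN round trip.

    READ_AHEAD_FACTOR = 4   # assumed multiplier, for illustration only

    def read_with_read_ahead(origin_read, prefetch_buffer, offset, length):
        key = (offset, length)
        if key in prefetch_buffer:                  # block was read ahead: answer locally
            return prefetch_buffer.pop(key)
        data = origin_read(offset, length * READ_AHEAD_FACTOR)   # one enlarged WAN read
        for i in range(READ_AHEAD_FACTOR):          # retain the surplus blocks briefly
            prefetch_buffer[(offset + i * length, length)] = data[i * length:(i + 1) * length]
        return prefetch_buffer.pop((offset, length))

    buffer = {}
    origin = lambda offset, length: bytes(length)    # stand-in for an optimized WAN read
    read_with_read_ahead(origin, buffer, 0, 4096)    # one enlarged WAN read
    read_with_read_ahead(origin, buffer, 4096, 4096) # served from the prefetch buffer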

Figure 1-11 shows an example of how read-ahead can allow data to begin transmission more quickly over the WAN, thereby minimizing the performance impact of WAN latency.

Figure 1-11 Read-Ahead in Caching and Noncaching Scenarios

Write-Behind

Write-behind is an optimization that is complementary to read-ahead optimization. Whereas read-ahead focuses on getting the information to the edge more quickly, write-behind focuses on getting the information to the core more quickly—at least from the perspective of the transmitting node. In reality, write-behind is a technique by which a Cisco WAAS device can positively acknowledge receipt of an application layer write request, when safe, to allow the transmitting node to continue to write data. This optimization is commonly employed against application protocols that exhibit high degrees of ping-pong, especially as data is written back to the origin server.

Because write-behind positively acknowledges write requests that have not yet been received by the server being written to, it is employed only against protocols that support information recovery in the event of a disconnection (for instance, through temporary files), and only when it is safe to do so. For applications that do not support such recovery, this optimization cannot be safely applied.
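
A minimal sketch of the idea, assuming a hypothetical proxy object that acknowledges application-layer writes locally and drains them toward the origin asynchronously (this is illustrative Python, not the WAAS implementation):

    # Illustrative write-behind model (not the WAAS implementation). Writes are
    # acknowledged locally so the transmitting node keeps streaming; buffered
    # data is flushed to the origin server afterward. Safe only for protocols
    # that can recover (for example, via temporary files) if the WAN drops.

    class WriteBehindProxy:
        def __init__(self, send_to_origin):
            self._pending = []                   # acknowledged locally, not yet at the origin
            self._send_to_origin = send_to_origin

        def write(self, offset, data):
            self._pending.append((offset, data))
            return "ACK"                         # local acknowledgment: client continues

        def flush(self):
            while self._pending:                 # drain buffered writes toward the core
                self._send_to_origin(*self._pending.pop(0))

    proxy = WriteBehindProxy(lambda offset, data: None)   # stand-in for the WAN-side write
    proxy.write(0, b"chunk 1")    # immediate local ACK
    proxy.write(7, b"chunk 2")
    proxy.flush()                 # data reaches the origin server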

Multiplexing

Multiplexing refers to any process in which multiple message signals are combined into a single signal. As it relates to Cisco WAAS, multiplexing refers to the following optimizations:

  • TCP connection reuse: By reusing existing established connections rather than creating new connections, TCP setup latency can be mitigated, thereby improving performance. TCP connection reuse is applied only on subsequent connections between the same client and server pair over the same destination port, as sketched in the example following this list.

  • Message parallelization: For protocols that support batch requests, Cisco WAAS can parallelize otherwise serial tasks into batch requests. This helps minimize the latency penalty, as it is amortized across a series of batched messages as opposed to being experienced on a per-message basis. For protocols that do not support batch requests, Cisco WAAS may "predict" subsequent messages and presubmit those messages on behalf of the user in an attempt to mitigate latency.
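
The connection reuse behavior in the first bullet can be pictured as a simple pool keyed on the client/server pair and destination port. The Python below is an illustrative sketch only; the pool structure and names are assumptions, not the WAAS implementation.

    # Illustrative TCP connection reuse (not the WAAS implementation). Rather than
    # paying three-way-handshake latency for every new request between the same
    # client and server over the same destination port, an idle established
    # connection is handed back out when one is available.

    import socket

    class ConnectionPool:
        def __init__(self):
            self._idle = {}   # (client_id, server_host, server_port) -> idle sockets

        def get(self, client_id, server_host, server_port):
            key = (client_id, server_host, server_port)
            if self._idle.get(key):                       # reuse: no TCP setup latency
                return self._idle[key].pop()
            return socket.create_connection((server_host, server_port))  # new connection

        def release(self, client_id, server_host, server_port, conn):
            self._idle.setdefault((client_id, server_host, server_port), []).append(conn)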

This section focused on the application-specific acceleration components of Cisco WAAS, including caching, prepositioning, read-ahead, write-behind, and multiplexing. The next section focuses on the integration aspects of Cisco WAAS as it relates to the ecosystem that is the enterprise IT infrastructure, as well as additional value-added features that are part of the Cisco WAAS solution.

Other Features

Cisco WAAS is unique among application acceleration and WAN optimization solutions in that it not only interoperates seamlessly with existing network features, but also integrates physically into the Cisco Integrated Services Router (ISR). With the Cisco ISR, customers can deploy enterprise edge connectivity to the WAN, switching, wireless, voice, data, WAN optimization, and security in a single platform for the branch office. (The router modules and the appliance platforms are examined in the next chapter.) The following are some of the additional features provided with the Cisco WAAS solution:

  • Network transparency: Cisco WAAS is fundamentally transparent in three domains—client transparency, server transparency (no software installation or configuration changes required on clients or servers), and network transparency. Network transparency allows Cisco WAAS to interoperate with existing networking and security functions such as firewall policies, optimized routing, QoS, and end-to-end performance monitoring.

  • Enterprise-class scalability: Cisco WAAS can scale to tens of gigabits of optimized throughput and tens of millions of optimized TCP connections using the Cisco Application Control Engine (ACE), which is an external load-balancer and is discussed in detail in Chapter 6, "Data Center Network Integration." Without external load balancing, Cisco WAAS can scale to tens of gigabits of optimized throughput and over one million TCP connections using the Web Cache Coordination Protocol version 2 (WCCPv2), which is discussed in both Chapter 4, "Network Integration and Interception," and Chapter 6.

  • Trusted WAN optimization: Cisco WAAS is a trusted WAN optimization and application acceleration solution in that it integrates seamlessly with many existing security infrastructure components such as firewalls, intrusion detection systems (IDS), intrusion prevention systems (IPS), and virtual private network (VPN) solutions. Integration work has been done on not only Cisco WAAS but adjacent Cisco security products to ensure that security posture is not compromised when Cisco WAAS is deployed. Cisco WAAS also supports disk encryption (using AES-256 encryption) with centrally managed keys. This mitigates the risk of data loss or data leakage if a WAE is compromised or stolen.

  • Automatic discovery: Cisco WAAS devices can automatically discover one another during the establishment of a TCP connection and negotiate a policy to employ. This eliminates the need to configure complex and tedious overlay networks. By mitigating the need for overlay topologies, Cisco WAAS permits optimization without requiring that administrators manage the optimization domain and topology separately from the routing domain.

  • Scalable, secure central management: Cisco WAAS devices are managed and monitored by the Cisco WAAS Central Manager. The Central Manager can be deployed in a highly available fashion using two Cisco WAAS devices. The Central Manager is secure in that any exchange of data between the Central Manager and a managed Cisco WAAS device is done using SSL, and management access to the Central Manager is encrypted using HTTPS for web browser access or SSH for console access (Telnet is also available). The Central Manager provides a simplified means of configuring a system of devices through device groups, and provides role-based access control (RBAC) to enable segregation of management and monitoring. The Central Manager is discussed in more detail in Chapter 7, "System and Device Management."

Summary

IT organizations are challenged to provide high levels of application performance for an increasingly distributed workforce while simultaneously consolidating costly infrastructure to contain capital and operational expenditures. These goals conflict: distributing infrastructure to remote offices addresses the performance requirements of a distributed workforce, while consolidating that same infrastructure controls capital and operational costs and complexity. Cisco WAAS is a solution that employs a series of WAN optimization and application acceleration techniques to overcome the fundamental performance limitations of WAN environments, allowing remote users to enjoy near-LAN performance when working with centralized application infrastructure and content.

Copyright © 2007 Pearson Education. All rights reserved.
