Chapter 1: Introduction to Cisco Wide Area Application Services (WAAS)

Cisco Press

IT organizations are struggling with two opposing challenges: to provide high levels of application performance for an increasingly distributed workforce, and to consolidate costly infrastructure to streamline management, improve data protection, and contain costs. Separating the growing remote workforce from the locations where IT prefers to deploy infrastructure is the wide-area network (WAN), which introduces tremendous delay, packet loss, congestion, and bandwidth limitations, all of which can impede a user's ability to interact with applications in a high-performance manner.

Cisco Wide Area Application Services (WAAS) is a solution designed to bridge the divide between application performance and infrastructure consolidation in WAN environments. By employing robust optimizations at multiple layers, Cisco WAAS is able to ensure high-performance access to distant application infrastructure, including file services, e-mail, intranet, portal applications, and data protection. By mitigating the performance-limiting factors of the WAN, Cisco WAAS not only improves performance, but also positions IT organizations to consolidate distributed infrastructure, better control costs, and establish a stronger position on data protection and compliance.

The purpose of this book is to discuss the Cisco WAAS solution in depth, including a thorough examination of how to design and deploy Cisco WAAS solutions. This chapter provides an introduction to the performance barriers that are created by the WAN, and a technical introduction to Cisco WAAS. This chapter also examines the software architecture of Cisco WAAS, and outlines how each of the fundamental optimization components overcomes those application performance barriers. The chapter ends with a discussion of how Cisco WAAS fits into a network-based architecture of optimization technologies, and how these technologies can be deployed in conjunction with Cisco WAAS to provide a holistic solution for improving application performance over the WAN.

Understanding Application Performance Barriers

Before examining how Cisco WAAS overcomes performance challenges created by network conditions in the WAN, it is important to have an understanding of how those conditions in the WAN impact application performance. Applications today are becoming increasingly robust and complex compared to applications ten years ago, and it is expected that this trend will continue. Many enterprise applications are multitiered, having a presentation layer (commonly composed of web services), which in turn accesses an application tier of servers, which interacts with a database tier (commonly referred to as an n-tier architecture). Each of these distinct layers commonly interacts with the others using middleware, which is a subsystem that connects disparate software components or architectures. As of this writing, the majority of applications in use today are client/server, involving only a single tier on the server side (for instance, a simple file server). However, n-tier application infrastructures are becoming increasingly popular.

Layer 4 Through Layer 7

Server application instances, whether single-tier or n-tier, primarily interact with user application instances at the application layer of the Open Systems Interconnection (OSI) model. At this layer, application layer control and data messages are exchanged to perform functions based on the business process or transaction being performed. For instance, a user may 'GET' an object stored on a web server using HTTP. Interaction at this layer is complex, as the number of operations that can be performed over a proprietary protocol, or even a standards-based protocol, can number in the hundreds or thousands. Beneath the application layer on any given pair of nodes exists a hierarchical structure of layers between the server application instance and user application instance, which adds further complexity and performance constraints.

For instance, data that is to be transmitted between application instances might pass through a shared (and prenegotiated) presentation layer. This layer may or may not be present depending on the application, as many applications have built-in semantics around data representation. This layer is responsible for ensuring that the data conforms to a specific structure, such as ASCII or Extensible Markup Language (XML).

From the presentation layer, the data might be delivered to a session layer, which is responsible for establishing an overlay session between two endpoints. Session layer protocols provide applications with the capability to manage checkpoints and recovery of atomic upper-layer protocol (ULP) exchanges, which occur at a transactional or procedural layer as compared to the transport of raw segments (provided by the Transmission Control Protocol, discussed later). Similar to the presentation layer, many applications may have built-in semantics around session management and may not use a discrete session layer. However, some applications, commonly those that use remote procedure calls (RPC), do require a discrete session layer.

Whether the data to be exchanged between a user application instance and server application instance requires the use of a presentation layer or session layer, data to be transmitted across an internetwork will be handled by a transport protocol. The transport protocol is primarily responsible for data multiplexing—that is, ensuring that data transmitted by a node is able to be processed by the appropriate application process on the recipient node. Commonly used transport layer protocols include the Transmission Control Protocol (TCP), User Datagram Protocol (UDP), and Stream Control Transmission Protocol (SCTP). The transport protocol is commonly responsible for providing guaranteed delivery and adaptation to changing network conditions, such as bandwidth changes or congestion. Some transport protocols, such as UDP, do not provide such capabilities. Applications that leverage UDP either implement their own means of guaranteed delivery or congestion control, or these capabilities simply are not required for the application.
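The multiplexing role of the transport layer can be observed directly with ordinary sockets: the destination port number, not anything in the payload, determines which application process receives each datagram. The following Python sketch (ports chosen arbitrarily by the operating system) illustrates the concept:

```python
import socket

# Two "applications" on the same host, each with its own UDP socket.
# The transport layer delivers each datagram to the socket whose bound
# port matches the destination port -- this is multiplexing.
app_a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
app_a.bind(("127.0.0.1", 0))            # port 0: let the OS pick a free port
app_b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
app_b.bind(("127.0.0.1", 0))
app_a.settimeout(2)
app_b.settimeout(2)

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"for app A", app_a.getsockname())
sender.sendto(b"for app B", app_b.getsockname())

# Each socket receives only the datagram addressed to its own port.
msg_a, _ = app_a.recvfrom(1024)
msg_b, _ = app_b.recvfrom(1024)
print(msg_a, msg_b)
```

Guaranteed delivery and congestion control, where required, are layered on top of this basic demultiplexing service by protocols such as TCP.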

The components mentioned previously, including transport, session, presentation, and application layers, represent a grouping of services that dictate how application data is exchanged between disparate nodes. These components are commonly called Layer 4 through Layer 7 services, or L4–7 services, or application networking services (ANS). L4–7 services rely on the packet routing and forwarding services provided by lower layers, including the network, data link, and physical layers, to move segments of application data in network packets between nodes that are communicating. With the exception of network latency caused by distance and the speed of light, L4–7 services generally add the largest amount of operational latency to the performance of an application. This is due to the tremendous amount of processing that must take place to move data into and out of buffers (transport layer), maintain long-lived sessions between nodes (session layer), ensure data conforms to representation requirements (presentation layer), and exchange application control and data messages based on the task being performed (application layer).

Figure 1-1 shows an example of how L4–7 presents application performance challenges.

Figure 1-1

L4–7 Performance Challenges

The performance challenges caused by L4–7 can generally be classified into the following categories: latency, bandwidth inefficiencies, and throughput. These are examined in the following three sections.

Latency

L4–7 latency is a culmination of the latency components added by each of the four layers involved: application, presentation, session, and transport. Given that presentation layer, session layer, and transport layer latency are typically low and have minimal impact on overall performance, this section focuses on latency that is incurred at the application layer. It should be noted that, although measurable, the latency added by L4–7 processing in the node itself is typically minimal compared to latency found in the network itself, and far less than the performance impact of application layer latency caused by protocol chatter over a high-latency network.

Application layer latency is defined as the operational latency of an application protocol and is generally exhibited when applications or protocols have a "send-and-wait" type of behavior. An example of application layer latency can be observed when accessing a file on a file server using the Common Internet File System (CIFS) protocol, which is predominant in environments using Windows clients and Windows servers, or network-attached storage (NAS) devices that are being accessed by Windows clients. In such a case, the client and server must exchange a series of "administrative" messages prior to any data being sent to a user.

For instance, the client must first establish the session to the server, and establishment of this session involves validation of user authenticity against an authority such as a domain controller. Then, the client must establish a connection to the specific share (or named pipe), which requires that client authorization be examined. Once the user is authenticated and authorized, a series of messages is exchanged to traverse the directory structure and gather metadata. After the file is identified, a series of lock requests must be sent in series (based on file type), and then file I/O requests (such as read, write, or seek) can be exchanged between the user and the server. Each of these messages requires that a small amount of data be exchanged over the network, causing operational latency that may be unnoticed in a local-area network (LAN) environment but is significant when operating over a WAN.
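Because these exchanges are sequential, each one costs at least a full network round trip before the next can begin. The following sketch models this with purely illustrative message counts (not an actual CIFS protocol trace) and ignores server processing time:

```python
# Rough model of application-layer latency for a CIFS-style file open:
# each sequential exchange costs at least one round trip on the wire.
# Message counts are illustrative assumptions, not a protocol trace.
MESSAGES = {
    "session setup / authentication": 4,
    "tree connect / authorization": 2,
    "directory traversal / metadata": 8,
    "lock requests": 2,
    "file I/O": 16,
}

def total_latency(rtt_seconds):
    """Time spent waiting on the network alone, across all exchanges."""
    return sum(MESSAGES.values()) * rtt_seconds

lan_rtt, wan_rtt = 0.001, 0.200   # 1-ms LAN vs 200-ms WAN round trip
print(f"LAN: {total_latency(lan_rtt):.3f} s")   # 0.032 s
print(f"WAN: {total_latency(wan_rtt):.3f} s")   # 6.400 s
```

The same 32 messages that complete in roughly 32 ms on a LAN consume over 6 seconds on a 200-ms WAN, before a single byte of file data reaches the user.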

Figure 1-2 shows an example of how application layer latency alone in a WAN environment can significantly impede the response time and overall performance perceived by a user. In this example, the one-way latency is 100 ms, leading to a situation where only 3 KB of data is exchanged in 600 ms of time.
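The arithmetic behind the figure's numbers makes the point concrete: with send-and-wait behavior, effective throughput is capped by round trips rather than by link bandwidth. A quick calculation (assuming the 600 ms corresponds to three sequential request/response exchanges at a 200-ms round trip):

```python
# Effective throughput implied by Figure 1-2: send-and-wait exchanges
# over a 100-ms one-way path move only 3 KB in 600 ms, regardless of
# how much raw bandwidth the link provides.
one_way_latency = 0.100                        # seconds, from the example
round_trips = 3                                # assumed sequential exchanges
elapsed = round_trips * 2 * one_way_latency    # 0.6 s total
data_bytes = 3 * 1024                          # 3 KB moved in that time

throughput_bps = data_bytes * 8 / elapsed
print(f"{throughput_bps:.0f} bits/s")          # 40960 bits/s
```

Roughly 41 kbps of effective throughput, even if the underlying circuit is a gigabit link.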

It should be noted that although the presentation, session, and transport layers do indeed add latency, it is commonly negligible in comparison to application layer latency. It should also be noted that transport layer performance itself is commonly subject to the amount of perceived latency in the network, due to the delay associated with replenishing transmission windows and other factors. The impact of network latency on application performance is examined in the next section, "Network Infrastructure."
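The window-replenishment effect on the transport layer can be quantified with a simple bound: a sender can have at most one window of unacknowledged data in flight per round trip, so single-connection throughput cannot exceed window size divided by round-trip time. A sketch, assuming the classic 64-KB TCP window available without window scaling:

```python
# Upper bound on single-connection TCP throughput: at most one window
# of data can be in flight per round trip, so throughput <= window/RTT.
# A 64-KB window (TCP without window scaling) is assumed here.
window_bytes = 64 * 1024

def max_throughput_bps(rtt_seconds):
    return window_bytes * 8 / rtt_seconds

for rtt_ms in (1, 50, 200):
    bps = max_throughput_bps(rtt_ms / 1000)
    print(f"RTT {rtt_ms:>3} ms -> {bps / 1e6:.2f} Mbps")
```

At a 1-ms LAN round trip the window is never the bottleneck, but at 200 ms the same connection is limited to roughly 2.6 Mbps no matter how fast the link is, which is one reason WAN optimization solutions apply transport-layer optimizations as well.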

Figure 1-2

Latency-Sensitive Application Example

Bandwidth Inefficiencies

The lack of available network bandwidth (discussed in the section, "Network Infrastructure") coupled with application layer inefficiencies in the realm of data transfer creates an application performance barrier. This performance barrier manifests itself when an application is inefficient in the way information is exchanged between two communicating nodes. For instance, assume that ten users are in a remote office that is connected to the corporate campus network by way of a T1 (1.544 Mbps). If these users use an e-mail server (such as Microsoft Exchange) in the corporate campus network, and an e-mail message with a 1-MB attachment is sent to each of these users, the e-mail message needs to be transferred once for each user, or ten times. Such scenarios can massively congest enterprise WANs, and similar patterns appear across many different applications:

  • Redundant e-mail attachments being downloaded over the WAN multiple times by multiple users

  • Multiple copies of the same file stored on distant file servers being accessed over the WAN by multiple users

  • Multiple copies of the same web object stored on distant intranet portals or application servers being accessed over the WAN by multiple users
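The cost of the e-mail scenario above is easy to quantify. A quick calculation (using the T1 line rate and treating 1 MB as 1024 × 1024 bytes):

```python
# Time to deliver the same 1-MB attachment to ten branch users over a
# T1, when the WAN carries a full copy per user (no redundancy
# elimination). Protocol overhead is ignored for simplicity.
t1_bps = 1_544_000                    # T1 line rate in bits per second
attachment_bits = 1 * 1024 * 1024 * 8
users = 10

seconds = users * attachment_bits / t1_bps
print(f"{seconds:.1f} s of a fully saturated T1")
```

Nearly a minute of the branch's entire WAN capacity is consumed moving ten copies of data that differs by nothing but the recipient.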

In many cases, the data contained in objects being accessed across the gamut of applications used by remote office users will likely contain a significant amount of redundancy. For instance, one user might send an e-mail attachment to another user over the corporate WAN, while another user accesses that same file (or a different version of that file) using a file server protocol over the WAN. The packet network itself has historically been independent of the application network, meaning that characteristics of data were generally not considered, examined, or leveraged when routing information throughout the corporate network.
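This observation, that the same bytes repeatedly cross the WAN, is exactly what redundancy elimination techniques exploit. The following Python sketch is a simplified illustration of the general idea only, not Cisco's implementation (which uses more sophisticated variable-size chunking and persistent caches on both peers): chunks already seen are replaced on the "wire" by a short signature that the far end resolves from its cache.

```python
import hashlib

# Simplified redundancy-elimination sketch (illustrative only, not
# Cisco's implementation): fixed-size chunks already present in the
# cache are replaced by a 32-byte SHA-256 signature.
CHUNK = 256

def compress(data, cache):
    out = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        sig = hashlib.sha256(chunk).digest()
        if sig in cache:
            out.append(("ref", sig))        # repeat: send only the signature
        else:
            cache[sig] = chunk
            out.append(("raw", chunk))      # first occurrence: send the data
    return out

def expand(encoded, cache):
    # The receiver resolves signatures against its own copy of the cache.
    return b"".join(cache[p] if kind == "ref" else p for kind, p in encoded)

cache = {}
stream = (b"A" * CHUNK + b"B" * CHUNK) * 5      # 2560 bytes, mostly repeats
encoded = compress(stream, cache)
wire_bytes = sum(32 if kind == "ref" else len(p) for kind, p in encoded)
print(f"{len(stream)} bytes -> {wire_bytes} bytes on the wire")
```

Because the technique operates on the byte patterns themselves rather than on any one application's protocol, the same cached chunks can suppress redundancy whether the data arrives as an e-mail attachment, a file server transfer, or a web object.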
