The case for WAN acceleration as NFV

Why this optimization function is ideally suited for network function virtualization

Previously, I discussed the benefits of using regional performance hubs to support new data patterns associated with the increasing use of cloud applications such as Salesforce.com and Office365.

Just as business applications have transitioned to an “as a service” model, many network-based functions such as firewalls, intrusion prevention and intrusion detection will follow, delivered through network function virtualization (NFV). Although there hasn’t been much public discourse yet on WAN optimization as a service, the function is ideally suited to being “NFV-ed.”

The primary role of WAN optimization is to overcome application performance bottlenecks associated with network architectures that were designed for data center, not cloud-based, applications. To function properly, most applications need good bandwidth and low latency. In large-scale WAN deployments, latency, bandwidth constraints and packet loss are inevitable.
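
To make those bottlenecks concrete, here is a rough, back-of-the-envelope sketch (in Python, with assumed rather than measured figures) using the well-known Mathis approximation, which bounds the throughput of a single standard TCP flow by packet size, round-trip time and loss rate, regardless of how much bandwidth is provisioned.

```python
import math

def tcp_throughput_mbps(mss_bytes, rtt_ms, loss_rate):
    """Mathis et al. approximation: throughput <= MSS / (RTT * sqrt(loss)).

    A rough upper bound for a single standard TCP flow; real stacks and
    tuned congestion controls will differ, so treat the output as
    illustrative only.
    """
    rtt_s = rtt_ms / 1000.0
    bits_per_packet = mss_bytes * 8
    return (bits_per_packet / (rtt_s * math.sqrt(loss_rate))) / 1e6

# Illustrative (assumed) figures: 1460-byte MSS, 0.1% loss, two different RTTs.
for rtt in (10, 80):  # metro-distance path vs. cross-country WAN, in milliseconds
    print(f"RTT {rtt:>3} ms -> ~{tcp_throughput_mbps(1460, rtt, 0.001):.1f} Mbps per flow")
```

The exact numbers matter less than the shape: the same loss rate costs far more throughput on a long-haul path than on a short one.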

WAN optimization is achieved using two primary approaches:

  1. Application agnostic
  2. Application specific

1. Application agnostic

This method combines TCP optimization, compression and data suppression. 

The original TCP RFC was written in 1981 and was carefully designed to adapt to varying types of link characteristics, including bandwidth, packet loss and delay. Today’s applications and network topologies, however, are significantly different from those of the 1980s. At that time, TCP/IP congestion was largely due to a handful of nodes on a shared network of limited scale, and most data was text-oriented. In 2016, a single corporate user can easily consume tens or even hundreds of megabytes of data in a single transfer.  

TCP establishes a connection through a handshake between the two endpoints. Once the connection is up, TCP probes for available network resources, ramping its sending rate based on the path’s capacity (bandwidth) and latency, and it remains adaptive to both the application process and the network.
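
One of the simplest application-agnostic optimizations is sizing TCP buffers to the bandwidth-delay product of the path, so the window rather than the default buffer limits throughput. The sketch below illustrates the calculation and the socket options involved; the link figures are assumptions for the example, and real accelerators go much further, with techniques such as selective acknowledgements and modern congestion control.

```python
import socket

def bdp_bytes(bandwidth_mbps, rtt_ms):
    """Bandwidth-delay product: bytes that can be 'in flight' on the path."""
    return int(bandwidth_mbps * 1e6 / 8 * (rtt_ms / 1000.0))

# Assumed link: 100 Mbps WAN with a 60 ms round-trip time.
bdp = bdp_bytes(100, 60)          # 750,000 bytes in flight
print(f"BDP: {bdp} bytes")

# A TCP proxy terminating branch connections can request buffers sized to
# the BDP so the window, not the default buffer, limits throughput.
# (The kernel may clamp these values; this only illustrates the idea.)
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, bdp)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, bdp)
print("Requested send buffer:", sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))
sock.close()
```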

As the name implies, compression shrinks the amount of data sent across the network to minimize bandwidth consumption. With the commoditization of bandwidth, however, this is becoming less of an issue. WAN bandwidth has traditionally been far lower than what is available in the LAN, and many remote branch MPLS sites were, and still are, connected via T1 (1.544 Mbps) links. That is changing as much higher bandwidth becomes available in the WAN.
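
As a quick illustration of the compression component, the snippet below runs zlib over a synthetic, highly repetitive payload. The payload and the resulting ratio are illustrative only; binary or already-compressed traffic compresses far less.

```python
import zlib

# Synthetic, highly repetitive payload standing in for text-oriented traffic.
payload = b"GET /api/orders?status=open HTTP/1.1\r\nHost: app.example.com\r\n" * 500

compressed = zlib.compress(payload, level=6)
ratio = len(payload) / len(compressed)
print(f"original: {len(payload)} bytes, compressed: {len(compressed)} bytes "
      f"(~{ratio:.0f}x on this repetitive sample)")
```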

Data suppression can eliminate the transfer of redundant data across the network. Since we are not deploying WAN accelerators at every location, and instead recommend relying on TCP optimization at the branch, this function should also be handled at the performance hubs.

It is much more efficient to centralize data suppression than to have each location perform it for the same shared application. This approach also confines data duplication to the paths between the data center and a few performance hub sites, rather than between the data center and thousands of branch locations.
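
The toy model below sketches the idea behind hub-based data suppression: chunks of traffic are fingerprinted, and any chunk the hub has already seen is replaced by a short reference rather than sent again. Commercial products use content-defined chunking and purpose-built signaling; the fixed chunk size and in-memory store here are simplifications for illustration.

```python
import hashlib

CHUNK = 4096  # fixed-size chunks for the toy example; products use smarter boundaries
store = {}    # hub-side dictionary: fingerprint -> chunk

def suppress(data: bytes):
    """Return a list of ('ref', digest) or ('raw', chunk) tokens to transmit."""
    tokens = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        digest = hashlib.sha256(chunk).digest()
        if digest in store:
            tokens.append(("ref", digest))   # already seen: send only a reference
        else:
            store[digest] = chunk
            tokens.append(("raw", chunk))    # first sighting: send the bytes
    return tokens

first = suppress(b"A" * 4096 + b"B" * 4096)   # both chunks are new
second = suppress(b"A" * 4096 + b"C" * 4096)  # the 'A' chunk is suppressed
print([kind for kind, _ in first])    # ['raw', 'raw']
print([kind for kind, _ in second])   # ['ref', 'raw']
```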

2. Application specific

This approach places application accelerators at each end of a WAN connection for flow optimization and latency mitigation. It improves application performance and minimizes bandwidth consumption through object caching, read-ahead, write-behind and message prediction. The biggest benefit of application acceleration is that it does not require changing the application’s behavior.
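
To make one of those techniques concrete, the sketch below is a toy read-ahead cache: when a client requests one block of an object, the accelerator fetches the next few blocks in the same logical round trip so that subsequent sequential reads are served locally. The block size, prefetch depth and fetch_from_origin helper are assumptions for illustration, not any vendor’s implementation.

```python
# Toy read-ahead cache: illustrative only; real accelerators implement this
# per protocol (CIFS, HTTP, etc.) with far more sophisticated heuristics.
BLOCK = 64 * 1024       # assumed block size
READ_AHEAD = 4          # how many extra blocks to prefetch on a miss

cache = {}              # (object_id, block_index) -> bytes

def fetch_from_origin(object_id, index):
    """Stand-in for a WAN round trip to the origin server (hypothetical helper)."""
    return bytes(BLOCK)

def read_block(object_id, index):
    key = (object_id, index)
    if key not in cache:
        # One logical WAN operation pulls the requested block plus the next few.
        for i in range(index, index + 1 + READ_AHEAD):
            cache[(object_id, i)] = fetch_from_origin(object_id, i)
    return cache[key]

read_block("report.pdf", 0)              # miss: blocks 0-4 are prefetched
assert ("report.pdf", 3) in cache        # later sequential reads hit the cache
```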

However, as more and more applications move to the cloud, and with most large cloud providers maintaining a presence in major metro areas, it’s time to rethink application acceleration.

Some corporate applications, especially in regulated industries such as financial services, will take time to move to the cloud. In cases where applications remain in the data center, applying application-specific functions and data suppression at performance hubs can provide a better user experience and support a large number of branches on a regional basis.

Figure 1 shows how WAN optimization can be transformed by decoupling TCP optimization and application acceleration. In this model, TCP optimization is performed by an onsite router, while application acceleration occurs at performance hubs. This approach is similar to the way content delivery networks (CDN) are architected to improve the user experience by bringing content closer to the consumer.

Figure 1: Decoupled WAN optimization, with TCP optimization handled by the onsite router and application acceleration performed at performance hubs
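
To put rough numbers on the decoupling shown in Figure 1, the snippet below reuses the earlier throughput approximation with assumed round-trip times: the full branch-to-data-center path versus the short branch-to-regional-hub leg that terminated, accelerated connections would see. The figures are illustrative, not measurements.

```python
import math

def tcp_throughput_mbps(mss_bytes, rtt_ms, loss_rate):
    """Same Mathis-style upper bound as the earlier sketch; illustrative only."""
    return (mss_bytes * 8) / ((rtt_ms / 1000.0) * math.sqrt(loss_rate)) / 1e6

loss = 0.0005                                       # assumed loss rate
end_to_end = tcp_throughput_mbps(1460, 70, loss)    # branch -> distant data center
to_hub     = tcp_throughput_mbps(1460, 8, loss)     # branch -> regional performance hub
print(f"end-to-end path:   ~{end_to_end:.0f} Mbps per flow")
print(f"branch-to-hub leg: ~{to_hub:.0f} Mbps per flow")
```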

While this model does not directly eliminate the shortcomings of TCP connections, it significantly improves the performance of this aging protocol by reducing the impact of latency and bandwidth limitations. It represents another step toward transforming the network from a data-center-centric architecture to a performance-hub-based architecture.
