The Consequences of WAN Service History

QoS, compression and “WAN Optimization” technologies introduced due to limited bandwidth of expensive private WANs

In our last few columns, we looked at the history of the Enterprise WAN, with the key takeaway being that reliable but very expensive private WAN services, first Frame Relay and now MPLS, have been the dominant technologies for almost 20 years. We also looked at the key factors affecting application performance over the WAN: bandwidth, latency and packet loss, as well as "chattiness" issues and some of the reasons for latency and packet loss on the WAN.

Beginning here and over the next few columns, let's take a look at some of the consequences of WAN service history, including how different WAN technologies were developed to address these various issues. This time, we'll essentially be covering technologies first introduced between about 1996 and 2004.

This being an opinion column, and not objective reporting nor a scholarly journal, I make no claims that this is an exhaustive history, but I do think it covers the key technologies and the evolution of WAN Optimization technologies, which are still relevant today.

LANs, of course, have "thrown bandwidth at the problem" for more than 20 years now. Yes, 802.1p was introduced along the way to provide basic QoS in the form of traffic prioritization at the Ethernet MAC layer, but that was pretty much it in terms of performance management. More switching, line rate layer 3 forwarding, and the moves to 100 Mbps Fast Ethernet, then Gigabit Ethernet and now 10 Gig Ethernet have pretty much been the answer. (Addressing manageability and ease of provisioning and other LAN OpEx issues are, of course, different matters entirely.)

WANs, by contrast, have not been able to just throw bandwidth at the problem, in large part because MPLS is so expensive that doing so simply wasn't an option. And even though Internet connectivity has delivered ever-better price/performance, it's not reliable enough, so until very recently very few enterprises have been able to use it as a primary part of their intranet WAN connectivity.

Before going further, I want to note that as a network guy, I will talk primarily here about the networking-based technologies and solutions to WAN issues. These roughly equate to L1 - L7 of the OSI reference model. There is one application-level solution worth noting at this point: the Citrix "terminal server" approach, first introduced in the early 1990s, which today has evolved into what is commonly referred to as VDI (Virtual Desktop Infrastructure). It enables remote users to access a computer or server at a different location, presenting display information to the user and sending keystrokes and mouse clicks from the user back to the server. This was done for multiple reasons, but as it relates to the WAN, it allowed applications, especially client-server applications designed to run over a high-bandwidth, low-latency LAN, to be usable across a WAN. Running such applications in traditional client-server fashion across the WAN would have failed, because the combination of low bandwidth and much higher latency would make them unacceptably slow, if they worked at all. To this day, there are multiple reasons to use VDI over the WAN, and support for legacy client-server applications remains one. We'll return to VDI and its relationship to the Next-generation Enterprise WAN (NEW) architecture in future columns.

Arguably the first company to really address WAN performance issues, and certainly the first one to build a successful business around them, was Packeteer. Packeteer introduced PacketShaper, a "QoS box," in 1997. It focused on classification, prioritization and visibility. These were, and are, good things even where bandwidth isn't a scarce commodity, but especially valuable given that it was. (Note that Cisco put basic QoS functionality into its WAN routers in the 1990s, but the capabilities were far more limited than Packeteer's.)

Packeteer was also ahead of its time with its now somewhat underappreciated "inbound TCP rate control" technology. It's of course straightforward for any forwarding device to control what traffic flows out of it onto the next link. TCP rate control, and other similar technologies which don't violate Packeteer's patents, are able to control the flow of TCP traffic into a WAN link through techniques involving modifying the TCP congestion window size and delaying when acknowledgment packets get returned to the sending hosts on the other side of the WAN link. This technology made PacketShaper particularly useful for front-ending Internet WAN links at data centers and large sites. That said, PacketShaper was always relatively expensive and hard to cost-justify for deployments on other links, it was fairly difficult to configure, and, related to both of those points, it had scaling issues, so for the most part it remained a niche product.
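Packeteer's actual patented mechanisms were more sophisticated, but the arithmetic underlying window-based rate control is simple: a TCP sender can have at most one window of data in flight per round trip, so a middlebox that clamps the window advertised back to the sender caps that sender's rate at roughly window / RTT. A minimal sketch of that relationship (function names are mine, purely illustrative):

```python
# Illustrative arithmetic behind TCP rate control, not Packeteer's
# implementation: a sender's throughput is bounded by roughly
# (bytes in flight) / RTT, so choosing the window chooses the rate.

def window_for_target_rate(target_bps: float, rtt_s: float) -> int:
    """Window (bytes) to advertise so the sender is limited to ~target_bps."""
    return int(target_bps / 8 * rtt_s)

def max_rate_bps(window_bytes: int, rtt_s: float) -> float:
    """Throughput ceiling implied by a given window and round-trip time."""
    return window_bytes * 8 / rtt_s

# Holding a sender to 1 Mbps across a 100 ms path means advertising
# a window of about 12,500 bytes.
print(window_for_target_rate(1_000_000, 0.1))  # 12500
```

Delaying ACKs works on the same principle from the other direction: stretching the effective RTT shrinks the rate achievable with a given window.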

Next came Peribit, which introduced its first product in 2001. It was a two-ended appliance-based solution that at first focused on compression. By leveraging CPU cycles and large amounts of cheap DRAM, it was able to deliver compression ratios far better than existing LZS-based compression approaches – which worked per packet and used very little memory – and was able to deliver a two-to-three-times bandwidth advantage on average. This was achievable because enterprise WAN bandwidth had not increased nearly as fast as Moore's Law had improved CPU speed and memory price/bit (for reasons we've covered previously). Peribit's typical early sale was to customers using Frame Relay, with branches running at rates between 64 Kbps and T1/E1 talking to a data center using fractional T3/E3. The cost savings alone from avoiding a bandwidth upgrade, especially for international locations, made the solution compelling. Peribit quickly added classification, prioritization, some visibility features and some TCP acceleration features, and became essentially the first WAN Optimization company.
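The advantage of keeping a large compression history, rather than compressing each packet in isolation, is easy to demonstrate. The toy sketch below (my own illustration, standing in for the large-DRAM dictionaries these boxes actually maintained) compares zlib run fresh per "packet" against a single compressor whose history persists across packets, so repeated traffic collapses into back-references:

```python
import zlib

# Fifty identical "packets," a crude stand-in for the cross-flow
# redundancy real enterprise WAN traffic exhibits.
packets = [b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n"] * 50

# Per-packet compression: a fresh compressor each time, so no packet
# can reference data seen in an earlier one (as with per-packet LZS).
per_packet = sum(len(zlib.compress(p)) for p in packets)

# Streaming compression: one compressor whose dictionary persists
# across packets, analogous to a large in-memory history.
co = zlib.compressobj()
streaming = sum(len(co.compress(p)) for p in packets) + len(co.flush())

print(per_packet, streaming)  # streaming total is far smaller
```

Real WAN traffic is nowhere near this redundant, of course, but the direction of the result is the point: history that spans packets finds redundancy that per-packet schemes structurally cannot.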

In 2004, Orbital Data introduced a two-ended appliance solution designed to address the bandwidth-delay product issue that limits the ability of TCP to do large data transfers across long distances. Their solution allowed you to fully utilize the bandwidth of LFNs (Long Fat Networks) over long distances with high latency. While what Orbital solved was originally a niche issue, the issue became more general as compression became more prevalent.
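To see why the bandwidth-delay product matters, note that without window scaling, classic TCP's maximum 64 KB receive window caps throughput at window / RTT no matter how fat the pipe is. A back-of-the-envelope calculation (my own illustration):

```python
# Throughput ceiling imposed by a fixed TCP window over a given RTT:
# at most one window of data can be in flight per round trip.

def tcp_ceiling_mbps(window_bytes: int, rtt_ms: float) -> float:
    """Max TCP throughput in Mbps for a given window and round-trip time."""
    return window_bytes * 8 / (rtt_ms / 1000) / 1e6

# A classic 64 KB (65535-byte) window across a 100 ms cross-country path:
print(tcp_ceiling_mbps(65535, 100))  # ~5.24 Mbps, even on a 45 Mbps T3
```

A single transfer on that path leaves roughly 88% of a T3 idle, which is exactly the LFN problem Orbital's appliances addressed.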

Also in 2004, Riverbed Technology delivered its first product, likewise a two-ended appliance-based solution for WAN Optimization. In addition to using CPU and large DRAM, Riverbed introduced the use of a hard disk, enabling it to deliver compression of 2.5x - 4x on average, and in certain real-life situations as much as 100x. If Peribit was the VisiCalc of the WAN Optimization industry, Riverbed was the Lotus 1-2-3. Within a fairly short time, Riverbed became the number one player in WAN Optimization, surpassing Peribit (which was acquired by Juniper) and Cisco, a position it retains to this day. In future columns, we'll go into more detail on other WAN Optimization capabilities, and on which ones in particular I believe enabled Riverbed to become the industry leader.

What all of these developments have in common – and share as well with WAN Virtualization, a technology first introduced several years later – is statefulness: trading cheap CPU, memory and (sometimes) disk capacity for expensive WAN bandwidth, and using them to overcome high WAN latencies. In our next column, we'll take a closer look at the issues surrounding statefulness and data networks.

A twenty-five year data networking veteran, Andy founded Talari Networks, a pioneer in WAN Virtualization technology, and served as its first CEO. Andy is the author of an upcoming book on Next-generation Enterprise WANs.

Copyright © 2012 IDG Communications, Inc.
