Continuing our coverage of which of the various technologies – including those that are part of the Next-generation Enterprise WAN (NEW) architecture, such as WAN Optimization, WAN Virtualization and Network-as-a-Service – best address the different issues affecting application performance over the WAN, we now turn to the question of what to do about limited Enterprise WAN bandwidth.
I'm certainly not the first one to note that you can never be "too rich, too thin or have too much bandwidth." If cost were no issue you could simply buy ever more MPLS bandwidth at each of your locations to satisfy the demands of bandwidth-hungry applications. But since this is a very expensive solution, it's not a practical option for most enterprises.
The most obvious answer here is to deploy WAN Optimization. WAN Optimization's memory-based compression can reduce the bandwidth consumed by large, unencrypted, non-video files by 30% to 50%. More impressively, the disk-based data deduplication technology offered by most of the leading WAN Optimization vendors can reduce WAN bandwidth consumption by 98% or 99% – the equivalent of 100 times the bandwidth, temporarily – for the second and subsequent accesses to data that has already traversed a given WAN connection.
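Where those 98% - 99% numbers come from is easy to see with a toy sketch of hash-based deduplication – this is an illustration of the general technique only, not any vendor's actual algorithm, and the chunk size and 1 MB payload are arbitrary assumptions:

```python
import hashlib
import os

def dedup_send(data: bytes, seen: set, chunk_size: int = 4096) -> int:
    """Return the number of bytes that must actually cross the WAN:
    full chunks the far side hasn't seen before, plus a 32-byte hash
    reference for each chunk it already holds in its dedup store."""
    sent = 0
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).digest()
        if digest in seen:
            sent += 32            # already cached remotely: send only the hash
        else:
            seen.add(digest)
            sent += len(chunk)    # first sight: the chunk itself must cross
    return sent

store = set()                      # dedup dictionary, kept in sync at both ends
payload = os.urandom(1024 * 1024)  # a 1 MB file of (effectively) unique data

first = dedup_send(payload, store)   # initial transfer: full size
second = dedup_send(payload, store)  # repeat transfer: hash references only
savings = 1 - second / first         # roughly 99% reduction
```

The second transfer ships only 32 bytes per 4 KB chunk, which is where the "100 times the bandwidth" effect for repeated data comes from.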
On average, WAN Optimization customers see a 2x to 4x effective increase in available bandwidth. While WAN Optimization is more typically sold today based on its application acceleration capabilities, the ability to avoid an MPLS bandwidth upgrade has been the cost justification to get the CIO or CEO to sign on the dotted line for many a WAN Optimization deployment.
Traditional client-server applications don't run well over WANs because of both bandwidth and latency issues. One application-layer solution addressing these problems is to use virtual desktop technology, such as that from Citrix Systems, to run the client software in the same data center as the server. This has been an ideal solution to that particular problem for years, so I doubt many enterprises these days are still trying to run "fat" client-server applications over thin WAN pipes.
Reducing the amount of bandwidth that goes across your existing WAN links is one solution to bandwidth limitations. The other main solution is to leverage less expensive Internet bandwidth, while still delivering business-quality reliability and application performance predictability. Two techniques described frequently in these columns do exactly that.
Network-as-a-Service allows the use of Internet connections rather than requiring MPLS connections, typically delivering the bandwidth savings and application acceleration benefits of WAN Optimization at the same time, without the high cost of MPLS. Its multi-segment TCP architecture – leveraging colocation-based Points of Presence (PoPs) close to end user locations, interconnected by a dedicated private core – delivers predictably low loss and low jitter across the network core, avoids Internet "middle mile" congestion issues, and minimizes the impact of first mile / last mile loss when it does occur. (We'll talk more about addressing the impact of packet loss on WAN performance in the next few columns.) With or without the WAN Optimization capability, reliable bandwidth costs less than MPLS, so you can afford more bandwidth for your dollar.
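The value of containing loss recovery to a short first-mile segment can be sketched with the well-known Mathis et al. TCP throughput approximation. The RTT and loss figures below are illustrative assumptions, not measurements of any particular service:

```python
def tcp_throughput_mbps(mss_bytes: int, rtt_seconds: float, loss_rate: float) -> float:
    """Mathis et al. approximation: achievable TCP rate ~ MSS / (RTT * sqrt(p))."""
    return (mss_bytes * 8) / (rtt_seconds * loss_rate ** 0.5) / 1e6

# End-to-end across the Internet: 100 ms RTT, with 1% loss somewhere on the path.
end_to_end = tcp_throughput_mbps(1460, 0.100, 0.01)

# Multi-segment: the same 1% loss now sits on a 10 ms first-mile segment to the
# nearby PoP, so retransmissions are recovered over a far shorter RTT, while the
# private core runs essentially loss-free.
segmented = tcp_throughput_mbps(1460, 0.010, 0.01)
```

Under these assumed numbers the lossy segment sustains roughly ten times the TCP throughput when its RTT shrinks from 100 ms to 10 ms, which is the intuition behind terminating TCP at PoPs close to end users.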
WAN Virtualization allows enterprises to aggregate WAN connections to augment or replace MPLS links, using any kind of Internet bandwidth, including inexpensive broadband connections like cable or DSL at branch locations. Since Internet bandwidth, whether via broadband or at colocation facilities, costs a tiny fraction per megabit of what MPLS does, enterprises can get 30 to 100 times the bandwidth per dollar versus buying more MPLS bandwidth. Using a dual-ended system that does for the WAN what RAID did for storage, WAN Virtualization enables the creation of an enterprise WAN that is lower cost, has massively better cost per bit and higher capacity, and yet is actually more reliable than the best single-vendor MPLS WAN. WAN Virtualization is also complementary to existing WAN Optimization deployments, making it a great way to address bandwidth limitations for those enterprises that deployed WAN Optimization some years ago and are trying to figure out what to do next.
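Both halves of that claim – the bandwidth-per-dollar gain and the RAID-like reliability – are simple arithmetic. The prices below are made-up round numbers for illustration, not quotes from any carrier:

```python
# Illustrative list prices, not quotes: dollars per Mbps per month.
MPLS_COST_PER_MBPS = 300.0       # assumed MPLS price
BROADBAND_COST_PER_MBPS = 5.0    # assumed business broadband price

bandwidth_per_dollar_gain = MPLS_COST_PER_MBPS / BROADBAND_COST_PER_MBPS  # 60x

def aggregate_availability(*links: float) -> float:
    """RAID-style reliability: the aggregated WAN is down only when
    every independently-failing link is down at the same time."""
    all_down = 1.0
    for availability in links:
        all_down *= (1.0 - availability)
    return 1.0 - all_down

# Two 99.0%-available broadband links, aggregated dual-ended.
combined = aggregate_availability(0.99, 0.99)
```

At these assumed prices the gain lands at 60x, squarely inside the 30-to-100x range above, and two aggregated 99%-available links yield "four nines" availability, provided their failure modes really are independent.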
Finally, there is another application-layer solution that we've discussed previously which can reduce overall bandwidth consumption – but it comes with an asterisk. Distributed, replicated file service avoids consuming any notable amount of WAN bandwidth when accessing files, while delivering actual LAN-speed performance, because all client access to the data is done locally. But because replication happens whenever a file is changed, a lot of bandwidth is used to push the file out over the network to each location where the file or directory is replicated. Thus, it is actually a very bad way to save bandwidth on WANs that have little of it to begin with – say, those with only T1/E1 connections to most branches. This is, after all, why WAN Optimization won out over WAFS (Wide Area File Services) many years ago.
However, if you are leveraging either Network-as-a-Service or WAN Virtualization to take advantage of inexpensive Internet bandwidth for your WAN, and in fact have a reasonable amount of downstream bandwidth at each location (at least 6 Mbps is a good general rule, though requirements vary depending on how many files are replicated and how frequently they change), then replicated file service can be a great complement, obviating the need for even larger amounts of bandwidth while also improving end user response times. Having the centralized "root" of the file service at a colo facility, where bandwidth is abundant and very inexpensive, makes the solution even more viable and attractive. Where bandwidth is concerned, then, this is one of those "it takes money to make money" solutions: once you save on bandwidth costs by leveraging Internet connectivity with the NEW architecture technologies, you find you can save even more, and have a permanent, long-term answer to the cost of WAN bandwidth.
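A quick back-of-the-envelope check shows how the 6 Mbps rule of thumb plays out; the 500 MB/hour change rate below is a made-up example, and real change rates vary widely by workload:

```python
def replication_load_mbps(changed_mb_per_hour: float) -> float:
    """Sustained downstream rate (Mbps) a branch replica needs just to
    absorb the change stream pushed out whenever replicated files change."""
    return changed_mb_per_hour * 8 / 3600   # MB/hour -> megabits/second

BRANCH_DOWNSTREAM_MBPS = 6.0                # the rule-of-thumb floor above

# Example: users elsewhere modify 500 MB of replicated files per hour.
load = replication_load_mbps(500)
fits = load < BRANCH_DOWNSTREAM_MBPS        # leaves headroom on a 6 Mbps link
```

At that assumed change rate the replication stream averages only about 1.1 Mbps, leaving most of a 6 Mbps broadband link free for other traffic – whereas the same stream would saturate most of a 1.5 Mbps T1.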
A twenty-five-year data networking veteran, Andy founded Talari Networks, a pioneer in WAN Virtualization technology, and served as its first CEO; he now leads product management at Aryaka Networks. Andy is the author of an upcoming book on Next-generation Enterprise WANs.