In the last few columns we've covered two technologies - server virtualization and colocation - that, while not WAN technologies per se, are critical components of the Next-generation Enterprise WAN (NEW) architecture, along with WAN Optimization and WAN Virtualization and/or Network-as-a-Service. In this column and the next, I want to cover the last additional technology I consider an integral part of the NEW architecture: distributed/replicated/synchronized file services.
When we discuss distributed file services for enterprise WANs, two major variants need to be considered. The first is best known as Wide Area File Services (WAFS), delivered primarily by networking vendors over the past decade. The second, more typical of file service or storage vendors, is an actual distributed and replicated file service, which synchronizes files across multiple servers. As an example of this second type, I will mostly use Microsoft's DFS (Distributed File System) Replication. This is by no means the only one out there, nor necessarily the most functional, but I suspect it is the one with which the majority of IT folks have at least a passing familiarity.
Consistent with one of the themes of this column, before looking at what the future holds, I believe it will be beneficial to first examine and understand the relevant history as it relates to the enterprise WAN. Toward that end, before describing how the WAN is about to change, let's first see why, the last time around, WAFS was defeated so soundly by the WAN Optimization alternative, and why replicated, synchronized file services like DFS Replication have, for the most part, yet to become major forces on most enterprise WANs.
Since it's fairly safe to assume that readers of this column have at least a passing familiarity with WAN Optimization and its key capabilities, the easiest way to describe WAFS is to compare it to WAN Optimization. Where WAN Optimization can do compression and disk-based data deduplication on all TCP traffic (indeed, many WAN Optimization implementations can do this for UDP traffic as well), WAFS is essentially a file cache. In other words, it performed the data deduplication function that WAN Optimization does, but only at the whole-file level, and only for file service. To be fair, some WAFS implementations could send only the changes to a file, rather than the whole file, when a portion of the file changed, mirroring the capability that WAN Optimization offered on that point as well. But WAFS worked only for file access, and only for those protocols (e.g. CIFS, NFS) it was specifically programmed to handle.
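The difference between whole-file caching and chunk-level deduplication can be sketched in a few lines of Python. This is a toy illustration only, not any vendor's implementation: the fixed chunk size, the SHA-256 fingerprints, and the in-memory sets standing in for the remote appliance's store are all assumptions for the sake of the example (real appliances typically use variable-size, content-defined chunking over the raw byte stream).

```python
import hashlib

CHUNK_SIZE = 64  # bytes; an assumption for illustration only


def wafs_bytes_sent(file_bytes: bytes, cache: set) -> int:
    """Naive whole-file cache: resend the entire file unless this exact file was seen before."""
    digest = hashlib.sha256(file_bytes).hexdigest()
    if digest in cache:
        return 0  # cache hit: nothing crosses the WAN
    cache.add(digest)
    return len(file_bytes)


def wanopt_bytes_sent(stream: bytes, store: set) -> int:
    """Chunk-level dedup: send only the chunks the far side hasn't already stored."""
    sent = 0
    for i in range(0, len(stream), CHUNK_SIZE):
        chunk = stream[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:
            store.add(digest)
            sent += len(chunk)
    return sent


# A 1 KB file of repeated data, then the same file with a small edit at the end.
original = b"A" * 1024
edited = b"A" * 1000 + b"B" * 24

wafs_cache, dedup_store = set(), set()
print(wafs_bytes_sent(original, wafs_cache))   # 1024: first access, full transfer
print(wafs_bytes_sent(edited, wafs_cache))     # 1024: the whole-file cache resends everything
print(wanopt_bytes_sent(original, dedup_store))  # 64: repeated chunks dedup within the stream
print(wanopt_bytes_sent(edited, dedup_store))    # 64: only the one changed chunk crosses the WAN
```

The point of the sketch is the last two lines: a whole-file cache gets no credit for a near-identical file, while chunk-level deduplication resends only what actually changed - and, unlike WAFS, it can apply the same trick to any TCP stream, not just file-service protocols.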
This means that it didn't deliver bandwidth savings or application acceleration for any other applications or protocols - you know, trivial things like email, HTTP/HTTPS, etc. Given how expensive MPLS is for relatively little bandwidth, this was a very big deal. And while WAFS did address the single biggest WAN access issue of the last decade - enabling CIFS file service to perform reasonably well across a WAN - that was about all it did.
So both WAFS and WAN Optimization addressed the key issue of CIFS performance (CIFS being the one protocol/application that truly was broken on the WAN), delivering "LAN-like" performance when accessing files on remote Microsoft file servers - a key capability during a time of data center consolidation. "LAN-like," by the way, is industry-speak for performance that is faster than native CIFS over the WAN the first time a file is accessed from a given location (though usually nowhere near as fast as actual LAN access), and pretty close to actual LAN-speed file access on the second and successive accesses. Because most WAFS implementations stored data only centrally, keeping local copies only of files previously accessed across the given WAN link, true LAN-speed performance all the time was generally not the case. So in the one place WAFS might have held an advantage over WAN Optimization, it was merely equal.
Where WAFS was pretty much a one-trick pony, WAN Optimization, by contrast, delivered bandwidth savings - and the application speed-up benefits that came with it - across most applications; added major value for HTTP and for Microsoft's MAPI email protocol in particular; did memory-based compression (to speed up access to files or other data the first time they are accessed from a given location); and provided other capabilities like application performance visibility and QoS. And without suggesting that WAN Optimization was or is particularly easy to install, it was certainly no more difficult to install than WAFS solutions.
So WAFS offered no meaningful advantages over WAN Optimization, while WAN Optimization offered several advantages over WAFS. More capability, and support for more applications, enabled WAN Optimization - delivered mostly by a number of start-ups - to defeat WAFS fairly easily in the market, even though industry giant Cisco was among those pushing a WAFS-based approach.
Next time, we'll look at how WAN Optimization compares to distributed, replicated file services as alternatives looking backward, as well as how such file services can change the nature of the enterprise WAN going forward, especially in conjunction with other key technologies as part of the NEW architecture.
A twenty-five-year data networking veteran, Andy founded Talari Networks, a pioneer in WAN Virtualization technology, and served as its first CEO; he now leads product management at Aryaka Networks. Andy is the author of an upcoming book on Next-generation Enterprise WANs.