Over the last several columns, we've covered the key WAN factors affecting application performance, the "chattiness" issues of certain applications, and much of the history of the Enterprise WAN and the consequences of that history. Together, these have driven technological innovation in WAN Optimization and WAN Virtualization, including the growth of WAN Optimization from nothing into a multi-billion dollar industry. Over the next three columns, I want to spend some time covering the three other technologies that I consider critical components of the Next-generation Enterprise WAN (NEW) architecture: server virtualization, colocation facilities, and distributed/synchronized file services.
These three technological capabilities have arisen over the past several years to add value to IT in computing and networking quite independently of the Enterprise WAN. Together with WAN Optimization and WAN Virtualization, they will help create a critical mass greater than the sum of the parts, one that will revolutionize the Enterprise WAN and in turn enable and accelerate the enterprise move to leverage cloud computing. Rather than give a detailed summary of each, I will focus on how they relate to the Enterprise WAN, and to each other. First up: server virtualization.
As a networking guy, I'm the first to admit that there are much better sources than I for explaining computing virtualization and server virtualization.
Of course, it's quite easy to get confused when the topic of virtualization comes up, given all of the possible variants and the different places where virtualization can be used.
I will begin and end by telling you that the most important point to take away from this column is that, as far as the enterprise WAN and the NEW architecture are concerned, server virtualization makes possible a small physical footprint for a multitude of enterprise applications, and that small footprint enables these applications to be run cost effectively from a colocation facility. While I will cover a number of other concepts relating to virtualization here, that is the only one critical to remember.
While virtualization of computing goes back decades to the mainframe world, in the microprocessor-based world it was popularized several years ago by VMware. The ability to run multiple Virtual Machines (VMs) on a single server platform offers numerous benefits: more efficient use of computing resources, and so lower compute expense; greater security and uptime than running multiple applications on a single operating system instance; reduced computing management expense; lower energy and cooling costs; and fewer LAN switch ports to buy and manage, since there are fewer physical servers to connect.
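To make the consolidation picture concrete, here is a minimal sketch of taking inventory of the VMs packed onto a single physical host, using the libvirt Python bindings; the connection URI and the assumption of a local KVM/QEMU hypervisor are illustrative choices on my part, not something any particular vendor prescribes.

```python
# Sketch: inventory the VMs consolidated onto one physical host.
# Assumes the libvirt Python bindings and a local KVM/QEMU hypervisor;
# both are illustrative choices, not part of this column's argument.
import libvirt

conn = libvirt.open("qemu:///system")  # connect to the local hypervisor

model, mem_mb, cpus, *_ = conn.getInfo()  # physical host resources
print(f"Host: {cpus} CPUs, {mem_mb} MB RAM")

for dom in conn.listAllDomains():
    state, max_mem_kb, mem_kb, vcpus, _ = dom.info()
    running = "running" if dom.isActive() else "stopped"
    print(f"  VM {dom.name():20s} {vcpus} vCPUs, "
          f"{mem_kb // 1024} MB, {running}")

conn.close()
```

Run against a host with even a modest number of VMs, a listing like this makes the consolidation ratio (VMs per physical box) immediately visible.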
The ability to do server consolidation accelerated the existing trend toward data center consolidation (fewer data centers for a global organization) because of the OpEx, CapEx and security benefits of such a consolidation. Data center consolidation was further enabled and accelerated by the introduction of WAN Optimization technology, as we covered last time, since WAN Optimization enables acceptable application performance for a wide variety of applications even when a server is hundreds or thousands of miles away from the users it is serving. That WAN Optimization has driven greater adoption of server virtualization over the last several years is certainly true, but it's almost the reverse of the main point I'm trying to make in this column, because that is a case of a WAN technology enabling computing technology change. [The NEW architecture will do the same thing for cloud-based computing, but that's a topic we've touched on before and will return to in future columns.]
Don't confuse server virtualization with "network virtualization," which refers to one or both of two things: techniques to divide a LAN on a campus or in a data center into multiple virtual networks, and the ability to easily support multiple virtual networks on a single compute device (which is likely running multiple virtual machines). All such references to "network virtualization" are in one way or another talking about how to most easily deal with multiple logical LANs (usually L2 LANs, though sometimes L3 LANs) in a data center or campus, often to deal with IP addressing issues.
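To show what "multiple logical L2 LANs" means in practice, here's a minimal sketch, assuming a Linux host and the pyroute2 library, that carves two 802.1Q VLANs out of one physical interface; the interface name, VLAN IDs and subnets are all hypothetical.

```python
# Sketch: create two 802.1Q VLAN sub-interfaces on one physical NIC,
# i.e., two logical L2 LANs sharing one wire. Interface name, VLAN IDs
# and subnets are hypothetical; requires root and the pyroute2 package.
from pyroute2 import IPRoute

ipr = IPRoute()
idx = ipr.link_lookup(ifname="eth0")[0]  # the physical parent interface

for vlan_id in (10, 20):
    ipr.link("add",
             ifname=f"eth0.{vlan_id}",  # conventional sub-interface name
             kind="vlan",
             link=idx,
             vlan_id=vlan_id)
    # each VLAN gets its own subnet, i.e., its own logical LAN
    new_idx = ipr.link_lookup(ifname=f"eth0.{vlan_id}")[0]
    ipr.addr("add", index=new_idx,
             address=f"10.{vlan_id}.0.1", prefixlen=24)
    ipr.link("set", index=new_idx, state="up")

ipr.close()
```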
It's also important not to confuse server virtualization with desktop virtualization - something first made popular by Citrix, and originally known as "Citrix," terminal services, or remote desktop services. Desktop virtualization has a key role in many environments, including enterprise WAN environments, but it is logically separate from server virtualization, especially in terms of how it is used. While there are many variants, and several different reasons for doing desktop virtualization and deploying the associated virtual desktop infrastructure (VDI), it is essentially about running a user ("desktop") application on a remote computer (a server). Desktop virtualization was first designed simply to allow remote users to run applications which weren't written to run over the WAN at all; it's now used to save both capital and operating costs as well. In the WAN environment in particular, though, it's most often used to deliver acceptable performance over a high-latency WAN for applications - especially client-server applications - which were originally written and designed for high-bandwidth, low-loss LAN environments. Now, it's true that there are significant technology overlaps between server virtualization and desktop virtualization - witness Citrix's acquisition of XenSource, and the subsequent renaming of Citrix product lines - but the use cases are fairly different.
For some applications, WAN Optimization can't really deliver acceptable performance over a WAN, and so desktop virtualization is used essentially as an application overlay: the application runs remotely (back at the data center), which presents the screen to the user while mouse clicks and keystrokes are sent back to the server. And while desktop virtualization is not a part of the NEW architecture per se, WAN Virtualization and the NEW architecture in fact deliver superior VDI performance to what MPLS and customer premises-based WAN Optimization appliances can; we'll come back to the topic of VDI in a future column.
Back again to server virtualization. Besides enabling the small colo footprint for enterprise computing applications, server virtualization makes it possible to run a number of networking-related applications on a single computing appliance or server. While this could include heavily CPU- and timing-intensive networking "applications" like WAN Optimization or WAN Virtualization, more typically it would be at most one of these high-intensity uses together with low-CPU and/or less time-sensitive apps like DNS, DHCP or even local file service. Looking forward to the NEW architecture, such a capability gets more appealing as newer management technologies enable centralized management of the services (e.g. distributed/replicated file services) and reliable, continuous access to the distributed servers (e.g. via WAN Virtualization) is assured. No one wants to support large numbers of servers in a branch any more, but including one or two to greatly improve the user experience can have great appeal, provided the services can in fact be centrally managed, backed up, etc.
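As a rough illustration of the "centrally managed" half of that bargain, here is a minimal sketch of a central site polling such branch services for reachability over TCP; the hostnames, ports and plain-socket approach are all hypothetical choices for illustration, not a description of any vendor's management tooling.

```python
# Sketch: a central site polling virtualized branch services for
# reachability. Hostnames and port choices are hypothetical.
import socket

BRANCH_SERVICES = [
    ("branch1.example.com", 53),   # DNS
    ("branch1.example.com", 445),  # SMB local file service
    ("branch2.example.com", 53),
]

def is_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host, port in BRANCH_SERVICES:
    status = "up" if is_reachable(host, port) else "DOWN"
    print(f"{host}:{port} {status}")
```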
Now, this server virtualization capability is related to the recent introduction of "virtual appliances" for networking functionality. A "virtual appliance" means that rather than purchasing an integrated hardware/software system for a networking component that forwards packets, you instead purchase only the software from your networking vendor and run it on industry standard hardware (or perhaps another networking vendor's hardware).
It's important to understand that these are not the same thing, and it's critical to decide what problem you are trying to solve when looking at deploying a networking capability as a "virtual appliance." There are two specific pluses to using "virtual appliance" capabilities from vendors that offer them. First, it avoids the sometimes time-consuming difficulty of shipping a specialized piece of hardware to countries (e.g. India, China, some South American countries) where importing hardware can be problematic. Second, it can be beneficial for an enterprise running something like WAN Optimization in the cloud to work with an appliance (virtual or not) that they own and manage at their own locations; for many clouds, deploying your own hardware is simply not an option. [Of course, if you obtain your WAN Optimization as a service, you can avoid this issue completely, but that's yet another topic for a future column...]
It's equally important, however, to recognize the downsides of "virtual appliances." Deployment can be difficult, and getting everything set up just right takes work. For high-intensity applications, there is also the problem of delivering sufficient performance. If the virtual appliance is for WAN Optimization and the link is just a T1/E1, there might not be a problem. And I'm certainly not suggesting that it's impossible to run high-performance network packet forwarding applications on industry standard hardware - often it is quite possible. But "often" does not mean "always," and getting it right is frequently not easy even when it is possible.
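Some quick back-of-the-envelope arithmetic shows why: at standard T1/E1 rates, the worst-case packet-forwarding load is tiny compared to what a faster link demands. The 64-byte packet size below is the usual worst-case benchmark assumption; the link rates are the standard figures.

```python
# Back-of-the-envelope: worst-case packets-per-second a forwarding
# appliance must handle at various link rates, assuming 64-byte
# packets (a common worst-case benchmark figure).
PACKET_BITS = 64 * 8  # 64-byte packet

LINKS = {
    "T1": 1.544e6,       # bits/sec
    "E1": 2.048e6,
    "100 Mbps": 100e6,
    "1 Gbps": 1e9,
}

for name, rate_bps in LINKS.items():
    pps = rate_bps / PACKET_BITS
    print(f"{name:>8}: {pps:>12,.0f} packets/sec")

# T1:     ~3,016 pps  -- easy for a VM on generic hardware
# 1 Gbps: ~1,953,125 pps -- much harder without careful engineering
```

Roughly 3,000 packets per second is trivial for almost any software on almost any server; nearly 2 million packets per second is a very different engineering problem.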
Server virtualization is a critical component of the NEW architecture because it enables enterprises to take advantage of the networking bandwidth cost and application performance benefits of deploying enterprise applications at a colocation facility. It also offers a secondary benefit for customers looking for the best possible user experience (and so the best performance across a wide range of applications): the ability to run a set of networking services at a branch with a small physical and management footprint. Just don't confuse that with thinking that "virtual appliances are the answer going forward"; while virtual appliances do and will have their place, they are unlikely to be a panacea or to replace dedicated networking equipment in most locations of most Enterprise WANs, because implementing and deploying data path forwarding devices cost effectively and reliably often calls for quite different designs than doing the same for computing hardware.
A twenty-five year data networking veteran, Andy founded Talari Networks, a pioneer in WAN Virtualization technology, and served as its first CEO, and is now leading product management at Aryaka Networks. Andy is the author of an upcoming book on Next-generation Enterprise WANs.