An evolutionary WAN path to cloud computing with WAN Virtualization

NEW architecture and colocation enable an at-your-own-pace migration to public or hybrid cloud services

Previously, we covered the network benefits of WAN Virtualization, and then how WAN Virtualization delivers benefits “beyond the WAN” in areas like security, backup, file access, DR/business continuity and private cloud computing. In this third and final piece in the set, we'll look at how WAN Virtualization, combined with the strategic use of colocation facilities as part of the Next-generation Enterprise WAN (NEW) architecture, enables an at-your-own-pace migration to public cloud computing services, and even to building so-called hybrid clouds.

Note that this discussion is not about the data center LAN architecture of cloud computing, but rather about how to provide secure, reliable and predictable access to (private or public) cloud computing services for users at all of your enterprise locations.


Let’s recall why we need anything other than plain-old-Internet access for cloud computing. The answer is the same as the reason enterprises deploy MPLS WANs today: reliability and application performance predictability (“small q” quality of service, that is). If the Internet were reliable enough, there would be no large MPLS market today; rather, all enterprise WANs would run as VPNs over public Internet connections. But because the Internet, unaided – by, say, WAN Virtualization and the NEW architecture – only works pretty well most of the time, MPLS is a $15B+ annual business, despite the high cost of MPLS bandwidth.

Now, why does anyone think that if the public Internet hasn’t been good enough all along for the enterprise WAN, it will somehow be reliable and predictable enough for enterprise use of public cloud computing services for mission-critical applications?

The answer, of course, is that it isn’t. And yet putting an MPLS connection at every location where you will access a public cloud service would not only be unacceptably expensive, it would also be very difficult to manage.

Enter the NEW architecture, with WAN Virtualization and colocation in starring roles.

The basics for this are as follows:

1. Using WAN Virtualization, make one or more colo facilities a part of your current enterprise WAN. WAN Virtualization’s ability to utilize and aggregate a multi-path WAN fabric, reacting in real time not just to network failures but to packet loss, latency and jitter, is the key to delivering application performance predictability over otherwise best-effort Internet connections (a simplified sketch of this path selection follows the list below). The colo facility delivers diverse, very inexpensive data center bandwidth; proximity to public cloud services at that facility; and, of course, a connection into the core of the Internet.

2. If you use WAN Optimization today, deploy it at the colo as well, retaining all of its benefits for enterprise data and application access.

3. Backhaul all Internet traffic through the colo facility (or facilities), where, of course, you have your favorite security devices installed.
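To make the path-selection idea in step 1 concrete, here is a minimal Python sketch of choosing among multiple WAN paths based on continuously measured loss, latency and jitter. The class, scoring weights and sample numbers are illustrative assumptions, not any vendor’s actual algorithm:

```python
# Minimal sketch of path selection across a multi-path WAN fabric.
# The class, weights and sample numbers are illustrative assumptions,
# not any vendor's actual algorithm.

from dataclasses import dataclass

@dataclass
class PathStats:
    name: str          # e.g. "mpls", "cable", "dsl"
    loss_pct: float    # recent packet loss, in percent
    latency_ms: float  # recent latency, in milliseconds
    jitter_ms: float   # recent latency variation, in milliseconds
    up: bool           # basic liveness, from continuous probing

def path_score(p: PathStats) -> float:
    """Lower is better; the weights are arbitrary illustrative choices."""
    return p.latency_ms + 4 * p.jitter_ms + 50 * p.loss_pct

def pick_path(paths: list[PathStats]) -> PathStats:
    """Pick the best currently usable path. Re-running this as each new
    measurement arrives is what lets traffic shift away from a degrading
    link in sub-second time, rather than only on hard failure."""
    live = [p for p in paths if p.up]
    if not live:
        raise RuntimeError("no usable WAN paths")
    return min(live, key=path_score)

paths = [
    PathStats("mpls",  loss_pct=0.0, latency_ms=35, jitter_ms=2,  up=True),
    PathStats("cable", loss_pct=0.5, latency_ms=25, jitter_ms=8,  up=True),
    PathStats("dsl",   loss_pct=3.0, latency_ms=45, jitter_ms=20, up=True),
]
print(pick_path(paths).name)  # -> "mpls" with these sample measurements
```

The point of the sketch is that selection is driven by measured quality, not just up/down status, which is what turns a bundle of best-effort links into a predictable fabric.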

This gives you inexpensive, scalable bandwidth and, more importantly, means you get the same network security and the same (in fact, usually better) reliability and QoS that you have with your private WAN today.

Thanks to server virtualization, the footprint for deploying applications at the colo can be fairly small. So with this NEW architecture, a colo is the perfect place to do a private cloud deployment, even if you intend to own and manage all of the equipment and services that make up the private cloud.

Once you have built a private cloud at the colo facility, it becomes much simpler to start leveraging public services offered at that location by a variety of providers. Using simple in-building Gigabit Ethernet cross-connect(s), you could, say, start to leverage Hardware-as-a-Service or Storage-as-a-Service – the combination of which is known as Infrastructure-as-a-Service (IaaS) – while maintaining complete software and management control over your computing environment.

If you want to take advantage of an email security service, say, from a provider at the same colo facility, this too is fairly straightforward to accomplish without hurting performance, and while still maintaining control of your network security. The same is true of more generalized Software-as-a-Service if it is offered from that colo facility. Taking advantage of Unified Communications-as-a-Service, or any hosted communications service, is likewise much more straightforward and less risky when the service attaches to your private network at LAN speeds within the same building, and then rides the same QoS and reliability mechanisms across the WAN that you have for the rest of your applications, thanks to WAN Virtualization.

Best practices for building hybrid clouds – combining a private cloud implementation with public cloud services – would be worth a whole column of their own. But clearly, near the top of the list is solving the performance problem of moving large virtual machine instances and their data sets across limited bandwidth and, especially, meaningful distances (and, therefore, latency). Building your hybrid cloud and doing “cloudbursting” is much easier once you have a private cloud at the same colo facility.
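To put rough numbers on the bandwidth half of that problem, here is a back-of-the-envelope calculation; the payload size, link speeds and 70% efficiency factor are illustrative assumptions:

```python
# Back-of-the-envelope transfer times for moving a VM image plus its data set.
# The payload size, link speeds and 70% efficiency factor are assumptions.

def transfer_hours(size_gb: float, link_mbps: float, efficiency: float = 0.7) -> float:
    """Hours to move size_gb over link_mbps at ~70% effective throughput
    (protocol overhead, plus TCP's limits over high-latency paths)."""
    bits = size_gb * 8 * 1000**3
    return bits / (link_mbps * 1e6 * efficiency) / 3600

payload_gb = 500  # an assumed VM image plus working data set
for label, mbps in [("T1 (1.5 Mbps)", 1.5),
                    ("typical WAN link (50 Mbps)", 50),
                    ("in-building GigE cross-connect (1,000 Mbps)", 1000)]:
    print(f"{label}: {transfer_hours(payload_gb, mbps):,.1f} hours")
# -> roughly 1,058 hours, 32 hours and 1.6 hours respectively: bursting
#    across a WAN is impractical; within the same colo it is routine.
```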

[Data center experts know that if you have a private data center in a metro area with access to fiber at your building, then DWDM or point-to-point links connecting you to a colo are an alternative to WAN Virtualization. And indeed, for disaster recovery/business continuity this can be an excellent approach. But it applies to only a few enterprises, and helps only their data center connectivity, without solving the general problem of cost-effective WAN access for all enterprise locations. Further, to support hybrid clouds, even the few milliseconds of latency between data centers can have an enormous impact on data center application performance – problems that can be avoided by being in the same colocation facility as your “burstable” computing capacity.]
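The latency half of the problem is easy to underestimate, because chatty protocols pay the round-trip cost once per operation. A quick calculation, with the distance and operation count as illustrative assumptions:

```python
# Why "a few milliseconds" between data centers matters: chatty protocols
# (synchronous storage replication, database commits) pay the round-trip
# cost once per operation. Distance and operation count are assumptions.

FIBER_US_PER_KM = 5.0  # ~5 microseconds per km of propagation in fiber

def added_seconds(distance_km: float, sequential_ops: int) -> float:
    rtt_seconds = 2 * distance_km * FIBER_US_PER_KM / 1e6
    return rtt_seconds * sequential_ops

print(added_seconds(160, 10_000))  # ~100 miles away: ~16 seconds added
print(added_seconds(0.1, 10_000))  # in-building cross-connect: ~0.01 seconds
```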

Finally, for public cloud services that are not actually based at the same colo facility – for example, Salesforce.com hosted at another facility 100 miles away – complete “four nines” network reliability and performance predictability cannot be guaranteed. But using the NEW architecture to route all Internet access through a colo facility, which is almost by definition attached to the core of the Internet, can get you the next best thing – “3½ nines,” if you will. The problems with Internet performance are rarely in the Internet core, but rather on first-mile/last-mile links or at peering points. WAN Virtualization eliminates the first-mile/last-mile issue, and being at a well-connected core location drastically reduces the chance that peering-point congestion will meaningfully hurt application performance for any extended period of time.
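For context on the “nines” shorthand, allowable annual downtime is straightforward arithmetic; the figures below are just the standard availability definitions, not measurements of any particular service:

```python
# What the "nines" shorthand means in allowable downtime per year.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

for availability, label in [(0.999,  "three nines"),
                            (0.9995, '"3 1/2 nines"'),
                            (0.9999, "four nines")]:
    downtime_min = (1 - availability) * MINUTES_PER_YEAR
    print(f"{label} ({availability:.2%}): ~{downtime_min:,.0f} minutes/year")
# -> four nines allows ~53 minutes/year of downtime; "3 1/2 nines" ~263.
```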

So, whether you are building private clouds or hybrid clouds of your own, using the NEW architecture to backhaul all Internet traffic through well-connected colo facilities will give you the best possible reliability and performance predictability for general public Software-as-a-Service, at a fraction of the cost of the alternatives.

WAN Virtualization, colocation and the NEW architecture are the answer to how WAN managers can maintain the network reliability and application performance predictability that users (and CIOs!) expect from today’s MPLS WANs, while keeping control over the WAN and gaining an evolutionary path to public cloud computing – incrementally, and without security or WAN performance compromises.

A leading expert in WAN/LAN switching and routing, Andy founded Talari Networks, a pioneer in WAN Virtualization technology, and served as its first CEO. Andy is the author of an upcoming book on Next-generation Enterprise WANs.
