
A NEW architecture for enterprise Internet access

Benefits include lower costs, scalability and improved performance for both intranet and public cloud-based services/SaaS access, while maintaining centralized network security management.

Last time, we looked at a specific solution to the problem of the "trombone" effect in enterprise Internet access using the Next-generation Enterprise WAN (NEW) architecture. This time, we'll close out this run of columns by delving further into the benefits this NEW architecture approach delivers.

The NEW architecture avoids the trombone effect by combining two of its key elements: WAN Virtualization and colocation. Rather than using expensive MPLS to first "trombone" traffic to headquarters or a data center before it actually goes out over the Internet, it uses one or more high-bandwidth, inexpensive Internet connections to backhaul Internet traffic to a relatively nearby carrier-neutral colocation facility, which WAN Virtualization makes part of the enterprise WAN.
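To make the two paths concrete, here is a minimal sketch comparing round-trip latency for the trombone approach and the NEW approach. The millisecond figures are purely illustrative assumptions for a branch far from headquarters but near a colo facility; they are not measurements from the column.

```python
# Illustrative latency comparison of the two Internet-access paths.
# All RTT figures (ms) are hypothetical assumptions for one example branch.

trombone = {
    "branch -> HQ/data center over MPLS": 40,   # long backhaul to the central firewall
    "HQ -> Internet-based service": 30,         # then out to the Internet from HQ
}

new_arch = {
    "branch -> nearby colo (WAN Virtualization over Internet)": 10,  # short backhaul
    "colo -> Internet-based service": 15,       # colo sits near the Internet core
}

for name, hops in (("trombone", trombone), ("NEW architecture", new_arch)):
    print(f"{name}: {sum(hops.values())} ms round trip")
```

The point of the sketch is structural: the trombone path adds the full branch-to-headquarters leg to every Internet round trip, while the NEW path only adds the (usually much shorter) branch-to-colo leg.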

The advantages of this approach to enterprise Internet access versus prior approaches in general, and the "trombone" approach over MPLS in particular, are numerous.

First, the cost of bandwidth is substantially lower. At the typical smaller site in the U.S., Internet bandwidth costs between $1.50 and $15 per Mbps per month, compared to $275 to $600 per Mbps per month for MPLS. Meanwhile, the cost of high-bandwidth Internet connectivity at a colo facility is between $2 and $20 per Mbps per month, compared to Internet costs (over fiber) of between $60 and $200 per Mbps per month at a large enterprise location. These order-of-magnitude differences mean you can get between 10 and 100 times the Internet bandwidth for your money.

Because the cost is so low and Internet pipes typically offer higher bandwidth, you can easily afford much fatter pipes than when using MPLS for almost everything. This can be the answer to offering more bandwidth for demanding BYOD users, for the ever-more-frequent data synchronization that consumer-based cloud services perform, and for the inevitable onslaught of video (streamed, bulk-transferred and/or live videoconferencing) coming to your WAN. Since the enterprise WAN always needs more Internet bandwidth, this is also an approach that can scale cost-effectively for years.

This solution will in most cases deliver lower average latency for Internet access. More importantly, the WAN Virtualization-plus-colocation approach addresses the issue of providing predictable performance for SaaS/public cloud access. WAN Virtualization delivers predictable performance even over "pretty good" Internet connections to the colo facility. And these colo facilities are connected to the core of the Internet, ensuring fast, predictable access to services also connected to the Internet core. For especially critical cloud-based service access, you could even choose a colo site in the same facility as the service you want to use.

Better latency is nice to have for "generic" Internet access for your users, but low, predictable latency is potentially critical for enterprise use of public and hybrid cloud computing to be successful.

The other key benefit of this approach is that it completely maintains the control and centralized network security management offered by today's "trombone" approach.

In this NEW architecture, the colo is where you deploy your network security management technology, such as email/web security gateways, next-generation firewalls or threat protection systems. Because colo facilities are centralized and offer diverse, high-bandwidth connectivity, these systems scale far more easily than they would if deployed at every branch. An enterprise with locations all on the same continent could do this with as little as a single centralized colo facility. A global enterprise might use one to three locations in North America, one or two in Europe and one in Asia Pacific. At the other extreme, even a very large organization concerned with delivering the best possible latency would need no more than 10 to 12 locations worldwide (two to four per major continent, plus one or two others) to support a network with literally thousands of locations. When still lower-latency performance for certain locations is desired, adding another centralized colo site can be done with little difficulty.

For smaller enterprises, rather than building it yourself, an alternative is to use Network-as-a-Service to connect to the colocation facility where your Internet access is centralized, but that's a story to be expanded upon another day.

Higher performance, lower costs, better predictability and added scalability, all without sacrificing centralized security management, make the NEW architecture a no-brainer for IT managers looking for enterprise Internet access solutions to support CIO/CEO computing initiatives as we move further into the age of the cloud.

A twenty-five-year data networking veteran, Andy founded Talari Networks, a pioneer in WAN Virtualization technology, served as its first CEO, and is now vice president of product management at Aryaka Networks. Andy is the author of an upcoming book on Next-generation Enterprise WANs.
