Revamping the cloud for real-time applications

The increasing use of real-time applications is creating problems for the cloud.

The cloud! The cloud! It’s hard to have any kind of conversation with a business or IT executive without the cloud coming up. Drive down 101 between San Francisco and San Jose and there are cloud billboards galore. There are cloud ads in airports, in city centers, all over TV, and just about anywhere else you look. And why not? The cloud solves all application problems, correct?

Well, not quite. While the cloud does have an outstanding value proposition and is a better application strategy than packaged applications for mobile workers, it isn’t the right model for real-time applications, such as video. The main problem is that the data centers where cloud resources are located are generally too far away from the people they serve. When building these mega data centers, cloud providers don’t really consider how near or far they are from users; the primary concern is proximity to cheap power or land. The latency of moving packets across the country and back can significantly hamper the performance of real-time applications.
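
To put rough numbers on the latency problem, here is a minimal back-of-the-envelope sketch in Python. The fiber-route distances and the 200,000 km/s propagation speed are assumptions for illustration; real-world delay adds routing hops, queuing, and the last mile on top of pure propagation.

```python
# Back-of-the-envelope round-trip propagation delay over fiber. This ignores
# queuing, routing hops, and last-mile delay, all of which add more latency.
# Light in fiber travels at roughly 200,000 km/s (about two-thirds of c).

FIBER_KM_PER_MS = 200.0  # ~200 km of fiber per millisecond, one way

def rtt_ms(distance_km: float) -> float:
    """Best-case round-trip time in milliseconds for a given fiber-route distance."""
    return 2 * distance_km / FIBER_KM_PER_MS

# Hypothetical fiber-route distances; real routes are longer than straight lines.
print(f"SF to NYC (~4,700 km of fiber): {rtt_ms(4700):.0f} ms RTT")   # ~47 ms
print(f"LA to Phoenix (~600 km of fiber): {rtt_ms(600):.0f} ms RTT")  # ~6 ms
```

Tens of milliseconds may sound small, but for interactive voice and video it comes straight out of a tight delay budget, and it is a floor that no amount of server horsepower can buy back.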

Looking ahead, the problem is only going to get worse. Cisco’s Visual Networking Index (VNI) shows that traffic will reach a little over 37,000 PB in 2014 and will skyrocket to over 89,000 PB by 2018. In addition to video, voice is starting to chew up more bandwidth through voice recognition applications like Siri and Cortana, which require fast, bi-directional transport of packets. Now toss in real-time wearable technology, such as fitness trackers, heart monitors, and smartwatches, and it’s easy to see there are big challenges coming for the cloud. The fact is that real-time traffic does not work well with the traditional cloud model; a different model is required.
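
Those two VNI data points imply a steep curve. A quick calculation of the implied compound annual growth rate:

```python
# Implied compound annual growth rate from the Cisco VNI figures cited above.
traffic_2014_pb = 37_000  # petabytes
traffic_2018_pb = 89_000  # petabytes
years = 2018 - 2014

growth_factor = traffic_2018_pb / traffic_2014_pb  # ~2.41x overall
cagr = growth_factor ** (1 / years) - 1            # ~24.5% per year

print(f"Total growth: {growth_factor:.2f}x over {years} years")
print(f"Implied CAGR: {cagr:.1%}")
```

In other words, traffic more than doubling in four years, with no sign of flattening.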

For newcomers to online video streaming, such as ESPN, Major League Baseball, or the broadcast networks, this poses a substantial risk. Online content is as big a part of the news and sports markets as the actual broadcasts, and providers stand to lose eyeballs if performance is erratic.

An interesting alternative to the traditional cloud model is to have a number of local, edge data centers located across the country to move content closer to users. This is particularly true for all the cities that aren’t the top-five markets (New York, DC, San Jose, LA, and Chicago). These next-tier cities have huge population bases but typically don’t have any cloud data centers located anywhere near them.

There’s a relatively new company called EdgeConneX that has built the nation’s first nationwide “edge” infrastructure footprint. The company has edge facilities in about 20 of these “next-tier” cities, such as Miami, Seattle, and Phoenix, and plans to reach 60 by the end of 2016. EdgeConneX sells primarily to three types of companies: content providers, media companies, and fiber providers. The service can save those organizations millions of dollars.
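
The core idea is straightforward: route each user to the closest facility. Here’s a minimal sketch of that selection logic in Python; the site list, coordinates, and nearest-distance rule are illustrative assumptions, not EdgeConneX’s actual footprint or routing method.

```python
# Pick the edge site with the shortest great-circle distance to the user.
from math import radians, sin, cos, asin, sqrt

# Illustrative sites only (lat, lon); not an actual deployment map.
EDGE_SITES = {
    "Miami":   (25.76, -80.19),
    "Seattle": (47.61, -122.33),
    "Phoenix": (33.45, -112.07),
}

def haversine_km(a, b):
    """Great-circle distance in km between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))  # 6371 km = mean Earth radius

def nearest_site(user_latlon):
    return min(EDGE_SITES, key=lambda s: haversine_km(user_latlon, EDGE_SITES[s]))

print(nearest_site((33.54, -112.47)))  # a user near Glendale, AZ -> "Phoenix"
```

In production this decision is typically made by DNS or anycast routing rather than explicit distance math, but the effect is the same: content is served from down the road instead of across the country.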

To dig a little deeper into the problem, let’s take Phoenix as an example. Before EdgeConneX, the Phoenix metro area was served out of LA. While this isn’t a tremendous distance, consider that videos are pulled across it millions of times per year, driving up bandwidth costs for content providers. By leveraging the EdgeConneX service, one customer saved over $150 million over five years.
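
To make the economics concrete, here is a hypothetical cost model. None of these inputs come from the article; they are placeholder assumptions chosen to land in the ballpark of the savings cited, and the point is the shape of the calculation, not the specific figures.

```python
# Hypothetical model: savings from serving a metro locally instead of
# hauling every video pull over long-distance transit. All inputs are
# placeholder assumptions, not figures from EdgeConneX or the article.

video_pulls_per_year = 200_000_000  # assumed video requests from the metro
gb_per_pull = 2.5                   # assumed average video size, GB
long_haul_cost_per_gb = 0.07        # assumed $/GB serving from a distant site
edge_cost_per_gb = 0.01             # assumed $/GB serving from a local edge
years = 5

data_gb = video_pulls_per_year * gb_per_pull * years
savings = data_gb * (long_haul_cost_per_gb - edge_cost_per_gb)
print(f"Data served: {data_gb / 1e9:.1f} billion GB over {years} years")
print(f"Modeled savings: ${savings / 1e6:.0f} million")
```

Whatever the exact per-gigabyte rates, the structure is the same: savings scale with the volume of pulls and the gap between long-haul and local delivery costs, and both of those are growing.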

For network and content providers, the cost of transport and peering is already high and will continue to grow as HD and 4K content streams become the norm (see the rough per-stream numbers below). The traditional cloud model may be fine for non-real-time applications, but a change is needed for real-time traffic. The shift to localized hosting at the edge of the network can save service providers money and reduce the delays caused by network latency, creating a “win-win” for providers and their customers.
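
To see why HD and 4K raise the stakes, consider the data volume of a single stream. The bitrates below are ballpark assumptions; actual rates vary by codec and service.

```python
# Rough data volume per hour of streaming at assumed bitrates.
BITRATES_MBPS = {"HD (1080p)": 5.0, "4K (2160p)": 16.0}

for name, mbps in BITRATES_MBPS.items():
    gb_per_hour = mbps * 3600 / 8 / 1000  # megabits/s -> gigabytes/hour
    print(f"{name}: {gb_per_hour:.1f} GB/hour")
# HD (1080p): 2.2 GB/hour
# 4K (2160p): 7.2 GB/hour
```

Under these assumptions, each step up in resolution roughly triples the bytes crossing the network, which is why where those bytes originate matters so much.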

Copyright © 2014 IDG Communications, Inc.