Why companies are building application-specific edge delivery networks

A growing number of companies are building their own edge delivery networks, and the payoff is substantial: serving applications from these networks gives their users greater resilience and better performance.


There’s a trend emerging among many internet-based companies that I find intriguing: they are building their own edge delivery networks. Why? So they can serve their applications from these networks, giving their users greater resilience and better performance.

Rather than the standard, garden-variety content delivery networks (CDNs), these edge delivery networks are tailored specifically for the applications they’ve been built to service. In some cases, this means the edge networks leverage highly specific connectivity to regional internet service providers or between application facilities; in other cases, it means placing specialized hardware tuned to specific needs of the application in delivery facilities around the world. And most importantly, these networks are operating application-specific software and configurations that are customized beyond what’s possible in general-purpose, shared networks.

Building and operating global edge networks entails a measure of cost and complexity. So, why are organizations increasingly taking on this task?

1. Top speed achieved

For about 10 years now, end users have been getting progressively more used to high-performance applications. Google, Facebook and other Internet giants have invested in their global infrastructure, and they work hard to ensure that the experience of a user in Singapore is just as snappy as that of a user in San Francisco.

CDNs have been enabling improved global delivery of certain kinds of content for years now. They continue to evolve to address new, more dynamic use cases, but most applications still fundamentally depend upon executing code with respect to a dataset to compute a response to application requests. And unless the code and dataset are proximal to the user, the laws of physics limit the application’s performance.

Until we figure out how to beat the speed of light, achieving blazing-fast performance means deploying your code and dataset close to your users, wherever they may be.
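The speed-of-light constraint is easy to quantify. The sketch below is a back-of-the-envelope calculation, assuming light in fiber travels at roughly two-thirds its vacuum speed; the distances are illustrative.

```python
# Back-of-the-envelope latency math: the physical floor on round-trip time
# over fiber, ignoring routing detours, queuing, and processing delays.

SPEED_OF_LIGHT_KM_S = 299_792   # speed of light in a vacuum, km/s
FIBER_FACTOR = 2 / 3            # light in fiber travels ~2/3 as fast

def min_rtt_ms(distance_km: float) -> float:
    """Best-case round-trip time in milliseconds over the given distance."""
    one_way_s = distance_km / (SPEED_OF_LIGHT_KM_S * FIBER_FACTOR)
    return one_way_s * 2 * 1000

# San Francisco to Singapore is roughly 13,600 km great-circle distance.
print(round(min_rtt_ms(13_600)))   # -> 136 ms, before any real-world overhead
print(round(min_rtt_ms(50), 1))    # -> 0.5 ms from a nearby edge facility
```

No amount of server tuning can recover that 100-plus milliseconds; only moving the code and data closer to the user can.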

2. Technology has made edge networks easier to build

Historically, it has been an exercise in complexity and heterogeneity to deploy infrastructure around the world. Beyond just deployment, solving the associated operational problems of a global delivery network, like infrastructure management and application administration, not to mention data replication and consistency, has been incredibly challenging. What’s changed?

The most significant change is that public cloud and other infrastructure providers have gone global. AWS, for example, provides the same compute infrastructure around the world, and other cloud vendors have similar, even wider, coverage. For applications with more specific needs, co-location providers like Equinix, which have expanded to a global presence, and large transcontinental backbone providers with dense worldwide coverage enable companies to deploy normalized edge delivery facilities in many markets. These days, it’s possible to avoid interacting with an army of local vendors with varying product capabilities.

3. Complexity management with automated tooling

The approaches for managing widely dispersed and highly dynamic application deployments have evolved rapidly as distributed infrastructure has become more accessible. In the late 2000s, configuration management technologies became prevalent and the modern DevOps movement emerged, driven in no small part by increasingly complex infrastructure deployments. Configuration management has since matured into a broader infrastructure automation ecosystem, full of powerful tools for managing global systems spanning tens of data centers and thousands of servers.

Much more than just the latest shiny object, infrastructure as code is a necessity for managing global edge delivery networks. Thanks to the maturity of the tools available today, and their coverage of major cloud service providers like AWS, tying an edge delivery network together into a cohesive system manageable by a relatively small team is possible—more so than ever before.
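The core idea of infrastructure as code can be sketched in a few lines: desired state is declared as data, and a reconciler computes the changes needed to converge each region. This is a minimal illustration, not any particular tool's API; the region names and node counts are hypothetical.

```python
# Minimal sketch of infrastructure as code: declare desired state per region,
# then compute a plan of actions to converge the actual deployment toward it.
# Region names and edge-node counts are invented for illustration.

desired = {
    "us-west-2":      {"edge_nodes": 4},
    "eu-central-1":   {"edge_nodes": 4},
    "ap-southeast-1": {"edge_nodes": 6},
}

actual = {
    "us-west-2":      {"edge_nodes": 4},   # already converged
    "eu-central-1":   {"edge_nodes": 2},   # needs scaling up
}                                          # ap-southeast-1 not yet deployed

def plan(desired, actual):
    """Return the actions needed to make 'actual' match 'desired'."""
    actions = []
    for region, spec in desired.items():
        have = actual.get(region, {}).get("edge_nodes", 0)
        want = spec["edge_nodes"]
        if have != want:
            actions.append((region, "scale", have, want))
    return actions

for region, op, have, want in plan(desired, actual):
    print(f"{region}: {op} edge_nodes {have} -> {want}")
```

Real tools in this space add dependency ordering, state storage, and provider plugins, but the declare-diff-apply loop above is the pattern that lets a small team manage tens of data centers.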

4. The secret sauce: traffic management technology

There are still enormous challenges when building global delivery networks for an application. The internet is complex, and managing connectivity and systems that span the globe, even with increasingly sophisticated automation technology and easy-to-use cloud services, is no small task.

Traffic management is one of the thorniest issues for any edge delivery network. The fundamental question of any distributed delivery network is, “Which delivery infrastructure should I send this user to right now?” Solving traffic management effectively requires selecting a service endpoint in the edge network to optimize performance, using information about what’s happening in real time in the application infrastructure and on the broader internet. Luckily, these days a number of tools—generally DNS-based—exist for optimizing global traffic management.
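The decision at the heart of traffic management can be sketched simply: given real-time health, load, and latency measurements per edge site, pick the best endpoint for a user. The site names, thresholds, and numbers below are invented for illustration; a DNS-based global traffic manager would return the chosen site's address in response to the user's resolver query.

```python
# Hypothetical sketch of the core traffic-management decision:
# choose the lowest-latency edge site that is healthy and not overloaded.
# Site names and measurements are invented for illustration.

sites = [
    {"name": "sin-edge", "healthy": True,  "rtt_ms": 12, "load": 0.85},
    {"name": "hkg-edge", "healthy": True,  "rtt_ms": 34, "load": 0.40},
    {"name": "nrt-edge", "healthy": False, "rtt_ms": 8,  "load": 0.10},
]

def pick_endpoint(sites, max_load=0.9):
    """Select the best site for a user from real-time measurements.

    Filters out unhealthy or overloaded sites, then picks the one
    with the lowest measured round-trip time to the user.
    """
    candidates = [s for s in sites if s["healthy"] and s["load"] < max_load]
    if not candidates:
        raise RuntimeError("no usable edge site")
    return min(candidates, key=lambda s: s["rtt_ms"])

print(pick_endpoint(sites)["name"])   # -> sin-edge: healthy and closest by RTT
```

Note that nrt-edge has the lowest latency but is excluded as unhealthy; the hard part in production is keeping the health and latency data fresh and per-user-network accurate, not the selection itself.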

Speedy delivery

Now that the right tools are available, the move toward building edge delivery networks tailored to a company’s applications will only accelerate. With the technology for automating the deployment and management of globally distributed infrastructure and traffic in place, companies can more easily deploy their applications around the world at the speed their users demand.

This article is published as part of the IDG Contributor Network.
