
Data-driven resource management and the future of cloud

Opinion
Sep 13, 2018 | 4 mins
Cloud Computing | Data Center | Hybrid Cloud

One of the top advantages afforded by the cloud is the ability to auto-scale in response to demand — a feature that has transformed what was once capacity planning into a more continuous cycle of capacity and resource management.

Cloud adoption is undoubtedly the cornerstone of digital transformation, and for many, it is the foundation for rapid, scalable application development and delivery. Companies of all sizes across all industries are racing to achieve the many benefits afforded by public, private or hybrid cloud infrastructure. According to a recent study, 20 percent of enterprises plan to more than double public cloud spending in 2018, and 71 percent will grow public cloud spending by more than 20 percent.

Enterprises moving to the cloud are often seeking to improve employee collaboration, ensure redundancy, boost security and increase agility in application development. One of the top advantages afforded by the cloud is the ability to auto-scale in response to demand — a feature that has transformed what was once capacity planning into a more continuous cycle of capacity and resource management.

The impact of the cloud

When it comes to physical data centers, capacity planning primarily entails predicting, purchasing and installing the maximum number of servers an organization may need. As traffic patterns change over time, data centers can be expanded or consolidated, but those changes are slow and cumbersome. Capacity must be able to absorb brief spikes in traffic; otherwise performance and uptime suffer. Engineers must act, in effect, as “magicians,” using predictive models that map application needs and traffic drivers onto resource constraints to arrive at the right capacity. Planning tools can help, but the complexity of this exercise leads most teams to take a “wait until it breaks” approach to finding the upper limits.
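To make that exercise concrete, here is a minimal, hypothetical sketch of the kind of static estimate traditional capacity planning depends on. The traffic figures, per-server throughput and headroom are illustrative assumptions, not measurements from any real system.

```python
import math

# Hypothetical static sizing exercise for a fixed, physical server fleet.
# Every number here is an assumption chosen for illustration.

def servers_needed(peak_requests_per_sec: float,
                   requests_per_server: float,
                   headroom: float = 0.30,
                   spares: int = 2) -> int:
    """Size the fleet for the worst expected spike, plus headroom and spare machines."""
    base = peak_requests_per_sec / requests_per_server
    return math.ceil(base * (1 + headroom)) + spares

# Plan once for a predicted 50,000 req/s peak at 800 req/s per server,
# then live with the result until the next slow, costly expansion.
print(servers_needed(50_000, 800))  # -> 84
```

The estimate is made once and the hardware is bought up front, which is exactly why a brief, unplanned spike beyond the prediction is so painful in a physical data center.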

With cloud computing, a burst of traffic can be addressed far more easily: modern enterprises can quickly spin up new services and capacity to dramatically improve user experiences. This flexibility allows organizations to account for both expected spikes and unexpected conditions, such as the historically well-known “Slashdot” effect. Predictable or not, we can now build applications that respond to these events because of the automation and flexibility enabled by the cloud. The idea of “infinite” capacity and elastic infrastructure is appealing, and devops processes and tools like Terraform have dramatically shortened the time it takes to scale. At NS1, for example, our team has built what is essentially a push-button operation to stand up a new cloud deployment of our entire platform from scratch. This process, which previously would have taken weeks or months, has been reduced to a matter of minutes.
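As a rough illustration of that kind of push-button operation, the sketch below drives Terraform from Python. The module directory and variable file are hypothetical placeholders, and this is not NS1’s actual tooling, just one common way such automation is wired together.

```python
import subprocess

# Sketch of a "push-button" deployment: initialize and apply a Terraform
# configuration non-interactively. Paths and file names are placeholders.

def deploy_platform(workdir: str, var_file: str) -> None:
    """Stand up a full stack from scratch with two non-interactive Terraform runs."""
    subprocess.run(["terraform", "init", "-input=false"], cwd=workdir, check=True)
    subprocess.run(
        ["terraform", "apply", "-auto-approve", "-input=false", f"-var-file={var_file}"],
        cwd=workdir,
        check=True,
    )

if __name__ == "__main__":
    # Hypothetical layout: one directory per deployment, one tfvars file per region.
    deploy_platform("./platform", "new-region.tfvars")
```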

Many cloud providers offer tools, such as AWS CloudFormation, that automate the deployment of additional resources in a repeatable manner. Based on predetermined specifications, these tools provision and configure new stacks and resources from a template. But the process is tied to a single provider and driven by local, provider-specific conditions, which can limit its usefulness.
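For example, a template-driven scale-out with CloudFormation might look like the following sketch, which uses the boto3 SDK. The stack name, template URL and parameter values are placeholders chosen for illustration.

```python
import boto3

# Hedged sketch: provision a predefined stack from a template with CloudFormation.
cfn = boto3.client("cloudformation", region_name="us-east-1")

cfn.create_stack(
    StackName="web-tier-scale-out",
    TemplateURL="https://s3.amazonaws.com/example-bucket/web-tier.yaml",  # assumed template
    Parameters=[
        {"ParameterKey": "InstanceCount", "ParameterValue": "6"},
        {"ParameterKey": "InstanceType", "ParameterValue": "m5.large"},
    ],
)

# Block until every resource defined in the template has been created.
cfn.get_waiter("stack_create_complete").wait(StackName="web-tier-scale-out")
```

Because the template lives with one provider and describes only that provider’s resources, repeating the pattern across clouds or regions requires parallel, provider-specific plumbing.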

Data-driven resource management

Organizations are moving increasingly to the edge and leveraging hybrid and multi-cloud approaches, and their widely distributed, dynamic infrastructures are changing constantly. As a result, resource management has become a continuous effort that requires a global view of capacity and performance, not a myopic view of one region or cloud instance. Real-time analytics, measured against real application metrics, give IT teams the insight they need to deploy new infrastructure and manage resources in response to performance issues, application needs and unexpected traffic spikes, or simply to control costs across cloud providers.

That data can be used to balance loads between resources based on actual conditions, or to rapidly spin up new cloud instances in strategic geographic locations: to “follow the sun,” or to shift infrastructure around with demand so that processing happens at the edge. Advanced teams also draw on network performance data for additional insight, deploying extra cloud instances when and where internet conditions are chronically slow or unpredictable.
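A simplified sketch of what such a data-driven loop might look like is below. The metric source and the provisioning hook are hypothetical stand-ins for a real monitoring feed and for automation like the examples above, and the latency target is an arbitrary assumption.

```python
from typing import Dict

LATENCY_SLO_MS = 150          # assumed per-region performance target
REGIONS = ["us-east", "eu-west", "ap-southeast"]

def get_p95_latency_ms(region: str) -> float:
    """Placeholder: fetch real-time p95 latency for a region from monitoring."""
    raise NotImplementedError

def scale_region(region: str, extra_instances: int) -> None:
    """Placeholder: trigger provisioning of additional capacity in a region."""
    raise NotImplementedError

def rebalance() -> Dict[str, int]:
    """Add capacity wherever observed performance misses the target."""
    decisions = {}
    for region in REGIONS:
        latency = get_p95_latency_ms(region)
        if latency > LATENCY_SLO_MS:
            # Scale roughly in proportion to how far the region is over target.
            extra = max(1, int(latency / LATENCY_SLO_MS))
            scale_region(region, extra)
            decisions[region] = extra
    return decisions
```

The point is less the specific heuristic than the shape of the loop: decisions are driven by continuously measured conditions across every region and provider, not by a one-time capacity estimate.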

The future of cloud and resource management

As more enterprises embrace hybrid and multi-cloud approaches and infrastructure becomes increasingly distributed and dynamic, demand will grow for tools that provide a vendor-agnostic, global view of internal and external conditions. More organizations will begin correlating data from measurement tools that provide real-time visibility with automated decision-making tools programmed to redeploy resources in response to demand. Continuous resource planning will become an essential part of IT operations.

Contributor

Kris Beevers leads NS1’s team of industry experts as they create products that enable companies to use DNS to build and deliver dynamic, distributed, and automated applications that delight users. He is a recognized authority on DNS and global application delivery, and often speaks and writes about building and deploying high-performance, globally distributed internet infrastructure at scale.

Kris holds a PhD in Computer Science from RPI, and prior to founding and leading NS1, he built CDN, cloud, bare metal, and other infrastructure products at Voxel, which sold to Internap (NASDAQ:INAP) in 2011.

The opinions expressed in this blog are those of Kris Beevers and do not necessarily represent those of IDG Communications, Inc., its parent, subsidiary or affiliated companies.