Introducing cloud service redundancy starts with managing application workload: how do you direct work across multiple cloud service providers?

Resilient application architectures have evolved dramatically over the years. In the age of monolithic applications, with static application deployments in large datacenter setups, resiliency required depth and redundancy in individual deployments: always-on scale sized to the maximum expected workload, along with redundant connectivity and power.

Within a monolithic application environment, individual components – like servers – were expected to fail, so organizations built deployments with component-level redundancy. For example, they ran multiple database servers in a primary/secondary configuration, or multiple application servers in an active/active configuration.

Over time, as resiliency demands increased, disaster recovery (DR) setups became more prevalent: full application infrastructure deployments on standby, with data replicated regularly or continuously. In the event of a major failure, the IT team would flip the “big switch” to shift the workload to the DR deployment.

The emergence of cloud and IaaS has dramatically changed the way we think about application resiliency. Thin provisioning and auto-scaling make it possible to deploy new resources rapidly as conditions change and workloads shift. Spinning up secondary and tertiary DR environments is easy. There are now technologies that enable active/active setups, such as multi-master database replication systems and global load balancing technologies like those provided by modern DNS and traffic management services.

Today, we’re seeing a new shift in the way resilient applications are built, driven by the emerging criticality of cloud services in application stacks.
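The classic primary/secondary pattern described above reduces to a simple rule: send work to the primary while it passes health checks, and “flip the switch” to the standby when it doesn’t. A minimal sketch, with hypothetical endpoint names and a stubbed-out health map standing in for real probes:

```python
# Primary/secondary failover selection, as in a classic DR setup.
# Endpoint names are hypothetical; `health` stands in for the result
# of real TCP/HTTP health probes.

PRIMARY = "db-primary.example.com"
SECONDARY = "db-secondary.example.com"

def pick_endpoint(health):
    """Return the first healthy endpoint in priority order."""
    for endpoint in (PRIMARY, SECONDARY):
        if health.get(endpoint, False):
            return endpoint
    raise RuntimeError("no healthy endpoint available")

# Normal operation: all traffic goes to the primary.
print(pick_endpoint({PRIMARY: True, SECONDARY: True}))   # db-primary.example.com
# Primary outage: work shifts to the standby.
print(pick_endpoint({PRIMARY: False, SECONDARY: True}))  # db-secondary.example.com
```

In a real deployment the health map would be fed by continuous monitoring, and the “switch” would be automated rather than flipped by hand — which is exactly the role modern DNS failover takes on, as discussed below.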
Cloud services include Software-as-a-Service (SaaS)-style technologies like cloud storage, Database-as-a-Service (DBaaS), Artificial Intelligence-as-a-Service (AIaaS), content delivery networks (CDNs) and managed DNS networks. Increasingly, today’s applications are built on architectures where cloud services are critical-path components.

What happens when a key cloud service in your stack fails? The 2017 AWS S3 outage provides a real-world example: it took down many major websites and applications that depended on S3’s cloud storage service.

Just as redundancy was introduced in the days of monolithic application architectures, today’s cloud service-enabled applications demand redundancy at the cloud service layers of the stack:

- Primary/secondary cloud storage providers
- Multi-cloud
- Multi-CDN
- Multiple DNS networks

Cloud service redundancy is critical to building resilient architectures for today’s applications, where SaaS technologies and cloud services are critical components that nevertheless can and do fail.

Introducing cloud service redundancy starts with managing application workload: how do you direct work across multiple cloud service providers? DNS is one of the most powerful tools in the stack for managing workload. You can leverage the traffic management tools of modern DNS providers to weight traffic across cloud services, shift workload in response to real-time conditions, and fail away from broken cloud service providers.

Of course, it’s also critical to introduce DNS redundancy itself, to mitigate the impact of major service provider outages due to attacks or other issues. Some modern DNS providers can help you easily introduce DNS redundancy by deploying multiple DNS networks.

Thinking about how to improve the resiliency of your application architectures?
Think about your cloud service providers and how to introduce redundancy at the cloud services layer, and talk with your DNS and traffic management provider about how to manage multi-cloud, multi-CDN, and other redundant cloud service setups.