Cloud testing: The next generation

One of the risks of deploying Internet-scale infrastructure and applications is that, until they are put to the test, you can't have 100% confidence that they will scale as expected. Applications and infrastructure that perform well - and correctly - at nominal scale may begin to act wonky as load increases.


Cloud computing and virtualization bring new challenges to testing the scalability of an application deployment. Applications deployed in a cloud environment may be designed to auto-scale "infinitely," which implies you need the same capability in a testing solution.

That's no small trick. Traditionally, organizations would leverage a load testing solution capable of generating enough clients and traffic to push an application and its infrastructure to the limits. But given increases in raw compute power and parallel improvements in capacity and performance of infrastructure solutions, the cost of a tool capable of generating the kind of Internet-scale load necessary is prohibitive.

An internal performance management engineer at F5 Networks applied some math and came up with a staggering $3 million investment required to test an Internet-scale application deployment. That's neither feasible nor economical for most organizations. So it seems that testing Internet-scale architectures is going to require Internet-scale solutions - but without the Internet-scale cost.

Cloud computing to the rescue

It's not a huge leap of logic to assume that the same operational model enabling Internet-scalability of applications could do the same - at a fraction of the cost - for Internet-scale testing solutions. All you need is a cloud-deployable load generation client, a couple of cloud computing environments and a way to control the distributed clients to generate the scale necessary to push an application and its infrastructure to the limits.
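The moving parts described above can be sketched in a few dozen lines. Below is a minimal, hypothetical load-generation client of the kind that could be packaged into a cloud image; the in-process test server exists only to keep the sketch self-contained, and a real deployment would point `generate_load` at the application under test and report its counters back to a central controller.

```python
# Minimal sketch of a cloud-deployable load-generation client (illustrative,
# not any vendor's actual tool). Each cloud instance would run one copy.
import threading
import urllib.request
from concurrent.futures import ThreadPoolExecutor
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

def generate_load(url: str, requests_total: int, concurrency: int) -> dict:
    """Issue `requests_total` GETs against `url` using `concurrency` workers."""
    ok = 0
    errors = 0
    lock = threading.Lock()

    def hit(_):
        nonlocal ok, errors
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                success = resp.status == 200
        except OSError:  # covers URLError, HTTPError, timeouts
            success = False
        with lock:
            if success:
                ok += 1
            else:
                errors += 1

    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        list(pool.map(hit, range(requests_total)))
    return {"ok": ok, "errors": errors}

# Stand-in target so the sketch runs anywhere; a real test targets the
# application under test, not a local dummy server.
class _Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):  # keep the sketch's output quiet
        pass

server = ThreadingHTTPServer(("127.0.0.1", 0), _Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
target = f"http://127.0.0.1:{server.server_port}/"
stats = generate_load(target, requests_total=50, concurrency=10)
print(stats)
server.shutdown()
```

The interesting part in practice is not this loop but the control plane: launching many copies of it across providers and collecting their counters, which is where the management challenge described below comes in.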

Experience says this is easier said than done. It's not the deployment that's a problem; it's the management of the distributed test clients. Distribution across multiple providers would prove a nearly insurmountable challenge for most solutions, let alone the organizations employing them. And you should distribute across providers, for several reasons:

1. Location of clients matters. Whether it's location-based application logic requiring testing or the reality that applications are not stateless and require client-server affinity, location matters. Combine a narrow range of IP addresses with that affinity, and scalability challenges are almost certain to appear.

2. Bandwidth. Depending on the cloud provider from which you launch such a test, you may find its network to be the bottleneck. Whether the constraint is internal or external (to the backbone), launching the necessary scale from a single provider could prove little more than that the provider has limited bandwidth or a less-than-optimal internal network.

3. Security alerts. A barrage of requests coming from a narrow range of IP addresses is likely to trip security mechanisms designed to detect such attacks. Generating load from multiple sites mitigates this problem; otherwise, you may need to temporarily dial down the sensitivity of your security infrastructure for the duration of the test.
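The distribution concerns above imply a controller that fans the same test out to workers in several providers and merges their per-site counters. A minimal sketch follows; the site names, the `dispatch` callable, and the result schema are all illustrative assumptions, with `dispatch` standing in for whatever transport (HTTP, SSH, a provider API) actually reaches a worker.

```python
# Hypothetical test controller: run the same load test from several sites
# in parallel and aggregate the results. Site names and the result schema
# are illustrative, not any real provider's or vendor's API.
from concurrent.futures import ThreadPoolExecutor

def run_distributed_test(sites, dispatch):
    """Call `dispatch(site)` for every site concurrently and merge counters."""
    with ThreadPoolExecutor(max_workers=len(sites)) as pool:
        results = list(pool.map(dispatch, sites))
    merged = {"ok": 0, "errors": 0}
    for result in results:
        merged["ok"] += result["ok"]
        merged["errors"] += result["errors"]
    return merged

# Stand-in dispatch: pretend each site ran 1,000 requests with 5 failures.
sites = ["aws-us-east", "azure-eu-west", "gce-asia"]
stats = run_distributed_test(sites, lambda site: {"ok": 995, "errors": 5})
print(stats)  # {'ok': 2985, 'errors': 15}
```

Spreading the sites across distinct providers, as the list above argues, also spreads the source IP ranges and the egress bandwidth, which is exactly what a single-provider test cannot do.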

What it comes down to is this: properly vetting the performance and scalability of Internet-facing applications and infrastructure, especially those deployed with elastic scalability such as cloud, requires testing tools with the same scalability.

There are a number of solutions available that offer (nearly) push-button Internet-scale testing of applications and infrastructure. One such solution comes from SOASTA, with its CloudTest Grid. Another is the eponymously named LoadImpact.


Not to be made irrelevant in a cloud-based world, the powerhouses of infrastructure scalability testing - Ixia and Spirent - both offer cloud- and virtualization-based solutions. While virtualization affords the opportunity to turn idle internal resources into load-generating clients, unless you are testing an internally accessed application, load should be generated externally to also gauge the scalability of public-facing infrastructure.

External, cloud-based solutions generally offer simple point-and-click, drag-and-drop interfaces that allow a performance engineer to quickly spin up an Internet-scale, distributed load test without a huge investment of capital and time. This means any organization can have at its fingertips - literally - the ability to launch a distributed load test against applications and infrastructure wherever they may be deployed. This capability is paramount to ensuring that auto-scaling solutions in public (and private) clouds are configured correctly and behave as expected under load, without incurring an expense that sounds more like you're buying a cloud provider than simply testing an application deployed in one.

In other words, there's no excuse for not testing an application and its infrastructure to ensure correctness of architecture, of implementation, and of configuration to meet demand when it arrives.
