What happens to quality assurance in the era of hyper development?

Analysis
Mar 04, 2013 | 5 mins
Enterprise Applications, Open Source

With rapidly increasing software release cycles, quality assurance practices need to change.

Release cycles are getting shorter – from months to weeks to days and even hours – allowing companies to be more competitive than ever (or at least to think they are). To a large extent, the adoption of both Agile and DevOps practices is making this possible. Agile is bridging the gap between business and development, and DevOps is enabling development to push new features into production continuously and reliably.

What implications does this have for QA? Is it possible to maintain an acceptable level of quality and a solid approach to testing when everyone is in such a hurry? What changes to quality-related practices are needed to make it work?

Lots of questions, and although I obviously don’t have any final answers, I did have a great discussion with Henrik Johansson (CTO for the API Testing products at SmartBear) about this. I think we came up with some good ideas (mostly his, I have to admit).

I’ve tried to categorize our ideas into two areas: organizational/process-related shifts, and implications that are more specific to testing and QA activities.

Let’s start with the organizational shifts. These are pretty much in line with the adoption of agile testing and an agile mindset in general.

  • Involve everyone, including QA, from the second a new idea is selected for implementation. If your QA professionals are to have any chance of devising a testing strategy for a new feature that is to be released tomorrow, they have to know about it now – both for the sake of their own planning and for planning the testing effort of the entire team. Should they use TDD or just create unit tests? What existing automated tests are in place that might help or that need to be updated? Is this a UI feature that needs usability testing? All this needs to be assessed proactively and immediately.
  • Sit together in cross-functional teams – no silos – with core competences in place. Dev sits next to QA sits next to UX sits next to Ops. Share at least one whiteboard for sketching out ideas and solving problems, and do over-the-shoulder reviews of code, tests and UI designs. Core competences are crucial; you don’t have people who are a little good at everything, you have your hardcore developers, your expert testers and your accomplished UX designers in place. Of course, you need to have marketers, copywriters, and sales there as well. They need to know about the new features so they can plan selling, documentation and promotions just like everyone else. For distributed teams this will become increasingly difficult as cycles get shorter; use collaboration tools as much as possible and try to maintain a “natural” dialog within the team.
  • Minimize cognitive overload. Avoid tools or processes that add administrative overhead. Do you really need traceability from commit messages to your issue tracker? Or to link bugs to test definitions and user stories? Or to link design documents to your source code? Even if this can often be automated, make sure these connections and routines actually add real value to your process going forward. Be dead honest and ask yourself whether they have solved any problems for you in the past. Do you really need them to deliver great increments to your products?

As you can see, this is pretty high-level stuff that involves the entire development organization. Let’s have a quick look at some specific testing and QA practices that we thought could facilitate the transition to shorter release cycles:

  • Automation as a safety net for core regressions. Don’t be afraid to automate – but don’t overdo it either. Since your system is evolving at a rapid pace, try to keep your automated tests at a relatively “low” level that shields them from breaking due to evolutionary changes in your solution (see the first sketch after this list). Run tests continuously so regressions are caught immediately. Never ignore test failures; either fix the system or fix the test.
  • More exploratory testing by QA professionals. Since release cycles are short and your system is constantly evolving, exploratory testing becomes extremely important for assessing both new and existing features. Skilled testers will assess not just functionality, but also usability, performance and even security – and since they have a high-level view of your system, they will know which parts of your application need to be re-tested for each new feature added, ensuring that system dependencies do not cause unexpected results. As an added bonus, the outcome of these exploratory tests will be both a general quality assessment and valuable input on which (new) parts of your solution might be candidates for automated tests.
  • Post-deployment monitoring. Shorter release cycles increase the risk of new bugs slipping through the cracks – for online or hosted solutions, set up extensive post-deployment monitoring of key functionality to find inconsistencies introduced by the latest update. Re-use test assets created for pre-deployment automation as much as possible – and use them for performance monitoring as well as functional monitoring (the second sketch after this list shows the idea).
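
To make the “low-level” automation point a bit more concrete, here is a minimal sketch of one way such a test might look, written in Python with pytest. The apply_discount function is a hypothetical stand-in for real core business logic; the point is that the test exercises that logic directly instead of driving the UI, so it can run on every commit and is far less likely to break as screens and workflows evolve.

```python
# A minimal sketch of "low-level" regression tests, meant to be wired into a
# CI server so they run on every commit. apply_discount is a hypothetical
# stand-in for a core business rule your product must never regress on.
import pytest


def apply_discount(total, percent):
    """Stand-in for real core business logic (hypothetical example)."""
    return max(total * (1 - percent / 100.0), 0.0)


def test_discount_is_applied_to_order_total():
    # Core behaviour that must keep working regardless of UI changes.
    assert apply_discount(total=100.0, percent=10) == pytest.approx(90.0)


def test_discount_never_produces_a_negative_total():
    assert apply_discount(total=10.0, percent=150) == pytest.approx(0.0)
```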
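
The post-deployment monitoring idea, in its simplest form, might look something like the sketch below: a scheduled check that reuses the same functional assertion as the pre-deployment suite against a key endpoint, and also treats a slow response as a failure. The endpoint URL and the expected payload are hypothetical placeholders.

```python
# A sketch of a post-deployment check: call a key endpoint, reuse the same
# functional assertion as the pre-deployment tests, and flag slow responses.
# The endpoint URL and expected payload are hypothetical placeholders.
import json
import sys
import time
import urllib.request

ENDPOINT = "https://example.com/api/orders/health"  # hypothetical key endpoint
MAX_LATENCY_SECONDS = 2.0


def check_orders_endpoint():
    start = time.monotonic()
    with urllib.request.urlopen(ENDPOINT, timeout=10) as response:
        status = response.status
        body = json.load(response)
    elapsed = time.monotonic() - start

    # Functional check: the same assertion used before deployment.
    assert status == 200
    assert body.get("status") == "ok"

    # Performance check: catch slowdowns introduced by the latest release.
    assert elapsed < MAX_LATENCY_SECONDS, f"slow response: {elapsed:.2f}s"


if __name__ == "__main__":
    try:
        check_orders_endpoint()
    except Exception as exc:
        # Hook this up to your alerting or monitoring system of choice.
        print(f"post-deployment check failed: {exc}")
        sys.exit(1)
    print("post-deployment check passed")
```

Run something like this on a schedule right after each deploy, and grow it by reusing the checks your pre-deployment suite already contains.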

Obviously, talk is cheap, and the hard part will always be to actually implement changes like these. Do you agree? Are you coming to grips with QA in a hyper-development environment with ever-decreasing release cycles? Please share your thoughts, gripes and insights below – we would love to hear them!