The new science of software testing

A look at some of the many ways to approach software testing.

By Lorinda Brandon, Director of Solutions Strategy at SmartBear Software

I didn't make my career as a tester, but it is something I’ve done and returned to over the course of my professional life. I’ve dabbled in many roles in software – tester, product manager, project manager, program manager, customer service director, and just plain management overhead. But testing has always been a draw for me and something I often still dabble in after-hours on personal projects.

I’ll be honest – I stumbled into software testing. I started testing software back in the 1980s while working as a civilian for the Air Force. On that project, we didn’t have a bug-tracking system or any kind of testing methodology. As the sole tester in an otherwise computer-illiterate office, with the development team housed in a separate building, I went by gut and jotted down any issues I found in a notebook. I would then either call the developers or walk over to see them, and we would talk through my notes. Sometimes they would ‘ghost’ my terminal and watch me recreate the problem. It sounds both primitive and modern at the same time, doesn’t it?

The reality is that self-taught testers are the norm. While there are certainly students of the craft (and where there are students, of course there are teachers), many testers today learned by sitting down at a desk and going by instinct. Sure, there is usually a mentor or a manager who can walk you through the job expectations – creating test plans, logging a defect, regression testing – but the industry itself is not crawling with devotees of the science of testing. That’s not to say those people don’t exist – they do, and they are a force to be reckoned with.

So when did software testing begin to be organized and standardized? It’s hard to say, although some people, like Matt Heusser, have tried to document the history of testing in some fashion. Perhaps the concepts that have survived the longest come from Joseph Juran’s trilogy of the 1950s – quality planning, quality control, and quality improvement – although the approach to them is ever-evolving.

In reality, not much has changed about the overall goals of testing since I started out in the discipline almost 30 years ago. The concepts are the same:

  1. Understand what is being built
  2. Plan all the flavors and paths of tests that need to be done
  3. Document the test, your expectations, and your observations about the result
  4. Communicate your findings to the developers
  5. Re-test bug fixes
  6. Perform regression tests against functionality that might also have been affected
  7. Lather, rinse, repeat

Whether you accomplish those tasks using automation, manual testing, or exploratory testing… well, conceptually, it’s all the same. How you accomplish them is mechanics, and the mechanics change over time depending on the systems, team, and project you have available.
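
To make that concrete, here is what steps 3, 5, and 6 might look like when automated. This is only a minimal sketch in Python; the discount function and the bug number are invented for illustration:

```python
# A hypothetical regression suite for an invented bug (#1234):
# a 150% discount once produced a negative price.

def apply_discount(price: float, percent: float) -> float:
    """Apply a percentage discount; clamp the percentage to 0-100."""
    percent = max(0.0, min(percent, 100.0))
    return round(price * (1 - percent / 100.0), 2)

def test_bug_1234_discount_over_100_percent():
    # Step 5: re-test the bug fix.
    # Expectation: the price is clamped at zero, never negative.
    assert apply_discount(80.00, 150.0) == 0.00

def test_normal_discounts_unaffected():
    # Step 6: regression-test functionality the fix might have touched.
    assert apply_discount(80.00, 25.0) == 60.00
    assert apply_discount(80.00, 0.0) == 80.00

if __name__ == "__main__":
    test_bug_1234_discount_over_100_percent()
    test_normal_discounts_unaffected()
    print("all checks passed")
```

The test names, comments, and assertions do the documenting (step 3); the tooling is beside the point, since a notebook and a ‘ghosted’ terminal serve the same concepts.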

I’m more interested these days in the science/philosophy behind those activities. How do you plumb the depths of the product you are testing? How do you define your plans and communicate them effectively, both to yourself and to the team you are supporting? What are the best techniques for accomplishing any of these, and how do you know which to employ?

Luckily for today’s testers, there are scientists/philosophers who are devoting their minds and time to exploring all of these concepts and how we can accomplish them better. In writing this, I tried to decide between the terms “science” and “philosophy,” but I found that I couldn’t – much of the current testing conversation falls somewhere in between. If I had to draw a line, I would draw it at test design vs. test strategy – there is science in the design, and there’s philosophy in the strategy. As a self-taught veteran of the craft, I am most intrigued by the strategic discussions. For those of you not following along with today’s testing pundits, here are some nuggets to get you thinking:

Context-Driven Testing: This school of thought eschews the idea of ‘best practices’ because there is no way to identify a practice that will work across all projects at all times. To identify the right success criteria and the right testing methodology, you need to understand the context of the testing in light of the project and its end goals.

Bias in Testing: There are several different types of bias that can alter the effectiveness of software testing, and it’s important to understand them in order to develop strategies to circumvent them. There is good biological sense in being wired for bias, but letting that wiring color our test behavior (and hence the results) can introduce inaccuracies that should be avoided. Recognizing that we all approach our day-to-day activities with some form of bias means we can proactively counteract those biases.

Oracles and Heuristics: How do you know if your test passed or failed? What is it about the test result that signaled one or the other to you, and what happened between the start of the test and the result that can influence that decision? The concept of oracles and heuristics is decidedly complex and nuanced, and probably not something anyone can explain in a short paragraph. At least I can’t. Because it is a subtle but important concept, it is still being debated in software testing circles.
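
That said, the core idea can at least be sketched: an oracle is whatever tells you a result is right or wrong, and heuristics are cheaper, fallible checks that flag likely problems. A minimal illustration in Python, where my_sort is a hypothetical stand-in for whatever you are actually testing:

```python
import random
from collections import Counter

def my_sort(items):
    # Hypothetical function under test; imagine a hand-rolled sort here.
    return sorted(items)

def check_against_oracles(items):
    result = my_sort(items)
    # Reference oracle: a trusted implementation supplies the expected answer.
    assert result == sorted(items)
    # Heuristic checks: fallible, but useful even with no trusted reference.
    assert len(result) == len(items)                        # nothing lost or gained
    assert Counter(result) == Counter(items)                # same elements, same counts
    assert all(a <= b for a, b in zip(result, result[1:]))  # nondecreasing order

if __name__ == "__main__":
    for _ in range(100):
        size = random.randint(0, 20)
        check_against_oracles([random.randint(-50, 50) for _ in range(size)])
    print("no oracle flagged a problem")
```

Note that every heuristic here can pass while the result is still wrong in some way nobody thought to check – which is part of why the topic stays contentious.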

Galumphing: This is my personal favorite and is a new testing term coined by James Marcus Bach to describe a technique in which the tester inserts variations that rationally shouldn’t affect the outcome of the test. I include it in the list because it speaks to an approach to software testing that many people have historically discounted because it can’t be described easily in test plans or in a defect ("why were you randomly clicking fields before hitting OK?" "uh, I don’t know – I was galumphing").
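
For illustration only, here is what galumphing might look like in an automated check. The toy cart class is invented for this sketch, not taken from Bach’s materials:

```python
import random

class Cart:
    # Toy stand-in for the real system under test.
    def __init__(self):
        self.items = {}
    def add(self, name, price):
        self.items[name] = price
    def remove(self, name):
        self.items.pop(name, None)
    def total(self):
        return sum(self.items.values())

def test_total_with_galumphing(seed):
    rng = random.Random(seed)
    cart = Cart()
    cart.add("book", 12.00)
    # Galumphing: detours that rationally shouldn't affect the outcome.
    for _ in range(rng.randint(0, 10)):
        cart.add("noise", rng.uniform(1, 99))  # toss in a throwaway item...
        cart.remove("noise")                   # ...and take it right back out
        cart.total()                           # peek at the total mid-stream
    cart.add("pen", 3.00)
    # The irrelevant variation must not change the result.
    assert cart.total() == 15.00

if __name__ == "__main__":
    for seed in range(50):
        test_total_with_galumphing(seed)
    print("the galumphing never changed the outcome")
```

If the noise ever did change the total, the detours would have exposed a bug that a straight-line test would never have encountered.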

Of course, all of these are broader topics than I’ve represented here and are worth diving into if you are a software tester or work with testers. It’s refreshing to see how deeply people think about the software testing discipline, although it also seems a natural outcome of something that has become so complex in recent years. Software testers often have to juggle multiple versions in multiple environments, on multiple browsers, and across many devices… in some ways, being asked to make sense of it all borders on impossible. After all, bugs are elusive – you don’t know where they are, how many there are, or even if they exist at all (well, maybe that part is a given). I guess that’s why so much of the conversation sounds more like philosophy than science.

---

About Lorinda Brandon, Director of Solutions Strategy at SmartBear

For more than 25 years, Lorinda Brandon has worked in various management roles in the high-tech industry, including customer service, quality assurance and engineering. She is currently Director of Solutions Strategy at SmartBear Software, a leading supplier of software quality tools. She has built and led numerous successful technical teams at various companies, including RR Donnelley, EMC, Kayak Software, Exit41 and Intuit, among others. She specializes in rejuvenating product management, quality assurance and engineering teams by re-organizing and expanding staff and refining processes used within organizations. She has a bachelor’s degree in art history from Arizona State University. Follow her on Twitter @lindybrandon.

Copyright © 2013 IDG Communications, Inc.
