Software testing at the end of the world (almost)

Analysis
Apr 15, 2013 | 5 mins
Enterprise Applications | Open Source

How predictions about the future of information consumption could affect software testers.

Wired published an interesting article earlier this year – “The End of Web, Search and Computers as we know it” – in which David Gelernter predicts the addition of a time dimension to all information we consume on the internet. His future is made of “lifestreams,” which continuously pull and aggregate evolving data from different sources into something we consume in an adapted reading device or piece of software. So, for example, instead of looking up the opening hours of your local brewery on their website, you would subscribe to its stream of information (including opening hours) and use a “stream-browser” to find the desired data. An obvious real-life example is the Facebook timeline, which aggregates information from those we follow into one always-updating stream of data – a metaphor that Gelernter predicts we will adopt for more or less all information we consume statically today.

Although the article gets quite a bashing in the comments (an interesting read in itself), I think it aligns pretty well with a general shift we are seeing from static to streaming information consumption – a shift that is also becoming evident in the world of APIs, where related technologies like WebSockets, WebHooks, WebRTC, and streaming JSON and XML are increasingly prevalent.

So, in the general spirit of software quality, what does this mean for testing and test professionals out there? Aside from the obvious need to learn some new acronyms and technologies (yay!), what dangers are lurking in the innards of real-time information applications (boo!)?

Let’s have a look at three areas that aren’t really new, but that will challenge more and more testers as this shift continues.

  1. The first challenge is that of testing an asynchronous system efficiently. For example, testing that a certain input in a web search interface gives you the expected result in your browser is far easier than sending an email and waiting for a reply (although it perhaps doesn’t sound like it from that description). Apart from the “disconnect” between the two events (sending and receiving messages), you have to ask some questions: how long should you wait? How can you correlate a response to a request? (You could be getting other messages at the same time.) What if you expect an unknown number of emails in response? For example, your email might kick off some process (printing of images, perhaps?) that notifies you of its progress continuously and sends a final message when it finishes. Given all the things that can happen between the initial action A (sending the email) and result B (getting some kind of final response), the intelligence of a real tester is often going to be required; automated tests will have a hard time handling all of the out-of-bounds things that could occur during test execution. Perhaps automated tests could be used for testing individual (synchronous) parts of this transaction, but automating the entire process would probably require you to create either a “clean” testing environment (to avoid disturbance from auxiliary events) or an “intelligent” test script that can handle “natural disturbances” during the test execution (a minimal sketch of this correlation pattern follows this list).
  2. Then there is the challenge of rewritten history (yes, you read that correctly). Here’s the thing: what if information exposed asynchronously changes back in time? Say you are a journalist “tuning in” to a company’s financial data, and that data gets revised historically because the IRS is performing an audit. What if you created an application that makes decisions based on the content of this stream? How would you handle these changes? How would you test for them? (Should you even care about them?) From the publisher’s side, the initial challenge is deciding how to publish such changes: should they be added as events at the “top” of the stream (although they describe something that happened back in time), or should they be “inserted” into the stream historically? How would that be propagated and handled? (The second sketch after this list shows one way a consumer might cope.)
  3. Finally, let’s consider a related challenge at the code level. More and more applications make use of the multi-core nature of their hosting devices, which allows them to split their process into multiple “threads” running at the same time. How do you test that your application makes correct use of this possibility? That it handles related error conditions? What if one core “lags behind” because the device hogs it for something else? How do you simulate these scenarios? Should you even care, or should you simply trust that the language/runtime/OS you are using takes care of these complexities for you? (The third sketch after this list simulates exactly such a laggard.)
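
To make the correlation problem in point 1 concrete, here is a minimal Python sketch of a test helper that gathers the replies to one request until a final message arrives, while tolerating unrelated traffic. The `inbox` queue and the `correlation_id`/`status` message fields are assumptions for illustration, not any particular messaging API:

```python
import queue
import time

def collect_replies(inbox, correlation_id, timeout=30.0):
    """Gather all progress messages for one request until a 'final'
    message arrives, ignoring unrelated traffic on the same inbox."""
    deadline = time.monotonic() + timeout
    progress = []
    while True:
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            raise TimeoutError(
                f"no final reply for {correlation_id} within {timeout}s")
        try:
            msg = inbox.get(timeout=remaining)
        except queue.Empty:
            continue  # the loop re-checks the overall deadline
        if msg.get("correlation_id") != correlation_id:
            continue  # a "natural disturbance": skip it and keep waiting
        progress.append(msg)
        if msg.get("status") == "final":
            return progress
```

A test would generate a fresh ID (uuid.uuid4() works well), send the triggering email or message carrying it, and then call collect_replies; note that the single deadline covers the whole conversation rather than each individual message.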
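
For point 2, one defensive pattern on the consumer side is to separate when an event was published from the point in history it describes, so a late correction replaces the stale value it revises instead of being read as fresh data. The event shape below (series, effective_date, published_at) is hypothetical, a sketch of the idea rather than any standard:

```python
class RevisionAwareStore:
    """Keeps the newest published value for each point in history,
    so a correction published today for last quarter's figure
    overwrites the stale value rather than appending to 'now'."""

    def __init__(self):
        self.values = {}  # (series, effective_date) -> event

    def apply(self, event):
        key = (event["series"], event["effective_date"])
        current = self.values.get(key)
        # Later publications win; earlier ones are stale duplicates.
        if current is None or event["published_at"] > current["published_at"]:
            self.values[key] = event
```

A test can then feed the original figure followed by a revision that shares its effective_date but carries a newer published_at, and assert that the store reflects the correction: exactly the case a stream that only appends “at the top” forces you to handle.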
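
And for point 3, a lagging core is one of the easier scenarios to fake: inject an artificial delay into one worker and assert that the combined result is still correct. A rough Python sketch, where the worker and its delay parameter are invented for the example:

```python
import threading
import time

def worker(chunk, out, index, delay=0.0):
    time.sleep(delay)  # an injected delay simulates a core that lags behind
    out[index] = sum(chunk)

def test_sum_survives_lagging_thread():
    chunks = [list(range(1000)) for _ in range(4)]
    out = [None] * len(chunks)
    delays = [0, 0, 0.5, 0]  # slow the third worker down on purpose
    threads = [threading.Thread(target=worker, args=(c, out, i, d))
               for i, (c, d) in enumerate(zip(chunks, delays))]
    for t in threads:
        t.start()
    for t in threads:
        t.join()  # a forgotten join() here is exactly the kind of race to catch
    assert out == [sum(c) for c in chunks]
```

CPython’s GIL means these threads don’t truly run on separate cores, but the testing pattern (an injected delay plus an assertion after join) carries over directly to languages and runtimes where they do.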

Since I have the privilege of just blurting things out from my stream of consciousness, the examples I gave for each of these three areas may sound contrived, but they’re actually modeled on real-life scenarios that already challenge QA pros out there, and as such they need to be considered with care so we don’t end up with buggier software and unhappy users.

But hey, we’re just at the beginning of this, and as long as we talk about these challenges, I know we can come up with valid strategies to tame them. Have you tried already? Awesome – please share your thoughts below!

As always, thanks for your time!