Wired published an interesting article earlier this year – "The End of Web, Search and Computers as we know it" – in which David Gelernter predicts the addition of a time dimension to all information we consume on the internet. His future is made of "lifestreams," which continuously pull and aggregate evolving data from different sources into something we consume in an adapted reading device or software. So, for example, instead of looking up the opening hours of your local brewery on their website, you would subscribe to its stream of information (including opening hours) and use a "stream-browser" to find the desired data. An obvious "real-life" example is the Facebook timeline, which aggregates information from those we follow into one always-updating stream of data – a metaphor that David predicts we will adopt for more or less all information we consume statically today.

Although the article gets quite a bashing in the comments (an interesting read in itself), I think it aligns pretty well with a general shift we are seeing from static to streaming information consumption – a shift that is also becoming more and more evident in the world of APIs, where related technologies like WebSockets, WebHooks, WebRTC, and streaming JSON and XML are becoming increasingly prevalent.

So, in the general spirit of software quality, what does this mean for testing and test professionals out there? Apart from the obvious need to learn some new acronyms and technologies (yay!), what dangers are lurking in the innards of real-time information applications (boo!)?

Let's have a look at three areas that aren't really new, but that will challenge more and more testers as this shift continues.

The first challenge is that of testing an asynchronous system efficiently.
For example, testing that a certain input in a web search interface gives you the expected result in your browser is far easier than sending an email and waiting for a reply (although it perhaps doesn't sound like it from that description). Apart from the "disconnect" between the two events (sending and receiving messages), you have to ask some questions: How long should you wait? How can you correlate a response to a request? (You could be getting other messages at the same time.) What if you expect an unknown number of repeated emails as a response? For example, your email might kick off some process (printing of images, perhaps?), which will notify you of its progress continuously and send a final message when it finishes. Given all the things that can happen between the initial action A (sending the email) and result B (getting some kind of final response), the intelligence of a real tester is often going to be required. Automated tests will have a hard time handling all of the out-of-bounds things that could occur during test execution. Perhaps automated tests could be used for testing individual (synchronous) parts of this transaction – but automating the entire process would probably require you to create either a "clean" testing environment (to avoid disturbance from auxiliary events) or an "intelligent" test script that can actually handle "natural disturbances" during the test execution.

Then there is the challenge of rewritten history (yes, you read that correctly). Here's the thing: what if information exposed asynchronously changes back in time? For example, say you are a journalist "tuning in" to a stream of financial company data, and this data now gets revised historically because the IRS is performing an audit. What if you created an application that makes decisions based on the content of this stream? How would you handle these changes?
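As a thought experiment, one way a stream consumer could cope with revised history is to treat a revision as a new event at the "top" of the stream that points back at the entry it amends, and to keep a materialized view that applies it retroactively. Here is a minimal sketch of that idea – the event shape and the `revises` field are hypothetical, not taken from any particular streaming API:

```python
def apply_event(view, event):
    """Apply one stream event to a materialized view (a dict keyed by event ID).

    If the event carries a 'revises' pointer, it overwrites the historical
    entry it amends, even though it arrived at the top of the stream.
    """
    key = event.get("revises") or event["id"]
    view[key] = event["data"]
    return view


view = {}
apply_event(view, {"id": "q1-revenue", "data": 100})
apply_event(view, {"id": "q2-revenue", "data": 120})

# A later audit revises the Q1 figure: it is published as a new event,
# but applied back onto the historical entry.
apply_event(view, {"id": "audit-1", "revises": "q1-revenue", "data": 90})
print(view["q1-revenue"])  # the view now reflects the revised figure: 90
```

A test for such a consumer would then feed it revision events (including out-of-order ones) and assert that the view converges on the revised values – which is exactly the kind of scenario a purely request/response-oriented test suite never exercises.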
And how would you test for these changes? (Should you even care about them?) From the information publisher's side of things, the initial challenge is perhaps deciding how to publish these changes – should they be added as events at the "top" of the stream (although they describe something that happened back in time), or should they be "inserted" into the stream historically? How would that be propagated and handled?

Finally, let's consider a related challenge at the code level. More and more applications make use of the multi-core nature of their hosting devices, which allows them to split their work into multiple "threads" running at the same time. How do you test that your application makes correct use of this possibility? That it handles related error conditions? What if one core "lags behind" because the device hogs it for something else? How do you simulate these scenarios? Should you even care? Or should you simply trust that the language/runtime/OS you are using takes care of these complexities for you?

Since I have the privilege of just blurting things out from my stream of consciousness, the examples I gave for each of these three areas may sound contrived – but they are actually modeled after real-life scenarios that already challenge QA pros out there, and as such they need to be considered with care so we don't end up with buggier software and unhappy users.

But hey, we're just at the beginning of this, and as long as we keep talking about these challenges, I know we can come up with valid strategies to tame them. Have you tried already? Awesome – please share your thoughts below!

As always, thanks for your time!