As products become more complex, testing becomes more important. We strive to tally the pluses and minuses to come up with a bottom line on a product. But it doesn't often work that way anymore. The bottom line on testing today is that there isn't a bottom line.

The bottom-line approach, taken in a typical product review or bake-off, assumes all the various attributes of a product can be probed, graded and tallied into a single number (or small set of numbers). This gives us the testing version of the accountant's bottom line and becomes the basis for publication awards and the like.

Only when the products being compared are explicitly targeted at the same market and implement the same architecture and technology does this approach have a chance of working. And such situations are rare. The higher up the stack you go, the less viable this monolithic approach becomes.

It's one thing to establish a set of mandatory criteria for evaluating a group of Layer 2 switches, where the core criteria are mandated by standards and there's simply no room for massive implementation differences. It is quite another for a tester to decide that certain features are mandatory for products that are inherently complex and often implement fundamentally different architectures and philosophies, such as intrusion-protection systems (IPSs).

The more complex the system, the more bias becomes a factor. There's no way around the fact that, when evaluating a product in toto, a gold standard needs to be established against which the product and its features are rated. Creating that description of the perfect product is unavoidably an exercise in building in a bias.

In many cases, creating the gold standard requires the tester to choose among various philosophies. These choices might determine the winner even before the first box is unpacked.
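The mechanics of that problem are easy to see in miniature. The bottom-line tally is just a weighted sum of feature grades, and the weights are the tester's "gold standard." A minimal sketch, using entirely invented products, features and numbers, shows how two reasonable weighting philosophies can flip the winner before any testing insight enters the picture:

```python
def bottom_line(grades, weights):
    """Tally weighted feature grades into a single 'bottom line' score."""
    return sum(grades[feature] * weights[feature] for feature in weights)

# Hypothetical feature grades (0-10) for two imaginary IPS products.
product_a = {"throughput": 9, "antivirus": 2, "management": 6}
product_b = {"throughput": 6, "antivirus": 9, "management": 7}

# Tester 1's gold standard: antivirus has no place in an IPS.
weights_no_av = {"throughput": 0.7, "antivirus": 0.0, "management": 0.3}
# Tester 2's gold standard: antivirus is a core IPS feature.
weights_with_av = {"throughput": 0.4, "antivirus": 0.4, "management": 0.2}

print(bottom_line(product_a, weights_no_av))    # 8.1 -- Product A "wins"
print(bottom_line(product_b, weights_no_av))    # 6.3
print(bottom_line(product_a, weights_with_av))  # 5.6
print(bottom_line(product_b, weights_with_av))  # 7.4 -- Product B "wins"
```

Identical grades, identical products; only the weights changed. The ranking was decided by the tester's philosophy, not by anything measured in the lab.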
A tester who has decided that antivirus functionality has no place in an IPS will award no "points" for it, all but relegating such products to the "loser" category before testing even begins.

There are some fundamental problems here. Who gives the tester the right to declare what the perfect product should consist of? Even granting that the tester is a subject-matter expert doesn't convey that privilege.

I'll go on record to say what should be obvious to all: There is no product that is perfect for all users, needs and environments. Period.

It is all about context. Which is better: a wire-speed Gigabit firewall with little advanced functionality, or one that offers only Fast Ethernet speeds but has integrated antivirus and traffic shaping? The right answer is: It depends. If I'm building a service-provider core, the former likely would be better; if I'm trying to manage my company's T-1 WAN connection, the latter would be.

What's the solution? Any test of complete products needs to be tightly coupled to an appropriate context. At a minimum, the results need to be analyzed with reference to a specific deployment context.

Better yet, testers should break the products down into their most basic elements. This lets true apples-to-apples comparisons be made between products that don't necessarily implement identical feature sets - which is more and more the case today.

The tester's role should not be to make decisions for end users but rather to provide end users the reliable data they need to blend with their own requirements and choose the product that's right for them.