What do corporate networks have to do with the emerging science of complexity theory? More than you would think.
In case you don't know, complexity theory is the study of complex adaptive systems - self-similar collections of interacting agents. In a complex adaptive system, many agents act in parallel, each constantly reacting to what the others are doing, under a highly distributed and decentralized control structure.
Starting to sound familiar? Companies increasingly are converging multiple applications onto a common network. They're installing highly responsive application acceleration and optimization software and hardware (known as agents) at the endpoints. And increasingly they're embedding in the applications themselves the ability to request and receive resources dynamically - in essence decentralizing and distributing the control structure.
A complex adaptive system's overall behavior is the result of a huge number of decisions made every moment by many individual agents (for more details, check out Complexity: The Emerging Science at the Edge of Order and Chaos by M. Mitchell Waldrop). Complex systems have some interesting features. They can display complex behavior resulting from a few simple rules. They spontaneously self-organize, changing from one state to a new and more sophisticated state.
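As a hedged illustration of that "few simple rules" point (the example is mine, not Waldrop's): Wolfram's elementary cellular automaton "Rule 30" updates each cell from nothing more than its own state and its two neighbors' - a single eight-entry lookup table - yet the pattern it produces is famously intricate.

```python
# Rule 30: a tiny lookup table (the bits of the number 30) decides each
# cell's next state from the 3-cell neighborhood around it. Despite the
# rule's simplicity, the resulting pattern is complex and irregular.
RULE = 30
WIDTH, STEPS = 31, 15

row = [0] * WIDTH
row[WIDTH // 2] = 1   # start from a single "on" cell in the middle

for _ in range(STEPS):
    print("".join("#" if c else "." for c in row))
    # Each neighborhood (left, self, right) forms a 3-bit index into RULE.
    row = [
        (RULE >> (row[(i - 1) % WIDTH] * 4 + row[i] * 2 + row[(i + 1) % WIDTH])) & 1
        for i in range(WIDTH)
    ]
```

Running it prints a triangle of chaotic structure growing out of one cell - complex behavior, simple rule.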
There's one other important characteristic of complex systems: They fail. You could even say the defining characteristic of a complex system is its propensity to experience rare but highly catastrophic failures that cannot be engineered out of the system. In a nutshell, the problem is that the complexity you add to protect against a set of known failures eventually pushes the system to a point at which other failures become inevitable. In other words, you can't fix a complex system by making it more complex.
This isn't good news for the IT folks who are chartered with ensuring the extreme availability of computing and network infrastructures. As I've noted previously, companies are decreasingly tolerant of outages - half of the high-end financial organizations I spoke with last year said the amount of downtime their companies tolerate is "zero." So, just as systems are getting more complex, failures are getting less and less tolerable - and this is not necessarily a fixable problem.
The classic example of a complex system is a pile of sand to which more sand is added (as in an hourglass). At first, the sand just piles up. Then small avalanches start down the sides. Once the pile gets large enough, a single added grain of sand can trigger a very large avalanche - and there's no way to predict which grain of sand will be the one that does it.
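The sandpile dynamic can be sketched in a few lines. Below is a minimal simulation in the spirit of the Bak-Tang-Wiesenfeld sandpile model (the grid size and threshold are illustrative choices, not anything from the column): grains drop one at a time, any cell holding four grains topples one grain onto each neighbor, and topplings can cascade.

```python
# Minimal sandpile sketch: drop grains at random; a cell with 4 or more
# grains topples, sending one grain to each of its 4 neighbors (grains at
# the edge fall off the pile). One grain can set off a cascade of
# unpredictable size.
import random

SIZE = 20          # the pile is a SIZE x SIZE grid
THRESHOLD = 4      # a cell topples when it holds this many grains

def drop_grain(grid):
    """Drop one grain at a random cell, relax the pile fully,
    and return the avalanche size (number of topplings)."""
    r, c = random.randrange(SIZE), random.randrange(SIZE)
    grid[r][c] += 1
    topplings = 0
    unstable = [(r, c)]
    while unstable:
        r, c = unstable.pop()
        if grid[r][c] < THRESHOLD:
            continue
        grid[r][c] -= THRESHOLD
        topplings += 1
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < SIZE and 0 <= nc < SIZE:
                grid[nr][nc] += 1
                if grid[nr][nc] >= THRESHOLD:
                    unstable.append((nr, nc))
    return topplings

random.seed(1)
grid = [[0] * SIZE for _ in range(SIZE)]
avalanches = [drop_grain(grid) for _ in range(20000)]
print("largest avalanche:", max(avalanches))
print("drops causing no avalanche:", avalanches.count(0))
```

What the simulation shows is exactly the column's point: most grains do nothing, but every so often one identical grain unleashes a huge cascade - and nothing about that grain distinguishes it in advance.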
What's an IT executive to do? It may sound simplistic, but the best precaution is to recognize that failures will occur - no matter how little they're wanted or tolerated. Have a business-continuance plan in place that can handle rare, catastrophic outages. And level-set expectations around extreme availability - attaining it may require repealing the laws of physics.