"The coming meltdown of IT; the out of control proliferation of IT failure is a future reality from which no country – or enterprise - is immune. The same IT failures that are eroding profitability in the United States are impacting the economy in Australia. IT failures are rampant in the private sector, the public sector, and the not-for-profit sector. No place is safe. No industry is protected. No sector is immune. This is the danger, and it is real."
-- Roger Sessions, CTO, ObjectWatch
Have you had an IT project go astray? Maybe you were lucky and it was a brief hiccup with minimal financial consequences. Or maybe you had a disaster of biblical proportions, such as the one that befell Levi Strauss in 2008.
In 2003, Levi Strauss decided that its "old school" data processing systems needed a facelift and committed $5 million to implementing an ERP system based on SAP, with the help of business and IT consultancy Deloitte. Some five years later, largely due to poor requirements specifications, Levi Strauss took a $192.5 million charge against earnings. This was caused, in the main, by having to shut down all product distribution for a week while switching over to the new system. How on earth did that ever happen?
The Harvard Business Review commented in a 2011 article: "what happened at Levi Strauss occurs all too often, and on a much larger scale. IT projects are now so big, and they touch so many aspects of an organization, that they pose a singular new risk ... They have sunk whole corporations. Even cities and nations are in peril."
So, how big is the problem? Well, how much do you think IT project failures cost annually on a global basis? In his 2009 white paper, "The IT Complexity Crisis: Danger and Opportunity", Roger Sessions, CTO of ObjectWatch, a consultancy specializing in IT architecture and systems complexity, estimated the cost to be roughly $6.2 trillion per year with the U.S. accounting for some $1.2 trillion of that total.
Not everyone, however, agrees with Sessions' estimate, notably Bruce Webster, principal and founder of Webster Associates LLC, who analyzed Sessions' financial estimates. "Sessions is fundamentally wrong in his numerical analysis, and his numbers are off by far more than 'ten or twenty percent,'" Webster says. "His estimate of $500 billion/month in lost direct and indirect costs due to IT systems failure just does not hold up, in my opinion."
In his paper Sessions did admit that these were rough estimates. He noted: "I recommend you don’t get overly focused on the exact amounts. I could be off by ten or twenty percent in either direction. The real point is not the exact numbers, but the magnitude of the numbers and the fact that the numbers are getting worse."
Alas, Webster didn't offer an alternative figure for IT project failure costs, but in April last year Michael Krigsman, in his ZDNet column, asked two experts to rethink the problem, and they concluded the global loss was in the region of $3 trillion. Not $6 trillion, sure, but the scale of the problem is, to say the least, epic.
Zooming back down to the project level, the HBR article found: "When we broke down the [failed] projects’ cost overruns, what we found surprised us. The average overrun was 27% — but that figure masks a far more alarming one. Graphing the projects’ budget overruns reveals a 'fat tail' — a large number of gigantic overages. Fully one in six of the projects we studied was a black swan, with a cost overrun of 200%, on average, and a schedule overrun of almost 70%."
It seems that the primary causes of major IT project failures fall into two categories. The first is underestimating (or ignoring) the financial consequences of going over budget and/or not seeing an adequate return on investment.
HBR's advice: "Leaders should ask themselves two key questions as part of IT black swan management: First, is the company strong enough to absorb the hit if its biggest technology project goes over budget by 400% or more and if only 25% to 50% of the projected benefits are realized? Second, can the company take the hit if 15% of its midsized tech projects (not the ones that get all the executive attention but the secondary ones that are often overlooked) exceed cost estimates by 200%? These numbers may seem comfortably improbable, but, as our research shows, they apply with uncomfortable frequency."
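HBR's two questions amount to a simple stress test you can run against your own project portfolio. Here's a minimal sketch in Python of that worst-case arithmetic; the function name and all budget figures are hypothetical, chosen only to illustrate HBR's 400%, 200%, and 15% scenario numbers:

```python
def black_swan_exposure(flagship_budget, midsize_budgets):
    """Worst-case extra cost under HBR's scenarios: the flagship
    project overruns by 400%, and 15% of midsized projects (at
    least one) overrun by 200%. All inputs are hypothetical."""
    flagship_overrun = flagship_budget * 4.0            # 400% over budget
    at_risk = max(1, round(0.15 * len(midsize_budgets)))  # 15% of projects
    worst_midsize = sorted(midsize_budgets, reverse=True)[:at_risk]
    midsize_overrun = sum(b * 2.0 for b in worst_midsize)  # 200% over budget
    return flagship_overrun + midsize_overrun

# Hypothetical portfolio: a $50M flagship and ten $5M midsized projects.
print(black_swan_exposure(50_000_000, [5_000_000] * 10))
```

If a number like that would sink the balance sheet, the research suggests it isn't "comfortably improbable" at all.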
The second category of IT project failure is poor or inadequate strategic design and specification (which is where complexity rises up to derail major projects). For example, even when there's a good or even great design, a lack of strategic thinking can result in building the wrong product.
In a 2010 blog posting "The High Costs of Building the Wrong Product", Scott Sehlhorst, a strategy and product management consultant, discussed requirements bugs: "A developer can reasonably assert that 'if it meets the spec it is not a bug, it is working as designed.' What if the spec is wrong? The developer may not be guilty, but collectively, your team screwed up. There’s a 'bug' in the requirements."
Steve McConnell, CEO and Chief Software Engineer at Construx Software noted in a 1996 article: "Studies have found that reworking defective requirements, design, and code typically consumes 40 to 50 percent of the total cost of software development."
I can find no evidence that these consequences and their costs have decreased in the intervening years; rather, it appears we see endless examples of major projects going awry. Consider that just last year a report by the Government Accountability Office found the US Department of Defense had wasted $2.7 billion on a faulty ERP system. Quite obviously, large-scale development projects are as problematic as ever.
McConnell continued: "As a rule of thumb, every hour you spend on defect prevention will reduce your repair time [by] three to ten hours. In the worst case, reworking a software requirements problem once the software is in operation typically costs 50 to 200 times what it would take to rework the problem in the requirements stage ... It’s easy to understand why. A 1-sentence requirement can expand into 5 pages of design diagrams, then into 500 lines of code, 15 pages of user documentation, and a few dozen test cases. It’s cheaper to correct an error in that 1-sentence requirement at requirements time than it is after design, code, user documentation, and test cases have been written to it."
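McConnell's multipliers make for sobering back-of-the-envelope arithmetic. A minimal sketch in Python, assuming a hypothetical one-hour fix at requirements time and applying the 50x-to-200x in-operation range he cites:

```python
# Rough cost of deferring a requirements fix until the software is in
# operation, using McConnell's 50x-200x rule of thumb. The one-hour
# baseline is a hypothetical figure for illustration only.
baseline_hours = 1                       # fix at requirements time (assumed)
low_mult, high_mult = 50, 200            # McConnell's in-operation range

late_fix_low = baseline_hours * low_mult
late_fix_high = baseline_hours * high_mult
print(f"Fix in operation: {late_fix_low}-{late_fix_high} hours, "
      f"vs. {baseline_hours} hour at requirements time")
```

Multiply that across the dozens of requirements bugs a large project can harbor and the HBR overrun figures start to look less like bad luck and more like arithmetic.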
So, while much improvement can be achieved with various development methodologies, how can you minimize the problems that arise at the very beginning of a project, that is, in the phase where "requirements bugs" are introduced? Next week, my friends, we shall find out ...
Gibbs is specified as being in Ventura, Calif. Debug him at email@example.com and follow him on Twitter and App.net (@quistuipater) and on Facebook (quistuipater).