Researchers often attempt to quantify the number of failed IT projects, usually reporting statistics that discuss failures as a percentage of the overall number of IT projects. These failure stats are primarily useful to the extent they illustrate that IT failure is a common and serious problem.
In a recent white paper, Roger Sessions defined a model that quantifies the dollar cost of IT failure worldwide. He concludes that global IT failure costs the world economy a staggering $6.2 trillion per year, or roughly $500 billion each month. Given these large numbers, it’s no surprise the white paper has received much attention from this blog and elsewhere.
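As a quick sanity check on the headline arithmetic (both figures are Sessions’ numbers as reported above, not independent estimates), dividing the annual total by twelve does land near the quoted monthly figure:

```python
# Sanity check: convert Sessions' $6.2 trillion/year estimate to a monthly figure.
annual_cost = 6.2e12            # worldwide annual IT failure cost, in dollars
monthly_cost = annual_cost / 12  # about $517 billion, i.e. roughly $500 billion

print(f"${monthly_cost / 1e9:,.0f} billion per month")
```

So the “$500 billion each month” line is the annual figure rounded down slightly.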
Reactions to the white paper are mixed, with both supporters and detractors lining up with their opinions. The debate even reached the popular techno-geek news site Slashdot, demonstrating that Roger’s conclusions hit a nerve.
In contrast to many of the opinions, IT failure consultant Bruce Webster wrote a serious “analytical critique” of the white paper and its calculations. In that piece, Bruce states:
Unfortunately, Sessions is fundamentally wrong in his numerical analysis, and his numbers are off by far more than “ten or twenty percent”. For the Federal Government alone, they are off by almost a full order of magnitude (10x)….
[M]y conclusion here is that his estimate of $500 billion/month in lost direct and indirect costs due to IT systems failure just does not hold up, in my opinion.
You can read the detailed arguments, so I won’t repeat them here. However, the critique generally states that Roger’s approach:
- Incorrectly interprets government-supplied data regarding IT failure rates and associated costs
- “Ignores or confuses” failure rate data regarding new projects relative to existing systems
- Wrongly extrapolates limited US data to the remainder of the world
- Makes unjustified assumptions regarding “direct and indirect costs,” which have a substantial impact on the conclusions
THE PROJECT FAILURES ANALYSIS
By attempting to quantify the dollar cost of IT failure, the white paper adds a new and useful dimension to the usual failure statistics. The associated critique, which catalogs possible misinterpretations of incomplete data, will help anyone interested in refining the approach described in the white paper.
I do fault the critique in one important area: it does not offer an alternative to Roger’s $6.2 trillion number. Perhaps the real number is “lots and lots smaller,” but we need greater accuracy. Granted, the source data is not complete, but re-working Roger’s original calculations based on different assumptions would be a worthwhile project.
My take. We still do not have accurate numbers on the annual worldwide cost of IT failure. Nonetheless, an imperfect estimate based on rough and incomplete data is better than nothing at all.
Still, I remain most interested in a different model for quantifying failure: understanding the real-world costs of IT failure inside individual organizations.
Information derived from that model would help companies better link IT investment choices to outcomes, utility, and waste (or failure). Organizations could use that information to help guide better IT purchase and deployment decisions.