According to the World Information Technology and Services Alliance (WITSA), countries spend, on average, 6.4% of Gross Domestic Product (GDP) on Information and Communications Technology, with 43% of this spent on hardware, software, and services. This means that, on average, 6.4% X .43 = 2.75% of GDP is spent on hardware, software, and services. I will lump hardware, software, and services together under the banner of IT.
According to the 2009 U.S. Budget, 66% of all Federal IT dollars are invested in projects that are “at risk”. I assume this number is representative of the rest of the world.
A large number of these will eventually fail. I assume the failure rate of an “at risk” project is between 50% and 80%. For this analysis, I’ll take the average: 65%.
Every project failure incurs both direct costs (the cost of the IT investment itself) and indirect costs (the lost “opportunity” costs). I assume that the ratio of indirect to direct costs is between 5:1 and 10:1. For this analysis, I’ll take the average: 7.5:1.
To find the predicted cost of annual IT failure, we then multiply these numbers together: .0275 (fraction of GDP spent on IT) X .66 (fraction of IT dollars at risk) X .65 (failure rate of at-risk projects) X 7.5 (indirect-cost multiplier) = .0885. To predict the annual cost of IT failure for any country, multiply its GDP by .0885 (roughly 8.9%).
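For anyone who wants to check the arithmetic or plug in different assumptions, here is a minimal sketch in Python. It uses only the numbers stated above; the variable and function names are mine.

```python
# Assumptions taken from the discussion above; change them to test other scenarios.
it_share_of_gdp   = 0.064 * 0.43   # ICT share of GDP x hardware/software/services share
at_risk_fraction  = 0.66           # fraction of IT dollars invested in "at risk" projects
failure_rate      = 0.65           # midpoint of the assumed 50%-80% failure rate for at-risk projects
indirect_multiple = 7.5            # midpoint of the assumed 5:1 to 10:1 indirect-to-direct cost ratio

failure_factor = it_share_of_gdp * at_risk_fraction * failure_rate * indirect_multiple

def annual_cost_of_it_failure(gdp_billions):
    """Predicted annual cost of IT failure for an economy with the given GDP (billions of USD)."""
    return gdp_billions * failure_factor

print(f"failure factor = {failure_factor:.4f}")   # ~0.0885, i.e. roughly 8.9% of GDP
```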
Based on this, the following gives the annual cost of IT failure on various regions of the world in billions of USD:
Region        GDP (B USD)   Cost of IT Failure (B USD)
World              69,800                        6,180
USA                13,840                        1,225
New Zealand            44                         3.90
UK                  2,260                          200
Texas               1,250                          110
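As a usage example, the table can be reproduced by multiplying each region's GDP by the same factor. This is a minimal sketch; the GDP figures are the ones listed above, and the dictionary layout is mine.

```python
failure_factor = 0.064 * 0.43 * 0.66 * 0.65 * 7.5   # ~0.0885, as derived above

gdp_billions_usd = {   # GDP figures (billions of USD) from the table above
    "World": 69_800,
    "USA": 13_840,
    "New Zealand": 44,
    "UK": 2_260,
    "Texas": 1_250,
}

for region, gdp in gdp_billions_usd.items():
    print(f"{region:<12} {gdp:>8,}  {gdp * failure_factor:>8,.1f}")
```

The printed values match the table above to within rounding.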
13 comments:
I love how you reached those numbers. This is quite alarming considering the size (in billions) of the losses caused by failed IT projects (and I do believe you were conservative in your calculation; the real figure is definitely higher).
This article on the Chaos Report 2009 has a table of the success rate, the challenged rate, and the failed rate of IT projects since 1994.
I think the PM Hut article (link above) is excellent. I'll post another blog entry in the next few days describing why my numbers and the Standish Chaos numbers are different. And I do agree: I think my numbers are conservative.
Michael Krigsman kindly reposted my blog about the Cost of IT Failure, and some discussion at that site questions my multiplication factor, the one I call "lost opportunity costs".
Lost opportunity costs are basically the ROI that would have been realized on the investment (in this case, the new IT system) had the investment been successful.
Perhaps an example will clarify this concept.
Between 1994 and 2005, the Internal Revenue Service spent $185 million on a new electronic fraud detection system. The project was abandoned in 2006.
According to a report in 2008 by the Treasury Inspector General for Tax Administration, the Federal Government lost approximately $894 million in fraudulent refunds during 2006 because the system was not operational.
So in this case, the direct cost of the failure was $185 million and the indirect cost (the lost opportunity costs) was $894 million. Of course, that $894 million was lost just in 2006. Presumably the same amount was lost in 2007 and subsequent years.
To say that the only cost to the U.S. economy was the $185 million spent on the IT system itself is just plain wrong.
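To put rough numbers on that ratio, here is a small sketch using only the figures quoted above. The assumption that the 2006 loss simply repeats in each subsequent year is mine (following the "presumably" above), so treat the multi-year lines as illustrative.

```python
direct_cost_musd = 185           # IRS spend on the fraud detection system, 1994-2005 (millions of USD)
annual_lost_refunds_musd = 894   # fraudulent refunds paid in 2006 because the system was not operational

for years in (1, 2, 3):
    indirect_musd = annual_lost_refunds_musd * years        # assumes the 2006 loss repeats each year
    ratio = indirect_musd / direct_cost_musd
    print(f"after {years} year(s): indirect ${indirect_musd:,}M, indirect:direct ratio {ratio:.1f}:1")
```

Even one year of lost refunds gives a ratio of nearly 5:1, and two years already exceeds the 7.5:1 average used in my model.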
By the way, this example is taken from my editorial in Perspectives of the International Association of Software Architects published in January 2009. You can find it here: http://bit.ly/2xen52
Also, I should point out that my failure-cost numbers are actually lower than those in the widely quoted Standish Group Chaos Report. I actually think there are flaws in the Standish analysis. Had that report not had these flaws, its numbers would have been even higher and made mine look even more conservative by comparison.
I'll be posting a blog in the next few days describing the flaws in the Chaos report. So stay tuned to my blog for this analysis: http://bit.ly/vmBZq
I hope this helps clarify my analysis.
- Roger Sessions
See the article "Health IT Project Success and Failure: Recommendations from Literature and an AMIA Workshop" for failure rate issues in healthcare IT.
Roger, I have recently finished your “Simple Architectures…” book. It is one of the more interesting and direct books I have read in some time. Keeping in mind the theme you have on this blog and in your book regarding IT costs being out of control, and considering the complexity of software systems and the mechanisms to simplify them, it seems to make no sense that simplification would not be at the top of most CIOs’ to-do lists. However, I have not seen much support for an approach such as you suggest (SIP, ABC, partnership interactions, etc.), or even interest in considering such an approach.
What I have found, for the most part, is a profound lack of support for logic-based decisions. Decisions, and the mechanisms for reaching those decisions, are based on “we have always done it that way,” NIH, political clout, unsubstantiated opinions, vendor biasing, interdepartmental rivalries, programming-language religious wars, and an atmosphere of general repression of healthy communication. My experience is that more progressive ideas just get squashed under these circumstances. Reference your section in the book about CIOs trying to solve the wrong problem.
With this said, what techniques, ideas and approaches have you used to deal with these very common major obstacles to progress?
Roger Ball: Thanks for your comments! I'm glad you liked the book. I agree with you that there is a "profound lack of support for logic-based decisions." I address this issue somewhat in my recent white paper, "The IT Complexity Crisis". Have you seen it? If not, please give it a read; I think you will enjoy it. It is at http://bit.ly/3O3GMp.
I've just now read this post, so apologies if we've moved on. I think the indirect cost may actually be higher, for the following reasons:
1. Repeated failures induce skepticism and cynicism within the organization. This can lead to delay or abandonment of other projects since failure seems inevitable. It can also drive out better workers who see every project as a death march. This can in turn lead to other missed opportunities as well as inefficient internal processes.
2. Repeated failures may drive up the cost of future projects. As in #1, the loss of good workers drives the need to rehire and/or retrain replacements. Additionally, the loss of those workers represents a loss of historical knowledge and experience, which can drive further failures. Future costs can also go up because of the perception that more resources (for any definition of resources) need to be budgeted and allocated for future projects, when in many cases they will not be needed.
3. The loss of immediate opportunity can impact the ability to influence potential customers for future dollars. This may be through word-of-mouth advertising about the failures, perceived performance failures, or (for bundled products) direct experience with bad products that drive customers to the competition.
These points may be stating the obvious, but my experience is that many organizations fail to recognize these unanticipated costs or even understand that they may have failed.
My 2p worth
Dhestand: Great observations. I fully agree. I was trying to be conservative in my estimates, but your points are well taken. If you haven't seen my longer analysis on this, please see my white paper "The IT Complexity Crisis", available at http://bit.ly/3O3GMp.
Where can I find the full report? I would like to see the GDP and cost of IT failure figures for the Asian countries.
Thanks
Roger, I did actually read the longer treatise but it was after I posted my comment, else the comment might have been shorter. I think that your approach is very useful and it makes me wonder why organizations that are so concerned with managing shareholder expectations to the penny do not apply the same consideration to massive (or even not so massive) IT investments. Thanks for the posting!
New Zealand's GDP is more like 116 billion (PPP).
https://www.cia.gov/library/publications/the-world-factbook/geos/nz.html
Great to learn that there are more people out there who refuse to get trapped by prevailing published opinions and prefer to do their own thinking.
Most IT problems can be reduced to a very simple mantra: Complexity kills!
And by the way, this does not apply only to the project phase, where many thoughtful approaches fail before or when they reach the production phase.
Complexity also kills during the production phase, if the underlying infrastructure is too complex to be run reliably. The result is poor service levels and the enormous hidden costs that follow from them.
Not all IT platforms are created equal. Most of them come with a very attractive price tag but lead straight into humongous complexity once there is significant workload. And then there are other IT platforms (like IBM's zSeries or HP NonStop) that come with less complexity and a proven track record - but many people shy away from those because of higher initial investment costs or perceived "political correctness".