Thursday, October 29, 2009

The Problem With Standish

In my recent white paper, The IT Complexity Crisis, I discussed how much IT failures are costing the world economy. I calculated the worldwide cost to be over $6 trillion per year. You can read the white paper here.

In this white paper I discuss the Standish Chaos numbers, but many readers have continued to question whether my conclusions agree with Standish's. I think they do, but I also think the Standish numbers themselves are flawed. So I have mixed feelings about them. Let me explain.

The Standish Group has been publishing its annual study of IT failure, the "CHAOS Report," since 1994, and the report is widely cited throughout the industry. According to the 2009 report, 24% of all IT projects failed outright, 44% were "challenged," and only 32% were delivered on time, on budget, and with the required features and functions.

To be honest, I have never read the Standish Report. Given the $1000 price tag, not many people have. So, like most people, I am basing my analysis of it on the limited information that Standish has made public.

The problem with the Standish Report is not that it analyzes the numbers wrong. The problem is that Standish is looking at the wrong numbers. It analyzes the percentage of IT projects that are successful, challenged (late, over budget, etc.), or outright failures. This sounds like useful information. It isn't.

The information we really need is not what percentage of projects are successful, but what percentage of IT budgets are successful.

What is the difference between percentage of projects and percentage of budget? A lot. Let me give you an example.

Suppose you are an IT department with a $1M budget, and say six IT projects were completed this year: four that cost $50K each, one that cost $100K, and one that cost $700K.

Which of these projects is most likely to fail? All other things equal, the $700K project is most likely to fail. It is the largest and most complex. The less the project costs, the simpler the project is. The simpler the project is, the more likely it is to succeed.

So let's assume that three of the four $50K projects succeed, the $100K project succeeds, and the $700K project fails.

Standish would report this as a 4/6 success rate: 67% success, 33% failure. I look at these same numbers and see something quite different.

I look at the percentage of the IT budget that was successfully invested. I see $250K of the $1M budget invested in successful projects and $750K invested in failed projects. I report this as a 25% success rate and a 75% failure rate.
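To make the arithmetic concrete, here is a minimal Python sketch of both calculations. The portfolio is the hypothetical one from my example, not real Standish data:

# Hypothetical portfolio from the example above: (cost, succeeded?)
projects = [
    (50_000, True), (50_000, True), (50_000, True), (50_000, False),
    (100_000, True),
    (700_000, False),
]

# Standish-style metric: fraction of projects that succeeded
project_rate = sum(ok for _, ok in projects) / len(projects)

# Budget-weighted metric: fraction of dollars invested in successful projects
total = sum(cost for cost, _ in projects)
budget_rate = sum(cost for cost, ok in projects if ok) / total

print(f"Success by project count: {project_rate:.0%}")  # 67%
print(f"Success by budget:        {budget_rate:.0%}")   # 25%

The same six outcomes produce a 67% success rate by one measure and a 25% success rate by the other; only the weighting changes.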

So both Standish and I are looking at the same numbers, yet we have almost exactly opposite conclusions. Whose interpretation is better?

I argue that, from the organizational perspective, my interpretation is much more reasonable. The CEO wants to know how much money is being spent and what return that money is delivering. The CEO doesn't care how well the IT department does one-off minor projects, which are the projects that dominate the Standish numbers.

So the bottom line is that I have major issues with the Standish Report. It isn't that the Standish analysis is wrong. It is just that it is irrelevant.

10 comments:

Machiel said...

I also have problems with the Standish Reports, not just because of how they interpret the numbers, but because they don't measure what matters: there is no mention of ROI.

Even the most mind-bogglingly successful projects, ones that delivered billions in profit, would be failures according to Standish because they weren't delivered within the original budget. Too bad your project saved your company; Standish still thinks you failed. This is seeing IT only from a cost perspective.

Perhaps you would like to comment on my thoughts on this: http://machielgroeneveld.nl/?p=54

Steven Romero, IT Governance Evangelist said...

I too have a problem with the Standish Chaos Report. I wrote a blog post earlier this year arguing that the project failure rates are actually higher when you interpret their categories correctly (based on my contention that the projects in the Standish "challenged" category are actually failures).

But even though I have problems with their report, I would not call it "irrelevant."

I agree that your "failed budgets" approach provides potentially more meaningful insight into the relevance and impact of the failure. Even so, understanding project failure rates (independent of failure cost) is still a meaningful measure. I need this data to ensure my project governance decision-making processes are working well. It is not the "only" metric, but it is a very telling piece of information.

Steve Romero, IT Governance Evangelist
http://community.ca.com/blogs/theitgovernanceevangelist/

Roger Sessions said...

Steve,
I agree that it would be helpful to have an understanding of project failure rates.

Unfortunately, you won't get that from Standish. The reason is that Standish doesn't differentiate by project size (or, even better, by project complexity). In the real world, the chances of failure go up markedly with project complexity. See my white paper for more on this.

Now, had Standish given us failure rates as a function of project complexity (or even project price, or even LOC), then you could use this information to make useful governance decisions. But unfortunately, they didn't.
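For illustration only, here is a hypothetical Python sketch of the kind of breakdown I mean: failure rates bucketed by project cost. The cost bands and the portfolio data here are invented; Standish publishes no such table.

# Hypothetical: failure rates bucketed by project cost (made-up data)
from collections import defaultdict

history = [  # (cost in dollars, failed?)
    (50_000, False), (80_000, False), (400_000, True),
    (900_000, False), (2_000_000, True), (12_000_000, True),
]

bands = [(750_000, "under $750K"), (3_000_000, "$750K to $3M"),
         (10_000_000, "$3M to $10M"), (float("inf"), "over $10M")]

def band(cost):
    return next(label for limit, label in bands if cost < limit)

totals, failures = defaultdict(int), defaultdict(int)
for cost, did_fail in history:
    totals[band(cost)] += 1
    failures[band(cost)] += did_fail

for _, label in bands:
    if totals[label]:
        print(f"{label}: {failures[label] / totals[label]:.0%} failure rate")

With a table like that, a governance board could set different oversight rules for each cost band instead of treating all projects alike.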

- Roger

Steve Romero, IT Governance Evangelist said...

We're in violent agreement. I was defending the value of understanding project failure rates, not Standish's reporting techniques.

Thanks for the thought-provoking post.

Steve Romero, IT Governance Evangelist

Todd Williams said...

From the Standish 2009 CHAOS report:

"This year’s results show a decrease in project success rates, with 32% of all projects succeeding (delivered on time, on budget, with required features and functions); 44% were challenged (late, over budget, and/or with less than the required features and functions); and 24% failed (cancelled prior to completion or delivered and never used)."

And…

"Size continues to matter, with 61% of successful projects costing less than $750,000 in labor, and 19% of projects from $750,000 to $3 million were successful. [...] Projects over $10 million only have a 2% chance of coming in on time and on budget, and represent a statistical zero in the success column."

My problem is that "late, over budget" is not necessarily a designator of failure. The "less than the required features and functions," which I am going to liberally translate as "lacking value," is failure. If it takes 15% more time, money, or both to make a useful and valuable product, it cannot be classified as a failure.

There have been some studies I have read about (the oft-quoted Robbins-Gioia and Conference Board surveys, both from 2001) that look at "impressions" of success. These are biased by opinion rather than fact, but for anyone providing project management services, be it an IT department or a consulting company, perception is as bad as reality.

Regardless of how you look at it, though, the numbers seem to say that 40-70% of projects fail. Let's assume you are using the high-end numbers and cut your estimate in half. Then you are only at $3 trillion. Maybe there is something else you did wrong and it is only a trillion. In other words, that is about 6,000 abandoned FBI Virtual Case File failures a year. In my book, that is a lot of pizza and beer.

Todd

Roger Sessions said...

Todd, why are you suggesting the number be cut in half? This is actually higher than the failure rate I was using. Am I missing something here?
- Roger

Todd Williams said...

Looking at all the sources of estimates on project failure, the estimates seem to range from 40% to 70% of all projects being in trouble. Taking the lower number would cut your value in half. At the size of the numbers involved, it does not matter whether it is $3T or $6T. It still constitutes a large percentage of the national debt.

The point of your article is the size of the number. People who are concerned about some detail of Standish's definition are missing your point.

Todd

Sander Hoogendoorn said...

Hi Roger,

Couldn't agree with you more. You've just applied a weighted percentage to the numbers, where the Standish Group uses a counted percentage. Of course the latter is easier to calculate, but it will be flattering in most cases. Your simple example speaks for itself.

Sander.

Vitaliy Kurdyumov said...

Interesting discussion. I support Roger's thoughts.
But there is a question I have always wanted answered: what is a failed project? What are the criteria for failure?

- Vitaliy

Raoul Duke said...

I think it has to be Roger's + Machiel's + Vitaliy's thoughts: we have to look at the ROI of the whole IT budget, and we have to keep in mind the context of the organization.

A video game company can have 9/10 failures, much like a VC firm, and if the remaining project goes gangbusters, everybody ends up with Porsches for a bonus. This would be just fine. That is a market where knowing in advance what success will look like is too bloody hard. You can mostly only know after the fact.

This is obviously different from software for the Space Shuttle or Boeing or Airbus. In those cases, if 9/10 of the software projects fail at runtime, there is not going to be enough ROI, for sure.

In other words: I think I see people pushing a really narrow, overly simplistic, golden-bullet pet idea of how to study and interpret the numbers, when in fact "it depends" is the only true answer. :-)