Sunday, November 8, 2009

The IT Complexity Crisis: Danger and Opportunity

Roger's new white paper, The IT Complexity Crisis: Danger and Opportunity, is now available.

Overview
The world economy is losing over six trillion USD per year to IT failures, and the problem is getting worse. This 22-page white paper analyzes the scope of the problem, diagnoses its cause, and describes a cure. And while the cost of ignoring this problem is frighteningly high, the opportunities that can be realized by addressing it are extremely compelling.

Understanding the causes of and cures for out-of-control complexity can have a transformative impact on every sector of our society, from government to private industry to not-for-profits.

Downloading the White Paper
You can download the white paper, download an accompanying spreadsheet for analyzing architectural complexity, and view various blogs that have discussed this white paper here.

Would you like to discuss the white paper? Add a comment to this blog!

12 comments:

Anonymous said...

Interesting reading. A colleague of mine sent me the following comment, so I thought I would post it here:

I'm interested to see the mathematical working used to derive the formula you established for complexity, so that I am able to explain the rationale behind the formula in 'business speak'.

Waylon Kenning said...

I've found myself struck with a bit of serendipity lately. I've been wondering why IT projects consistently fail, even though we have applications like Project, frameworks like Prince2, and the brightest minds thinking about the problem.

Initially I was thinking along the lines that Project Managers have been ignoring the impact that people have on a project - i.e. assuming that all resources are the same and can be changed on a project without affecting its outcome. Essentially, I've been trying to come up with a model that captures some of the soft aspects of Project Management, and I developed a prototype (in Excel).

Then when I was on an aeroplane from Tokyo to Sydney I watched a Science Channel Video called "Connected: The Power of Six Degrees", which talks about networking, the small world problem, and then I thought to myself "Wow, if we think of all the tasks in a project as an entity that communicates success or failure messages to other tasks, then maybe the science of networking applies to Project Management, i.e. we could discover how when tasks fail or push out timelines we could predict the success of a project exponentially increasing towards zero, as the failure impacts tasks that didn't seem related to the task that failed".

And then as I was randomly browsing Computerworld, I saw an article on how you worked out the cost of failure in IT in New Zealand, and how you created a method to evaluate the complexity of IT projects as a mathematical formula, and I thought to myself, wow, this is simply amazing.

And then I had my epiphany: the relative success of an IT project relies, I believe, on all three of these ideas:

1) The motivation, skill, and productivity of the different stakeholders who work on a project;
2) How tasks that succeed or fail impact other tasks that are not obviously related but share some commonality;
3) The complexity of the software being developed or implemented based on the number of business functions, and number of connections between business functions.

And I thought to myself - Wow, this is a big deal. A really big deal. We could create Project Forecasting software that could attempt to accurately forecast the time and success of IT projects. So I'm going to give it a go, and try to create some formula that aggregates these ideas into a relatively easy-to-use application that will attempt to predict the time and success of IT projects.

Can I use your formula for calculating complexity of projects in this application in exchange for a portion of equity in any subsequent business venture that occurs because of this?

I'm happy to chat more about this sometime somewhere else if you're interested.

Regards,
Waylon Kenning
http://kenning.co.nz

Anonymous said...

Although the white paper was compelling to read, I found you made too many assumptions for it to be a solid argument. The point where you lost me was attributing the financial meltdown to an IT failure, when in actuality it probably was an IT success. How can I say that the financial meltdown was an IT success? Let's look at what happened.

The problem with the financial meltdown was that there was too much lending and speculation. Banks relied heavily on IT systems to approve and fund bad loans and to trade less-than-solid packages for sale on the stock market. If the IT systems had failed, we might not have had the volume of loans produced in 2008 - 2009, or the ability to trade toxic assets the way we did. Instead it was a human system failure, where banks couldn't see that their actions were failing them. The IT systems in the financial industry worked the way they should have; it's too bad that they did.

While I agree with you that complexity is part of the problem, the paper felt more like a pitch to sell your formula to architects. I just feel this approach is very one-sided. Yes, developing a good architecture helps, but there is so much more that leads to IT failure, and that was glossed over in this paper.

Roger Sessions said...

I think the previous comment is confusing the financial meltdown with the IT meltdown. I never said that the financial meltdown was due to IT failures. What I said was that the cost of IT failures is comparable to (higher, actually, than) the cost of the financial meltdown, and therefore deserves the dubious distinction of "meltdown."

baludec5 said...

Good post. I like your blog.

Anonymous said...

An interesting white paper. Could you please explain how you arrived at the logarithmic constants log(2) and log(1.25) that you divide by in your formula on page 7? Thanks!
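For what it's worth, one plausible reading (my assumption; the derivation is not shown in this thread) is that the ratio of those two constants yields the 3.1 exponent used elsewhere in the paper, reflecting the rule of thumb that every 25% increase in functionality roughly doubles complexity. A quick numerical check:

```python
import math

# Assumption: the exponent 3.1 in the complexity formula comes from
# log(2) / log(1.25), i.e. "complexity doubles for every 25% increase
# in functionality". The base of the logarithm cancels in the ratio.
exponent = math.log(2) / math.log(1.25)
print(exponent)  # ≈ 3.1063, which rounds to the 3.1 used in the paper
```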

Özgür Efe said...
This comment has been removed by the author.
Özgür Efe said...

I am not clear about the mathematical derivation, but I am pretty sure that your power formula y = 10^(3.1 log(x)) can be further simplified to y = x^3.1 (see http://www.wolframalpha.com/input/?i=y+%3D+10%5E%283.1+log%28x%29%29&a=*FunClash.log-_*Log10.Log- )

I am working on Business Architecture and keenly interested in complexity removal through SOA / SOMA. Hence I would enjoy learning about the mathematical background that lies behind it.
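The simplification in this comment can be verified numerically. A minimal sketch, assuming base-10 logarithms (as the WolframAlpha link does), confirming that 10^(3.1 log10(x)) = x^3.1:

```python
import math

# Identity: 10^(3.1 * log10(x)) = (10^(log10(x)))^3.1 = x^3.1
for x in (2.0, 50.0, 10_000.0):
    lhs = 10 ** (3.1 * math.log10(x))
    rhs = x ** 3.1
    assert math.isclose(lhs, rhs, rel_tol=1e-9)
```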

Pascal Echevest said...

Your work in this field is very interesting and useful.
Your formula for measuring IT complexity described in your white paper, S = 10^(3.1 log(bf)) + 10^(3.1 log(cn)), with bf as business functions and cn as connections, works well with small IS. But when we have an IS with a few thousand business functions (as function points defined by the IFPUG Functional Size Measurement Method) and only a few tens of connections, the final score is driven almost entirely by the number of business functions, and not enough, I think, by the connections with other IS. The problem is that the number of business functions and the number of connections are not of the same order of magnitude (for instance, in an IS with 10000 bf and 100 cn, the impact of the connections on the score is negligible, although from a technical architecture viewpoint 100 cn brings a lot of complexity).
Is there a way in your formula to take the number of connections more deeply into account? (I understand connections for an IS seen as a black box, i.e. connections with other IS.)
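The imbalance described in this comment is easy to see numerically. A minimal sketch using the commenter's hypothetical system (10,000 business functions, 100 connections) and the simplified form of the formula, x^3.1:

```python
# Hypothetical IS from the comment above: bf = 10,000, cn = 100.
# S = bf^3.1 + cn^3.1 (simplified form of 10^(3.1 log(x)))
bf_term = 10_000 ** 3.1          # ≈ 2.5e12
cn_term = 100 ** 3.1             # ≈ 1.6e6
share = cn_term / (bf_term + cn_term)
print(f"connections contribute {share:.8f} of the score")
```

With these numbers the connection term is roughly a millionth of the total, which is the commenter's point: the two terms only balance when bf and cn are of comparable magnitude.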

Unknown said...

I enjoyed the paper as well. I have been a long-time advocate that we need more simplicity in IT, but my focus has been more on the IT infrastructure side than the app development side, which you seem to focus on. How would you address the complexity of implementing business functions on inherently complex infrastructures, i.e. a database from one vendor on a server from another, running an OS from a third, with virtualization from a fourth and a storage subsystem from yet another vendor? Then you connect that system to another system on a completely different mix of infrastructure. This complexity leads to failure as well: change becomes difficult and risky, utilization rates are low, software licensing costs are high, maintenance (patching) is problematic, and availability and security are compromised. How does this factor into your formula?

Roger Sessions said...

You can read about the mathematics behind this in the white paper The Mathematics of IT Simplification, available here:
http://simplearchitectures.blogspot.nl/2011/04/mathematics-of-it-simplification.html