As systems add more functionality, they become more complex. As systems become more complex, the traditional processes we use to manage them become more strained. The typical response from those building these more complex systems is to try to scale up the processes that work for simple systems so that they can handle complex systems.
Consider a "simple" system A with only 100 functions. Say process X has been used to successfully manage A. Process X could be Agile Development, RUP, Earned Value Management, or any other of your favorite management processes.
Now we need to build a more complex system B with 1000 functions. Since B has 10X the functionality of A and we know X works for A, most assume that we can use X to manage B as well, albeit with 10X the effort.
The flaw in this reasoning is that the difficulty of applying X to B (regardless of what X is) is proportional to the complexity of B, not to the functionality of B. Because of the exponential relationship between functionality and complexity, when the functionality increases by 10X, the complexity actually increases thousands of times. The exact number depends heavily on the nature of B's functions and how they are organized, but it will typically be very large.
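To make the arithmetic concrete, here is a minimal sketch. It assumes, purely for illustration, a power-law model in which complexity grows roughly as functionality raised to the power 3.1; that exponent is my assumption, not a measured value, but it is one model that reproduces the "thousands of times" figure above.

```python
# Toy model (an assumption for illustration, not a measured law):
# complexity grows as functionality ** 3.1.
EXPONENT = 3.1  # hypothetical growth exponent; the real value depends on the system

def relative_complexity(functions, baseline=100):
    """Complexity relative to the 100-function system A."""
    return (functions / baseline) ** EXPONENT

print(relative_complexity(100))   # 1.0    -> system A, where process X is known to work
print(relative_complexity(1000))  # ~1259  -> system B: 10X the functions, ~1300X the complexity
```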
As long as we focus on how to better use X to manage B, we are doomed to failure. The complexity of B will quickly outpace our ability to apply X.
Instead, we need to focus on the real problem: the complexity of B. We need to understand how to architect B not as a single system with 1000 functions, but as a cooperating group of autonomous systems, each with some subset of the total functionality of B. So instead of B, we now have B1, B2, B3, and so on. Our ability to use X on each Bi (i = 1, 2, ...) will depend on how close the complexity of the most complex Bi is to the complexity of A (the most complex system on which X is known to be a viable process).
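Under the same toy power-law assumption as the sketch above, the payoff of partitioning is easy to see: ten autonomous 100-function subsystems are, in total, far less complex than one 1000-function monolith, and each Bi sits at the complexity level where X is already known to work.

```python
# Same illustrative assumption as before: complexity ~ functions ** 3.1.
EXPONENT = 3.1

def complexity(functions):
    return functions ** EXPONENT

monolith = complexity(1000)          # B built as a single system
partitioned = 10 * complexity(100)   # B built as ten autonomous subsystems B1..B10

print(monolith / partitioned)        # ~126: the monolith is two orders of magnitude
                                     #  more complex than the cooperating group
print(complexity(100))               # each Bi is back at system A's complexity,
                                     #  where process X is known to be viable
```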
The bottom line: if we want to know how to use process X on increasingly complex systems, we must focus not on scaling up X, but on scaling down the complexity of the systems.
For more information on scaling down complexity in IT systems, see my white paper, "The IT Complexity Crisis" available at http://bit.ly/3O3GMp.
5 comments:
What you describe is neither complex nor complexity management, but rather simple versus complicated. Please read my comment on the (mis)use of the word 'complex': http://pauljansen.eu/complexity.htm
Paul,
You are using complexity as defined by the Cynefin framework. I disagree with that framework.
I use the word "complex" to mean an entity that has more "complexity" than needed to do what it is intended to do. By "complexity" I mean the number of internal states.
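A quick worked example of that state-count definition (the six-states-per-function figure is only an assumption for illustration): because states multiply, the state count grows exponentially with the number of functions, and splitting functions into autonomous systems collapses it.

```python
# Counting internal states (illustrative assumption: 6 significant states per function).
STATES_PER_FUNCTION = 6

def internal_states(functions):
    return STATES_PER_FUNCTION ** functions

print(internal_states(4))       # 1296 states in one 4-function system
print(2 * internal_states(2))   # 72 states total across two autonomous 2-function systems
```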
While my definition is not the same as the Cynefin definition, that doesn't mean that I am wrong, only that I disagree with Cynefin. No surprise there.
I have a number of issues with Cynefin, but the most important are these:
1. Cynefin doesn't relate "simple" to "complex", a fundamental flaw in the framework, I believe.
2. Cynefin doesn't give any insight into how to keep things from getting complex.
3. Cynefin doesn't have any objective way to decide if something is complex.
4. Cynefin doesn't include any value judgment on whether complexity is good, bad, or neutral.
Cynefin is mainly useful for understanding how to deal with systems once we have determined which quadrant of the Cynefin framework they land in. So, for example, Cynefin tells us that once we have determined that an IT system is "complex", the only way we can deal with it is by poking it and seeing how it responds.
I fundamentally disagree with this. We need to deal with IT complexity as a science. This means that we have ways of measuring complexity and testable hypotheses for how it behaves. And, perhaps most important of all, we have validated processes for designing systems to be as simple as possible.
Roger, let's first agree to disagree on Cynefin. This clears the path to accept your meaning of the word 'complexity' in the limited context of 'IT system', and hence to my full agreement with you that "we need to deal with IT complexity as a science. This means that we have ways of measuring complexity and testable hypotheses for how it behaves. And, perhaps most important of all, we have validated processes for designing systems to be as simple as possible." We are on the same page there.
Sounds like we understand each other. I am only considering complexity within "the limited context of IT systems." If I can get the world to solve complexity in IT systems, I will have accomplished enough for this life.
Hi Roger,
So after reading both Cynefin's definitions of simple, complicated, complex, and chaotic above and your white paper on IT complexity, am I right to say that you use the term complex to refer to gratuitous complexity found in the complicated domain of the Cynefin Framework?
The reasons I come to this conclusion are:
1. By definition, Cynefin's complexity cannot be simplified by a partitioning/reduction process such as SIP. The whole is different from the sum of its parts in this domain.
2. Instead, you claim a complex problem is measurable, and therefore the cause-and-effect relationships in this problem space must be repeatable and predictable. According to the Cynefin Framework, that places it in the ordered (complicated but knowable with expert help) domain.