I don't know how this got started, but we were tweeting a discussion about enterprise architecture, and somehow the question came up...
How Many Enterprise Architects Does It Take To Measure A Donkey?
A: depends on where the datum is and which part of the donkey they measure!
A: and should not speed of donkey be considered too? For relativity effects I mean.
A: ahh yes, but it is velocity and in the direction of measurement! A jumping donkey!
A: jumping donkeys - can I have some of what you two are on?
A: EAs measuring the donkey? One measures required height of TO-BE donkey, one argues length and height are basically the same, ...
A: .. one says the problem is that the front half of the donkey and the rear half are not in alignment.
A: ... one says that we can't measure the donkey until we have completed the business case.
A: ... one says that we need an industry consortium to define the best practices for donkey measuring.
A: ... one says we don't need to measure the donkey. It's not strategic. We need to outsource it.
A: ... one says that we can't measure the donkey until we first partition it into little pieces so that it isn't so complex.
With contributions from @RSessions, @richardveryard, @seabird20, @taotwit, and @j4ngis. (Did I miss anybody?)
Thursday, April 30, 2009
Wednesday, April 15, 2009
Factors Driving System Complexity
IT systems are failing at an alarming rate, and the rate of failure is increasing. Approaches that we have used in the past to successfully deliver IT systems are no longer working.
Something has changed: the complexity of the systems. It should be no surprise to anybody in IT that the complexity of the systems we are being asked to build has dramatically increased. The only surprise is that we have not adapted the processes that we use to build IT systems to take into account this increase.
Processes that can drive simple IT systems development do not scale up as the complexity of the systems increases beyond a certain point, a point that we have long passed.
I do not advocate throwing out what we know about building simple systems. Approaches such as Agile Development are great, as long as the complexity of what we are building is manageable. The approach that I advocate is learning how to break large, complex systems that we do not know how to build into smaller, simpler systems that we do know how to build.
I call this process partitioning. It is important to remember that the goal is not just to create smaller systems, but smaller systems that are as simple as possible. The specific process that I advocate for doing this is called SIP, for Simple Iterative Partitions. SIP is designed from the start to generate systems that are both small and simple.
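One back-of-the-envelope way to see why this pays off is to count states. If a system has n functions and each can be in s states, the undivided system has s^n possible states, while k autonomous subsystems of n/k functions each have only k * s^(n/k) states between them. The short Python sketch below is just that arithmetic, with made-up numbers; it illustrates the motivation for partitioning, not the SIP process itself.

```python
# Back-of-the-envelope model (an illustration, not the SIP process itself):
# complexity measured as the number of states a system can be in.

def monolith_states(functions: int, states_per_function: int) -> int:
    """State space of one undivided system: s ** n."""
    return states_per_function ** functions

def partitioned_states(functions: int, states_per_function: int,
                       subsystems: int) -> int:
    """Total state space of k autonomous subsystems of equal size:
    k * s ** (n / k). Assumes functions divide evenly among subsystems."""
    per_subsystem = functions // subsystems
    return subsystems * states_per_function ** per_subsystem

n, s = 12, 4  # hypothetical: 12 functions, 4 states each
print(monolith_states(n, s))        # 4 ** 12 = 16,777,216 states
print(partitioned_states(n, s, 4))  # 4 * 4 ** 3 = 256 states
```

The exponent is the point: splitting the system does not merely divide the complexity, it collapses it, which is why the partition must produce subsystems that are genuinely autonomous.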
While the goal of partitioning is to create smaller, simpler systems, the process itself is neither small nor simple. Planning for simplicity paradoxically adds complexity to the planning process, but it pays huge dividends later. The extra work of delivering an unnecessarily complex system is far greater than the extra work of planning for simplicity from the beginning.
Partitioning is a science, like medicine. It must be led by people who understand the nature of the disease and have been trained in how to manage it. The training is important because there are many things that can go wrong. When things do go wrong, complexity is inadequately removed, or, in some cases, made worse.
IT failures due to poor partitioning are epidemic in our industry. For example, there are many failed service-oriented architectures (SOAs). Behind almost all of these SOA failures is a partitioning failure.
I have been studying IT failures for a number of years now. I have concluded that partitioning failures are the primary cause of most IT failures. The bigger the failure, the more likely it is that incorrect partitioning is the root cause. The symptom of partitioning failure is always the same: the project seems swamped by complexity. If you have worked on a project whose complexity killed the project, then you have experienced firsthand the effects of partitioning failure.
I have seen so many partitioning failures that I have started to recognize underlying patterns or categories of partitioning failures. At this point, I have identified eleven categories, and the list is growing as I analyze more failures. The eleven categories are as follows:
- Delayed partitioning: The partitioning did not begin early enough in the project life cycle. It should be mostly complete before the business architecture is begun.
- Incomplete decomposition: The decompositional analysis of functions did not go far enough, resulting in too coarse a granularity of the functions being partitioned.
- Excessive decomposition: The decompositional analysis of functions went too far, resulting in too fine a granularity of the functions being partitioned.
- Bloated subsystems: Too many functions have been assigned to one or more subsystems.
- Scant subsystems: Too few functions have been assigned to one or more subsystems.
- Incorrect assignment: Functions that do not belong together have been assigned to the same subsystem, or functions that do belong together have been split across different subsystems.
- Duplicated capabilities: Functions have been duplicated across different subsystems.
- Unnecessary capabilities: Functions have been partitioned that should not have been included in the first place.
- Technical partition mismatch: The technical partitioning does not match the business partitioning, a common problem in purchased systems and service-oriented architectures.
- Inadequate depth: The technical partitioning does not extend far enough. In SOAs, this problem often manifests itself as multiple services sharing common data.
- Boundary decay: The boundaries between subsystems in the partition are not strong enough, and functionality slips back and forth between subsystems.
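Several of these categories can be checked mechanically once a proposed partition is written down. Below is a minimal Python sketch of that idea for three of them (bloated subsystems, scant subsystems, and duplicated capabilities). The data structure, the subsystem names, and the thresholds are all hypothetical illustrations, not anything SIP itself prescribes.

```python
from collections import Counter

# Hypothetical sketch: flag three of the failure categories above, given a
# proposed assignment of business functions to subsystems. The thresholds
# are illustrative only.
MIN_FUNCTIONS, MAX_FUNCTIONS = 2, 7

def check_partition(assignment: dict[str, list[str]]) -> list[str]:
    """assignment maps each subsystem name to the functions it owns."""
    warnings = []
    for subsystem, functions in assignment.items():
        if len(functions) > MAX_FUNCTIONS:
            warnings.append(f"bloated subsystem: {subsystem}")
        elif len(functions) < MIN_FUNCTIONS:
            warnings.append(f"scant subsystem: {subsystem}")
    counts = Counter(f for funcs in assignment.values() for f in funcs)
    warnings += [f"duplicated capability: {f} ({n} subsystems)"
                 for f, n in counts.items() if n > 1]
    return warnings

print(check_partition({
    "billing":   ["invoice", "collect-payment", "refund"],
    "shipping":  ["pick", "pack", "ship", "refund"],  # "refund" duplicated
    "reporting": ["monthly-report"],                  # scant
}))
```

Checks like these catch only the mechanical failures; the harder categories, such as incorrect assignment and technical partition mismatch, require the business-level analysis I describe above.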
(Thanks to Chris Bird for suggesting this topic with his innocent tweeted question: “What are contributors to complexity?”)
Thursday, April 2, 2009
Adaptability of Large Systems
As I was tweeting the other day, I came upon a tweet from Noel Dickover about large systems being less adaptable than small systems. This began a tweet exchange that I thought brought up some important issues about adaptability.
Many people believe that large systems are harder to adapt than smaller systems. And, in general, this is true. But system adaptability is not a function of size; it is a function of architecture. When a large system is partitioned correctly, so that it is composed of a number of smaller, autonomous systems, it MAY be more adaptable than a single smaller system.
I say "may", because its adaptability depends on how well it has been partitioned. The key is not whether the partitioning is good technically (say, mapping well to an SOA), but how well the technical partitions overlay on top of the business partitions of the organizations.
In other words, when a large system is built of autonomous smaller systems AND those smaller systems map well to the autonomous processes that occur naturally within the business, then, and only then, do you have a large system that is highly adaptable.
The reason so many systems fail to achieve this (even when they do manage a reasonable technical partitioning) is that the technical partitioning MUST BE driven by the business partitioning. This requires a partitioning analysis of the business that is completed before the technical architecture of the system is even begun.
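To make "driven by the business partitioning" slightly more concrete, here is a minimal sketch of one way the overlay could be scored: for every pair of functions, the business partition and the technical partition agree if both group the pair together or both keep it apart (essentially the Rand index from cluster analysis). The function and system names are hypothetical, and the score is my illustration, not a SIP metric.

```python
from itertools import combinations

def alignment_score(business: dict[str, str], technical: dict[str, str]) -> float:
    """Each argument maps a function to the partition that owns it.
    Returns the fraction of function pairs on which the two partitions agree."""
    pairs = list(combinations(sorted(business), 2))
    agree = sum((business[a] == business[b]) == (technical[a] == technical[b])
                for a, b in pairs)
    return agree / len(pairs)

# Hypothetical example: the ERP system lumps together functions that the
# business keeps in separate organizations.
business  = {"quote": "sales", "order": "sales", "invoice": "finance", "pay": "finance"}
technical = {"quote": "crm",   "order": "erp",   "invoice": "erp",     "pay": "erp"}
print(alignment_score(business, technical))  # 3/6 = 0.5: poorly aligned
```

A score near 1.0 says the technical boundaries mirror the business boundaries; a score like this one says a change in how sales works will ripple through a system that finance also depends on.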
This business partitioning analysis is best done by representatives of both the business and the IT organization. The business group has the best understanding of how functions relate to each other and the technical group has the best understanding of how this business partitioning analysis will eventually drive the technical partitioning architecture.
Since both business and technical experts are involved in this exercise, I place this work in the common ground between business and technology, the watering hole that we call enterprise architecture. But it is enterprise architecture with a very specific focus: driving technical partitioning from business partitioning analysis with the eventual goal of highly flexible systems that are pegged closely to the business need and mirror closely the business organization.