My article on The Top Seven Mistakes CIOs Will Make in 2008 drew many comments with excellent observations. Let me respond to what I have seen so far.
Mark Blomsma questions whether we can partition an enterprise architecture into subsets that do not influence each other. He points out that a small failure in one subset can lead to a massive failure in another subset.
Mark is right that it is difficult to create subsets that do not influence each other. I call the interactions between subsets thin spots. Our job in enterprise architecture is to minimize these thin spots. Partitioning is based on the theoretical ideal that there is no interaction between subsets of enterprise functionality (no thin spots). Of course, we know that this ideal is not attainable, so we need to compromise, allowing interoperability between subsets while minimizing thin-spot degradation.
In general, I believe our best hope of minimizing thin-spot degradation of an enterprise architecture comes from having interactions between subsets occur through software systems rather than through business processes. There are two reasons for this. First, the more we can automate workflow, the better off we are in general. Second, we understand how to manage software thin spots better than we understand how to manage business-process thin spots. For one such example, see the post I did on data partitioning.
One of the primary tasks of the enterprise architect is boundary management, that is, architecting the boundaries between subsets of the enterprise. The stronger the boundaries, the better the partition and the better we can manage overall complexity. In my upcoming book, Simple Architectures for Complex Enterprises, I have a whole chapter dedicated to managing software boundaries.
The problem the enterprise architect faces is that the boundaries between subsets are hard to define early in the architecture and are under never-ending attack by well-intentioned developers for the entire life cycle of the architecture. One of the best approaches I have found to boundary management is getting the enterprise as a whole to recognize the critical importance of complexity control. Once complexity control has been embraced, it is easier to get people to understand the role that strong boundaries play in controlling complexity.
KCassio Macedo notes the importance of tying EA to business value.
This is a great point. An enterprise architecture that does not deliver business value is not worth doing. One of the advantages of partitioning an enterprise architecture is that we end up with autonomous subsets, that is, subsets of functionality that can be iteratively delivered. I call these subsets ABCs, for autonomous business capabilities. These ABCs are very compatible with Agile implementation approaches.
Partitioning is our best approach to managing complexity and mapping the enterprise into autonomous bite-size pieces that can be delivered in an agile and iterative manner.
I believe business value needs to be measured not only in ROI (return on investment) but also in TTV (time to value). TTV is a measure of how quickly the enterprise sees value from having undertaken the enterprise architecture exercise. Most organizations are initially skeptical of enterprise architecture, and the shorter you can keep the TTV, the sooner you can transform that skepticism into support. This builds momentum and further improves the chances of future success.
Mrinal Mitra notes the difficulty of partitioning business processes and points out that some partitioning might even require a business reorganization. This is an excellent point; partitioning does sometimes require business reorganization.
However, why are we doing an enterprise architecture in the first place? As KCassio Macedo points out in the previous comment, enterprise architecture must be tied to business value.
There is rarely any business value in merely documenting what we have today. We are more often trying to find a better way of doing things. This is where the business value of an enterprise architecture comes in. A partitioned enterprise is less complex. A less complex enterprise is easier to understand. The better we understand our enterprise, the more effectively we can use IT to meet real business needs.
When I am analyzing an existing enterprise for problems, I usually don't start by partitioning the enterprise, but by analyzing how well the enterprise is already partitioned. If it is already well partitioned, then there is little value in undertaking the partitioning exercise. If it isn't, then we can usually identify areas for improvement before we actually invest in the partitioning exercise.
Anonymous offers the analogy of barnacles: patches, reworks, and processes that accumulate over time and reduce a system that may have been well designed initially into a morass of complexity.
This is an excellent point. It is not enough to create a simple architecture and then forget about it. Controlling complexity is a never-ending job. It is a state of mind rather than a state of completion.
In physics, we have the law of entropy, which states that a closed system tends toward a state of maximum disorder unless energy is continually put into the system to control the disorder. This is just as true for enterprise and IT architectures as it is for other systems.
We have all seen IT systems that were initially well designed, and then a few years later, were a mess of what Anonymous calls barnacles. Enterprise architecture is not a goal that is attained and then forgotten. It is a continuing process. This is why governance is such an important topic in enterprise architecture. In my view, the most important job of the enterprise architecture group is understanding the complexity of the enterprise and advocating for its continual control.
Anonymous also points out that complex IT solutions often reflect poorly conceived business processes. Quite true. This dovetails nicely with Mrinal Mitra's observation above. And it is also why we need to see enterprise architecture as encompassing both business processes and IT systems.
Alracnirgen points out that the many people involved in projects, each with their own perspective and interpretation, are a major source of complexity.
This is another good point. There are enterprise architecture methodologies that focus specifically on addressing the issues of perspective, interpretation, and communication. Perspective, for example, is the main issue addressed by the Zachman Framework. Communication is the main issue addressed by VPEC-T.
My own belief is that we must first partition the enterprise, focusing exclusively on the goal of reducing complexity. Interpretation will still be an issue, but we can limit the disagreements about interpretation to only those that directly impact partitioning. Once we have a reasonable partition, we can then focus on perspective, interpretation, and communication both within those autonomous regions and in the broader picture.
Terry Young asks if the relationship between complexity and cost is linear. Terry’s experience is that “doubling the complexity more than doubles the cost to build and maintain the system.” This relationship between complexity and cost that Terry brings up is very important.
I agree with Terry that the relationship between complexity and cost often appears to be non-linear; however, I believe this is somewhat of an illusion. Let me explain.
I believe that the relationship between complexity and cost is, in fact, linear. However, the relationship between functionality and complexity is non-linear. Adding a small amount of functionality to a system greatly increases the complexity of that system. And since complexity and cost are linearly related, adding a small amount of functionality to a system therefore greatly increases its cost. It is this exponential relationship between functionality and cost that I believe Terry is observing.
Just to give you one example, Cynthia Rettig wrote a paper in the MIT Sloan Management Review (The Trouble with Enterprise Software) in which she quoted studies showing that every 25% increase in the functionality of a software system increases the complexity of that system by 100%. In other words, adding 25% more functionality doubles the complexity of the software system.
Let's assume that we start out with a system that has 100 different functions. Adding 25% more functionality would mean adding another 25 functions to the original 100. Rettig tells us that this 25% increase in functionality doubles the complexity, so the system with 125 functions is now twice as complex as the one with 100 functions. Another 25% increase in functionality gives us a total of 156 functions with four times the original complexity. Another 25% increase gives us a total of 195 functions with eight times the original complexity.
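To make the compounding concrete, here is a minimal sketch in Python (purely illustrative, and assuming nothing beyond Rettig's rule that every 25% increase in functionality doubles complexity):

```python
# Illustrative only: compound Rettig's rule that every 25% increase
# in functionality doubles the complexity of a software system.
functions = 100   # the starting system has 100 functions
complexity = 1    # normalize the starting complexity to 1

print(f"{functions} functions -> {complexity}x complexity")
for _ in range(3):                       # three successive 25% increases
    functions = round(functions * 1.25)  # 125, 156, 195 functions
    complexity *= 2                      # each increase doubles complexity
    print(f"{functions} functions -> {complexity}x complexity")

# Output:
# 100 functions -> 1x complexity
# 125 functions -> 2x complexity
# 156 functions -> 4x complexity
# 195 functions -> 8x complexity
```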
This analysis shows us that by doubling the functionality of the system, we increase its complexity (and cost) roughly eightfold. If we continue this analysis, we discover that tripling the functionality of the original system increases its complexity by a factor of 126. In other words, System B, with three times the functionality of System A, is 126 times more complex than System A. And, since complexity and cost are linear, System B is also 126 times more expensive than System A.
But this is the pessimistic way of looking at things. The optimistic way is to note that by taking System B and partitioning it into three sub-systems, each roughly equal in functionality to System A, we reduce its complexity (and cost) to less than 3% of where it started.
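Again as a purely illustrative sketch, and using only the figures quoted above, the partitioning payoff works out like this:

```python
# Illustrative only, using the figures from the text: System B has
# three times the functionality of System A and, because of the
# non-linear relationship above, 126 times its complexity and cost.
complexity_A = 1
complexity_B = 126

# Partition System B into three sub-systems, each roughly equal in
# functionality (and therefore complexity) to System A.
complexity_partitioned = 3 * complexity_A

ratio = complexity_partitioned / complexity_B
print(f"Partitioned complexity: {ratio:.1%} of the unpartitioned System B")
# Output: Partitioned complexity: 2.4% of the unpartitioned System B
```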
If you are trying to explain the value of partitioning to your CEO, this argument is guaranteed to get his or her attention!