Thursday, April 2, 2009

Adaptability of Large Systems

While on Twitter the other day, I came across a tweet from Noel Dickover about large systems being less adaptable than small systems. This began a tweet exchange that, I thought, raised some important issues about adaptability.

Many people believe that large systems are harder to adapt than smaller systems. And, in general, this is true. But system adaptability is not a function of size; it is a function of architecture. When a large system is partitioned correctly, so that it is composed of a number of smaller, autonomous systems, then it MAY be more adaptable than a single smaller system.

I say "may", because its adaptability depends on how well it has been partitioned. The key is not whether the partitioning is good technically (say, mapping well to an SOA), but how well the technical partitions overlay on top of the business partitions of the organizations.

In other words, when a large system is built of autonomous smaller systems AND those smaller systems map well to the autonomous processes that occur naturally within the business, then, and only then, do you have a large system that is highly adaptable.

The reason so many systems fail to achieve this (even when they do manage a reasonable technical partitioning) is that the technical partitioning MUST BE driven by the business partitioning. This requires a partitioning analysis of the business that is completed before the technical architecture of the system is even begun.

This business partitioning analysis is best done by representatives of both the business and the IT organization. The business group has the best understanding of how functions relate to each other and the technical group has the best understanding of how this business partitioning analysis will eventually drive the technical partitioning architecture.

Since both business and technical experts are involved in this exercise, I place this work in the common ground between business and technology, the watering hole that we call enterprise architecture. But it is enterprise architecture with a very specific focus: driving technical partitioning from business partitioning analysis with the eventual goal of highly flexible systems that are pegged closely to the business need and mirror closely the business organization.
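To make the idea concrete, here is a minimal sketch in Python of a large system composed of autonomous subsystems that each mirror one business partition. The partitions named here (Ordering, Billing) and the Storefront composition are hypothetical, invented purely for illustration; the point is that adapting one part of the business means reworking one partition without disturbing the others.

# A hypothetical sketch only: the partitions and names are invented to
# illustrate the partitioning argument, not taken from any real system.

from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Ordering:
    # This partition owns order data and order rules, and nothing else.
    _orders: Dict[str, List[str]] = field(default_factory=dict)

    def place_order(self, customer_id: str, items: List[str]) -> str:
        order_id = "ord-%d" % (len(self._orders) + 1)
        self._orders[order_id] = items
        return order_id


@dataclass
class Billing:
    # A separate partition: pricing rules can change without touching Ordering.
    _invoices: Dict[str, float] = field(default_factory=dict)

    def invoice(self, order_id: str, amount: float) -> None:
        self._invoices[order_id] = amount


class Storefront:
    # The "large system" is a thin composition of autonomous partitions, each
    # mirroring a business function. Adapting the business (say, a new billing
    # model) means replacing one partition, not reworking the whole.
    def __init__(self, ordering: Ordering, billing: Billing) -> None:
        self.ordering = ordering
        self.billing = billing

    def sell(self, customer_id: str, items: List[str], price: float) -> str:
        order_id = self.ordering.place_order(customer_id, items)
        self.billing.invoice(order_id, price)
        return order_id


if __name__ == "__main__":
    shop = Storefront(Ordering(), Billing())
    print(shop.sell("cust-42", ["widget"], 19.99))

The contrast is with a system partitioned along purely technical lines (a shared data layer, a shared business-logic layer), where the same business change cuts across every layer at once.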

2 comments:

Anonymous said...

Hi Roger, thanks for following up the tweets with a post.

I still have major differences with your tweet, "if large systems are partitioned correctly, they are MORE flexible than small systems" but it sounds like you've backed off on that. A large automotive company that produces 1 million cars a year, if flexible, could shift rather quickly to a different line of vehicles, but probably wouldn't make the shift to a software development house that quickly. Nor would they quickly become a customer service organization when their business is about manufacturing. This would be a relatively easy shift for a 10-30 person company, no matter how poorly it was partitioned.

That said, my larger comment would be that the degree of adaptability a large scale system has depends on its structure (of which partitioning is a part) and its interaction with its larger operating environment. In cybernetic terms (this is where my background lies - apologies if I drop into my own particular geek speak), a system or organization is adaptive to the degree it maintains homeostasis with its operating environment. If the operating environment is relatively static, an "adapted" organization functions quite well. It is only organizations operating in transient environments that need to be more flexible and adaptive.

The best model I've seen which describes what is necessary to maintain adaptiveness is Stafford Beer's Viable Systems Model. It also does wonderfully at discussing what is necessary from a partitioning sense. If the organization is partitioned into discrete operations, which are, in a sense, mini-companies (holographic mappings of all five critical system functions in Beer's VSM sense), then it is far more adaptive than one partitioned into specialized parts. An organization decomposed into specialized parts often ends up looking like Gareth Morgan's "organization as machine" metaphor (Images of Organization, 1986). In this instance, when encountering a new problem, organizations structured like machines (we call them bureaucracies) won't reorganize to accommodate the change; they will simply add a new function to handle it (they don't redesign the carburetor when they need to run cleaner, they just stick an emissions system on). After a system like this has been in existence for a while, as Kenneth Boulding would say, "Things are the way they are because they got that way." All sorts of tacit responsibilities emerge, and the structure starts to look very different from anything resembling clear business purposes.

Unfortunately, in looking at the Federal Govt (the beginning of our tweets), the Federal Govt's partitioning is largely determined by statute (not political influence). In DoD, where I spend most of my time, when a new statute is enacted by Congress (say, to fix some issue in acquisition), DoD responds by adding new directorates to handle the new requirements. Unfortunately, most new requirements overlap with many existing requirements. To handle the overlapping requirements, everyone engages in "coordination" processes prior to taking any departmental position. This coordination process gets more and more convoluted, with very little opportunity for improvement. In the two other agencies I've worked for, the same dynamic exists. Simply put, this problem cannot be solved merely by conducting a thorough business analysis. The dollars spent on architectures to do just that, while an incredible boon to govt contractors (I would call them the ultimate middle class welfare projects), have done little to address the partitioning problem.

This behavior is in fact far more the norm with large scale systems than the exception. Your point, I think, is that it is possible for large scale organizations to be adaptive - I agree that it's possible, it just doesn't happen all that often. The concern all of us working on the Gov20 stuff have is how we get these truly breakthrough technologies to make a difference when the organization itself is not agile. A systems person would say that the key is to find the leverage points which have the ability to transform existing systems. This, I think, is a far better approach for transforming the Federal govt than a business analysis. Two leverage points immediately come to mind. The first is to embed the notion that "Access Replaces Reporting" - meaning we need to start providing access to the working files rather than engaging in detailed reporting processes that traverse up and down the hierarchy. The second is to start favoring informal communications over formal ones - again, this provides a clear break from the ossified formal communication channels that characterize govt communication today.

MrChips said...

On Wikipedia the other day I stumbled upon something called the OBASHI methodology and its associated Business and IT diagrams. Have you heard of it and/or used it? Would it solve any of the points you raise?