Wednesday, October 21, 2015

Why Big IT Systems Fail

Small IT systems usually deliver successfully. They are delivered on time and on budget. When they are delivered, they usually meet the needs of the business, they are secure, they are reliable, and they are easy to change.

Large IT systems usually do not deliver successfully. They are delivered late and over budget, if they deliver at all. If delivered, they usually fail to meet the needs of the business, they are rife with security problems, they fail frequently, and they are hard to change.

Why are we so good at delivering small IT systems and so bad at delivering large ones? The obvious answer is that large systems are more complex than small systems. But that is not the problem. 

The problem is not the fact that IT systems get more complex as they get larger. The problem is how they get more complex. Or more specifically, the rate at which they get more complex.

The complexity of an IT system increases at a rate we describe as exponential. For most modern IT systems, such as service-oriented architectures, the exponential increase is driven by the number of dependencies in the system. As the system gets bigger, the number of dependencies increases. As the number of dependencies increases, the complexity increases. But the increase in complexity is not linear; it is exponential.
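A rough sketch shows why dependency counts outpace system size. Assume, purely for illustration, the worst case in which every service may depend on every other service:

```python
def potential_dependencies(n_services: int) -> int:
    """Worst-case number of pairwise dependencies among n services,
    assuming every service may interact with every other."""
    return n_services * (n_services - 1) // 2

# Doubling the number of services roughly quadruples the dependencies:
# 5 -> 10, 10 -> 45, 20 -> 190, 40 -> 780.
for n in (5, 10, 20, 40):
    print(n, potential_dependencies(n))
```

The pairwise count alone grows quadratically; once chains of dependencies and combinations of states are considered, the growth is steeper still.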

The difference between a linear increase and an exponential increase is critical. 

An example of a problem that increases linearly is a leaky faucet. Say a faucet leaks at a rate of one ounce per hour and the water is going into a clogged sink that can hold 20 ounces. After three hours, a three-ounce container will empty the sink. If you don't get to the sink for ten hours, you know you need to bring a ten-ounce container to empty the sink. The water leaks out at a steady rate. It doesn't start leaking faster just because more water has leaked.

But think of a forest fire. Forest fires increase at an exponential rate. Say in the first minute of the fire it has engulfed one square foot. You cannot assume that in twenty minutes the fire will have engulfed twenty square feet. That is because forest fires spread exponentially; the bigger they get, the faster they spread.

The mathematics of IT complexity follow the mathematics of forest fires. Say we are building an IT system at the rate of one function per week. It will take almost one year to reach 100,000 standard complexity units (SCUs). But it only takes 10 more weeks to reach the next 100,000 SCUs. And then only 7 more weeks to reach the next 100,000 SCUs. By the end of the second year we are adding more than 30,000 SCUs per week!
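The shrinking intervals can be reproduced with a simple model. Assume, purely for illustration, that complexity grows as a power of the number of functions delivered, C(f) = k·f^p; the constants below are not measured data, only values chosen so the curve roughly matches the figures quoted above:

```python
def complexity(functions: int, k: float = 0.0351, p: float = 3.8) -> float:
    """Assumed complexity (in SCUs) after delivering this many functions.
    The constants k and p are illustrative, not measured."""
    return k * functions ** p

# Weeks (at one function per week) needed to reach each 100,000-SCU milestone.
milestones = []
week = 0
for target in (100_000, 200_000, 300_000):
    while complexity(week) < target:
        week += 1
    milestones.append(week)

print(milestones)  # → [50, 60, 67]
```

With these constants the first 100,000 SCUs take about 50 weeks, the second 10 weeks, and the third only 7: the signature of superlinear growth.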

Except that we won't, because this rate of complexity increase is unsustainable. Just as a forest fire will eventually burn itself out once it has consumed all possible fuel, so will an IT system. It will grow until the resources are no longer available to support the massive complexity increase. At that point, it will do what all complex systems do when they reach a level that is no longer sustainable: it will collapse.

Does this mean that it is impossible to build large IT systems? No, it doesn't. It does mean that we need to figure out how to attack the complexity growth. We can't prevent the IT system from getting more complex (that is impossible), but we do need to figure out how to make the complexity increase linearly rather than exponentially.

In other words, we need to figure out how to make IT systems behave more like leaky faucets and less like forest fires.

We will email you alerts when new IT complexity related blogs or white papers are available. Subscribe <here>.

You can learn more about our IT Simplification Initiative (ITSI) <here>.

Photo of the forest fire is by the U.S. Forest Service, Region 5, made available through Creative Commons and Flickr. The photo of the faucet is by John X, also made available through Creative Commons and Flickr.

Monday, October 19, 2015

Article Alert: The CIO Perspective on IT Complexity

A new article alert by Roger Sessions

Article Name: The IT Complexity Challenge: A Trustmarque study into the state of IT complexity from the CIOs’ perspective.
Authors: None given.
Publisher: Trustmarque
Date of Release: October 2015
Registration requirements:  Nominal registration information required.

Main Points of Article

IT complexity is a huge challenge for most CIOs. 93% of CIOs believe that IT complexity has increased, and 66% believe that the cloud has increased complexity, more than for any other cited factor. This is especially interesting given that the cloud was touted as a way of simplifying IT.

Companies are now recognizing the problems complexity causes. (“Organizations are craving simplicity.”) CIOs don’t know how to deal with IT complexity (“For CIOs, the end result... is confusion over which technologies and services will actually help them simplify their IT landscape.”) But despite the confusion about how to simplify IT, 79% of CIOs think that simplifying IT is a priority.

IT complexity is causing many problems that directly impact the business. IT complexity is a major contributor to security problems, with almost all CIOs (87%) agreeing that IT security is a challenge. IT complexity also makes it difficult to respond to business needs, with almost all CIOs (89%) agreeing that simplifying IT is at odds with driving innovation.

IT simplification is not a luxury, it is a necessity. But it is a necessity few CIOs are equipped to deliver. 80% believe their organizations lack the in-house skills needed to deliver projects at the speed required. The article concludes “What the modern CIO needs is to simplify the IT at their disposal, yet this is a huge challenge for IT departments to do it all on their own.”

My compliments

All in all, this is a well-written article that makes two points clear: CIOs understand they have a problem with IT complexity, and they have little idea what to do about it.

I also agree that IT complexity causes security problems and makes responding to business needs difficult. I would add to these two problems a host of others, including reliability and cost of delivery. I also agree that few CIOs are equipped to respond to the challenges of IT complexity.

Most CIOs will benefit from reading this article, if only to understand that they are not the only ones struggling with the problem of IT complexity.

My criticisms

The article gives no guidance as to how to respond to the problem of IT complexity other than to engage Trustmarque, and there is no information as to how Trustmarque will help organizations achieve IT simplification.


Roger Sessions has no interest, financial or otherwise, in the article discussed above. The article was chosen based solely on his judgement as to the value of the article to his readers.

Do you know about a recently released article you think Roger should cover? Or do you have information about a recent highly complex project you think he should write about? Anonymity guaranteed if desired. Drop him a note at


Subscribe to Roger Sessions’s alerts and articles at

This article alert is by Roger Sessions, probably the leading expert on IT complexity. Our approach to IT simplification is our IT Simplification Initiative (ITSI). You can read about ITSI at

The road to IT Simplification begins with a single ITSI step.

Tuesday, August 4, 2015

The Three Headed Dog

Hercules and Cerberus

Vulnerable, Unreliable, and Inflexible Systems: Three Symptoms, One Disease

As IT systems increase in size, three things happen. Systems get more vulnerable to security breaches. Systems suffer more from reliability issues. And it becomes more expensive and time-consuming to modify those systems. This should not be a surprise. If you are like most IT professionals, you have seen this many times.

What you have probably not noticed is an underlying pattern. These three undesirable features invariably appear together. Insecure systems are invariably unreliable and difficult to modify. Secure systems, on the other hand, are also reliable and easy to modify.

This tells us something important. Vulnerability, unreliability, and inflexibility are not independent issues; they are all symptoms of one common disease. It is the disease that is the problem, not the symptoms.

The disease is complexity. System complexity not only results in vulnerability, unreliability, and inflexibility, but a host of other problems such as budget overruns, deadline slippages, and poor business alignment. If you have ever had to deliver large IT systems, you are also painfully familiar with these issues.

System complexity spreads like most diseases: exponentially. As system size increases, system complexity increases faster. The rule of thumb is that when a system grows by twenty-five percent in functionality, it doubles in complexity. Once complexity doubles, security breaches double, system outages double, and the cost of making future changes doubles.
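The rule of thumb implies a specific growth curve: if a 25% increase in functionality doubles complexity, complexity grows as functionality raised to the power log(2)/log(1.25), roughly 3.1. A minimal sketch, with the exponent derived from that rule alone:

```python
import math

# Exponent implied by the rule of thumb: +25% functionality => 2x complexity.
EXPONENT = math.log(2) / math.log(1.25)  # about 3.106

def relative_complexity(functionality: float) -> float:
    """Complexity relative to a baseline system with functionality = 1.0."""
    return functionality ** EXPONENT

print(round(relative_complexity(1.25), 2))  # → 2.0 (25% bigger, twice as complex)
print(round(relative_complexity(2.00), 1))  # → 8.6 (twice as big, ~8.6x as complex)
```

Under this rule, doubling a system's functionality multiplies its complexity by roughly eight and a half.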

We are fortunate. In the last decade, we have made tremendous strides in understanding IT complexity. We now have mathematically grounded models for understanding complexity. We have metrics for measuring complexity. And we have directed methodologies for maximizing desirable functional growth while minimizing undesirable complexity growth. With these models, metrics, and methodologies, we are finally in a position to make complexity related IT problems a distant memory.

As we better manage the disease of complexity, we also better manage the symptoms of complexity: vulnerability, unreliability, inflexibility, and an assortment of others.

The irony is that if you want to make systems more secure, flexible, and reliable, you won’t do it by making systems more secure, flexible, and reliable. At least, you won’t get far doing that. Sooner or later, you need to attack the disease that is the underlying problem. That disease is complexity.

Complexity is the cancer of IT. That is the bad news. The good news is that we no longer need to be victims of complexity. We have models to understand complexity, methodologies to eliminate it, and tools to make sure it doesn't return. And that means that we can now create large IT systems that are also secure, reliable, and flexible. Not to mention less complex.

- Roger Sessions, Austin, Texas

Wednesday, February 12, 2014

Wake Up Call for the Banking Industry

In a discussion thread on the LinkedIn group Simpler IT, Marc Lankhorst mentioned that the Dutch Bank (the Dutch banking regulatory board) recently came out with a new report that discussed the stability of Dutch banks. The report is titled Naar een Dienstbaar en Stabiel Bankwezen, or To a Serviceable and Stable Banking System (as translated by Bing).

Appendix 7 of the report discusses the critical relationship between banking IT systems, Enterprise Architecture, and complexity management. The report is in Dutch. I ran it through Google Translate. The translation was very rough, but even in the Google translation, it was obvious that the report was nothing short of a wake-up call to the banking industry.

One of the members of the Simpler IT group arranged for a group of humans to translate Appendix 7. This group did a great job and kindly gave me permission to reprint their translation here. The primary translator was Sasja McCann with help by Andi McCann and Peter Mcelwaine-Johnn.

The original (in Dutch) is by the Dutch Ministry of Finance and is here.

If you would like to discuss this report, I recommend discussing this on the appropriate thread at Simpler IT.

Appendix 7: Resolution and Recovery of IT systems in Banking

Information Technology (IT) is critical for many businesses and non-profit organisations. Business processes are so dependent on automated information systems that, in many cases, the performance of those systems is largely responsible for the success of an organisation. Many business processes can no longer operate without information systems.

This is no different in the financial sector, and particularly the banking sector. Banks are sometimes referred to as glorified IT companies. IT plays a critical role in nearly all banking processes and has done so for a long time. As early as the 1960s, bulk processes such as the automated payment system were automated on a large scale. Over the years, most banks have created a variety of information systems, most of them geared towards automating administrative processes, but in the meantime many other forms of information processing have been created.

Information systems in the Dutch banking sector, and the IT function that is responsible for those systems, have the following characteristics:

  • Information systems are an integral part of the business, more so than in other sectors;
  • The budgets for their development and maintenance, and budgets for the IT function in general, are therefore correspondingly higher;
  • The proportion of older (legacy) systems is still relatively high, with relatively high additional maintenance costs;
  • The complexity of systems is relatively high, partly due to their relative old age and the dependency of banking processes on those systems (“everything is dependent on everything else”);
  • The diversity of systems is relatively high, as is the number of systems;
  • IT functions are generally quite mature; a lot has been invested in processes and personnel;
  • Personnel working in the IT function are well-educated and highly experienced;
  • The management and maintenance of information systems is often outsourced to specialised IT companies (e.g. IBM, Accenture, Cognizant) and is often operated from India or Eastern Europe;
  • Information systems extend to the customers, both business and retail; much use is made of “electronic banking”;
  • The customer is an extension of the information systems of the bank, and partly for this reason IT in the banking sector must comply with strict security regulations. This aspect is an important component of the trust customers have in a bank.

Information technology is, in general, characterised by big changes and fast dynamics. For the purpose of this report it would be too much to discuss social media, cloud computing and big data in detail, but obviously banks will continue to invest in these areas, not only to be able to offer an attractive proposition to their customers and shareholders, but also to continue to comply with laws and regulations. At the same time all these IT developments offer the chance to reduce the complexity of information systems and to enhance their effectiveness. However, these developments do all need to comply with the relevant IT Governance.

In the remainder of this text we discuss the resolution and recovery of IT systems in the context of M&A activity. We describe some principles (the preconditions which we discussed in the previous paragraph) to which information systems must comply in order to be capable of resolution and recovery (i.e. splitting IT systems). We start with a brief introduction to the discipline to which these principles belong, called Enterprise Architecture. Enterprise Architecture is a tool for the governance, including IT Governance, of an organisation.

Enterprise Architecture

An Enterprise Architecture (EA) is a coherent set of principles, models and patterns focused on the design, development and management of an organisation. An Enterprise Architecture is like a blueprint (or map) of an organisation and its information systems. Strictly speaking EA is not a specific IT tool – in practice, however, it is a key tool to assure IT Governance. It describes the business functions and processes, their relationships and their information needs, and it outlines the information systems that meet that need.

An Enterprise Architecture structures the IT landscape, makes it possible to describe the current and necessary information systems in an orderly and consistent manner, and to take decisions based on these descriptions. These decisions are aimed generally at the new development, modification or replacement of information systems.

The discipline that deals with EA has developed in recent years in response to the increasing complexity of existing information systems, and the associated problems of large, unmanageable IT projects and dilemmas that many organisations face as a consequence of the fast dynamics and speed of information technology.

Due to the structuring and complexity-reducing character of Enterprise Architecture, this instrument is the means to achieve resolution and recovery of information systems. 

Enterprise Architecture and Dutch banks

Because of the aforementioned characteristics of information systems in the banking sector, Enterprise Architecture is highly relevant to banks. This is the reason why most Dutch banks have invested in the development of Enterprise Architecture functions and departments.

In theory, the banks already have a tool that enables high-quality information systems. One aspect of high quality is that resolution and recovery of information systems can take place in a controlled manner. However, given the quality problems that banking systems face at the moment, the reality is often different: insufficient availability, high security risks and lack of maintainability. This contributes to the high maintenance and adjustment costs of banking information systems and also means that successful resolution and recovery is very difficult to achieve.

Why then, given all the promises, is Enterprise Architecture still underutilised? Reasons for this are:

  1. Opportunism of the ‘business’: driven by circumstances, “quick and dirty” information systems are often developed that do not conform to the EA. These systems usually live longer than originally envisaged. It is often these systems that cause the most problems.
  2. Backlog: we’ve already highlighted the legacy problems of banks. It takes a lot of time and effort to clean up and replace legacy systems. 
  3. Unnecessary complexity: sometimes there is an atmosphere of mystique around Enterprise Architecture that makes it unnecessarily complicated, resulting in lack of understanding by the people that need to understand it. Furthermore, the programmes that are implemented through Enterprise Architecture are often large and complex, which increases the risk of failure. 
  4. Insufficient overview: partly because of the complexity and scale of information systems there is no clear overview to actually develop a clear ‘map’. The result is often a very complex diagram that no one understands anymore. 
  5. Mandate: The staff in the Enterprise Architecture discipline (“architects”) have insufficient mandate from the organisation to achieve effective “compliance” with the architecture. Sometimes architects are not sufficiently able to express (the importance of) EA. 
  6. Contracts and Service Level Agreements: vendors are sometimes unable to comply with EA or do not want to comply, e.g. if cost justifications are introduced. Until recently, there were no standards for suppliers or banks to adhere to. 
  7. Each bank has in the past tried to re-invent the wheel at least once, under the assumption that banking processes differ greatly. Obviously this is not the case. It has, however, led to costly programmes and projects that have resulted in a healthy apathy towards IT at senior levels within the banks. 
The last two reasons led to the realisation that there is a need for EA standards for banks. This standard has recently been developed by the Banking Industry Architecture Network (BIAN), established by a number of major banks, along with several established IT vendors[1]. In the Netherlands, ABN AMRO, ING and Rabobank are members of BIAN. Other members are several European, Asian and American banks and its membership is expanding rapidly. The standard, the so called BIAN model, describes all the services that a bank offers, including IT support required for this. The advantage of such a standard is that banks do not have to reinvent the wheel themselves. This not only reduces costs but also increases the quality of the IT landscape, and facilitates M&A activity amongst banks. Figure 1 shows the complete model (at the highest level)[2].

Figure 1 BIAN Service Landscape 2.5 

Resolution and Recovery Principles

Regarding unbundling, an Enterprise Architecture should give priority to the following three principles. This means that all the information systems of a bank are structured and arranged such that they conform to these principles. Note that the principles can be further “unravelled” - in order to avoid complexity as much as possible, we describe them at an aggregate level in this report.

We have sought to minimize the number of principles. This does not mean that we discourage additions or refinement of the above three principles – in practice, banks often use more principles. A minimal set enables clarity, and also allows for the acceptance and implementation of the EA principles.

Principle 1: Compartmentalisation of information systems.

The background of this principle is that business functions/departments must be able to operate as independently as possible from each other and that the information system of one function does not interfere with that of another function. The bank defines its business functions to be as detailed as possible, and also defines the relationships between business functions as clearly as possible. The information systems of a business function should not support other business functions, but communicate (via so-called “services”) and exchange data with information systems from other business functions – they are compartmentalised. Compartmentalisation is achieved in practice by, inter alia:

  • Virtualisation of information systems, which means that users share hardware and software in a controlled way; special (virtualisation) software ensures the compartmentalisation, and authorisation and authentication play an important role in this;
  • Developing and analysing information systems with a “service-oriented” view; “service-orientation” refers to ensuring a system is developed with the end-user and the purpose of the service in mind;
  • Developing information systems using components with well-defined functionality; each component should provide a clearly defined service, and components should be standardised, documented and reusable;
  • Layering information systems, so that, for example, the presentation of data is separated from the processing of data.

A beneficial side effect of compartmentalisation is the reduction of complexity, which in itself simplifies resolution and recovery. In addition, the number of links (interfaces) between systems is reduced, making maintenance easier. The success of compartmentalisation depends on carefully thought-through and well-documented Enterprise Architecture.
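The claim that compartmentalisation reduces the number of links can be made concrete with a back-of-the-envelope count. The figures below are hypothetical: a worst case in which every system may talk to every other, versus compartments that are fully linked internally but communicate externally only through services:

```python
def links(n: int) -> int:
    """Worst-case pairwise links among n fully interconnected systems."""
    return n * (n - 1) // 2

N_SYSTEMS = 24

# Monolithic landscape: any system may link to any other.
monolithic = links(N_SYSTEMS)

# Compartmentalised: 4 compartments of 6 systems each, fully linked
# internally, with compartments communicating only via service interfaces.
COMPARTMENTS, SIZE = 4, 6
partitioned = COMPARTMENTS * links(SIZE) + links(COMPARTMENTS)

print(monolithic, partitioned)  # → 276 66
```

With these illustrative numbers, compartmentalisation cuts the worst-case interface count from 276 to 66; the exact figures depend on the partitioning, but the direction of the effect does not.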

Principle 2: Data has one owner who is responsible for the storage, description, sharing and destruction of the data.

This principle should ensure the quality of the data of a bank; for example, it should prevent inconsistencies and data unreliability caused by copying data and then editing the copied data. The data owner is responsible for the quality of the data. Data quality is crucial in any potential split of activity. In the case of a resolution of an information system as a result of M&A activity, two cases can arise:

  • The split entities are no longer part of the same holding. A predetermined copy of the data is made to be used by both entities. Each entity then applies principle 2 in their own entity. 
  • The split entities are part of the same holding. In that case, they can use the same data, and principle 2 still applies, i.e. one data owner.

Principle 3: An information system has one owner, who is responsible for both the quality of the information system and its components, as well as the quality of the services provided by the information system.

The application of Principle 3 ensures clarity of ownership of an information system. In any split this clarity is crucial. Even if there is no split, it is important that an information system has an owner, with a budget to develop the information system to a required level and to keep it there, to ensure business processes are supported optimally and that resolution and recovery is possible. Incidentally, this is also one of the guiding principles of Sarbanes-Oxley (SOX).


Earlier in this text we stated that many banks already use Enterprise Architecture, including resolution and recovery principles, and that specific roles, disciplines and processes have been defined. We argue that the Enterprise Architecture discipline needs to take a stronger position within the bank. This means that:

  • The staff in the discipline (architects) have excellent content and communication skills. They know the banking business, the information systems of the bank and the relevant information technology for the bank through and through, and can clearly convey that knowledge verbally and in writing. They are able to capture and define an Enterprise Architecture in understandable language and / or models, and can express the importance of Enterprise Architecture effectively.
  • The discipline reports to top management in the bank. Enterprise Architecture comprises the entire bank and the information provision of the whole bank – it is therefore important that this broad scope is reflected in the weight of the discipline within the organisation. The discipline not only has a close relationship with IT in particular, but also with the operation of the bank in general, and with the risk management discipline. A close relationship with the COO and CRO, in addition to the relationship with the CIO, is therefore necessary.
  • The Enterprise Architecture discipline has the mandate to test the current and future information systems against the Enterprise Architecture. The discipline also has the mandate to escalate to the highest level in the case of non-compliance, with the obligation to indicate what action should be taken to eliminate non-compliance. This mandate also extends to suppliers and vendors – it should be contractually specified that suppliers and vendors are to conform to the Enterprise Architecture.
  • It is advisable to vest accountability for the Enterprise Architecture discipline in one person: the Chief Enterprise Architect.


Following the above, we propose the following steps to ensure successful resolution and recovery of banking information systems:

  1. An important means of ensuring resolution and recovery is to establish Enterprise Architecture disciplines. Establish a number of clear principles, wherein the three principles as described in this document are a minimum. Become a member of an industry body or adhere to a standard in this area – BIAN seems obvious.
  2. Strengthen the Enterprise Architecture discipline in the bank by appointing a Chief Enterprise Architect with knowledge of the banking business and an overview of the IT landscape of the bank. 
  3. Let the Chief Enterprise Architect report to top management. 
  4. Make resolution and recovery the Chief Enterprise Architect’s responsibility, even if only with regard to the IT landscape.
  5. Give the Chief Enterprise Architect the mandate and the tools to assess changes and new developments in the IT landscape, to comment on them and, if necessary, to stop them.
  6. Give the Chief Enterprise Architect the mandate and the tools, including a number of enterprise architects with excellent communication skills and experience in the banking industry, to initiate activities that enable successful resolution and recovery of the IT landscape.
  7. Increase EA knowledge and skills of supervisors/senior management. This applies to risk management, the Supervisory Board and DNB (Dutch National Bank). It has been observed that the latter has little to no ability to test an Enterprise Architecture. In addition, currently there is no reference model to benchmark any testing. The aforementioned BIAN model can fulfill the role of this reference model.
Note that these measures are not only beneficial for the ability to successfully resolve and recover, but also increase the quality and maintainability of information systems in general.

[1] For more information, see

[2] Updated to show v2.5 of the BIAN model. Original report showed v2.0.

Wednesday, November 13, 2013

The Math of Agile Development and Snowmen

I was recently asked about the relationship between Agile Development and Snowmen. If you aren't familiar with The Snowman Architecture, see this earlier blog.

I'm not sure The Snowman Practice has much to offer Agile on small projects (say <$1M). These projects always seem to do reasonably well on their own. However, once a project goes much above $1M, the Agile approach can no longer keep up with the increasing project complexity.

This is where The Snowman Practice has much to offer Agile. The Snowman Practice offers a methodology to break a large project into smaller, highly targeted, autonomous projects that have minimal dependencies on each other and are highly aligned with the business architecture.

There is actually a mathematical explanation as to why Agile above $1M needs The Snowman Practice. As projects increase in functionality, their complexity increases exponentially. Agile, however, is a linear solution. This means that the amount of work an Agile team can produce is at best linearly related to the size of the team.

At small sizes, a linear solution (Agile) can contain an exponential problem (complexity). But at some point the exponential problem outgrows the linear solution's ability to provide containment. For Agile projects, this seems to happen someplace around $1M.

The Snowman Practice solves this problem by keeping the size of each autonomous subproject under the magic $1M crossover point. And that is a project size that is well suited to the Agile development approach, or any other linearly constrained approach.
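The crossover can be sketched numerically. All constants below are illustrative assumptions, chosen only to place the crossover near the $1M figure described above:

```python
def capacity(size_m: float) -> float:
    """Work a linearly scaling team can absorb on a project of size_m ($M).
    The coefficient is an illustrative assumption."""
    return 120 * size_m

def complexity(size_m: float) -> float:
    """Complexity burden, growing superlinearly with project size ($M).
    Both constants are illustrative assumptions."""
    return 100 * size_m ** 3.1

# Walk up in $100K steps until complexity outgrows capacity.
size_m = 0.1
while capacity(size_m) >= complexity(size_m):
    size_m += 0.1

print(f"crossover near ${size_m:.1f}M")  # → crossover near $1.1M
```

With these assumed constants the linear solution wins below about $1M and loses above it. The exact crossover depends entirely on the constants; the existence of a crossover does not.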


The Snowman photos are, in order of appearance, by Pat McGrath and smallape and made available via Flickr through licensing of Creative Commons.

Wednesday, August 21, 2013

Two Roads Converge?

by Roger Sessions and Richard Hubert


We (Richard Hubert and Roger Sessions) have a lot in common. We are both Fellows of the International Association of Software Architects (IASA). We have both written books and articles. And we are both well-known proponents of a particular approach to Enterprise and IT Architectures. For Richard, this approach is called Convergent Architecture. For Roger, this approach is called The Snowman Practice. But it may turn out that we have more in common than we thought. Our approaches may complement each other in some interesting ways. But first, let’s take a look at what each of us has been doing.

Introduction to Richard’s work

Since the mid-1990s I (Richard) have been developing and optimizing an architectural style that addresses the complexity of both IT systems and business processes. I call this holistic perspective Convergent Architecture (CA). I wrote about this in 2001 in my book Convergent Architecture (John Wiley, New York; ISBN 0471105600). CA includes properties that I consider to be inherent in any architectural style. The metamodel that I use includes the project design, the system design, and the business design. At a high level, this model is shown in the following diagram:

Figure 1. Coverage of a holistic architectural style

As you can see in the above diagram, the partitioning between Organization, Process, and Resource plays a significant role in the quality of the design. Experience and rules of thumb are adequate for many designs, but as systems get larger, a more formal approach is preferable, especially if it can be assisted by tools. This is where Roger’s work is a perfect fit.

Introduction to Roger’s work

I (Roger) have been looking at how to validate an architecture. To do this, I have developed a mathematical model for what an ideal architecture looks like and a methodology for delivering an architecture that is as close to that ideal as possible. The starting point for this is to define what we mean by “ideal.” My definition of an ideal architecture is the least complex architecture that solves the business problem. This means that we also need a metric for measuring complexity, which, fortunately, comes out of the mathematical model. You can read about this mathematical model in this white paper.
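One way such a complexity metric can be sketched is to assume, following Robert Glass's rule of thumb, that a 25% increase in functionality roughly doubles complexity, which gives complexity growing as about the 3.11 power of function count (since 1.25^3.11 ≈ 2). The exponent and the summation over partitions below are assumptions for illustration, not necessarily the formulas from the white paper.

```python
# Hypothetical complexity metric, assuming Glass's rule of thumb:
# +25% functionality => 2x complexity, so C(n) ~ n ** 3.11.

def complexity(n: int) -> float:
    """Relative complexity of a partition holding n business functions."""
    return n ** 3.11

def system_complexity(partition_sizes: list[int]) -> float:
    """Assume total complexity is the sum over independent partitions."""
    return sum(complexity(n) for n in partition_sizes)

# One monolith of 12 functions vs. four partitions of 3 functions each:
print(system_complexity([12]))        # the monolith
print(system_complexity([3, 3, 3, 3]))  # the partitioned system, far smaller
```

Under these assumptions, the partitioned system is dramatically less complex than the monolith with the same total functionality, which is exactly the effect a partition-driven methodology is trying to exploit.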

It turns out that when you discover the ideal architecture for a given problem, it almost always has a characteristic shape: a collection of business functionality sitting on top of a collection of services sitting on top of a collection of data. In addition, these three tiers are separated from other such collections by strong vertical partitions. There is a strong connection between business functions in the top tier, services in the middle tier, and data in the bottom tier. Where connections are required across the vertical partitions, they occur through asynchronous messages at the service level. This architecture is shown in the following diagram:

Figure 2. The Snowman Architecture Created By SIP

As you can see in the above diagram, the idealized architecture looks a lot like a snowman. The head, torso, and bottom of the snowman contain business functions, services, and data, respectively.
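The shape can be sketched in code. The names and message format below are invented for illustration; the point is the structural rule: each snowman owns its own functions, services, and data, and the only way across a partition is an asynchronous message at the service tier.

```python
# Hypothetical sketch of the Snowman shape. Names are illustrative only.
from dataclasses import dataclass, field
from queue import Queue

@dataclass
class Snowman:
    name: str
    business_functions: list = field(default_factory=list)  # head
    services: list = field(default_factory=list)            # torso
    data: dict = field(default_factory=dict)                 # bottom
    inbox: Queue = field(default_factory=Queue)              # async boundary

    def send(self, other: "Snowman", message: dict) -> None:
        # Cross-partition communication is asynchronous, service to service;
        # no snowman reaches into another's data tier directly.
        other.inbox.put({"from": self.name, **message})

billing = Snowman("Billing", ["invoice customer"], ["InvoiceService"])
shipping = Snowman("Shipping", ["ship order"], ["ShipmentService"])
billing.send(shipping, {"event": "order_paid", "order_id": 42})
print(shipping.inbox.get())
# -> {'from': 'Billing', 'event': 'order_paid', 'order_id': 42}
```

The design choice being modeled is the one the diagram shows: strong vertical walls, with the queue standing in for whatever asynchronous messaging infrastructure a real system would use.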

The methodology I (Roger) have developed to drive this architecture is called SIP, for Snowman Identification Process. Some of you may know it under its old name, Simple Iterative Partitions. You can get a good overview of The Snowman Practice from this video.

Synergy discovery

When we compared the architecture driven by CA’s architectural metamodel (Figure 1) to the architecture driven by the highly theoretical SIP (Figure 2), it was clear to us that significant commonalities are at hand.

Both approaches are based on the fundamental Enterprise Architecture principle of IT-business alignment. Both approaches define best practices concerning how this alignment can be effectively achieved and measured. Additionally, both approaches are based on rules and patterns that apply to the simplification of any system, whether large or small, organizational or purely technical. The Convergent Architecture, for instance, has been used to design IT organizations, which then use the same approach to design and simplify IT systems (this is conceptual isomorphism).

Lastly, and most important of all, we recognized that the SIP approach can be applied to mathematically support and objectively drive both architectural styles. SIP thus enhances the design approach and process, serving both as a tool and as the mathematical grounding needed to ascertain the simplest (least complex) of all possible system and organizational structures.

In essence, we now have CA showing that the SIP theory really does deliver a partition that stands up to the most demanding examination. And at the same time we have the SIP mathematics defining vertical boundaries for a CA architecture that are not only mathematically sound, but also as simple as possible.

The Future

Where will this take us? To be honest, we are still discussing this. But the possibilities are intriguing: two mature methodologies with such strong synergy that the theoretical and the model-driven approaches arrive at complementary solutions. Stay tuned for more information.