Thursday, October 18, 2012

Snowman Architecture Part Two: Economic Benefits


This is the second part of a four part blog about The Snowman Architecture. The first part was The Snowman Architecture: An Overview. In this blog, I will be discussing the economic benefits of the architecture. But don't read this until you have read the overview!

In the next installment (part three) I will discuss The Technical Benefits of the Snowman Architecture. The fourth part, by the way, will be The Criticisms, in which I will describe the many criticisms of the Snowman Architecture and why they are all wrong.

Originally I had planned to cover all of the benefits (economic and technical) in one blog. It turns out there are just too many benefits for one blog, so I have had to separate them into those that are more economic in nature (this blog) and those that are more technical in nature (the next blog).

Review

The Snowman Architecture breaks down a large IT system into small vertically partitioned subsystems called snowmen. These snowmen interact with each other through asynchronous messages. Snowmen are designed to be as autonomous as possible from each other using a design methodology known as Simple Iterative Partitions¹ (SIP). Figure 1 shows an IT system designed using the Snowman Architecture.


Figure 1. Snowman Architecture

The Snowman Architecture is in contrast to a traditional architecture that uses a methodology such as TOGAF² to create a horizontally partitioned system. Figure 2 shows an IT system designed using traditional methodologies.


Figure 2. Traditional Horizontally Partitioned Architecture

Points of Contrast

There are several contrasts that immediately jump out in comparing the Snowman Architecture to the traditional approach. 

The first contrast is in the orientation of the partitioning. The Snowman Architecture uses a strong vertical orientation to the partitioning. The traditional approach uses a weak horizontal orientation to the partitioning.

The second contrast is in the number of subsets in the partition. The Snowman Architecture supports an unlimited number of vertically oriented subsets (snowmen). The traditional approach has exactly three horizontally oriented subsets (business architecture, technical/SOA architecture, and data architecture).

The third contrast is in the strength of the partitioning. The strength of the partitioning refers to the porosity of the boundaries separating subsets. The more "stuff" that passes between subsets, the greater the porosity. Porosity weakens the partitions, so the greater the porosity, the weaker the partition. The Snowman Architecture partitioning is strong, indicated by the minimal number of connections between subsets. The traditional horizontal architecture partitioning is weak, indicated by the large number of almost random connections between subsets. 

Economic Benefits of Snowman Architecture

Okay, now that you remember the basic overview, let's look at the economic advantages of The Snowman Architecture.

Benefit 1: Linear Versus Exponential Complexity Curve

As an IT system gets larger, it gets more complex. This is because complexity is driven both by the amount of functionality in a system and by the number of connections in a system³. Both the Snowman Architecture and the traditional architecture get more complex as the system increases in size, but how they increase in complexity is quite different. The complexity of the traditional system increases exponentially. The complexity of the Snowman Architecture increases linearly.

For small IT systems, the difference between an exponential increase and a linear increase of complexity is not important. But as the size of the IT system exceeds $5M in cost, the difference becomes very important. 

Figure 3 shows the relationship between complexity and project size for a traditional architecture versus a Snowman Architecture.

Figure 3. Complexity of Traditional Architecture versus Snowman Architecture

As shown in Figure 3, the complexity of a traditional IT architecture increases exponentially. It starts low and then enters the Risk Zone (the zone in which project failure is likely) when the size hits somewhere around $8M. From there it rapidly ascends into the Failure Zone (the zone in which project failure is certain)⁴.

In contrast, the complexity of the Snowman Architecture starts low (as does the traditional architecture) and then increases with a shallow linear slope. At low numbers there is little difference between a shallow linear slope and an exponential one; in Figure 3, you can see that at project sizes under $1M there is effectively no difference between the Snowman Architecture and the traditional approach.

However, this changes quickly as the project size increases. Traditional architectures are already in the Risk Zone by the time they hit $8M, and by the time they hit $10M they are in the Failure Zone. In contrast, the shallow linear complexity slope keeps the Snowman Architecture comfortably in the Success Zone until well past $100M in project size. In fact, it isn't even clear that there is a size limitation with the Snowman Architecture.

The bottom line: a traditional architecture becomes likely to fail at around $8M, whereas a Snowman Architecture has a high probability of success even at $100M.
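
If you want to see the two curves for yourself, here is a minimal Python sketch. It uses the complexity formula C = F^3.11 from my white paper (see footnote 3), and it assumes, purely for illustration, that a snowman is capped at ten business functions and that the (deliberately minimal) connections between snowmen add negligible complexity.

def monolith_complexity(functions: int) -> float:
    """Complexity in SCUs of one large, unpartitioned system."""
    return functions ** 3.11

def snowman_complexity(functions: int, per_snowman: int = 10) -> float:
    """Complexity in SCUs when the same functions are split into snowmen.

    Each snowman holds at most per_snowman functions, so total
    complexity grows roughly linearly with the number of snowmen.
    """
    full, rest = divmod(functions, per_snowman)
    total = full * per_snowman ** 3.11
    if rest:
        total += rest ** 3.11
    return total

for f in (10, 50, 100, 500):
    print(f"{f:4d} functions: monolith {monolith_complexity(f):>13,.0f} SCUs, "
          f"snowmen {snowman_complexity(f):>9,.0f} SCUs")

At 100 functions the monolith carries over a million SCUs while the snowmen together carry about thirteen thousand. That gap is the entire story of Figure 3.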

Benefit 2: Return on Investment (ROI)

To compare the ROI of the Snowman Architecture versus the traditional horizontally partitioned architecture, let's take some reasonable project numbers for, say, a $20M project. 

Using a traditional architectural methodology (e.g. TOGAF), we can reasonably assume the $20M project will run to at least 200% of its budget and that, once lost opportunity costs are added in, the total will reach at least 400% of the original plan⁵.

Using the Snowman Architecture, we won't be doing a single $20M project; we will be doing some number of smaller projects of at most a few million dollars each. Projects of this size are well within the Success Zone (as shown in Figure 3). Projects in this zone typically have no overruns and no lost opportunity costs.

The Snowman approach requires an additional phase in the project life cycle, a pre-planning phase. This is where most of the work is done to design and plan the snowmen. In the worst case, this phase could add 10% to the overall cost of the project.

Of course, these numbers are just best guesses based on what I have seen of industry data. Feel free to plug in actual numbers from your own projects.  But based on these numbers, we can calculate the Snowman ROI.

Without using the Snowman architecture, we expect a total cost of

   $20M (planned cost)
+ $20M (overrun, bringing spend to 200% of budget)
+ $40M (lost opportunity costs, bringing the total to 400%)
-----------
  $80M (total cost)

With the Snowman architecture we expect a total cost of 

  $20M (planned cost)
+ $2M (10% overhead for Snowman preplanning)
---------
$22M (total cost)

The difference between the two approaches is

  $80M (Cost of traditional approach)
- $22M (Cost of Snowman approach)
---------
  $58M (Difference between approaches)

The ROI of using the Snowman approach is thus

  $58M (Difference in Costs) 
/   $2M (Added cost of Snowman Approach) 
X 100
--------
2900% (Calculated ROI)

The bottom line: the Snowman approach returns a 2900% ROI. A 2900% ROI is excellent by any measure.
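
If you would rather plug in your own numbers, the whole calculation fits in a few lines of Python. The defaults below are just the illustrative guesses above, expressed as fractions of the planned cost.

def snowman_roi(planned: float,
                overrun: float = 1.0,   # overrun as a fraction of plan ($20M on $20M)
                lost_opp: float = 2.0,  # lost opportunity as a fraction of plan ($40M on $20M)
                preplan: float = 0.1    # Snowman pre-planning overhead
                ) -> float:
    traditional = planned * (1 + overrun + lost_opp)
    snowman = planned * (1 + preplan)
    added_cost = planned * preplan
    return 100 * (traditional - snowman) / added_cost

print(f"{snowman_roi(20):,.0f}% ROI")  # prints: 2,900% ROI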

Benefit 3: Intangible Benefits

There are many benefits to delivering a project on time other than eliminating the lost opportunity costs. It is hard to measure these benefits, but they certainly include the following:

  • Predictability of IT deliverables.
  • Increased trust between Business and IT.
  • Better ability to use IT as a strategic asset.

As you can see, there are compelling reasons favoring the Snowman Architecture over traditional approaches. The reduction in complexity is huge, and the ROI would make even the most seasoned CFO salivate. But the most compelling reasons favoring the Snowman Architecture may not be economic; they may be technical. For those benefits, you must wait for the next installment of this blog.

Footnotes

(1) SIP is a patented methodology for autonomy-optimized partitioning. It is described in a number of places, including the web short SIP Methodology for Project Optimization.

(2) TOGAF® is a methodology owned by The Open Group. It is described in the TOGAF 9.1 On-Line Documentation.

(3) If you are interested in the mathematical relationship between size, connections, and complexity, see my white paper The Mathematics of IT Simplification.

(4) I have written about the relationship between traditional IT project size and failure rates in a number of places including the web short The Relationship Between IT Project Size and Failure Rates.

(5) Unfortunately, we do not have good data on what these numbers are worldwide. These particular numbers came from averaging a number of large projects discussed in the Victorian Ombudsman Investigation into ICT-Enabled Projects (2011).

Acknowledgements

Snowman picture by CileSuns92

Saturday, September 1, 2012

Snowman Architecture Part One: Overview


Introduction

This is the first part of a three-part blog. The parts will be laid out as follows:
  • Part One: Snowman Overview. The basics of the Snowman Architecture and why I claim it is critical for enterprise architects.
  • Part Two: Snowman Benefits. Validation for the claimed benefits of the Snowman Architecture over traditional architectural approaches.
  • Part Three: Snowman Apologetics. The arguments against the Snowman Architecture and why they are wrong.

As Enterprise Architects, we have no lack of problems deserving our attention. We need to ensure our organizations are well positioned for the Cloud, can survive disasters, and have IT systems that can chassé in perfect time with the business.

And then there is the whole area of IT failures. Too many of our systems go over budget, are delivered late, and end up depressing rather than supporting the business. If you have been reading any of my work, you know all about this.

But what if there was one approach to architecture that could meet most of our needs and solve the lion's share of our problems? I believe there is. I believe there is a single architectural style that is so important, I consider it a fundamental enterprise architectural pattern. I call this the Snowman Architecture.

In my last blog, I talked about Radical IT Transformation, a transformation that redefines the relationship between the business and IT. The Snowman Architecture is the IT side of this radical transformation.

Fundamentals

If Snowman Architecture sounds too informal to you, feel free to refer to it by its formal name: Vertically Aligned Synergistically Partitioned (VASP) Architecture. Figure 1 shows the four main segments of a VASP architecture.


Figure 1. Basic Vertically Aligned Synergistically Partitioned (VASP) Architecture

With a little imagination (or with the help of Figure 2) you can see why I refer to a VASP architecture as a Snowman Architecture. 


Figure 2. Snowman Architecture

Now your first reaction to the Snowman Architecture is probably, "Hey, that looks just like a service-oriented architecture (SOA)." A typical SOA is shown in Figure 3, and you can see that all of the components of the Snowman Architecture also appear in an SOA.


Figure 3. Typical SOA

Snowman: SOA with Constraints

The best way to think of the Snowman Architecture is as an SOA with some very tight constraints. It is these constraints that are critical to addressing all of the issues I mentioned earlier, so let's go through them.

Constraint 1: Vertical Alignment.

The contours of the business architecture (Snowman head) define the contours of the technical, services, and data architectures.

In other words, there is a close relationship between the business, technical, services, and data architectures. Let's take these one by one.

At the technical level, there is a package of technical systems (Snowman torso) that implements the package of business systems (Snowman head). The technical package is complete with respect to the business package; that is, it fully implements the business package and implements nothing other than the business package.

This vertical alignment is respected down to the data level (Snowman bottom). In other words, there is a package of data that meets the needs of the package of technical systems (Snowman torso). This package of data fully meets the needs of the business package and doesn't meet the needs of any other package.

At the services level, each messaging relationship implements exactly one dependency at the business level. Further, every messaging relationship can be traced back to a business-level dependency.

Constraint 2: Synergistic Partitioning.

The functions in the business package (Snowman head) are synergistic with respect to each other. 

Since the contours of the business package (Snowman head) define the contours of the lower-level packages, it is important that the "right" functions be located together. The overall choice of which business functions should cohabit with which others should be directed at minimizing the overall system complexity. Elsewhere [1] I have shown that the least complex overall system is attained when the choice of cohabitation is based on a mathematical concept that I call synergy.

While the concept of synergy has a precise mathematical definition, it also has a pragmatic one. For those who don't care about the mathematics, just think of synergy as "closely related." That is, two functions are synergistic if they are closely related to each other, like deposit and withdraw. For those who do care about the mathematics, see my white paper [1].

Given these two constraints, you can see why I call this a Vertically Aligned Synergistically Partitioned Architecture. And given the complexity of that description, you can see why I prefer the term Snowman Architecture.
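
For readers who want something more concrete than "closely related," here is a toy Python sketch of synergy-driven grouping. Be warned that the pairwise scores and the greedy strategy are entirely my own illustration for this post; the real SIP methodology, described in my white paper [1], is considerably more rigorous.

synergy = {  # illustrative pairwise scores between business functions, 0..1
    ("deposit", "withdraw"): 0.9,
    ("deposit", "open-account"): 0.7,
    ("withdraw", "open-account"): 0.6,
    ("deposit", "score-loan"): 0.1,
    ("withdraw", "score-loan"): 0.1,
    ("open-account", "score-loan"): 0.2,
}

def score(a: str, b: str) -> float:
    return synergy.get((a, b)) or synergy.get((b, a)) or 0.0

def partition(functions: list[str], threshold: float = 0.5) -> list[set[str]]:
    """Greedily place each function into a group it is synergistic with."""
    groups: list[set[str]] = []
    for f in functions:
        for g in groups:
            if all(score(f, member) >= threshold for member in g):
                g.add(f)
                break
        else:
            groups.append({f})
    return groups

print(partition(["deposit", "withdraw", "open-account", "score-loan"]))
# e.g. [{'deposit', 'withdraw', 'open-account'}, {'score-loan'}]

The banking functions end up grouped together and the unrelated loan-scoring function ends up on its own, which is exactly the kind of boundary a snowman head should have.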

Terminology

I use the term capability to refer to the closely related packages of business, technical, service, and data architecture. This is somewhat similar to the way the term capability is used in various enterprise architecture methodologies, although most don't include anything other than the business architecture in the notion of capability. So if I am being precise, I will refer to one related grouping of the four package types as a capability. When I am being informal, I will refer to that same grouping as a Snowman. So I might say the Checking-Account capability or the Checking-Account Snowman. Either of these would mean the business processes that deal with checking accounts, the technical systems that support those processes, the data that feeds those technical systems, and the services that provide interoperability with the outside world.

When I want to be clear that I am talking about my understanding of a capability rather than somebody else's, I will use the term autonomous business capability (ABC). The word autonomous reflects the synergistic assignment of business functions, and the word business refers to the central role of the business layer in defining the overall capability structure.

When I am discussing the business architecture of the ABC, I will refer to the business level of the ABC. Similarly, I will use the terms technical, services, and data level to refer to those respective architectures.

So the business level of the ABC contains some collection of business functions that are synergistic with respect to each other. The technical level of the ABC provides the technical support needed by those functions. The data level of the ABC provides the data that fuels the technical level. And the services level of the ABC implements dependencies between ABCs.

Relating this back to the Snowman Architecture, the business level of the ABC is the head, the technical level of the ABC is the torso, the data level of the ABC is the bottom, and the service level of the ABC is the arms. 
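
For the programmers in the audience, one way to picture a capability is as a simple record with the four levels. This is just an illustrative Python data structure I made up for this post, not part of any formal specification.

from dataclasses import dataclass, field

@dataclass
class Capability:
    """An autonomous business capability (ABC): one snowman."""
    name: str
    business_functions: set[str] = field(default_factory=set)  # the head
    technical_systems: set[str] = field(default_factory=set)   # the torso
    data_stores: set[str] = field(default_factory=set)         # the bottom
    services: set[str] = field(default_factory=set)            # the arms

checking = Capability(
    name="Checking-Account",
    business_functions={"deposit", "withdraw"},
    technical_systems={"core-banking-app"},
    data_stores={"accounts-db"},
    services={"funds-transfer"},
)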

Scaling Up

Since the Snowman architecture is a subset of an SOA, creating larger and larger systems is easy. We just add more Snowmen (or ABCs, if you prefer) into the mix and make sure they are connected through messages as shown in Figure 4.


Figure 4. Scaling Up the Snowman Architecture
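
To see what "connected through messages" means in practice, here is a minimal sketch using Python's asyncio queues. The snowman names, queue, and payload are invented for illustration; the point is only that the two ABCs share nothing but an asynchronous message.

import asyncio

async def checking_snowman(outbox: asyncio.Queue) -> None:
    # The Checking-Account ABC emits an event. It never calls the
    # other ABC directly and never touches the other ABC's data.
    await outbox.put({"event": "withdrawal", "account": "42", "amount": 100})

async def fraud_snowman(inbox: asyncio.Queue) -> None:
    # The Fraud-Detection ABC reacts to the message whenever it arrives.
    msg = await inbox.get()
    print(f"fraud check on account {msg['account']} for {msg['amount']}")

async def main() -> None:
    queue: asyncio.Queue = asyncio.Queue()
    await asyncio.gather(checking_snowman(queue), fraud_snowman(queue))

asyncio.run(main())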

Benefits

Let's go back to my original claim: that the Snowman architecture solves many of the problems that plague the enterprise architect. Now I should inject a caution here. I consider the problem space of the enterprise architect to be the delivery of large (say, greater than $1M) systems [2]. If all we are building are small systems, then many of these claims don't apply. For that matter, there should be no need for an enterprise architect.

Given this caveat, I make the following claims about the Snowman architecture in comparison to a traditional SOA or any traditional architectural approach:
  1. The Snowman architecture is cheaper to build.
  2. The Snowman architecture is more likely to be delivered on time.
  3. The Snowman architecture is more likely to satisfy the business when delivered.
  4. The Snowman architecture is easier to adapt to the changing needs of the business.
  5. The Snowman architecture is more amenable to Agile Development.
  6. The Snowman architecture is easier to debug.
  7. The Snowman architecture is more secure.
  8. The Snowman architecture is more resilient to failure.
  9. The Snowman architecture is easier to recover when system failure occurs.
  10. The Snowman architecture makes more efficient use of the Cloud.

There are a number of other benefits I could claim, but this should be sufficient to make the point. And I think it is fairly obvious that if all of my claims are true, it will be a compelling argument in favor of the Snowman Architecture.

In Part Two of this blog, I will validate each of these claims. Then in Part Three, I will discuss all of the arguments against the Snowman Architecture and show why they are wrong.

If you would like to be notified when the next installments are ready, you have two choices. If you just want to know about new blog posts, you can use the email signup on the right. If you would also like to know about my white papers, webshorts, and seminars, then use the ObjectWatch sign-up system at http://www.objectwatch.com/subscriptions.html.

Either way, stay tuned!

-------------------------------
Workshop Announcement: 
Radical IT Transformation with Roger Sessions and Sarah Runge
For my New Zealand and Australia followers, I will soon be doing a workshop with Sarah Runge, author of Stop Blaming the Software. We will be spending two days discussing our work in Radical IT Transformation, a better way to do IT.
Auckland: October 11-12 2012
Sydney: October 15-16 2012
Cairns: October 18-19 2012

Check out our Agenda or Register!
------------------------------

Notes

[1] See for example my paper, The Mathematics of IT Simplification at http://www.objectwatch.com/white_papers.htm#Math.

[2] In passing, I also note that I consider the problem space of the Enterprise Architect the delivery of the maximum possible return on IT investment. Many enterprise architects disagree with this job description. See for example the extensive discussion in LinkedIn on the subject of What is EA?

Acknowledgements

The two Snowmen pictures are by (in order of appearance) jcarwash31 and chris.corwin on Flickr, both are licensed under Creative Commons.

A Note on Comments

I welcome your questions/comments on this blog and I will try to respond quickly. A word of caution: I am not interested in comments along the lines of "This is not EA, this is EA-IT" or "EA is not concerned with delivering more value from IT." If you would like to have that conversation, I suggest you contribute to one of the discussions on LinkedIn, such as What is EA? Comments here are reserved for the topic at hand, discussing the Snowman Architecture, its claims, and the arguments against it. Thank you!


Tuesday, August 14, 2012

Radical IT Transformation


The industry has reached a consensus: IT is in trouble and is in need of a transformation. This much seems clear. But exactly what that transformation should look like is much less clear.

HP and Cisco tell us that IT transformation is about the cloud (1,2). Microsoft narrows this to the private cloud (3). IBM restricts this even further, saying IT transformation is about “consolidation, standardization and—most important—virtualization” (4).

There’s more. According to CIO Magazine, IT transformation is about the ability to show cost of services (5). CapGemini says IT transformation is about “identifying the key business drivers that impact the IT function, and their implications on IT operations [sic]” (6). And Accenture has perhaps the most interesting proposal of all: IT Transformation is about getting rid of all vendors except Microsoft (7). If only it were that easy!

Each of the opinions has a grain of truth. Certainly a transformed IT operation will make effective use of the cloud and will find ways to consolidate its servers through virtualization. A transformed IT would understand how to show its cost of services and be able to identify its key business drivers. And most would concede that Microsoft technologies can contribute cost effectively in many areas.

But each of these opinions lacks a larger perspective. IT transformation is not about adopting the latest technology or business fad. True IT transformation is about rebuilding, from the ground up, the relationship between the business and IT.

Sarah Runge and I have been discussing what such an IT transformation might look like. Sarah is the author of Stop Blaming the Software and approaches the business/IT problems from the perspective of the business. I am the author of Simple Architectures for Complex Enterprises and approach these same problems from the IT side. We have found valuable synergies in our perspectives. We too are calling for a transformation, but a transformation that goes to the very heart of the business/IT relationship. We call this radical IT transformation.

Why do we need a radical IT transformation? In a nutshell, we don’t think most organizations are coming close to realizing the potential benefits of their IT investments. We see many IT organizations stretched to the breaking point just trying to maintain existing systems. We see critical new projects being shelved. The new projects that are done are often delivered late, over budget, and missing key functionality. We see many organizations in which the business doesn’t trust IT and IT feels marginalized by the business. And we believe that few large organizations are well positioned to leverage interesting new technologies such as the cloud. Does any of this sound familiar?

We believe all of these problems can be solved, but we don't think they will be solved with piecemeal solutions. We need a radical transformation not only in how IT does its job, but in how business and IT work together.

Radical IT transformation is a foundational transformation of the entire business/IT relationship. At its core, the transformation takes us from a technology-centric business/IT relationship to a business-centric business/IT relationship. This transformation includes a number of strategic shifts, each playing a role in the bigger transformation. I’ll briefly describe each of these shifts, saving details for later presentations.

Shift 1: From IT driven to business driven solutions. Today, IT does its best to understand the business and then use that understanding to drive IT projects. In a transformed organization, business takes the lead in driving all IT projects.

Shift 2: From big to small. Today, IT often tries to deliver large far-ranging solutions. In a transformed organization, IT delivers small solutions targeted at very specific, well-defined problems.

Shift 3: From complex to simple. Today, IT projects quickly grow in complexity driving up cost and increasing risk. In a transformed organization, IT intentionally delivers the simplest possible solution that meets the business need.

Shift 4: From long-term to short-term value. Today, IT organizations focus on delivering long-term value from their projects. Unfortunately conditions and technologies evolve quickly, making long-term projections nearly useless. In a transformed organization, time-to-value is a more important metric than projected long term ROI.

Shift 5: From process focus to delivery focus. Today, many IT organizations are mired in lugubrious processes that drag on indefinitely and deliver little value. In a transformed organization, processes are slashed to the absolute minimum and delivery is rewarded.

Shift 6: From internally owned to public cloud environments. Today, most IT systems are running on costly privately owned machines that require huge operational investments. In a transformed organization, many more IT systems will be running on highly efficient leased cloud systems that require minimal operational investments.

Shift 7: From IT centric to business centric architectures. Today, most IT organizations create IT architectures that are independent of the business processes they support. This creates a major IT drag on business agility. In a transformed organization, the IT architecture intentionally mimics the business architecture, resulting in highly agile IT systems that can turn on a dime as the business evolves.

Shift 8: From design to implementation. Today, most IT organizations spend considerable time “doing design.” In a transformed organization, there is much less design done in IT, since the overall design is driven by the business architecture (see shift 7.) While IT doesn’t completely leave the design business, IT is seen as primarily responsible for implementing the design that is defined by the business rather than creating the design that will be used by the business.

Shift 9: From long to short time frames. Today, most IT organizations measure their milestones in months and their delivery dates in years. In a transformed organization, entire delivery cycles are reduced to months or less. With processes slashed, design de-emphasized, and focus shifted to small and simple solutions, time to deliver is cut to the minimum.

Do these shifts resonate with you? Perhaps you are a candidate for a radical transformation. If so, stay in touch. We'll be discussing this more in the coming weeks.

Would you like to subscribe to notifications about my blogs, white papers, and webshorts? Sign up here:  http://www.objectwatch.com/subscriptions.html.




Citations:


(1) http://h30507.www3.hp.com/t5/Transforming-IT-Blog/bg-p/transforming-it
(2) http://www.cisco.com/assets/sol/cloud/cloudverse_videos/index.html
(3) http://www.microsoft.com/business/events/en-us/PrivateCloudExec/#fbid=J5GaDAi8tB6
(4) http://ibm.co/MCnX7s
(5) http://www.cio.com/article/663015/Transforming_IT_to_Show_Cost_of_Services_5_Best_Practices
(6) http://www.capgemini.com/services-and-solutions/challenges/transforming-it-function/overview/
(7) http://www.accenture.com/us-en/pages/success-accenture-microsoft-transforming-it-summary.aspx

Acknowledgements

Photo of Potter’s Hands is by Walt Stoneburner (http://www.flickr.com/photos/waltstoneburner/) licensed under Creative Commons

Thursday, July 5, 2012

The Misuse of Reuse

The software industry has been pursuing reuse for at least four decades. The approach has changed over that time. It started with structured programming promising reusable snippets of code. We then moved to object-oriented programming promising reuse through inheritance. Today we are focusing on service-oriented architectures promising reusable services that can be written once and then used by multiple applications.

For forty years we have been pursuing reuse and for forty years we have been failing. Perhaps it is time to reexamine the goal itself.

Let's start by reviewing the arguments in favor of reuse. They are quite simple. And, as we will soon see, they are quite flawed.

The argument goes as follows. Let's say we have three systems that all implement the same function, say Function 1. This situation is shown in Figure 1.

Figure 1. Three Systems Implementing Function 1


It seems fairly obvious that implementing Function 1 three times is an ineffective use of resources. A much better way of implementing these three systems is to share a single implementation of Function 1, as shown in Figure 2.

Figure 2. Three Systems Sharing A Single Function 1

In general, if there are S systems implementing Function 1 and it costs D dollars to implement Function 1, then the cost savings from reuse is given by

D * (S - 1)

If D is $10,000 and S is 5, then reuse should save us $40,000. Right? Not so fast.

In order to evaluate the claim of cost savings through reuse, we need to apply some principles of IT Complexity Analytics. IT Complexity Analytics tells us that the complexity of Function 1 is exponentially related to the number of systems using the function. This is because each system is not using the exact same function; it is using some variant of the same function. Function 1 needs to be generalized for every possible system that might someday use it, not only those we know about but also those we don't know about. This adds considerable complexity to Function 1.

If the size of the circle reflects the complexity of the functionality, then a much more realistic depiction of the reuse scenario is shown in Figure 3. 

Figure 3. Realistic Depiction of Sharing Functionality

Since system cost is directly related to system complexity (one of the axioms of IT Complexity Analytics), we can say that in most cases the theoretical cost savings from reusing functionality is overwhelmed by the actual cost of the newly introduced complexity.
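
To put rough numbers on that claim, here is a Python sketch comparing the textbook savings of D * (S - 1) against the cost of the generalized shared function. It uses the C = F^3.11 relationship from my white paper and assumes, purely for illustration, that each additional consumer adds one function's worth of variant behavior to the shared implementation.

def naive_savings(d: float, s: int) -> float:
    """Textbook reuse savings: build once instead of S times."""
    return d * (s - 1)

def generalized_cost(d: float, s: int) -> float:
    """Cost of one shared function generalized for s consumers.

    Assumes each extra consumer adds one unit of variant behavior
    and that cost scales with complexity, C = F^3.11.
    """
    return d * s ** 3.11

d, s = 10_000, 5
print(f"separate copies:     ${d * s:>12,.0f}")
print(f"naive savings:       ${naive_savings(d, s):>12,.0f}")
print(f"one shared function: ${generalized_cost(d, s):>12,.0f}")
# The shared function comes to roughly $1.5M, dwarfing the $40,000 savings.

Under these admittedly crude assumptions, the generalized function costs far more than the five separate copies combined, which is Figure 3 in numeric form.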

However, the situation is even worse than this. Not only is the cost savings from reuse rarely achieved, but a number of additional problems are introduced. 

For example, we now have a single point of failure. If the system implementing Function 1 fails, all three of our systems fail. 

We have also compromised our security. As IT Complexity Analytics predicts, the overall security of a system is inversely related to its complexity. The more complex a system is, the lower its inherent ability to maintain security. 

And we have created a highly inefficient system for running on a Cloud. The extra cloud segments we will need to pull in to support our reuse will dramatically increase our cloud costs.

Given all of the problems we have created, we most likely would have been better off not attempting to create a reusable function in the first place. 

Now I should point out that I am not totally opposed to reuse. There are situations in which reuse can pay dividends. 

In general, a reuse strategy is indicated when the inherent complexity of the functionality being shared is high and the usage of that functionality is relatively standard. In these situations, the complexity of the functionality dominates over the complexity of the sharing of the functionality. But this situation is unusual. 

When should you pursue reuse? It all comes down to complexity. Will your overall system be more complex with or without shared functionality? This requires a careful measure of system complexity with and without the proposed sharing. If you can lower system complexity by sharing, do it. If you can't, don't. 

Complexity trumps reuse. Reuse is not our goal, it is a possible path to our goal. And more often than not, it isn't even a path, it is a distraction. Our real goal is not more reusable IT systems, it is simpler IT systems. Simpler systems are cheaper to build, easier to maintain, more secure, and more reliable. That is something you can bank on. Unlike reuse. 

...............................
Roger Sessions writes about the topic of Organizational Complexity and IT. If you would like to get email notifications about new posts, use the widget on the right. 

Tuesday, March 13, 2012

The Equation every Enterprise Architect Should Memorize


In my white paper, The Mathematics of IT Simplification, I gave the following equation for calculating the complexity of an IT system:

C = F^3.11

where C is the complexity in Standard Complexity Units (SCUs) and F is the number of business functions implemented within the system. As I point out in the paper, F can be either business functions or dependencies on other systems. An SCU, by the way, is the amount of complexity in a single business function unconnected and unrelated to any other business function.

This equation is a simplification of a more complex equation that I called Bird's equation, after a friend, Chris Bird, who first derived that more complex equation. This simplification was suggested by a woman reader whose name I have regrettably misplaced. If this person was you, please let me know so I can give you credit.

Although I was correct in my discussion about this equation in the white paper, I was a little vague on the proof. A friend, Bill Indest, challenged me on the proof. So with the help of another friend, Nikos Salingaros, I have decided to fully derive the equation, and go one better. Not only will I derive this equation, I will derive the more general, more useful form.

This equation starts with what I call Glass's Law which, as I explain in the white paper, is named for Robert Glass who popularized the law that increasing the functionality of a system by 25% doubles the complexity of the system.

So we know that complexity and functionality increase together, and that complexity increases faster than functionality. The simplest relationship with this behavior is a power law: in other words, there exists some power X such that C = F^X. Okay. Now let's solve for X.

We know from Glass's Law that when functionality increases by 25% (that is, F becomes 1.25F), complexity doubles. In other words,

2C = (1.25F)^X

So now we have two equations, both of which must be true.

Eq. 1:  C = F^X
Eq. 2:  2C = (1.25F)^X

Now we need to move into logarithms. Recall that log_W Y = Z is interpreted as Y = W^Z. And also recall that if W (the base) is not specified, then it is assumed to be 10. So, for example,

log_10 100 = 2, or we could just say

log 100 = 2, since 10 is assumed.

One of the laws of logarithms states that if

A = B, then
log A = log B

So let’s go back to our two equations:

Eq 1: C = F^X
Eq 2: 2C = (1.25F)^X

We can use this property of logarithms to transform equations 1 and 2 into their logarithmic equivalents:

Eq 3: log C = log (F^X)
Eq 4: log 2C = log ((1.25F)^X)

Now we use another two log identities:

log (A^B) = B log A
log (AB) = log A + log B

Using these identities, we can transform Equations 3 and 4 into

Eq 5:  log C = X log F
Eq 6: log 2 + log C = X (log 1.25 + log F)

We now subtract Eq 5 from Eq 6 giving

Eq 7: log 2 = X(log 1.25 + log F) – X log F

which is the same as

Eq 8: log 2 = X log 1.25 + X log F – X log F 

which reduces to

Eq 9: log 2 = X log 1.25

We can now divide both sides of Eq 9 by log 1.25, giving

Eq 10: log 2/log 1.25 = X

Since log 2 is a constant (0.301) and log 1.25 is a constant (0.097), the left half of Eq 10 simplifies to 0.301/0.097, which is 3.11.

Thus X in Eq 1 is equal to 3.11, and complexity is related to functionality by

Eq 11: C = F^3.11

This, of course, was my original assertion. Now, this equation assumes that you buy into the claim that complexity doubles when functionality increases by 25%. But if you don't agree with this, you can replace X with a value that represents your own belief (or, even better, your own observation) using the general form

Eq. 12: C = F^(log M_C / log M_F)

where M_C is the multiplier you observe in complexity and M_F is the corresponding multiplier in functionality.

Let's say you observe that, in your organization, complexity doubles (M_C = 2) when functionality goes up by 50% (M_F = 1.5). Then your formula for complexity is

Eq. 13: C = F^(log 2 / log 1.5)

Which would be

Eq. 14: C = F^(0.301/0.176) = F^1.71

Equation 12 is so important to understanding complexity in an enterprise that I will refer to it as the Fundamental Equation for Enterprise Complexity. Everything you need to know about enterprise complexity can be derived from this equation. The formula should be memorized by every enterprise architect.
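
If you ever want to check the arithmetic, or derive your own exponent from observed data, the computation is one line of Python. The two calls below reproduce Eq 11 and Eq 14.

import math

def complexity_exponent(complexity_mult: float, functionality_mult: float) -> float:
    """X = log(M_C) / log(M_F), per Eq 12 above."""
    return math.log(complexity_mult) / math.log(functionality_mult)

print(f"Glass's Law (doubles at +25%): X = {complexity_exponent(2, 1.25):.2f}")  # 3.11
print(f"Doubles at +50%:               X = {complexity_exponent(2, 1.5):.2f}")   # 1.71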

See? Aren’t you glad you stayed awake?

Friday, January 6, 2012

Web Short: The Relationship Between IT Project Size and Failure

Large IT projects have bigger budgets than small projects. They also have more formal approaches to tracking milestones and budgets. Does this make them more likely to succeed? It turns out that there is a relationship between the cost of a project and the chances that that project will succeed, but it isn't what you might think.

Watch the presentation and then leave a comment.

You can see a full list of Roger's Web Shorts here.
Would you like to be notified of future Web Shorts and White Papers by Roger Sessions? Sign up here.

And thanks to AuthorStream for hosting this presentation.


This presentation includes narration, so be sure to have your speakers on.

Web Short: The Mathematics of Cloud Optimization

How do you minimize the cost of running your large mission critical application on a public cloud? Do you focus on finding a low cost cloud provider? It turns out that the answer to cost reduction may be closer than you think. And it starts by understanding the mathematics around Cloud optimization.

Watch the presentation and then leave a comment.


You can see a full list of Roger's Web Shorts here.
Would you like to be notified of future Web Shorts and White Papers by Roger Sessions? Sign up here.
And thanks to AuthorStream for hosting this presentation.


The video includes narration, so be sure your speakers are on!