Wednesday, August 21, 2013

Two Roads Converge?

by Roger Sessions and Richard Hubert


Introduction

We (Richard Hubert and Roger Sessions) have a lot in common. We are both Fellows of the International Association of Software Architects (IASA). We have both written books and articles. And we are both well-known proponents of a particular approach to Enterprise and IT Architectures. For Richard, this approach is called Convergent Architecture. For Roger, it is called The Snowman Practice. But it may turn out that we have more in common than we thought: our approaches may complement each other in some interesting ways. First, though, let’s take a look at what each of us has been doing.

Introduction to Richard’s work

Since the mid-1990s I (Richard) have been developing and optimizing an architectural style that addresses the complexity of both IT systems and business processes. I call this holistic perspective Convergent Architecture (CA). I wrote about it in 2001 in my book Convergent Architecture (John Wiley, New York; ISBN 0471105600). CA includes properties that I consider to be inherent in any architectural style. The metamodel that I use includes the project design, the system design, and the business design. At a high level, this model is shown in the following diagram:


Figure 1. Coverage of a holistic architectural style

As you can see in the above diagram, the partitioning between Organization, Process, and Resource plays a significant role in the quality of the design. Experience and rules of thumb are adequate for many designs, but as systems get larger, a more formal approach is preferable, especially if it can be assisted by tools. This is where Roger’s work is a perfect fit.

Introduction to Roger’s work

I (Roger) have been looking at how to validate an architecture. To do this, I have developed a mathematical model for what an ideal architecture looks like and a methodology for delivering an architecture that is as close to that ideal as possible. The starting point for this is to define what we mean by “ideal.” My definition of an ideal architecture is the least complex architecture that solves the business problem. This means that we also need a metric for measuring complexity, which, fortunately, comes out of the mathematical model. You can read about this mathematical model in this white paper.
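To make the idea of a complexity metric concrete, here is a minimal sketch in Python. It assumes only Glass’s law, the empirical observation (due to Robert Glass) that a 25% increase in functionality brings a 100% increase in complexity; the function names and the sample numbers are illustrative, not the formal model from the white paper.

    import math

    # Glass's law: every 25% increase in functionality doubles complexity,
    # which implies complexity ~ n ** (log 2 / log 1.25), roughly n ** 3.1.
    GLASS_EXPONENT = math.log(2) / math.log(1.25)

    def functional_complexity(functions):
        """Relative complexity of a subsystem with this many functions."""
        return functions ** GLASS_EXPONENT

    def coordination_complexity(connections):
        """Relative complexity of this many connections to other subsystems,
        assumed here to follow the same exponential law."""
        return connections ** GLASS_EXPONENT

    def total_complexity(subsystems):
        """Sum complexity over (functions, connections) pairs."""
        return sum(functional_complexity(f) + coordination_complexity(c)
                   for f, c in subsystems)

    # One monolith with 12 functions versus three partitions of 4 functions,
    # each connected to the other two partitions.
    print(round(total_complexity([(12, 0)])))     # ~2250
    print(round(total_complexity([(4, 2)] * 3)))  # ~248

Even in this toy model, partitioning the same twelve functions into three well-chosen subsets cuts total complexity by nearly an order of magnitude, and that gap is what the methodology is designed to find and exploit.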

It turns out that when you discover the ideal architecture for a given problem, it almost always has a characteristic shape: a collection of business functionality sitting on top of a collection of services sitting on top of a collection of data. These three tiers are separated from neighboring collections by strong vertical partitions. Within a partition, there is a strong connection between the business functions in the top tier, the services in the middle tier, and the data in the bottom tier. Where connections are required between partitions, they occur through asynchronous messages at the service level. This architecture is shown in the following diagram:


Figure 2. The Snowman Architecture Created By SIP

As you can see in the above diagram, the idealized architecture looks a lot like a snowman. The head, torso, and bottom of the snowman contain business functions, services, and data, respectively.
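As a concrete illustration of the partitioning rule, here is a hedged Python sketch of two snowmen whose services talk only through asynchronous messages. The class, queue, and method names are invented for this example; the point is the shape: business functions call only their own services, each partition owns its own data, and cross-partition requests go through a message queue rather than direct synchronous calls.

    import asyncio

    class Snowman:
        """One vertical partition: business functions -> services -> data."""

        def __init__(self, name, inbox):
            self.name = name
            self.inbox = inbox   # queue of asynchronous messages from peers
            self.data = {}       # this partition's private data store

        # Bottom tier: data, touched only by this snowman's own services.
        def _store(self, key, value):
            self.data[key] = value

        # Middle tier: services, the only legal cross-partition boundary.
        async def handle_messages(self):
            while True:
                message = await self.inbox.get()   # no direct call-ins
                self._store(message["key"], message["value"])

        async def send(self, peer_inbox, key, value):
            await peer_inbox.put({"from": self.name, "key": key, "value": value})

        # Top tier: business functions call only this partition's services.
        async def record_sale(self, billing_inbox, order_id):
            self._store(order_id, "sold")                        # local data
            await self.send(billing_inbox, order_id, "invoice")  # async to peer

    async def main():
        sales = Snowman("sales", asyncio.Queue())
        billing = Snowman("billing", asyncio.Queue())

        listener = asyncio.create_task(billing.handle_messages())
        await sales.record_sale(billing.inbox, "order-42")
        await asyncio.sleep(0.1)      # give billing time to process
        print(billing.data)           # {'order-42': 'invoice'}
        listener.cancel()

    asyncio.run(main())

Notice that neither snowman ever reaches into the other’s data or calls the other’s business functions; the message queue is the only coupling between them.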

The methodology I (Roger) have developed to drive this architecture is called SIP, for Snowman Identification Process. Some of you may know it under its old name, Simple Iterative Partitions. You can get a good overview of The Snowman Practice from this video.

Synergy discovery

When we compared the architecture driven by CA’s architectural metamodel (Figure 1) to the architecture driven by the highly theoretical SIP (Figure 2), it was clear to us that the two share significant commonalities.

Both approaches are based on the fundamental Enterprise Architecture principle of IT-business alignment. Both define best practices for how this alignment can be effectively achieved and measured. Additionally, both are based on rules and patterns that apply to the simplification of any system, whether large or small, organizational or purely technical. Convergent Architecture, for instance, has been used to design IT organizations which then use the same approach to design and simplify IT systems (conceptual isomorphism).

Lastly, and most important of all, we recognized that SIP can be applied to mathematically support and objectively drive both architectural styles. SIP thus enhances the design approach and process, serving both as a tool and as the mathematical foundation needed to ascertain the simplest (least complex) of all possible system and organizational structures.

In essence, we now have CA showing that the SIP theory really does deliver a partition that stands up to the most demanding examination. And at the same time, we have the SIP mathematics defining vertical boundaries for a CA architecture that are not only mathematically sound, but as simple as possible.

The Future

Where will this take us? To be honest, we are still discussing this. But the possibilities are intriguing. Imagine two mature methodologies with synergy so strong that the theoretical and the model-driven approaches arrive at complementary solutions. Stay tuned for more information.

Wednesday, August 14, 2013

Addressing Data Center Complexity


If you have been following my work, you know how I feel about complexity. Complexity is the enemy. I have written a lot about how complexity causes software cost overruns, late deliveries, and poor business alignment.

In this blog, I decided to look at complexity from another perspective: the data center. This is the perspective of those who must support all of those complex systems the software group has managed to deliver.

The problem with complexity is that it magnifies as you move down the food chain. This is bad news for those at the bottom of the food chain, the data center.

Straightforward business processes become complex software systems. Complex software systems require very complex data stores. Very complex data stores run on extremely complex data centers. These extremely complex data centers are expensive to manage, run, and secure.

The numerous problems that complexity creates for data centers were highlighted in a recent Symantec survey, State of the Data Center: Global Results [1]. The results of this survey should cause any CIO to break out in a cold sweat.

According to this survey, complexity is a huge problem for data centers. For example, the typical company surveyed had 16 data center outages per year at an average cost of $319K per outage, or over $5M per year. And this does not include indirect costs, such as loss of customer confidence. The number and magnitude of these outages were directly attributed to data center complexity by those surveyed. Complex data centers fail often, and they fail hard.

But outages aren’t even the biggest complexity-related headache for data centers. The most cited complexity-related problem is the high cost of keeping the data center running on those increasingly rare days when there is no outage. Other problems attributed to data center complexity were security breaches, compliance incidents, missed service level agreements, lost data, and litigation exposure. Clearly, complexity is a big problem for data centers.

How are data centers addressing this escalating complexity? According to this survey, the approach 90% of companies are taking is information governance. What is information governance? According to Debra Logan, a Gartner Research VP,

Information governance is the specification of decision rights and an accountability framework to encourage desirable behavior in the valuation, creation, storage, use, archival and deletion of information. It includes the processes, roles, standards and metrics that ensure the effective and efficient use of information in enabling an organization to achieve its goals [2]. 


Two points should be clear from the above definition. First, information governance is a vague concept. Second, whatever information governance is, it has nothing to do with the problem that is vexing data centers, namely complexity. This is unfortunate, given that so many of the surveyed companies say they are pinning their hopes on information governance to solve their complexity-related problems. These companies are headed for a major disappointment.

If information governance won’t solve complexity-related data center problems, what will? The problem, as I stated earlier, is the magnification of complexity as it rolls down the food chain from business process to data center. This problem can only be solved with complexity governance. Complexity is the problem, not information.

How do I define complexity governance?

Complexity governance is a set of policies, guidelines, and procedures that ensure every business process is implemented with the simplest possible IT solution supported with the simplest possible data organization running on the simplest possible hardware configuration. 


This sounds good but what would it take to implement this?

Gartner’s Managing VP and Chief of Research, David Cappuccio, is on the right track when he says it is particularly important for more data center staff to understand the “cascade effect” of making changes in a complex environment [3]. Unfortunately, few IT staff are trained in IT complexity, a prerequisite to understanding the cascade effect to which Cappuccio alludes. And it stands to reason that if one does not understand how complexity cascades, one is woefully unprepared to do anything about it.
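To see what the cascade effect looks like in practice, here is a small illustrative Python sketch (the components and the “depends on” edges are hypothetical). Assessing one change means walking the dependency graph to find everything that transitively depends on the changed component, and in a densely connected data center that set grows quickly:

    from collections import deque

    # Hypothetical "depends on" edges: each key depends on the items listed.
    dependencies = {
        "billing_app":   ["orders_db", "auth_service"],
        "orders_db":     ["storage_array"],
        "auth_service":  ["ldap", "storage_array"],
        "reporting":     ["orders_db", "billing_app"],
        "storage_array": [],
        "ldap":          [],
    }

    def impacted_by(component):
        """Everything that transitively depends on `component`: the cascade."""
        # Invert the edges to answer "who depends on whom?"
        dependents = {name: set() for name in dependencies}
        for name, deps in dependencies.items():
            for dep in deps:
                dependents[dep].add(name)

        impacted, queue = set(), deque([component])
        while queue:                      # breadth-first walk up the graph
            current = queue.popleft()
            for parent in dependents[current] - impacted:
                impacted.add(parent)
                queue.append(parent)
        return impacted

    # A change to one storage array cascades to most of the estate.
    print(impacted_by("storage_array"))
    # {'orders_db', 'auth_service', 'billing_app', 'reporting'}

The denser the graph, the larger the cascade; complexity governance is, in effect, a commitment to keeping that graph as sparse as the business problem allows.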

Here is my recommended plan for putting in place effective complexity governance.

  1. Train everybody to understand the importance of controlling complexity. Every person on the IT food chain should be able to recite these words in their sleep: Complexity is the Enemy.
  2. Train a select group that includes representatives from the business, IT, and data center in IT Complexity Analytics, the science of complexity as it relates to IT systems.
  3. Give this group a mandate to put in place strong complexity governance.
  4. Give this group the authority to enforce complexity governance.
  5. Hold this group responsible for delivering simpler IT systems that run on simpler data centers.
  6. Document the benefits complexity governance delivers.


I don’t claim that complexity governance is simple. The reality is that complexity governance requires a significant and sustained effort. But it is an effort that delivers substantial business value. If you don’t believe me, ask somebody who is in the middle of their tenth $300K data center outage this year. They will tell you: complexity is the enemy.


References

[1] https://hp.symantec.com/system/files/b-state-of-data-center-survey-global-results-09_2012.en-us.pdf

[2] http://blogs.gartner.com/debra_logan/2010/01/11/what-is-information-governance-and-why-is-it-so-hard/

[3] http://www.datacenterknowledge.com/archives/2012/12/04/gartner-it-complexity-staffing/

Acknowledgements

The photo is by Route79 on Flickr, licensed through Creative Commons. (http://www.flickr.com/photos/route79/)

Notices

This blog is copyrighted by Roger Sessions. It may be copied and reproduced as long as no changes are made and his authorship is acknowledged. All other rights are reserved.