Tuesday, August 14, 2012

Radical IT Transformation


The industry has reached a consensus: IT is in trouble and is in need of a transformation. This much seems clear. But exactly what that transformation should look like is much less clear.

HP and Cisco tell us that IT transformation is about the cloud (1,2). Microsoft narrows this to the private cloud (3). IBM restricts this even further, saying IT transformation is about “consolidation, standardization and—most important—virtualization” (4).

There’s more. According to CIO Magazine, IT transformation is about the ability to show cost of services (5). CapGemini says IT transformation is about “identifying the key business drivers that impact the IT function, and their implications on IT operations [sic]” (6). And Accenture has perhaps the most interesting proposal of all: IT Transformation is about getting rid of all vendors except Microsoft (7). If only it were that easy!

Each of the opinions has a grain of truth. Certainly a transformed IT operation will make effective use of the cloud and will find ways to consolidate its servers through virtualization. A transformed IT would understand how to show its cost of services and be able to identify its key business drivers. And most would concede that Microsoft technologies can contribute cost effectively in many areas.

But each of these opinions lacks a larger perspective. IT transformation is not about adopting the latest technology or business fad. True IT transformation is about rebuilding, from the ground up, the relationship between the business and IT.

Sarah Runge and I have been discussing what such an IT transformation might look like. Sarah is the author of Stop Blaming the Software and approaches the business/IT problems from the perspective of the business. I am author of Simple Architectures for Complex Enterprises and approach these same problems from the IT side. We have found valuable synergies in our perspectives. We too are calling for a transformation, but a transformation that goes to the very heart of the business/IT relationship. We call this radical IT transformation.

Why do we need a radical IT transformation? In a nutshell, we don’t think most organizations are coming close to realizing the potential benefits of their IT investments. We see many IT organizations stretched to the breaking point just trying to maintain existing systems. We see critical new projects being shelved. The new projects that are done are often delivered late, over budget, and missing key functionality. We see many organizations in which the business doesn’t trust IT and IT feels marginalized by the business. And we believe that few large organizations are well positioned to leverage interesting new technologies such as the cloud. Does any of this sound familiar?

We believe all of these problems can be solved, but we don’t think they will be solved with piecemeal solutions. We need a radical transformation not only in how IT does its job, but in how business and IT work together.

Radical IT transformation is a foundational transformation of the entire business/IT relationship. At its core, the transformation takes us from a technology-centric business/IT relationship to a business-centric business/IT relationship. This transformation includes a number of strategic shifts, each playing a role in the bigger transformation. I’ll briefly describe each of these shifts, saving details for later presentations.

Shift 1: From IT driven to business driven solutions. Today, IT does its best to understand the business and then use that understanding to drive IT projects. In a transformed organization, business takes the lead in driving all IT projects.

Shift 2: From big to small. Today, IT often tries to deliver large far-ranging solutions. In a transformed organization, IT delivers small solutions targeted at very specific, well-defined problems.

Shift 3: From complex to simple. Today, IT projects quickly grow in complexity driving up cost and increasing risk. In a transformed organization, IT intentionally delivers the simplest possible solution that meets the business need.

Shift 4: From long-term to short-term value. Today, IT organizations focus on delivering long-term value from their projects. Unfortunately conditions and technologies evolve quickly, making long-term projections nearly useless. In a transformed organization, time-to-value is a more important metric than projected long term ROI.

Shift 5: From process focus to delivery focus. Today, many IT organizations are mired in ponderous processes that drag on indefinitely and deliver little value. In a transformed organization, processes are slashed to the absolute minimum and delivery is rewarded.

Shift 6: From internally owned to public cloud environments. Today, most IT systems are running on costly privately owned machines that require huge operational investments. In a transformed organization, many more IT systems will be running on highly efficient leased cloud systems that require minimal operational investments.

Shift 7: From IT centric to business centric architectures. Today, most IT organizations create IT architectures that are independent of the business processes they support. This creates a major IT drag on business agility. In a transformed organization, the IT architecture intentionally mimics the business architecture, resulting in highly agile IT systems that can turn on a dime as the business evolves.

Shift 8: From design to implementation. Today, most IT organizations spend considerable time “doing design.” In a transformed organization, there is much less design done in IT, since the overall design is driven by the business architecture (see shift 7.) While IT doesn’t completely leave the design business, IT is seen as primarily responsible for implementing the design that is defined by the business rather than creating the design that will be used by the business.

Shift 9: From long to short time frames. Today, most IT organizations measure their milestones in months and their delivery dates in years. In a transformed organization, entire delivery cycles are reduced to months or less. With processes slashed, design de-emphasized, and focus shifted to small and simple solutions, time to deliver is cut to the minimum.

Do these shifts resonate with you? Perhaps you are a candidate for a radical transformation. If so, stay in touch. We’ll be discussing this more in the coming weeks.

Would you like to subscribe to notifications about my blogs, white papers, and webshorts? Sign up here:  http://www.objectwatch.com/subscriptions.html.


-------------------------------
Workshop Announcement: 
Radical IT Transformation with Roger Sessions and Sarah Runge
For my New Zealand and Australia followers, I will soon be doing a workshop with Sarah Runge, author of Stop Blaming the Software. We will be spending two days discussing our work in Radical IT Transformation, a better way to do IT.
Auckland: October 11-12 2012
Sydney: October 15-16 2012
Cairns: October 18-19 2012

Check out our Agenda or Register!
------------------------------


Citations:


(1) http://h30507.www3.hp.com/t5/Transforming-IT-Blog/bg-p/transforming-it
(2) http://www.cisco.com/assets/sol/cloud/cloudverse_videos/index.html
(3) http://www.microsoft.com/business/events/en-us/PrivateCloudExec/#fbid=J5GaDAi8tB6
(4) http://ibm.co/MCnX7s
(5) http://www.cio.com/article/663015/Transforming_IT_to_Show_Cost_of_Services_5_Best_Practices
(6) http://www.capgemini.com/services-and-solutions/challenges/transforming-it-function/overview/
(7) http://www.accenture.com/us-en/pages/success-accenture-microsoft-transforming-it-summary.aspx

Acknowledgements

Photo of Potter’s Hands is by Walt Stoneburner (http://www.flickr.com/photos/waltstoneburner/) licensed under Creative Commons

Thursday, July 5, 2012

The Misuse of Reuse

The software industry has been pursuing reuse for at least four decades. The approach has changed over that time. It started with structured programming promising reusable snippets of code. We then moved to object-oriented programming promising reuse through inheritance. Today we are focusing on service-oriented architectures promising reusable services that can be written once and then used by multiple applications.

For forty years we have been pursuing reuse and for forty years we have been failing. Perhaps it is time to reexamine the goal itself.

Let's start by reviewing the arguments in favor of reuse. They are quite simple. And, as we will soon see, they are quite flawed.

The argument goes as follows. Let's say we have three systems that all implement the same function, say Function 1. This situation is shown in Figure 1.

Figure 1. Three Systems Implementing Function 1


It seems fairly obvious that implementing Function 1 three times is an ineffective use of resources. A much better way of implementing these three systems is to share a single implementation of Function 1, as shown in Figure 2.

Figure 2. Three Systems Sharing A Single Function 1

In general, if there are S systems implementing Function 1 and it costs D dollars to implement Function 1, then the cost savings from reuse is given by

D * (S - 1)

If D is $10,000 and S is 5, then reuse should save us $40,000. Right? Not so fast.
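The naive arithmetic is easy to encode. Here is the savings claim exactly as stated above, using the figures from the text (the function name is mine):

```python
def naive_reuse_savings(d, s):
    """Claimed savings from implementing a function once rather than s times.

    d: cost in dollars to implement the function once.
    s: number of systems that need the function.
    """
    return d * (s - 1)

print(naive_reuse_savings(10_000, 5))  # 40000, the $40,000 from the text
```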

In order to evaluate the claim of cost savings through reuse, we need to apply some principles of IT Complexity Analytics. IT Complexity Analytics tells us that the complexity of Function 1 grows exponentially with the number of systems using the function. This is because each system is not using the exact same function; it is using some variant of it. Function 1 needs to be generalized for every possible system that might someday use it, not only those we know about but also those we don't. This adds considerable complexity to Function 1.

If the size of the circle reflects the complexity of the functionality, then a much more realistic depiction of the reuse scenario is shown in Figure 3. 

Figure 3. Realistic Depiction of Sharing Functionality

Since system cost is directly related to system complexity (one of the axioms of IT Complexity Analytics) we can say that in most cases, the theoretical cost savings from reusing functionality is overwhelmed by the actual cost of the newly introduced complexity.
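To put a number on this, here is a toy model of the argument. It is only a sketch: it assumes that the shared function's cost tracks its complexity, and that this complexity grows as a power of the number of consuming systems. The 3.11 exponent is borrowed from the complexity equation discussed in my March post; applying it to consumer count rather than function count is my own simplification for illustration.

```python
def naive_savings(d, s):
    # The textbook claim: implement once, save d dollars (s - 1) times.
    return d * (s - 1)

def shared_cost(d, s, exponent=3.11):
    # Toy model: the shared function must be generalized for every consumer,
    # so its complexity-driven cost grows as s ** exponent instead of staying d.
    return d * s ** exponent

d = 10_000
for s in (2, 3, 5):
    # Net benefit of sharing: cost of s separate copies minus the shared cost.
    net = d * s - shared_cost(d, s)
    print(s, naive_savings(d, s), round(net))
```

Even at two consumers, the modeled complexity cost swamps the naive savings, which is the point of Figure 3.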

However, the situation is even worse than this. Not only is the cost savings from reuse rarely achieved, but a number of additional problems are introduced. 

For example, we now have a single point of failure. If the system implementing Function 1 fails, all three of our systems fail. 

We have also compromised our security. As IT Complexity Analytics predicts, the overall security of a system is inversely related to its complexity. The more complex a system is, the lower its inherent ability to maintain security. 

And we have created a highly inefficient system for running on a Cloud. The extra cloud segments we will need to pull in to support our reuse will dramatically increase our cloud costs.

Given all of the problems we have created, we most likely would have been better off not attempting to create a reusable function in the first place. 

Now I should point out that I am not totally opposed to reuse. There are situations in which reuse can pay dividends. 

In general, a reuse strategy is indicated when the inherent complexity of the functionality being shared is high and the usage of that functionality is relatively standard. In these situations, the complexity of the functionality dominates over the complexity of the sharing of the functionality. But this situation is unusual. 

When should you pursue reuse? It all comes down to complexity. Will your overall system be more complex with or without shared functionality? This requires a careful measure of system complexity with and without the proposed sharing. If you can lower system complexity by sharing, do it. If you can't, don't. 
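As a sketch of that with/without comparison, the helper below scores both configurations using the complexity equation from my March post (C = F^3.11, where F counts business functions plus dependencies). The function counts and names here are hypothetical, chosen only to illustrate the decision rule:

```python
def complexity(f, exponent=3.11):
    # C = F ** X, with F counting business functions plus dependencies (in SCUs).
    return f ** exponent

def sharing_is_simpler(system_functions, shared_functions, exponent=3.11):
    # Without sharing: every system carries its own copy of the shared functions.
    without = sum(complexity(f + shared_functions, exponent) for f in system_functions)
    # With sharing: every system trades those functions for one dependency,
    # and the shared service contributes its own complexity.
    with_sharing = (sum(complexity(f + 1, exponent) for f in system_functions)
                    + complexity(shared_functions, exponent))
    return with_sharing < without

# High inherent complexity being shared: sharing lowers total complexity.
print(sharing_is_simpler([3, 3, 3], shared_functions=5))   # True
# Trivial shared functionality: the added dependency isn't worth it.
print(sharing_is_simpler([3, 3, 3], shared_functions=1))   # False
```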

Complexity trumps reuse. Reuse is not our goal, it is a possible path to our goal. And more often than not, it isn't even a path, it is a distraction. Our real goal is not more reusable IT systems, it is simpler IT systems. Simpler systems are cheaper to build, easier to maintain, more secure, and more reliable. That is something you can bank on. Unlike reuse. 

...............................
Roger Sessions writes about the topic of Organizational Complexity and IT. If you would like to get email notifications about new posts, use the widget on the right. 

Tuesday, March 13, 2012

The Equation every Enterprise Architect Should Memorize


In my white paper, The Mathematics of IT Simplification, I gave the following equation for calculating the complexity of an IT system:

C = F^3.11

where C is the complexity in Standard Complexity Units (SCUs) and F is the number of business functions implemented within the system. As I point out in the paper, F can be either business functions or dependencies on other systems. An SCU, by the way, is the amount of complexity in a single business function unconnected and unrelated to any other business function.

This equation is a simplification of a more complex equation that I called Bird's equation after a friend, Chris Bird, who first derived that more complex equation. This simplification was suggested by a woman reader whose name I have regrettably misplaced. If this person was you, please let me know so I can give you credit.

Although I was correct in my discussion about this equation in the white paper, I was a little vague on the proof. A friend, Bill Indest, challenged me on the proof. So with the help of another friend, Nikos Salingaros, I have decided to fully derive the equation, and go one better. Not only will I derive this equation, I will derive the more general, more useful form.

This equation starts with what I call Glass's Law which, as I explain in the white paper, is named for Robert Glass who popularized the law that increasing the functionality of a system by 25% doubles the complexity of the system.

So we know that complexity and functionality increase together, and that complexity increases faster than functionality. This describes a relationship between F (functionality) and C (complexity) in which C grows faster than F. The simplest such dependence is a power law: in other words, there exists some power X such that C = F^X. Okay. Now let's solve for X.

We know from Glass's Law that when functionality increases by 25%, complexity doubles. In other words,

2C = (1.25F)^X

So now we have two equations, both of which must be true.

Eq. 1:  C = F^X
Eq. 2:  2C = (1.25F)^X

Now we need to move into logarithms. Recall that log_W Y = Z is interpreted as Y = W^Z. And also recall that if W (the base) is not specified, then it is assumed to be 10. So, for example,

log_10 100 = 2, or we could just say

log 100 = 2, since 10 is assumed.

One of the laws of logarithms states that if

A = B, then
log A = log B

So let’s go back to our two equations:

Eq 1: C = F^X
Eq 2: 2C = (1.25F)^X

We can use this property of logarithms to transform equations 1 and 2 into their logarithmic equivalents:

Eq 3: log C = log (F^X)
Eq 4: log 2C = log ((1.25F)^X)

Now we use another two log identities:

log (A^B) = B log A
log (AB) = log A + log B

Using these identities, we can transform Equations 3 and 4 into

Eq 5:  log C = X log F
Eq 6: log 2 + log C = X (log 1.25 + log F)

We now subtract Eq 5 from Eq 6 giving

Eq 7: log 2 = X(log 1.25 + log F) – X log F

which is the same as

Eq 8: log 2 = X log 1.25 + X log F – X log F 

which reduces to

Eq 9: log 2 = X log 1.25

We can now divide both sides of Eq 9 by log 1.25, giving

Eq 10: log 2/log 1.25 = X

Since log 2 = 0.301 and log 1.25 = 0.097, the left-hand side of Eq 10 simplifies to 0.301/0.097, which is approximately 3.11.

Thus X in Eq 1 is equal to 3.11, and complexity is related to functionality by

Eq 11: C = F^3.11
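The derivation is easy to sanity-check numerically:

```python
import math

# X = log 2 / log 1.25 (any log base works, since the ratio cancels the base).
x = math.log(2) / math.log(1.25)
print(round(x, 2))  # 3.11

# Glass's Law holds for C = F ** x: a 25% functionality increase doubles complexity.
f = 100
print(round((1.25 * f) ** x / f ** x, 6))  # 2.0
```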

This, of course, was my original assertion. Now this equation assumes that you buy into the fact that complexity doubles when functionality increases by 25%. But if you don’t agree with this, you can replace X with values that represent your own belief (or, even better, your own observation). In general, if complexity multiplies by c whenever functionality multiplies by f, then

Eq. 12: C = F^(log c/log f)

Let’s say you observe that in your organization complexity doubles when F goes up by 50%. Then your formula for complexity is

Eq. 13: C = F^(log 2/log 1.5)

which works out to

Eq. 14: C = F^(0.301/0.176) = F^1.71

Equation 12 is so important to understanding complexity in an enterprise that I will refer to it as the Fundamental Equation for Enterprise Complexity. Everything you need to know about enterprise complexity can be derived from this equation. The formula should be memorized by every enterprise architect.
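In code form, the Fundamental Equation's exponent is one line. The helper name is my own; it takes the functionality growth factor and the complexity multiplier you observe and returns your organization's X:

```python
import math

def complexity_exponent(functionality_factor, complexity_multiplier):
    """X such that C = F ** X, given that complexity multiplies by
    complexity_multiplier whenever functionality multiplies by functionality_factor."""
    return math.log(complexity_multiplier) / math.log(functionality_factor)

print(round(complexity_exponent(1.25, 2), 2))  # 3.11, Glass's Law
print(round(complexity_exponent(1.5, 2), 2))   # 1.71, the Eq. 14 example
```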

See? Aren’t you glad you stayed awake?

Friday, January 6, 2012

Web Short: The Relationship Between IT Project Size and Failure

Large IT projects have bigger budgets than small projects. They also have more formal approaches to tracking milestones and budgets. Does this make them more likely to succeed? It turns out that there is a relationship between the cost of a project and the chances that that project will succeed, but it isn't what you might think.

Watch the presentation and then leave a comment.

You can see a full list of Roger's Web Shorts here.
Would you like to be notified of future Web Shorts and White Papers by Roger Sessions? Sign up here.

And thanks to AuthorStream for hosting this presentation.


This presentation includes narration, so be sure to have your speakers on.

Web Short: The Mathematics of Cloud Optimization

How do you minimize the cost of running your large mission critical application on a public cloud? Do you focus on finding a low cost cloud provider? It turns out that the answer to cost reduction may be closer than you think. And it starts by understanding the mathematics around Cloud optimization.

Watch the presentation and then leave a comment.


You can see a full list of Roger's Web Shorts here.
Would you like to be notified of future Web Shorts and White Papers by Roger Sessions? Sign up here.
And thanks to AuthorStream for hosting this presentation.


The video includes narration, so be sure your speakers are on!

Web Short: SIP Methodology for Project Optimization

IT Methodologies such as TOGAF, Zachman, FEAF, RUP, and Agile are all important tools for the enterprise and software architect. But it turns out that all of these methodologies share a common limitation: they don't scale. Each of these works well for projects less than about $1 million, but try to use any of them for projects in the $10M range, and you will find yourself in a murky land with dangers around every curve.

If you are building or maintaining a large IT system, you must start by understanding the principles of partitioning. This Web Short gives an overview of the SIP methodology, the only methodology focused exclusively on the issue of partitioning. SIP doesn't compete with these existing methodologies; it completes them. SIP is the missing ingredient in scalability.

You can see a full list of Roger's Web Shorts here.

Would you like to be notified of future Web Shorts and White Papers by Roger Sessions? Sign up here.
And thanks to AuthorStream for hosting this presentation.


Saturday, December 10, 2011

The CRASH Quiz

CAST Software just published their 2011 CRASH report (CAST Report on Application Software Health). I know the CAST folks quite well. They are leaders in the field of software implementation complexity. Implementation complexity is complementary to my own interest, organizational complexity. Organizational complexity comes from poor project partitioning; implementation complexity comes from poor project coding. Both types of complexity cause severe problems for large IT systems.

While the full CRASH report must be purchased, considerable information is available in the free summary available here. I will highlight some of the most surprising, and in some cases, controversial findings. To make this more interesting, I will deliver my discussion as a quiz. So get ready!

CAST did this analysis using their software analysis tools. CAST produces tools that analyze software systems and rates them on various criteria of code quality. CAST, for example, can analyze the 50,000 lines of code that was just delivered from your outsourcing firm and rate it on maintainability, adaptability, security, and a number of other attributes that you probably care a great deal about.

The logic to doing this analysis is simple. Sooner or later, you are going to find out about how maintainable, adaptable, and secure this code is. Would you prefer to find out now, before you have accepted delivery, or later, after you have deployed this system to your trusting constituents?

To produce the CRASH report, CAST used their tools on a large collection of software systems ranging in size from 10K LOC (lines of code) to more than 5M LOC. They did this for a number of industries, programming systems, and development methodologies.

Okay, are you ready to take the CRASH quiz? Here goes! I will start with the questions and then give you the answers.

QUIZ

Q 1. Rank the following languages from most used to least used: ABAP, C, Java, .NET, Cobol.

Q 2. In the Government, which of the following languages is most popular: ABAP, C, Java, .NET, Oracle Forms?

Q3. Which of the following yield the worst security scores: Cobol, C++, or Java?

Q4. Which of the following have the highest complexity scores: Java, Oracle Forms, or Cobol?

Q5. Which industry has the highest complexity scores: Government, Financial Services, or Telecom?

Q6. Which code has a higher overall quality index, code produced in-house or code that is outsourced?

Q7. Which development approach produces code with a higher quality index, Agile/Iterative or Waterfall?

Q8. What is the "Technical Debt" in an average system, per line of code?
a. Less than $1.00 per line of code.
b. Between $1.01 and $2.00 per line of code.
c. Between $2.01 and $3.00 per line of code.
d. Between $3.01 and $4.00 per line of code.


ANSWERS (No Peeking!)

A1. Overall, these languages rank (from most to least popular): Java, Cobol, ABAP, .NET, and C. Actually, C is rarely used. I include it in the list just for nostalgia.

A2. In the Government, Oracle Forms is the most popular programming system, followed by Java. I note that Oracle Forms systems tend to be relatively small programs, if one can even call them programs. So while Java is used for fewer "systems," I suspect (but can't tell from the data) that it is used for many more lines of code.

A3: From a security perspective, C++ and Java are in a virtual tie for worst security. Cobol code overall has a much better security score. This may reflect more on the industry than the language, since Cobol is used heavily in the Financial Services industry, where security is taken more seriously than in, say, Telecom, where Java use predominates.


A4. The most complex code by far is found in the Cobol systems followed by Oracle Forms. Java wins for the least complex code of the three.


A5. The industry with the highest complexity scores is the Government. Financial Services is a distant second, followed by Telecom. This result is surprising given that Java is popular in both Telecom and the Government. The implication seems to be that although the Government is using very good language tools, it is not maximizing their effectiveness.


A6. Code that is produced in-house has a better quality index than outsourced code, but the difference is marginal and probably not statistically significant.



A7. Overall, Waterfall development has a significantly higher quality index than does code produced using Agile/Iterative. Not only does Agile/Iterative score lower in overall quality, it also scores lower in transferability (the ability for other groups to understand the code) and changeability (the ability to modify the code.) I can hear the groans of protest already from the Agile community. Sorry, I'm just the messenger.



A8. The average "Technical Debt" in a system is $3.61 per line of code (answer d.) The technical debt looks at the number of problems, the severity of those problems, and the cost of fixing those problems.

Some of these answers are a bit surprising, aren't they? Feel free to read the summary report here. You will probably find another surprise or two.