Saturday, December 10, 2011

The CRASH Quiz

CAST Software just published its 2011 CRASH report (CAST Report on Application Software Health). I know the CAST folks quite well; they are leaders in the field of software implementation complexity. Implementation complexity is complementary to my own interest, organizational complexity. Organizational complexity comes from poor project partitioning; implementation complexity comes from poor project coding. Both types of complexity cause severe problems for large IT systems.

While the full CRASH report must be purchased, considerable information is available in the free summary, available here. I will highlight some of the most surprising, and in some cases controversial, findings. To make this more interesting, I will deliver my discussion as a quiz. So get ready!

CAST did this analysis using their own software analysis tools. CAST produces tools that analyze software systems and rate them on various criteria of code quality. CAST, for example, can analyze the 50,000 lines of code that were just delivered from your outsourcing firm and rate them on maintainability, adaptability, security, and a number of other attributes that you probably care a great deal about.

The logic behind this analysis is simple. Sooner or later, you are going to find out how maintainable, adaptable, and secure this code is. Would you prefer to find out now, before you have accepted delivery, or later, after you have deployed the system to your trusting constituents?

To produce the CRASH report, CAST used their tools on a large collection of software systems ranging in size from 10K LOC (lines of code) to more than 5M LOC. They did this for a number of industries, programming systems, and development methodologies.

Okay, are you ready to take the CRASH quiz? Here goes! I will start with the questions and then give you the answers.

QUIZ

Q1. Rank the following languages from most used to least used: ABAP, C, Java, .NET, Cobol.

Q2. In the Government, which of the following languages is most popular: ABAP, C, Java, .NET, Oracle Forms?

Q3. Which of the following yield the worst security scores: Cobol, C++, or Java?

Q4. Which of the following have the highest complexity scores: Java, Oracle Forms, or Cobol?

Q5. Which industry has the highest complexity scores: Government, Financial Services, or Telecom?

Q6. Which code has a higher overall quality index, code produced in-house or code that is outsourced?

Q7. Which development approach produces code with a higher quality index, Agile/Iterative or Waterfall?

Q8. What is the "Technical Debt" in an average system, per line of code?
a. Less than $1.00 per line of code.
b. Between $1.01 and $2.00 per line of code.
c. Between $2.01 and $3.00 per line of code.
d. Between $3.01 and $4.00 per line of code.


ANSWERS (No Peeking!)

A1. Overall, these languages rank (from most to least popular): Java, Cobol, ABAP, .NET, and C. Actually, C is rarely used; I include it in the list just for nostalgia.

A2. In the Government, Oracle Forms is the most popular programming system, followed by Java. I note that Oracle Forms applications tend to be relatively small programs, if one can even call them programs. So while Java is used for fewer "systems," I suspect (but can't tell from the data) that it is used for many more lines of code.

A3. From a security perspective, C++ and Java are in a virtual tie for worst security. Cobol code overall has a much better security score. This may reflect more on the industry than the language, since Cobol is used heavily in the Financial Services industry, where security is taken more seriously than in, say, Telecom, where Java use predominates.


A4. The most complex code by far is found in the Cobol systems followed by Oracle Forms. Java wins for the least complex code of the three.


A5. The industry with the highest complexity scores is the Government. Financial Services is a distant second, followed by Telecom. This result is surprising given that Java is popular in both Telecom and the Government. The implication seems to be that although the Government is using very good language tools, it is not maximizing their effectiveness.


A6. Code that is produced in-house has a better quality index than outsourced code, but the difference is marginal and probably not statistically significant.



A7. Overall, Waterfall development has a significantly higher quality index than does code produced using Agile/Iterative. Not only does Agile/Iterative score lower in overall quality, it also scores lower in transferability (the ability for other groups to understand the code) and changeability (the ability to modify the code). I can hear the groans of protest already from the Agile community. Sorry, I'm just the messenger.



A8. The average "Technical Debt" in a system is $3.61 per line of code (answer d). The technical debt calculation looks at the number of problems, the severity of those problems, and the cost of fixing those problems.
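CAST's actual technical-debt formula is proprietary, but the idea, counting the problems, weighting them by severity, and pricing the fixes, can be sketched in a few lines. The severity categories and dollar figures below are invented purely for illustration:

```python
# Hypothetical per-violation fix costs by severity (invented figures;
# CAST's real model is proprietary).
FIX_COST = {"low": 30.0, "medium": 90.0, "high": 300.0}

def technical_debt_per_loc(violations, loc):
    """violations: severity -> number of violations; loc: lines of code."""
    total_cost = sum(FIX_COST[sev] * n for sev, n in violations.items())
    return total_cost / loc

# A 100 KLOC system with an invented mix of violations.
debt = technical_debt_per_loc({"low": 5000, "medium": 1500, "high": 200},
                              loc=100_000)
# -> $3.45 per line of code
```

With this particular made-up mix of violations, the model lands at $3.45 per line of code, in the same neighborhood as the report's $3.61 average.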

Some of these answers are a bit surprising, aren't they? Feel free to read the summary report here. You will probably find another surprise or two.

Tuesday, November 22, 2011

CxO Executive Round Table on Business Risk and the Cloud in NYC

CxO Round Table on Business Risk and the Cloud in NYC
With
Robert Youngjohns, President of Microsoft North America
Roger Sessions (Author and Thought Leader)

Note: In return for being the main speaker at this event, Microsoft has offered me ten seats. I am making them available on LinkedIn, my blog, and to my email list on a first-come, first-served basis.

- Roger Sessions

Event Description
Robert Youngjohns will present Microsoft's vision, commitment, and investment in the Cloud. Then Roger Sessions will give a vendor-neutral talk on Business Risk and the Cloud, followed by an interactive roundtable discussion between you and your peers.

Sessions’s Abstract: The business is greatly impacted by the decisions IT makes on the Cloud. But in most organizations, business has little input in cloud discussions. How can the business be sure that its needs for regulatory compliance, data security, reliability, and ROI are being addressed? A new model must drive the IT/Business collaboration to ensure the Cloud delivers the highest possible value at the lowest possible cost with the least possible risk.

Location: New York City, Tuesday, December 6, 2011

Audience: This talk is directed at those on the business side (that is, CxOs and business leaders) of large (>$1B/year) organizations. In order to optimize participation, the audience is limited to 20.

Note: There is no cost for this event.

Agenda
8:30 - 9:15AM Registration and Breakfast
9:15 - 10:15AM Microsoft’s Cloud Strategy - Robert Youngjohns, President, Microsoft North America
10:15 - 11:00AM Business Risk and the Cloud - Roger Sessions, Author, Thought Leader
11:00 - 11:45AM Roundtable Discussion: Experiences and Strategies
11:45 - 12:00PM Next Steps and Closing

To register for this event, send an email to v-josyph; at microsoft.com.
As the subject, use:
Managing Risk in the Cloud Roundtable Dec 06-Registration
In the body of the email, include your name, title, company, and the note, “by invitation of Roger Sessions.”

Speaker Bios


Robert Youngjohns is President, Microsoft North America
   Youngjohns oversees a sales force of over 8,500 employees across the continent. He brings more than 30 years of experience in sales, marketing and strategic business development. Prior to joining Microsoft, Youngjohns served as president and chief executive officer of Callidus Software, Inc., a publicly traded company and leading provider of sales management software based in San Jose, Calif. Before joining Callidus, Youngjohns spent 10 years at Sun Microsystems, where in his last role as executive vice president of Global Sales he was responsible for Sun's worldwide sales organization. He also spent 18 years in various roles at IBM.
   Youngjohns has deep technical knowledge and sales experience across the breadth of products within the Microsoft portfolio. Current market trends of focus are cloud computing and the consumerization of IT. He is committed to helping customers and partners get the most value from their relationship with Microsoft.

Roger Sessions is the CTO of ObjectWatch
   He has written seven books including Simple Architectures for Complex Enterprises and many articles. He is a past founding member of the Board of Directors of the International Association of Software Architects, Editor-in-Chief of Perspectives of the International Association of Software Architects, and a Microsoft-recognized MVP in Enterprise Architecture. He has given talks in more than 30 countries, 70 cities and 100 conferences on the topic of Enterprise Architecture.
   Sessions is a well-respected thought leader on the topics of enterprise architecture, the Cloud, and Complexity/Risk management.



Saturday, October 29, 2011

SIP Complexity Model

SIP, as you probably know, stands for Simple Iterative Partitions, a methodology for finding the least complex solution to a complex IT problem. I have written about SIP elsewhere, for example in my white paper The Mathematics of IT Simplification. My goal in this blog is to give a high-level description of SIP's model of IT complexity.

This blog is a work in progress. I will modify it based on suggestions and eventually turn this into a white paper. So please leave your suggestions as comments. If I use them, I will credit you in the eventual white paper. You might want to subscribe to this blog, so that you can follow the discussion.

Let's assume that we have a business problem, BP, that needs an IT solution. Let's say a solution, S001, is proposed. There are a number of attributes of S001 that we could think about. One obvious attribute is how much of the business problem, BP, does S001 solve? In other words, how closely aligned is the solution S001 to the problem BP?

Let's say we have a function A that measures the alignment of S001 to BP and returns a result as an integer between 0 and 100, where 0 is complete non-alignment (S001 solves none of BP) and 100 is complete alignment (S001 completely solves BP).

We can write this as y = A(BP, S001)

Let's say we also have a function C that measures the complexity of S001 and returns as a result some number of complexity units. I won't try to describe these complexity units here, but I have described them in the above referenced Mathematics white paper.

We can write this as x = C(S001)

Note that A is a binary function (it takes two arguments) because it must compare the proposed solution to the original problem. C, on the other hand, is a unary function (it takes one argument) because complexity is not dependent on the problem, only on the proposed solution.

Since S001 now has both a y value, A(BP, S001), and an x value, C(S001), we can plot it on a graph in which y is alignment and x is complexity. This is shown in Figure 1.

Figure 1. Graph of S001, A Solution to BP

Now BP doesn't just have one possible solution, it has a whole bunch of solutions. Let's say we have identified 25 possible solutions to BP. We can name these solutions S001, S002, ... S025.

Just as we plotted S001 in Figure 1, we can plot each of the other solutions. This is shown in Figure 2.

Figure 2. Graph of S001-S025
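The real A and C functions are the subject of the white paper, but the plotting step itself is easy to sketch. Below, A and C are stand-ins that simply read pre-assigned scores off each candidate solution; the solution names, score ranges, and random values are all invented for illustration:

```python
import random

random.seed(42)  # reproducible illustration

# Stand-ins for the alignment function A(BP, S) and the complexity
# function C(S). A real implementation would analyze each proposed
# solution; here every solution just carries pre-assigned scores.
def A(bp, solution):
    return solution["alignment"]   # integer, 0..100

def C(solution):
    return solution["complexity"]  # complexity units

BP = "the business problem"
solutions = [{"name": f"S{i:03d}",
              "alignment": random.randint(0, 100),
              "complexity": random.randint(50, 500)}
             for i in range(1, 26)]

# One (x, y) point per solution: x = complexity, y = alignment.
points = [(C(s), A(BP, s)) for s in solutions]
```

Handing the `points` list to any plotting library reproduces the scatter of Figure 2.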

In general, the values of Sx will be bounded by a tetragon, as shown in Figure 3.

Figure 3. Complexity Tetragon Bounding Sx

Why are the values bounded by this tetragon? At the upper edge, values are bounded by alignment. It is not possible to have more than 100% alignment, so that cuts off values at the top. The x axis is bounded at the low end by the lowest complexity observed in any of the Sx and at the high end by the highest complexity observed in any of the Sx.

The upper left corner of the tetragon is bounded in the x direction by the lowest possible complexity that still gives the maximum alignment with BP. We can still find simpler Sx, but they can only get simpler by losing functionality that BP needs.

The upper right corner of the tetragon is bounded in the x direction by the highest possible complexity that still gives the maximum alignment with BP. We can still find more complex Sx, but their complexity will result in a loss of ability to meet the needs of BP.

We can further divide the Complexity Tetragon from Figure 3 into four quadrants, as shown in Figure 4.

Figure 4. Four Quadrants of Complexity Tetragon

The lines marking the quadrant boundaries are shown as fuzzy because the boundaries themselves are fuzzy. We can focus on these quadrants by removing the points and adding quadrant names, as shown in Figure 5.
Figure 5. The Four Quadrants With Names

I'll come back to the quadrant names in a moment, but let's just consider the characteristics of solutions that live in each quadrant. The characteristics are as follows:
  • The vital quadrant contains solutions that have low complexity and high alignment to the business problem.
  • The stagnant quadrant contains solutions that have high complexity and high alignment to the business problem.
  • The chaotic quadrant contains solutions that have high complexity and low alignment to the business problem.
  • The simplistic quadrant contains solutions that have low complexity and low alignment to the business problem.
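The quadrant assignment can be sketched as a simple classifier. The crisp thresholds `c_mid` and `a_mid` are my own stand-ins; as the figures note, the real boundaries are fuzzy:

```python
def quadrant(complexity, alignment, c_mid, a_mid=50):
    """Classify a solution into one of the four quadrants.

    c_mid and a_mid are crisp stand-ins for boundaries that the
    model itself treats as fuzzy.
    """
    if alignment >= a_mid:
        return "vital" if complexity < c_mid else "stagnant"
    return "simplistic" if complexity < c_mid else "chaotic"

# Low complexity, high alignment: the quadrant we want.
assert quadrant(complexity=100, alignment=90, c_mid=300) == "vital"
# High complexity, low alignment: the quadrant we dread.
assert quadrant(complexity=400, alignment=20, c_mid=300) == "chaotic"
```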
Now you might wonder why I have put complexity on the X axis. Why not, say, agility? The reason is that a number of important characteristics are all directly related to complexity. Agility, for example, goes up as complexity goes down and goes down as complexity goes up. A partial list of these characteristics and their relationships to complexity follows:
  • Agility, inversely related to complexity
  • Security, inversely related to complexity
  • Performance, inversely related to complexity
  • Reliability, inversely related to complexity
  • Cost, positively related to complexity
  • Time to deliver, positively related to complexity
A quick examination of the above list will show you that desirable characteristics go down as complexity increases and undesirable characteristics go up as complexity increases. Therefore complexity is the most important characteristic to consider. Mathematically, we can say that complexity is the independent variable and agility, security, etc. are the dependent variables. I will not try to prove this assertion here, but I have written about it elsewhere (for example, in my book, Simple Architectures for Complex Enterprises).

Now I can describe why I have named the quadrants as I have. 

Solutions in the vital quadrant have low complexity and good alignment to the problem. I call this quadrant vital because these solutions have the best agility of all the quadrants that are aligned with the business problem. Because solutions in this quadrant have high agility, they can change as needed.

Solutions in the stagnant quadrant also solve the problem, but at a high complexity cost. Among other problems, these solutions are hard to change as business needs change. I call this quadrant stagnant because solutions in this quadrant are stuck in time.

Solutions in the chaotic quadrant have no redeeming features. They do not solve the problem and they are highly complex. I call this quadrant chaotic because nothing useful comes out of it. Chaotic solutions get canceled somewhere along the project life cycle before delivery, usually after considerable money has been dumped into trying to make them work.

Solutions in the simplistic quadrant do not solve the business problem, but at least they are not burdened with overwhelming complexity. I call this quadrant simplistic because the solution is overly simplistic with respect to the need. A rock, for example, is a simplistic car.

When considering a solution to a business problem, it is helpful to consider which quadrant that solution lives in. 

If the solution lives in the vital quadrant, you are in good shape. You have a solution to the business problem and it has low complexity and is thus highly likely to have a number of important characteristics (agility, etc.)

If the solution lives in the stagnant quadrant, the solution has hope. At least it appears to solve the business problem. Now you just need to focus on removing the complexity that is going to rob you of the characteristics you want.

If the solution lives in the chaotic quadrant, the solution has no hope. The best thing you can do is to kill the solution as quickly as possible and start from scratch. Learn your lessons and be happy you didn't lose more money.

If the solution lives in the simplistic quadrant, the solution has hope. At least you are not mired in complexity. Now just try to meet more of the business needs without substantially adding to the complexity.

Clearly, our ideal solutions live in the vital quadrant. But not all solutions in the vital quadrant are equal. The ideal ideal solution is the one in the upper left corner, shown as a target in Figure 6. This solution is the one with the absolute lowest complexity that still meets all of the business needs. This is the solution we would like to find.
Figure 6. Complexity Tetragon With Ideal Solution
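Picking that upper-left-corner solution out of a candidate set is a two-step selection: first maximize alignment, then minimize complexity among the survivors. A minimal sketch, using a tuple layout of my own invention:

```python
def ideal_solution(candidates):
    """Return the candidate with the lowest complexity among those
    achieving the highest alignment -- the upper left corner.

    candidates: list of (name, complexity, alignment) tuples.
    """
    best_alignment = max(alignment for _, _, alignment in candidates)
    top = [c for c in candidates if c[2] == best_alignment]
    return min(top, key=lambda c: c[1])

candidates = [("S001", 120, 95), ("S002", 90, 95), ("S003", 60, 80)]
# S001 and S002 tie on alignment; S002 is simpler, so it wins.
assert ideal_solution(candidates) == ("S002", 90, 95)
```

Note that S003, though the simplest of all, loses: simplicity never excuses a shortfall in alignment.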

So far, I have just looked at the SIP Complexity Quadrants at a moment in time. We can also look at how things drift over time. There are two distinct undertows that are pulling at solutions. One undertow is pulling from greater alignment to lesser alignment. The second undertow is pulling from lesser complexity to greater complexity. These undertows are added to the tetragon in Figure 7.

Figure 7. Complexity Tetragon With Undertows

By understanding the undertows we can predict that a solution that is vital on the day it is delivered is going to drift over time to being less aligned and more complex. This tells us that we need to put ongoing energy into fighting these undertows. We want to not only start out with a solution in the vital quadrant, we want to stay in that quadrant for the life of the solution. In fact, the lifetime of the solution will be highly influenced by which quadrant the solutions starts in and how effectively we fight the undertows after the solution is delivered.
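A hypothetical drift model makes the undertows concrete. The annual rates below are invented; the point is only the direction of the drift, not its magnitude:

```python
def drift(complexity, alignment, years, c_rate=0.08, a_rate=0.05):
    """Apply the two undertows for a number of years: complexity grows
    and alignment decays at assumed (invented) annual rates."""
    for _ in range(years):
        complexity *= 1 + c_rate   # undertow toward greater complexity
        alignment *= 1 - a_rate    # undertow toward lesser alignment
    return complexity, alignment

# A solution delivered in the vital quadrant, left unattended for 5 years.
c, a = drift(complexity=100.0, alignment=95.0, years=5)
# c has grown and a has shrunk: the solution is sliding toward stagnant,
# and eventually chaotic, unless energy is spent fighting the drift.
```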

Figure 7 now gives us a complete picture of the complexity landscape of a proposed solution. It gives us considerable information about how good a solution is, what steps need to be taken to improve it, and what forces are going to be pulling at the implemented solution over the long term. 

What do you think? Is this a good start to helping you understand the SIP Complexity Model? Keep in mind this is a draft and I will be adding to it as I respond to comments. Feel free to leave comments here, tweet me at @RSessions, or email me (userID: roger; domain: objectwatch.com).

Friday, October 28, 2011

New Word: Simplility

I am introducing a new word into the IT lexicon: simplility. Simplility is defined as the intentional architectural design of simplicity into a software application.

Simplility follows a long tradition of "ilities" such as portability, reliability, and scalability. These all imply that somebody has made an intentional decision to include specific attributes in an application. Nobody expects an application to be portable, reliable, or scalable by accident. We understand that it takes skill and effort to give an application these attributes. And we understand that there are important reasons for doing so.

Why, you may ask, do we need a new word? Why not just use the word simplicity? The problem with simplicity is that we take it for granted. To say that an application is simple simply does not have the same cachet as saying the application is scalable. We understand that scalable has business value. 

So we need a word that elevates the attribute of simplicity to the same level of the other ilities that we understand provide business value. We need a word that announces that simplicity is a business asset and that it takes skill, effort, and commitment to incorporate this attribute into applications.

There is one problem with the word simplility. It implies that simplility is equal in importance to portability, reliability, or security. In fact, simplility is more important. It is the primary architectural attribute, the one from which all other attributes flow.

Take security, for example. A system that has simplility baked in is one that will be much easier to make secure than one that lacks simplility. Complex systems are inherently insecure. 

The same goes for reliability. The greater the simplility, the greater the reliability. Complex systems are inherently unreliable.

So if we are serious about IT architecture, we need to get serious about simplility. It is the most important ility of all. And starting today, it has its own word.

Tuesday, October 25, 2011

Public or Private Cloud? Wrong Question!

Many people are asking if it is best to use a public or private cloud. They are asking the wrong question.

The bigger issue for most organizations is not whether to use a private or public cloud, but whether to use any cloud at all. The issue here is not whether the cloud is a good idea or a bad idea, but whether the organization has sufficient maturity to make effective use of any cloud. Most don't.

Most organizations use a pre-cloud architecture. The cloud places specific demands on an app with respect to efficiency, performance, reliability, and security. Very few organizations understand these demands, and thus very few apps of any significant size are architected to meet them.

A good historical comparison is the switch from client/server programming to three-tier programming. A new generation of technologies enabled three-tier programming, namely transaction processing monitors (TPMs). TPMs promised a radical improvement in the efficiency of computer resources, very similar to what the cloud promises today. Many organizations took their existing client/server applications and "ported" them to a TPM environment. They then discovered the hard way that a client/server architecture is totally unsuited to a TPM environment.

Organizations that take their existing three-tier, SOA, or even client/server apps and "port" them to the cloud are in for an equally rude awakening. They will find that these apps are expensive to run, highly fragile, insecure, and will have poor performance.

Asking if a public or private cloud is best for most apps is like asking if a public or private road is best for most boats.

Saturday, September 17, 2011

Next Web Short: Size and IT Risk

Announcing a new series of short focused web meetings with Roger Sessions. You are invited.

NEXT WEB SHORT
Topic: Size and IT Risk; The Relationship Between Project Size and Project Failure
Date: Wednesday, Sept 21
Time: 11:00 AM USA Central Time (16:00 GMT)
Recommended Audience: CIO, CIO Reports, Enterprise Architects
Format: 15 minute presentation followed by Q&A


To Register: send an email with subject REGISTER to roger (domain: objectwatch.com). Be sure to include your name and the email address to which you would like the log-in information sent.


REGISTRATION LIMITED TO 20 PARTICIPANTS. 

There is no charge for registration. We invite you to forward this information to those in your organization that you think might be interested in attending. 

Upcoming Web Short Topics

  • Complexity and the Cloud 
  • The Challenge of Business/IT Alignment 
  • Achieving Agility 
  • The Problem of Procurement
  • The Fallacy of Reusability 
  • SIP Methodology

If you would like to attend these meetings but the time (16:00 GMT) is inconvenient in your time zone, let us know and we will add other time slots.



For priority notification on future Web Shorts and White Papers by Roger Sessions, subscribe to the ObjectWatch News List here.



Tuesday, April 5, 2011

The Mathematics of IT Simplification

01 April 2011 - I have written a 32 page white paper on The Mathematics of IT Simplification. This paper suggests a mathematically grounded approach to simplifying large complex IT systems. This approach is called synergistic partitioning. Synergistic partitioning is based on the mathematics of sets, equivalence relations, and partitions. This approach has several advantages over traditional decompositional design:
  • It produces the simplest possible partition for a given system.
  • It is directed, or deterministic, meaning that it always comes out with the same solution.
  • It can determine the optimal partitioning of a system with very little information about that system.
You can pick up a copy of the paper here (it's easy, no registration or cost) and discuss it by leaving comments right here.
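The set-theoretic machinery behind synergistic partitioning is in the white paper, but as a taste, here is how a pairwise equivalence ("synergy") relation induces a partition, computed with a standard union-find. The function names and the toy banking example are mine, not SIP's:

```python
def partition(elements, synergies):
    """Partition elements into blocks given pairwise 'synergies'
    (an equivalence relation), using union-find."""
    parent = {e: e for e in elements}

    def find(e):
        while parent[e] != e:
            parent[e] = parent[parent[e]]  # path halving
            e = parent[e]
        return e

    for a, b in synergies:
        parent[find(a)] = find(b)  # merge the two blocks

    blocks = {}
    for e in elements:
        blocks.setdefault(find(e), set()).add(e)
    return list(blocks.values())

functions = ["deposit", "withdraw", "notify", "report"]
blocks = partition(functions, [("deposit", "withdraw"), ("notify", "report")])
# -> two blocks: {deposit, withdraw} and {notify, report}
```

Because an equivalence relation fully determines its partition, the outcome is deterministic, which is the point of the "directed" property claimed above.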