Everyone knows complexity is a foe of IT. But how bad is it, and how do you tell if your decisions are making it better or worse? Peter Leukert, CIO of Commerzbank, one of the largest banks in Germany, set out to find out. Leukert, who runs the financial services giant's 3,800-member centralized IT group, built an IT Complexity Model to get a handle on the problem, and then turned to consulting firm Capco to help get other financial services firms involved. Network World Editor in Chief John Dix recently caught up with Leukert and Mat Small, Partner with Capco in New York, for a briefing on the effort.
Peter, tell us about the complexity issues you face and what you're hoping to get out of the model you developed.
LEUKERT: In an environment like ours with more than a thousand business-relevant applications, you see complexity increasing, and every CIO I talk to shares the opinion that complexity is an issue. But it is very hard to get under control.
Why is it hard? Because it keeps creeping up on you and, compared to budget for example, you can't localize it. To manage budget, if I get 100 and I have five reports, each of them gets 20, and I deal with anyone who goes above 20, so you can localize the problem. With complexity, that doesn't work, because it's the interaction of many, many efforts and you only feel the effect in the aggregate. Another reason it is hard is that we cannot measure it. And that was really the guiding thought behind the model.
Complexity increases cost and decreases flexibility -- often in unforeseen ways -- and also tends to decrease stability. If you run IT, those are three of your most important KPIs. So all three are adversely affected by complexity.
We started building the model three years ago and collected a time series of data, but you can only learn so much from your own data. What I'm really interested in is benchmarking myself against other institutions. That's the main reason for us to join forces with Capco, to get other banks and financial institutions involved, because I believe that together if we get data from other institutions it will become a more valuable tool.
Mat, the first whitepaper about the complexity model that Capco co-authored with Commerzbank says the model is designed to help organizations "model, measure and master" complexity, but go a little deeper for us.
SMALL: It is a model based on inputs that generates a metric. Not all model inputs are equal, so as a company embraces the model, it will want to place different weightings on different inputs. Over the last many months we've been working with Commerzbank to refine the model and increase the number of inputs.
The model itself has been statistically validated by Professor Martin Mocker at MIT, so the math is very tested. And Capco and Commerzbank have been refining the inputs, refining definitions of inputs, continually validating that as we add or amend inputs.
Over the next year we're going to take data from other financial service organizations and start running the model against those inputs, producing metrics that are appropriate for those firms. And then over time we will have created a larger set of benchmark data, so different organizations will be able to start addressing larger problem statements.
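The actual Commerzbank/Capco model and its inputs are not public, but the idea Small describes, weighted inputs combined into a single metric, with the ability to run on an incomplete input set, can be sketched roughly as follows. All input names and weights here are invented for illustration.

```python
# Hypothetical sketch of a weighted-input complexity metric. The real
# model, its inputs, and its weights are not public; names and numbers
# below are illustrative only.

def complexity_score(inputs, weights):
    """Combine per-application inputs into one weighted metric.

    inputs  -- dict mapping input name to a normalized value in [0, 1]
    weights -- dict mapping input name to its relative weight
    Inputs that are absent are skipped and the remaining weights are
    renormalized, so the model can run without a complete input set.
    """
    present = [k for k in weights if k in inputs]
    total_weight = sum(weights[k] for k in present)
    if total_weight == 0:
        raise ValueError("no usable inputs")
    return sum(weights[k] * inputs[k] for k in present) / total_weight

weights = {"interfaces": 0.4, "data_redundancy": 0.3, "tech_diversity": 0.3}
app = {"interfaces": 0.8, "data_redundancy": 0.5}  # "tech_diversity" missing
print(round(complexity_score(app, weights), 3))    # 0.671
```

Renormalizing over the inputs that are present is one simple way to honor the point made later in the interview, that the model should still run when some inputs are absent.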
You are only looking at application complexity, right?
SMALL: Yes, just the application stack. So the model does not look at peripherals or the infrastructure side.
Will you ultimately try to get other types of industries involved?
SMALL: For the next year the focus will be within financial services, although that wouldn't prohibit somebody not in financial services from getting involved. I hesitate to throw out timelines, but one could easily think forward two years to the inputs being so well defined that somebody outside of the financial services market would be able to run it.
Peter, many organizations are trying to deal with complexity by consolidating data centers, consolidating servers, reducing their application count, so I presume you are undertaking these kinds of efforts as well?
LEUKERT: Due to the takeover of Dresdner Bank by Commerzbank, the largest-ever acquisition in the European financial sector, we were in the fortunate but unusual situation to actually halve our complexity by shutting down most of the Dresdner systems. We basically closed down and shut off half of the total application inventory of the combined bank. And of course we have also consolidated the network and the data centers. We have basically gone from four data centers in Frankfurt to two, from four data centers in London to two, etc.
So yes, consolidating applications clearly reduces complexity. But our model helps you be more sophisticated about it because it helps you figure out where you can get the biggest bang for the buck. It can give you an indicator that says, "Look, if you do it this way, it might be better in the short term, you might have a slightly faster time to market or a slightly lower administration cost, but beware, the long-term impact in terms of complexity is huge, and that will in the end translate into more long-term cost and reduced flexibility."
Does the model specify enough inputs to make critical decisions, or will you need to flesh them out with time?
LEUKERT: Keep in mind that there will always be other factors to consider. It's not an automatic thing where you say, "If complexity is above 20, then don't do the project." It's more, "If the complexity increase is far in excess of the project value, think again about doing the project."
Our model helps you make better-informed decisions, and it basically quantifies something that, in the past, all the architects have felt and talked about qualitatively. As you know, if there are three quantitative arguments and one qualitative one, the latter always becomes weak in comparison to the quantitative ones. That's why, if you can make it quantitative, it becomes more powerful.
So that's one use case. And the other is concerning profound decisions, questions of really transforming a major part of your architecture: modernizing a legacy system or something like that. The model can help by telling you it is worth taking the risk and making the investment because not only will you reduce operational costs, but you also will substantially reduce complexity, and that is an additional benefit in your business case.
Can you give us an example of how you have used the model?
LEUKERT: Sure. We are currently using the model to evaluate master data management tools within the bank. We have at least seven different systems holding customer master data and we're evaluating the best strategy to consolidate that.
In one scenario you say, "OK, I'll do a greenfield approach. I wish to start over and migrate all this into a consistent data model in one new system." Or we can take a more evolutionary approach and say, "I'll build upon one or two of these existing systems and extend them in an evolutionary fashion."
Complexity is a very important guideline in making this decision because it really gives you insights into how different they are, the effects in the core domain of customer master data itself and also the effects on the overall application landscape because of all the apps consuming master data.
Have you come to a conclusion on that yet?
LEUKERT: I think we are pretty close to concluding. And it seems at the moment that, if we do the tradeoff of cost vs. investment and complexity, the evolutionary approach is really the superior one in this case.
That's fascinating. Do you think you might have come to a different conclusion if you didn't have the complexity tool?
LEUKERT: Yes. Because we looked at the question a couple of years ago and at that point we actually thought the only way to solve the problem would be the greenfield approach. We thought the other way would not really solve the problem, and by having looked at the complexity levels, we see this is a very valid alternative.
And we have also come to better understand some other things. One of the favorite excuses of project leaders is, "Well, my project became extremely expensive because we don't have a consolidated source of master data." And from this analysis we have learned that only a small amount of complexity is driven by the question of whether your master data is consolidated or not.
You're going to take all the fun out of this by removing the need to get the newest shiny toy.
LEUKERT: (Laughs.) Yes. But of course we don't do it for fun. We typically use proven technology.
How do you anticipate the model being used?
SMALL: I think it will have relevance for any organization that faces change. They would probably start by looking at the metric quarterly -- did complexity increase or decrease from the quarter before? -- and then, over time, advance that to monthly. I doubt over the next five years you'd see people executing this more frequently.
How granular do the inputs get in terms of describing your environment?
SMALL: Today there are no more than two dozen inputs for each component. I would expect that to continue to increase over time, but they're not all required to execute the model. You could run the model with a number of inputs absent.
That sounds promising, because sometimes these types of things get so big and complex that, over time, they don't get used because too much work is involved.
SMALL: That's one of the things we've invested quite a bit of time in. As we introduce other companies to the model, we'll look at prioritizing inputs. We might end up with multiple levels. So there may be priority one inputs that must be captured, the attributes that have to go in for the model to run. And then a priority two set that provides for greater model precision, but you've already captured the directional soundness of the model just by running the must-haves. And then priority threes, which in fact aren't even necessary for the model but are nice to have in understanding its results.
But it's not there yet. Today we know we need the key attributes to run the model, but only as other companies come into the process will we be able to predict the precision an organization gets.
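The tiered-input scheme Small outlines, mandatory priority-1 inputs, precision-improving priority-2 inputs, and nice-to-have priority-3 inputs, could be enforced with a simple check like the one below. The tier assignments and input names are hypothetical.

```python
# Hypothetical sketch of the priority tiers Small describes. The actual
# tier assignments and input names are invented for illustration.
PRIORITY = {
    1: {"interfaces", "business_criticality"},  # must-haves: model won't run without them
    2: {"data_redundancy", "tech_diversity"},   # improve model precision
    3: {"owner", "last_review_date"},           # context for interpreting results only
}

def check_inputs(provided):
    """Return (can_run, missing_priority2) for a set of input names."""
    missing_p1 = PRIORITY[1] - provided
    if missing_p1:
        raise ValueError(
            f"cannot run model, missing priority-1 inputs: {sorted(missing_p1)}"
        )
    return True, sorted(PRIORITY[2] - provided)

ok, missing = check_inputs({"interfaces", "business_criticality", "data_redundancy"})
print(ok, missing)  # True ['tech_diversity']
```

A run with any priority-1 input absent fails outright, while absent priority-2 inputs are merely reported, matching the idea that the must-haves alone capture the directional soundness of the model.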
Given you only have a couple dozen inputs for each component, how can you reach such meaningful business conclusions?
SMALL: Don't forget this is an analytic; it's not meant to replace a CIO or a CIO's experience. It's simply meant to challenge or validate that experienced perspective.
Think about a trader at a trading desk. You could ask them, "Do you know what your convexity is today?" And they'll be able to tell you, with fairly pinpoint precision, what their risk is. But they still get an analytic that tells them what their convexity is. And that analytic is meant to challenge that perspective on a continuous basis, because there will be events where all of a sudden it will be different from what the trader thinks, and the trader will say, "Whoa, whoa, whoa; am I right? Or is the model right?"
And you know, sometimes the model is wrong and needs to be recalibrated. Sometimes a market condition has arisen that they didn't fully appreciate. And that's a good way to describe how this would be used by a CIO. The CIO would look at this as a way to say, "I'm about to make these decisions, and I understand and appreciate what the output is going to look like because I've done this before." But it's nice to actually be challenged on that, either positively or negatively.
Peter, you said complexity increases costs and decreases flexibility and stability. Is the model helping you address all three areas equally?
LEUKERT: Yes. The hardest one to measure is flexibility because that in itself is typically something that's not well quantified. But on cost we have run correlation analysis and see that the more complex an application domain the higher the maintenance cost. There is a very strong correlation. And for stability, we also see a strong correlation of complexity to incidents in production.
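The correlation analysis Leukert mentions, relating a domain's complexity score to its maintenance cost or production incidents, is a standard Pearson correlation. A minimal sketch with made-up numbers:

```python
# Illustrative sketch of the cost-correlation check Leukert describes.
# All data points here are invented; only the technique (Pearson
# correlation) matches what the interview mentions.
from statistics import mean, stdev

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

complexity = [12, 18, 25, 31, 40, 47]        # per-domain complexity scores
maint_cost = [1.1, 1.6, 2.4, 2.9, 4.2, 4.8]  # annual maintenance cost, in millions
print(round(pearson(complexity, maint_cost), 2))
```

A coefficient near 1 would indicate the "very strong correlation" between domain complexity and maintenance cost that Leukert reports; the same computation applies to complexity versus production-incident counts.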
Is the model done or is this an evolutionary process?
LEUKERT: I think of it as version 1.0. There will be a 2.0 and a 3.0 version. I think it works. It's applicable. But of course it will evolve. There are two sources of evolution: one, by trying it and working more with it, you will generate additional insight and maybe find additional indicators to take into account. And then of course involving other institutions will help it evolve, will give additional insight.
SMALL: With the introduction of any analytic, people will gain a better appreciation for the metric over time. With complexity, an organization's IT footprint may drive a very high complexity metric. The important thing to understand is how decisions are influencing the increase or decrease of that metric. So a given company's baseline may be higher than normal; does that make it dangerous? Not necessarily.
You start out with a metric, and then as you make decisions for the future, you can start to see how you're influencing complexity going forward, and that might challenge some ideas. Before assuming a position that increases complexity, we may want to look at alternatives that might decrease it.
Any hurdles that you can see?
LEUKERT: I think the challenges, as always, are more on the human side. If the model confirms the gut feeling, people are happy, yes? If the model suddenly contradicts the gut feeling of your architect or some chief developer, it will be a tough change-management process. But that's normal.
And the other risk, of course, is over-engineering. I mean, for each of the input parameters in the model you could probably write a dissertation, but you don't need 100%. You can really work with 80/20 here to make management decisions, because management is always faced with uncertainty and incomplete information, and this is just a tool to improve your decision quality. So being pragmatic is also important.