Are enterprises successfully escaping Mainframe Island?

An analysis of complete cost-of-ownership studies provides clues for why many happily stay on the island


From time to time, a vendor's PR rep sends me a note about the "problem" that is caused by mainframe systems being at the hub of enterprise computing. In reality, these systems often offer more integrated processing power, larger memory capacity and more efficient database operations than a distributed, x86-based solution.

The most recent pitch I received included this sentence: "How the dusty old legacy mainframe holds back cloud initiatives... and how it can be modernized."

What are the real costs?

Part of the reason mainframes won't die is that often they simply cost less to operate when all of the costs of ownership and workload operations are considered.

While I was with industry research firm IDC (I was IDC's vice president of system software research for a time), my team conducted extensive cost-of-ownership studies to determine the relative costs of a workload or an IT solution hosted on different platforms. We surveyed companies to learn the actual costs they incurred and then evaluated an overall cost of ownership per 100 users.

Unlike some cost-of-ownership studies that vendors present to support their own marketing statements, these studies did not model costs around a vendor's chosen cost segments. They started from a different place: surveys of IT decision makers and executives. Typically, several thousand IT executives were surveyed to learn the actual costs their organizations were experiencing while supporting a specific function or workload. These studies were very comprehensive: participants were asked about 75 to 300 different cost categories, depending on the type of function, the platforms included and the needs of the buyers of the research.


The survey responses would be tabulated and analyzed with the goal of modeling the real costs for hardware, software licenses, staff equivalent costs and, in some studies, the costs of data center floor space, power and cooling.

The categories would include items such as hardware- and software-related acquisition, maintenance and operation. A large number of staff-related costs would be factored in, including everything from development, testing, training, support, administration and operations. This included installation, updates, ongoing operations and even end-user support. The results were usually pretty clear.
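The tallying described above can be sketched in a few lines. The categories and dollar figures here are invented for illustration only (no IDC numbers are used); the point is how per-category survey costs roll up into a total and how each category's share of that total becomes visible:

```python
# Hypothetical five-year costs, in USD per 100 users, for one surveyed
# workload. Categories and amounts are made up for illustration.
five_year_costs = {
    "hardware": 100_000,
    "software licenses": 80_000,
    "staff (dev, support, admin, ops)": 650_000,
    "power and cooling": 80_000,
    "networking": 110_000,
}

total = sum(five_year_costs.values())

# Print each category, largest first, with its share of the total.
for category, cost in sorted(five_year_costs.items(), key=lambda kv: -kv[1]):
    print(f"{category:35s} {cost:>9,}  ({cost / total:.0%} of total)")
print(f"{'total':35s} {total:>9,}")
```

With these made-up figures, hardware and software together come to well under 20 percent of the total, which is the pattern the studies kept finding.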

Distributed vs. centralized—an important distinction

One segmentation that often proved important in explaining why a given approach was selected was how distributed that approach was. In short, the more distributed a computing solution was (that is, the more devices it involved, including systems, storage, networking and power equipment), the higher the total cost to the company being surveyed.

That's because a distributed computing environment almost always required a broader range of expertise to be on hand, which usually meant the company needed more staff.

This one segmentation can explain why mainframes have stayed in the enterprise data center and why "converged," "hyperconverged" and "ultra hyperconverged" systems are emerging in the x86 world.

Where are the real costs?

These studies almost always demonstrated that the costs for hardware and software, when combined, were typically less than 20 percent of the total five-year cost of ownership. Staffing, networking and power costs typically were significantly higher than hardware and software.

So, when a vendor claims a software product can reduce an enterprise's cost of ownership, I want to know if that claim was based on research that included those factors or merely looked at software-related savings. Even saving 50 percent of a category that makes up only 10 percent to 12 percent of the total might not result in a huge savings for the company.
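The arithmetic behind that caution is simple enough to write down. This sketch (my own example numbers, not from any study) shows why a dramatic-sounding saving in one category can shrink to a modest saving overall:

```python
def total_savings(category_share, category_saving):
    """Fraction of the total cost saved when one category shrinks.

    category_share  -- the category's share of total cost (e.g. 0.10)
    category_saving -- the saving achieved within that category (e.g. 0.50)
    """
    return category_share * category_saving

# Cutting software costs in half, when software is only 10% of the total,
# saves just 5% of the total cost of ownership.
print(f"{total_savings(0.10, 0.50):.0%}")  # prints "5%"
```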

Suppliers such as IBM use this type of insight to point out that companies relying on Linux, Java and databases to support their cloud computing, analytical workloads and transactional workloads would experience higher levels of performance at a lower overall cost if they deployed a small number of IBM's mainframes rather than a larger number of any vendor's industry-standard x86 offerings to do the same work.

Untangling the yarn

Another issue in moving a workload from a mainframe to any other computing platform is that most mainframe-based applications are tightly integrated with the transaction processing framework, the mainframe database engine, the mainframe storage system and even the mainframe style of I/O. Moving an application out of that tightly integrated environment would very likely require re-architecting the entire solution and then rewriting it to use different tools.

Snapshot analysis

While I’m looking forward to the briefing in which this vendor is going to discuss its cloud-based approach, I suspect that when all of the costs are calculated the mainframe will still win.

We'll see, but I'm pretty certain that we still won't be able to declare the mainframe dead.

This article is published as part of the IDG Contributor Network.
