
Why banks didn’t ‘rip and replace’ their mainframes

Sep 17, 2018 | 3 mins
Computers and Peripherals | Data Center

In the early 2000s, companies were ready to ‘rip and replace’ their mainframes with new technology, but what came next proved inferior to the mainframe’s processing power.


Consumer demand for instant 24-hour access to personal bank data has taken the financial world in a new direction in less than one generation. Not only do bank IT departments now rival those of software development companies, but banking networks and infrastructure are at least as complex as a tech firm’s. Personal financial information has become one of the most protected and heavily regulated types of data in the world, and security measures and compliance programs consume the largest percentage of a financial institution’s IT budget.

Knowing all this, it’s no wonder the “rip and replace” fad of the early 2000s never took hold in the banking world. With everyone assuming the turn of the millennium meant “out with the old and in with the new,” companies were ready to rip the mainframes out of their infrastructure to prepare for whatever came next. But what came next never really materialized, or it continued to prove inferior to the sheer processing power of the mainframe, which remains the only real choice for high-demand business computing.

Hard to say goodbye to mainframes

“Rip and replace” was driven by excitement at new possibilities and by fear that the mainframe would become cost-prohibitive and obsolete as ecommerce grew. But engineers have continually adapted and modernized the mainframe for today’s technology demands, keeping it well ahead of hopes and fears alike: it keeps pace with ecommerce while delivering exceptional security and efficiency. That sustained effort to modernize and secure the mainframe has produced the most rock-solid banking platform in the world.

The pitfalls of ‘rip and replace’

Some businesses have continued trying to make “rip and replace” work, but while such sweeping strategies sound good in theory, diversifying a network’s architecture adds layers of complexity and may cost more in the long run. For example, the IT department may need to hire employees with a broader skill set to maintain a more complex network, and there may be added costs to extract, transform, and load (ETL) the distributed data onto another server, or even to route data back to the mainframe for downstream processing or analytics. A rip-and-replace strategy involves not only capital expenditures; operating costs eventually increase, too.

Why banks stayed away from ‘rip and replace’

These hidden costs and iffy returns kept stability-focused financial institutions loyal to the mainframe, which offers something no other server does: immense processing speed coupled with the ability to encrypt data from end to end, making it the superhero workhorse of finance. That speed lets the mainframe detect banking irregularities in real time, before hackers realize they’ve been spotted. The mainframe also layers its security according to where data resides, so a data thief can never access all of a customer’s personal financial information from a single cache.

All these benefits show why the mainframe remains at the hub of the financial industry’s network: encryption and security for data at rest and in transit, processing speed to crunch up to 12 billion worldwide banking transactions per day, processing power for analytics across enterprise-wide data, and even freedom from platform-dependent skills when developing modern applications. The compulsion to “rip and replace,” meanwhile, may not reduce expenses at all; it could end up costing a company more than money, while the mainframe stays at the core of digital-age finance.


Jennifer Nelson is the managing director of R&D Database Servers and Tools at Rocket Software. After serving in the U.S. military, Jennifer attended the University of Texas while moonlighting as a DB2 database administrator at an IT company in Austin, Texas.

Before joining Rocket Software, Jennifer enjoyed a fruitful career in the DB2 Tools industry at an ISV in the Austin area. Since joining Rocket Software, she has played multiple roles, including client technical professional, product manager, and now lab director. While her heart will always be in DB2, she has broadened her expertise in business intelligence and analytics with a focus on virtualizing enterprise-wide data to transform business insights.

Jennifer lives in Texas with her husband and three golden retrievers.