Going back to the future

Opinion
Aug 20, 2003
Data Center

How the recent computing paradigm shift is a leap ahead and a leap back

It’s interesting how, as we progress, things eventually cycle back around to our past. We went from mainframes, in days gone by, to distributed systems. Now it seems we’re shifting back toward centralized management, virtualization of resources, large pools of processing power, and large collectives of shared storage resources.

For those of you who are old enough: Sound familiar?

Although the equipment and technology are different, we’re returning to many of the concepts of the mainframe. No, we’re not returning to the “glass house” days by any stretch of the imagination, but we’re seeing many of the mainframe concepts and practices returning to the fore.

Many management capabilities long used on mainframes are being applied to the broader IT infrastructure, encompassing both mainframe and distributed environments.

As the volume of devices increases and managing far-flung infrastructures becomes more complex, the trend toward centralization is gaining ground. The drive to reduce IT operating costs has fueled centralized or “distributed-central” management (distributed centers of management). Large companies are turning to tools that can manage data centers, both corporate and regional.

Those data centers are filled with blade servers, server farms, racks of servers, and even mainframes. These concentrated collections of compute power are similar in many respects to the mainframe environments of the past, but the processors delivering that power today are far more heterogeneous. Unix, Windows, S/390, Linux and more are the workhorses of today.

This heterogeneity presents a big challenge, particularly when trying to “virtualize” your compute resources. In the old days, MIPS were a common measure for characterizing compute power. In today’s world, there is no common measure. Even if you can pool or virtualize all of the processing power, all processing power is not alike. The question then becomes: how do you optimize compute resources to meet business needs while balancing the cost of the resources applied?
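To make that trade-off concrete, here is a minimal Python sketch of the idea: map each heterogeneous pool onto a common benchmark unit, then pick the cheapest pool with enough spare capacity for a workload. The platform names, capacities and per-unit costs are invented for illustration, not real figures.

```python
# Hypothetical sketch: heterogeneous pools normalized to a common
# benchmark unit, each with an illustrative cost per unit-hour.
POOLS = [
    # (platform, capacity in benchmark units, cost per unit-hour)
    ("S/390",   500, 0.90),
    ("Unix",    800, 0.55),
    ("Windows", 300, 0.40),
    ("Linux",   600, 0.35),
]

def cheapest_pool(demand_units):
    """Return the lowest-cost pool with enough capacity, or None."""
    candidates = [p for p in POOLS if p[1] >= demand_units]
    return min(candidates, key=lambda p: p[2], default=None)

print(cheapest_pool(400))  # ('Linux', 600, 0.35)
```

Real capacity planning is far messier, since workloads rarely reduce to a single number, but the sketch captures the core question: agree on a common unit first, then place work where it costs least.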

On the security front, technologies such as identity management and the Lightweight Directory Access Protocol (LDAP) are also contributing to the trend toward centralizing management functionality. Rather than relying on per-system security for authentication and authorization, companies are adopting corporate and regional security management approaches.
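As an illustration, here is a minimal Python sketch of authenticating against a central corporate directory, using the third-party ldap3 library; the server address and the directory layout (the user DN pattern) are assumptions made up for the example.

```python
# Minimal sketch: authenticate against a central LDAP directory instead
# of a local system account. Requires the third-party ldap3 package.
# The host and DN layout below are invented for illustration.
from ldap3 import Server, Connection

def authenticate(username, password):
    """A successful bind as the user means the central directory
    accepts the credentials."""
    server = Server("ldap://directory.example.com")
    user_dn = f"uid={username},ou=people,dc=example,dc=com"
    conn = Connection(server, user=user_dn, password=password)
    if not conn.bind():
        return False  # bad credentials or unreachable directory
    conn.unbind()
    return True
```

The point is architectural: every application delegates the same question to one directory, so accounts are managed once, centrally, rather than on each system.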

Storage-area networks (SANs) are reminiscent of the shared storage resources of the past. We went from shared storage to storing data on the disks of our own PCs. Then, as people realized that getting access to distributed data was an issue, voilà: SAN technology emerged.

In the old days, the networks connecting systems, terminals and other devices were king. Then LANs garnered our attention as they connected individual PCs so information could be shared. Now broader networks (including the Internet) are back on top, as applications span networks.

At the core of this trend toward centralization are integration, management by policy rather than by manual labor, and a holistic approach. Integrating management tools to achieve the desired result is critical to this effort.
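As a toy illustration of managing by policy rather than by hand, here is a Python sketch in which each rule maps an observed metric to an action; the metric names, comparators and thresholds are all invented for the example.

```python
import operator

# Hypothetical policies: (metric, comparator, threshold, action).
POLICIES = [
    ("cpu_utilization", operator.gt, 0.85, "add_capacity"),
    ("disk_free_pct",   operator.lt, 0.10, "expand_storage"),
]

def evaluate(metrics):
    """Return the actions triggered by a snapshot of current metrics."""
    return [action
            for metric, cmp, threshold, action in POLICIES
            if metric in metrics and cmp(metrics[metric], threshold)]

print(evaluate({"cpu_utilization": 0.92, "disk_free_pct": 0.25}))
# ['add_capacity']
```

The rules, not an operator's hands, decide what happens; extending coverage means adding a line of policy rather than another manual procedure.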

While these trends point toward some centralization, other emerging technologies such as Web services and utility computing encourage more distribution of computing. That is why I said earlier that we won’t be returning to the traditional “glass house” as we knew it. But it’s interesting how we do go back to the future.