Server consolidation on the rise

Opinion
Jan 08, 2004
3 mins
Networking


Server consolidation, a way of saving money and getting more for less, is driving on-demand computing.

Server consolidation comes in a variety of flavors – blade servers, powerful 64-bit processors, server provisioning, dynamic partitioning and early iterations of utility computing.

Creating the right package of tools to meet both the needs of the organization and the constraints of ever-tightening IT budgets is a constant concern for IT managers.

Research firm IDC predicts that 75% of large corporations will consolidate portions of their servers or storage this year.

Unix systems are being consolidated the most, followed by Windows and then Linux. IDC says users in 2003 spent more than $1.3 billion on Windows consolidation, a figure that will double by 2006. For Linux, consolidation spending was expected to top $232 million in 2003.

IT managers said there are three barriers to consolidation: budget constraints, lack of time and internal management issues.

They said the three top reasons driving them to consolidate were improved system availability, improved disaster recovery and security. Twenty-eight percent said they measured the success of consolidation by the cost saved, and 27% said improved system availability was their metric of success.

IDC expects worldwide revenue from server consolidation projects to grow from $5.2 billion in 2003 to $8.5 billion in 2006.

Consolidation is spurred by several factors:

* A move towards denser computing environments.

* The ability to divide up or partition servers to handle multiple workloads (see the packing sketch after this list).

* A desire to manage and scale the total server environment from a single management interface.
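
To make the partitioning point concrete, here is a minimal sketch that packs a set of workloads onto as few servers as possible with a first-fit-decreasing heuristic. The capacity and demand figures are invented for illustration and are not drawn from IDC's data.

```python
# Minimal sketch of workload consolidation as a bin-packing problem.
# All capacities and demands below are hypothetical illustration values.

def consolidate(workloads, capacity):
    """Pack workload demands (CPU units) onto the fewest servers of a
    given capacity using a first-fit-decreasing heuristic."""
    servers = []  # each entry is the list of demands placed on one server
    for demand in sorted(workloads, reverse=True):
        for server in servers:
            if sum(server) + demand <= capacity:
                server.append(demand)
                break
        else:
            servers.append([demand])  # no existing server fits: add one
    return servers

if __name__ == "__main__":
    # Ten lightly loaded legacy boxes' worth of work, in CPU units.
    demands = [12, 8, 25, 5, 30, 10, 7, 20, 15, 9]
    layout = consolidate(demands, capacity=64)
    print(f"{len(demands)} workloads fit on {len(layout)} consolidated servers")
    for i, server in enumerate(layout, 1):
        print(f"  server {i}: {server} (load {sum(server)}/64)")
```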

IBM and Unisys have been very successful in their consolidation efforts. IBM has revitalized the mainframe by putting Linux partitions on it. Unisys claims to have succeeded in server consolidation by ganging up Intel processors into a single large multiprocessor box running Windows 2003.

Both these systems are examples of “scale up” architecture, where scaling fewer machines up with more processors can cost less than “scaling out” to more machines for some workloads. Fewer machines mean simpler architectures, which in turn mean lower administrative costs.
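
A back-of-the-envelope comparison shows why. The sketch below totals hardware and administrative costs for one large box versus sixteen small ones over three years; every figure is a hypothetical assumption chosen for illustration, not vendor pricing.

```python
# Back-of-the-envelope comparison of scale-up vs. scale-out costs.
# Every figure here is a hypothetical assumption, not real pricing data.

def total_cost(machines, hw_cost_per_machine, admin_cost_per_year, years=3):
    """Hardware plus administrative cost over a planning horizon."""
    return machines * (hw_cost_per_machine + admin_cost_per_year * years)

# Scale up: one large SMP box with a high sticker price but one admin footprint.
scale_up = total_cost(machines=1, hw_cost_per_machine=250_000,
                      admin_cost_per_year=30_000)

# Scale out: sixteen commodity boxes, cheap individually but each needing care.
scale_out = total_cost(machines=16, hw_cost_per_machine=6_000,
                       admin_cost_per_year=8_000)

print(f"scale-up  3-year cost: ${scale_up:,}")
print(f"scale-out 3-year cost: ${scale_out:,}")
```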

Many Internet-based applications will not really benefit from large-scale symmetric multiprocessing, however. Connecting tens or hundreds of servers with load balancing, clustering/failover and caching can provide effective alternatives.
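
In rough terms, that scale-out alternative looks like the sketch below: requests are round-robined across a pool of back-end servers, unhealthy nodes are skipped for failover, and recent responses are cached. The host names and the fetch handler are placeholders, not a real deployment.

```python
# Minimal sketch of round-robin load balancing with failover and a response
# cache, in the spirit of the scale-out approach described above.
# Host names and the handler are hypothetical placeholders.

import itertools

class LoadBalancer:
    def __init__(self, backends):
        self.backends = backends
        self.healthy = set(backends)
        self.cache = {}                       # naive response cache
        self._cycle = itertools.cycle(backends)

    def mark_down(self, backend):
        self.healthy.discard(backend)         # failover: stop routing here

    def handle(self, request, fetch):
        if request in self.cache:             # serve hot content from cache
            return self.cache[request]
        for _ in range(len(self.backends)):   # try each backend at most once
            backend = next(self._cycle)
            if backend in self.healthy:
                response = fetch(backend, request)
                self.cache[request] = response
                return response
        raise RuntimeError("no healthy backends available")

# Example use with a stand-in fetch function.
lb = LoadBalancer(["web-01", "web-02", "web-03"])
lb.mark_down("web-02")
print(lb.handle("/index.html", lambda host, path: f"{path} served by {host}"))
```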

Thin, high-density servers are less expensive and quicker to deploy (in 15 minutes or so).

However, this comes at the cost of greater management complexity and an ever-larger footprint. And scaling out to more machines simply stops working after a certain number of processors have been added. Eventually, any scale-out implementation must be supplemented by scale-up.

Using the three-tier Internet infrastructure framework of Web and edge servers, application servers, and back-end database servers, customers should evaluate whether to scale out or scale up servers to achieve the level of performance and availability they require.

Scaling out for performance is typically done at the Web, edge and application tiers, but with the emergence of distributed databases and high-speed standardized interconnect technologies such as InfiniBand and Myrinet, scaling out for application performance will increasingly occur at the database level as well.
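
One way that database-level scale-out can work is sketched below: a consistent-hash ring spreads row keys across several database nodes so that nodes can be added with limited data movement. The node names are placeholders, and the sketch is not tied to any particular product or interconnect.

```python
# Minimal consistent-hashing sketch for spreading database rows across
# several nodes, illustrating scale-out at the database tier.
# Node names are hypothetical placeholders.

import bisect
import hashlib

class HashRing:
    def __init__(self, nodes, replicas=100):
        self._ring = []                       # sorted list of (hash, node)
        for node in nodes:
            for i in range(replicas):         # virtual nodes smooth the spread
                self._ring.append((self._hash(f"{node}#{i}"), node))
        self._ring.sort()
        self._keys = [h for h, _ in self._ring]

    @staticmethod
    def _hash(value):
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def node_for(self, key):
        """Return the node responsible for a given row key."""
        idx = bisect.bisect(self._keys, self._hash(key)) % len(self._ring)
        return self._ring[idx][1]

ring = HashRing(["db-01", "db-02", "db-03", "db-04"])
for key in ("customer:1001", "customer:1002", "order:7"):
    print(key, "->", ring.node_for(key))
```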

As storage, too, migrates off servers, servers will be increasingly optimized to take advantage of scalable memory, high-performance processors and high-speed interconnects in dense server packaging.