
Server virtualization: Controlling server sprawl

May 24, 2004
Data Center, Server Virtualization


Omar Yakar, president, Agile360

Business problem: Faced with massive expansion, a title company was struggling to maintain IT service levels. Its employee population had tripled to 1,000 people at 40 offices in the past year. Some application decisions rested with local personnel, which meant some applications conflicted with one another and had to run on separate servers. Furthermore, the performance of shared applications and databases housed at the central data center was deteriorating. The five-person IT department needed to manage applications without adding headcount, while maintaining the company’s decentralized style.

Traditional approach: Maintain file and application servers at each office while centralizing databases, messaging and directory services. Replicate critical application databases to each office. Opt for individual silos of servers for each application set to avoid conflicts. Increase network bandwidth to handle database and directory synchronization traffic. However, this would require managing at least 40 servers, plus 40 full-time, dedicated WAN links that would incur high monthly recurring costs before disaster-recovery capabilities were even taken into account. It also would continue to stretch a small staff too thin, requiring frequent travel to all 40 locations.

New data center approach: Borrow a design strategy from the application service provider model, and approach the IT operation as though it were meant to be a profit center. If it’s going to be a profit center, how does it keep high customer satisfaction levels, high efficiency and low overhead? Virtualization – for storage and servers – would be key.

Server virtualization, in particular, would let business managers control their own environment, even while that environment was being provisioned and managed from a central facility. Server virtualization allows the decoupling of logical servers (for example, messaging, database and domain controllers) from hardware. It also isolates applications from the operating system and aggregates multiple storage resources as one volume. In other words, it turns a physical server into what I call a “processing peripheral.”

With applications isolated from the server operating systems, an application-specific environment can run in a protected memory space rather than directly on the operating system. This would let a 10-year-old version of Microsoft Word run alongside a new version on the same physical server, for example.

Some server virtualization products encapsulate the entire image of a physical server in one file (including the operating system, applications and direct-attached or networked storage). In these cases, the processing peripheral (the physical server) runs the virtualization software on its internal disks, while the virtual server file, with its associated storage, resides on a SAN. An application then can boot from the SAN and execute on the chosen processing peripheral. Management tools let you see all of the available processing peripherals and the load on each, and choose the best server to run the application.
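The placement decision those management tools make can be reduced to a simple rule of thumb: survey the load on each processing peripheral and boot the virtual server file on the least-loaded one. Here is a minimal sketch of that idea; the host names and load figures are hypothetical, not taken from any real product.

```python
# Toy sketch of a virtualization manager's placement decision: given the
# current load on each "processing peripheral" (physical server), pick
# the least-loaded one to boot a virtual server file from the SAN.
# Host names and load fractions below are invented for illustration.

def pick_host(hosts: dict) -> str:
    """Return the host with the lowest load fraction (0.0 to 1.0)."""
    return min(hosts, key=hosts.get)

# Hypothetical pool of processing peripherals and their CPU load.
pool = {"host-01": 0.72, "host-02": 0.35, "host-03": 0.90}

best = pick_host(pool)
print(f"Boot virtual server on {best}")  # host-02 is least loaded
```

A real product would weigh memory, I/O and affinity rules as well, but the core of the feature is exactly this kind of pool-wide comparison.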

With server virtualization, logical servers are converted to virtual servers, meaning they become files not tied to any hardware, residing instead on logical unit numbers carved out of the SAN. They can be operated on any physical server, or even moved across different models of hardware, without interruption to users. Efficiencies come from consolidating processing resources, managing load capacities across a pool of disparate resources, and the ability to quickly spin up any kind of server a business manager needs. In this case, the title company could run all of its applications on two multiprocessor servers, each acting as a failsafe for the other.

The title company also would want to adopt an information lifecycle management (ILM) strategy that lowers storage costs by using inexpensive ATA devices (such as EMC’s Content Addressed Storage [CAS]). CAS is analogous to checking your coat at a restaurant – the content (an e-mail message, image or document) is assigned a ticket and then stored; when retrieved, the ticket is matched to the content and delivered.
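The coat-check analogy can be made concrete: in content-addressed storage the "ticket" is derived from the content itself, typically as a hash. This sketch illustrates only the addressing idea; the function names and the in-memory store are my own illustration, not the API of any real CAS product.

```python
# Toy sketch of the content-addressed storage "coat check": storing a
# piece of content hands back a ticket derived from the content itself,
# and the ticket alone retrieves it later. A dict stands in for the
# ATA-backed store; real CAS systems differ in many details.
import hashlib

_store = {}

def check_in(content: bytes) -> str:
    """Store content and hand back its ticket (a content hash)."""
    ticket = hashlib.sha256(content).hexdigest()
    _store[ticket] = content
    return ticket

def retrieve(ticket: str) -> bytes:
    """Match the ticket to the stored content and deliver it."""
    return _store[ticket]

msg = b"recorded title document"
t = check_in(msg)
assert retrieve(t) == msg
# Identical content always yields the same ticket, so a duplicate
# e-mail or document is stored only once.
assert check_in(msg) == t
```

That last property is one reason CAS suits archival data such as e-mail and scanned documents: duplicates collapse to a single stored copy.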

Another crucial element is the applications themselves. With applications now housed at a central location, the title company would want to implement a role-based Web front end that aggregates Web and Windows applications under a common user interface, such as a browser. Applications also should run on a thin-client design (such as Citrix or Web services). This would limit bandwidth requirements, regardless of the number of applications running simultaneously, while centralizing application server management.

While the title company still would need 40 WAN links, it could rely on smaller and less-expensive links for many offices because of thin-client computing. Low-cost VPNs might be used as failovers to each site should the WAN go down.
