A virtual breeze

Five tips that will have you sailing through your server virtualization projects.

By now, almost everybody agrees the buzz about server virtualization is justified. It's impossible to argue with the evidence presented by early adopters: This New Data Center technology has indeed let them decrease the number of physical servers they run and increase the number of applications they support - all while boosting performance and availability, and even easing the overall administrative workload.

These results have not come without trial and error, however. Roughing it out through server virtualization's early years, the pioneers learned a few things that make the technology even easier to deploy and manage. Without a doubt, they say, these five tips will help today's users get the most out of their virtualized server environment.

1. Don't skimp on the hardware.

Although most server virtualization software can run on just about anything and still work, users need to invest in good hardware if they're serious about virtualization, early users say. This is especially important if they're planning to deploy high-transaction database applications or other I/O-intensive applications, they add.

For example, investing in Sun's two-processor, dual-core Sun Fire X4100 and X4200 servers let NewEnergy Associates, an energy consultancy in Atlanta, fold 20 virtual servers into just one machine - far more than the seven to 10 applications it had expected to consolidate, says Neal Tisdale, vice president of software development. "The Sun servers have really good addressing speeds with the Opteron processors, and the address bus is four to eight times faster than some of the Intel motherboards," he says. "So the amount of servers we've been able to virtualize is very high."

Similarly, Jason Powell, technology director at Granger Community Church in Indiana, is consolidating 11 servers down to four midlevel Dell PowerEdge servers. Each server, he estimates, costs $10,000 for the hardware alone but can accommodate as many as six virtual servers easily. This leaves him two extra servers for truly robust failover. "The hardware investment is worth it, because it can really be leveraged, and you end up saving in the long run," he says.
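
A quick back-of-the-envelope check using only Powell's own figures - four hosts at roughly $10,000 each, about six virtual servers per host, 11 workloads to consolidate - shows where the failover headroom comes from. The sketch below is purely illustrative, not Powell's actual planning math.

```python
# Back-of-the-envelope consolidation math using the figures Powell cites.
# The numbers come from the article; the script itself is illustrative only.

hosts = 4                 # midlevel Dell PowerEdge servers purchased
cost_per_host = 10_000    # estimated hardware cost per host (USD)
vms_per_host = 6          # "as many as six virtual servers easily"
workloads = 11            # physical servers being consolidated

total_capacity = hosts * vms_per_host
print(f"Hardware outlay: ${hosts * cost_per_host:,}")
print(f"VM capacity: {total_capacity} slots for {workloads} workloads")

# How many hosts could fail while the rest still carry every workload?
hosts_needed = -(-workloads // vms_per_host)   # ceiling division
print(f"Hosts that can be lost and still run everything: {hosts - hosts_needed}")
```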

But perhaps the best example is Baldor Electric, a manufacturer of industrial electric equipment in Fort Smith, Ark. Mark Shackelford, IS director, says he consolidated 45 Linux-based SAP application servers down to just one box, an IBM zSeries mainframe. It runs all the company's mission-critical applications and handles I/O-intensive SAP database applications, he says. "The IBM zSeries is very expensive upfront, but I have a very limited staff," he says. "In the long haul, we've proven that the total cost of ownership of the zSeries is the cheapest there is, especially compared to Intel boxes and their downtime, performance and management costs." (See related story for more on measuring the savings possible with virtualization.)

2. Don't virtualize everything.

Baptist Healthcare System in Louisville, Ky., has consolidated nearly 200 servers into 15 Intel-based boxes, but that doesn't mean it virtualizes everything, says Tom Taylor, a client/server infrastructure analyst for the healthcare group. "We use [VMware's] ESX Version 2.5, so anything that needs over 3GB of space, requires more than two processors and requires its own USB device or ancillary components like that, we don't virtualize," he says, noting key limitations of the VMware software.
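
The limits Taylor cites lend themselves to a simple screening rule. The sketch below codifies them as a hypothetical go/no-go check, interpreting "over 3GB of space" as the virtual machine's memory footprint; the function, field names and sample workloads are invented for illustration and are not Baptist's actual tooling.

```python
from dataclasses import dataclass

# Hypothetical screening check based on the ESX 2.5 limits Taylor describes:
# more than 3GB of memory, more than two processors, or a dependency on a
# dedicated USB or other ancillary device keeps a workload on physical hardware.

@dataclass
class Workload:
    name: str
    memory_gb: float
    cpus: int
    needs_usb_or_ancillary: bool

def is_virtualization_candidate(w: Workload) -> bool:
    """Return True if the workload fits within the stated ESX 2.5-era limits."""
    if w.memory_gb > 3:
        return False
    if w.cpus > 2:
        return False
    if w.needs_usb_or_ancillary:
        return False
    return True

# Sample workloads, invented for illustration.
servers = [
    Workload("file-print", memory_gb=2, cpus=1, needs_usb_or_ancillary=False),
    Workload("imaging-db", memory_gb=8, cpus=4, needs_usb_or_ancillary=False),
    Workload("fax-gateway", memory_gb=1, cpus=1, needs_usb_or_ancillary=True),
]

for s in servers:
    verdict = "virtualize" if is_virtualization_candidate(s) else "keep physical"
    print(f"{s.name}: {verdict}")
```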

Similarly, applications like Microsoft Exchange are too I/O-intensive to virtualize. "I wouldn't put my Exchange server in a virtualized environment, just because of the high database I/O. It wouldn't be conducive to the platform or technology currently," says Kevin Westman, network systems manager at the University of Chicago, which manages Argonne National Laboratory in Lemont, Ill. An early VMware user, Westman now favors XenSource's open source Xen platform. With it he has consolidated as many as 15 Windows and Linux servers into an Intel-based box.

Baldor's Shackelford agrees, noting Exchange is one of the few applications his firm runs on a dedicated Windows server. "Exchange wants to control the whole box," he says.

3. Watch the licensing.

Because servers can be deployed very quickly with VMware, staying compliant with licenses can be difficult, Baptist's Taylor says. "We have some licenses we can use in a test environment and some in production. Sometimes it's in production one day and test the next, so you've got to play that game a bit. It's difficult to stay compliant in a Windows world, just because Microsoft makes it so difficult to follow its licensing," he says.

Licensing can be especially tricky when dealing with CPUs and cores, NewEnergy's Tisdale says: "Microsoft's not so bad, because things are licensed per server and then per user. But Oracle was one of the worst, because it was charging per core. If you move a big Oracle app onto a virtualized server that runs 20 other virtual servers and has a ton of CPUs, it can be tough. You have to set it so that Oracle only runs on the licensed number of CPUs." Math packages such as Mathcad and MATLAB also are difficult to license, he says.
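
Tisdale's per-core warning is easier to see with a worked example. The sketch below compares a flat per-server license with a per-core license on a hypothetical consolidated host; the core counts and prices are invented for illustration, and the final comment mirrors his advice to restrict the database to only the licensed number of CPUs.

```python
# Hypothetical licensing arithmetic for a consolidated virtualization host.
# Core counts and prices are invented for illustration only.

host_cores = 16            # a big host with "a ton of CPUs"
licensed_cores = 4         # cores actually licensed for the database
per_server_price = 5_000   # flat per-server license (Microsoft-style, per Tisdale)
per_core_price = 20_000    # per-core license (Oracle-style, per Tisdale)

# If the per-core vendor counts every core in the box:
worst_case = host_cores * per_core_price
# If the workload is restricted to only the licensed cores:
restricted_case = licensed_cores * per_core_price

print(f"Per-server license:                 ${per_server_price:,}")
print(f"Per-core, all {host_cores} cores counted:     ${worst_case:,}")
print(f"Per-core, limited to {licensed_cores} cores:      ${restricted_case:,}")
# The gap is why Tisdale limits the Oracle VM to the licensed number of CPUs.
```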

Others warn of software vendors that won't support their products on virtualized servers. This was more the case in virtualization's early days, but it still happens, says Philip Borneman, assistant IT director for Charlotte, N.C. The city uses 12 to 15 physical servers to support 72 to 75 virtual servers, primarily for Windows and Linux applications. If a vendor won't support its application in a virtual configuration, the city won't take the risk, he says.

The problem of nonsupport is especially acute in healthcare, Baptist's Taylor says. "We constantly hear that the application is validated by the FDA and it's not validated on a virtual platform, but that's just a smoke screen. If you talk to the FDA, it says it doesn't worry about where the package runs, only what it is doing." When Taylor encounters a vendor leery about virtualization, he offers to test its package in his environment. "We work out partnerships with different vendors and say, 'Look, if you let us do this, we will let you learn from us, and we will validate your platform for you.'"

4. Get a grip on storage.

Virtualized server environments require a strong storage-area network (SAN) on the back end, the pioneers have learned. At Granger, a small nonprofit without a SAN, Powell says he finds managing disk space difficult. "VMware can bite you with disk space. You have to commit to how big each server is going to be, and if you have five virtual servers, they can chew up a lot of space. A SAN would make life so much easier, because all of these virtual folders could just live on the SAN," he says. "And we could actually boot right from the machine to the SAN, so we wouldn't even need disks in the physical servers anymore."

"If you don't have a SAN in place, you're limited to managing ESX servers individually," Taylor says. "You lose the high availability, because now the ESX server is reliant on this local storage. If it goes down, there is no good way for you to move those virtual machines to another host, because there's no SAN to pivot off of." VMware's VMotion, which lets users move virtual server instances on the fly, requires shared storage, such as a SAN, he adds.

Even with a SAN in place, storage can be tough to manage, Taylor says. "Virtual machines are so easy to roll out that you can easily overrun your environment," he says, noting he has issued memos to his server staff putting a moratorium on new virtual servers. "When you can have a 2,000-server rollout in 15 minutes, people get used to that, and things can get out of hand, especially in storage."

To manage storage, Taylor usually sets the storage specs for new virtual machines lower than requested, because tweaking up is easier than down. Figuring out how big to make the logical unit numbers (LUNs) supporting virtual machines is tough too, he adds: "We've found that 500GB tends to be the best LUN size, because that way you limit how many virtual machines can go on it. You don't want too much I/O on the LUN, or you'll get performance hits. So we're supporting maybe 15 LUNs across our 15 ESX servers. If I'm going to add an ESX server, I add a LUN." Getting to this magic number, however, was difficult, he says: "Nobody would tell me what works - they'd just say 'It depends.'"
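
Taylor's 500GB rule of thumb translates into simple capacity arithmetic. The sketch below estimates how many virtual machines fit on a LUN of that size and how many LUNs a farm of his scale would need; the per-VM disk allocation is an assumption, since the article doesn't give one.

```python
import math

# Rough LUN planning along the lines Taylor describes: 500GB LUNs,
# roughly one LUN per ESX host, and a cap on VMs per LUN to protect I/O.
# The per-VM disk allocation below is an assumption, not from the article.

lun_size_gb = 500
avg_vm_disk_gb = 40        # assumed average disk allocation per virtual machine
esx_hosts = 15             # Baptist runs roughly 15 ESX servers
total_vms = 200            # nearly 200 servers consolidated

vms_per_lun = lun_size_gb // avg_vm_disk_gb
luns_for_capacity = math.ceil(total_vms / vms_per_lun)

print(f"VMs per 500GB LUN (capacity only): {vms_per_lun}")
print(f"LUNs needed for {total_vms} VMs at that density: {luns_for_capacity}")
print(f"Taylor's one-LUN-per-host rule gives: {esx_hosts} LUNs")
# If the capacity estimate exceeds one LUN per host, either the per-VM
# allocation must shrink or more LUNs (and more I/O headroom) are needed.
```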

5. Be ready for management headaches.

Another caveat: management remains a work in progress. Having fewer physical boxes eases hardware management, but the overall administrative workload isn't lessened. "Each of those virtualized instances still needs to be patched or have [Basic Input/Output System] updates," the city of Charlotte's Borneman says. "People tend to forget that."

There is no one tool that can do it all, and most users find they rely on several disparate, sometimes homegrown management tools. VMware has a great tool for managing its own environment in VirtualCenter, but XenSource "is just coming along when it comes to management," notes the University of Chicago's Westman. Managing Windows and Linux virtual servers together is difficult, he says - but no more so than managing physical servers together.

"On the Microsoft side of the house, we use SMS and things like that, but on the Linux side, we basically control each one of the virtual boxes directly on the machine itself. We've put together some homegrown applications to do all that," he says.

Others say the management tools need improvement. "We heavily relied on HP Insight Manager, and we got burned," says Baptist's Taylor, explaining that he deployed HP Insight's agent for ESX but ended up with server instability. HP had based the agent on Linux, and though ESX is Linux-like, the differences were enough that the agent kept generating false positives: thinking ESX had hung, it rebooted the server continually. The problem is fixed in the next versions of ESX (VMware Infrastructure 3) and Insight, but until Taylor rolls those out, he's stuck.

"Right now we are running our ESX machines agentless, which means we are solely relying on VMware's VirtualCenter for our management, for our up/down state, for our server utilization numbers and things like that," Taylor says. "It makes me uncomfortable, because I've lost all my hardware-based monitoring. I have no way to tell if I have a CPU or memory or a [network interface card] going bad. I'm purely just performance-based monitoring."

The bright side is, it's all virtual. So Taylor can easily - and quickly - move those virtual instances to new hardware.

Cummings is a freelance writer in North Andover, Mass. She can be reached at jocummings@comcast.net.
