Virtual headaches

There's an age-old choice in IT -- whether to adopt a "best of breed" strategy for the power and flexibility it can bring, or go with a single vendor for accountability and simplicity. J. Craig Venter Institute Inc. (JCVI) believes in best of breed. The genomic research organization runs Linux, Unix, Windows and Mac OS in its data center. For storage, it draws on technology from EMC, NetApp, Isilon, Data Domain and Symantec.

"It's quite a heterogeneous environment," says computer systems manager Eddy Navarro. "Thankfully, we have a very talented staff here."

And a talented staff was just what was needed to master the many flavors of storage virtualization, which can make multiple physical disks look like one big storage pool. Like JCVI, many organizations are enjoying the lower costs and added flexibility of storage virtualization. But the benefits can come with some headaches. Here, five IT managers who have led successful storage virtualization projects offer advice for relieving the pain.

Headache 1: Managing Multiple Vendors

For several years, JCVI had employed software-based virtualization in the form of the Linux Logical Volume Manager (LVM) that ships with Red Hat Linux, which allows logical volumes to span multiple disk drives. More recently, the company added hardware-based virtualization in the form of NetApp's V Series system to create a single virtual pool of storage from EMC Symmetrix disks and legacy Clariion disks.
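Conceptually, volume-manager virtualization of this kind maps the extents of one logical volume onto extents drawn from several physical disks, so the volume can grow past any single drive. The toy Python sketch below illustrates only that mapping idea; it is not LVM code, and all names and sizes are invented:

```python
# Toy illustration of volume-manager-style pooling: one logical volume's
# extents are mapped onto extents drawn from several physical disks.
# Invented names and sizes; not how LVM is actually implemented.

class PhysicalVolume:
    def __init__(self, name, extents):
        self.name = name
        self.free = list(range(extents))  # indices of unused extents

class LogicalVolume:
    def __init__(self, name):
        self.name = name
        self.map = []  # logical extent i -> (physical volume name, extent)

    def allocate(self, extents_needed, pool):
        """Take free extents from the pool's disks in order (first-fit)."""
        for pv in pool:
            while pv.free and extents_needed:
                self.map.append((pv.name, pv.free.pop(0)))
                extents_needed -= 1
        if extents_needed:
            raise RuntimeError("pool exhausted")

pool = [PhysicalVolume("sdb", 4), PhysicalVolume("sdc", 4)]
lv = LogicalVolume("datalv")
lv.allocate(6, pool)                  # 6 extents: more than one disk holds
disks_used = {name for name, _ in lv.map}
print(disks_used)                     # the one logical volume spans sdb and sdc
```

The point of the sketch is simply that the consumer of `datalv` sees one contiguous volume while the volume manager quietly spreads it across both drives.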

The Clariion drives, which came into the data center from a corporate merger, were being poorly utilized, Navarro says. Now, the NetApp V system reformats data going to and from the EMC disks, "and then you carry on just as if it's another NetApp system," Navarro says. That enabled JCVI to wring better performance from the legacy disks.

Each of JCVI's vendors makes its own unique contribution to a powerful and cost-effective storage architecture, Navarro says. But the diversity comes at a cost. "When you are talking about multiple vendors' hardware -- and they compete with each other -- it may not be the easiest thing to get support when something goes wrong," he says. "So you have to ensure compatibility first and foremost, and you have to know in advance something is going to work."

How to cope: Study the documentation, do your homework, and ensure that your approach has been tried before and is certified by the vendors, says Navarro. And if you don't have experienced technical staff, he adds, be prepared to hire some outside professional help.

Headache 2: Dealing With Extra Technology Layers

Even companies with less-complex environments report that although virtualization can ultimately simplify storage administration, putting it in place and tuning it is a demanding job.

Lifestyle Family Fitness, a rapidly growing chain of 60 health clubs based in St. Petersburg, Fla., is a Microsoft shop built around SQL Server and .Net development of Web applications. For storage virtualization, it uses IBM's SAN Volume Controller (SVC), disk arrays from IBM and EMC, and IBM Brocade SAN switches. IBM DS4700 disks provide 4Gbit/sec. Fibre Channel connections for the company's online transaction processing applications, while the Clariion drives handle less-demanding jobs like backups.

The IBM SVC was brought in to resolve an I/O bottleneck. The high-speed Fibre Channel drives and cache on the SVC appliance opened up the bottlenecks almost like an I/O engine would, says Mike Geis, director of IS operations. Moreover, the setup allowed Lifestyle Family Fitness to use its new IBM-based SAN while continuing to use its old EMC SAN. "In the past," he says, "you'd bring in a new SAN and have to unload the old one."

Geis says the SVC architecture promises vendor independence. He says he has a "great relationship" with IBM, but if that ever changed, he could easily bring in drives from another supplier and quickly attach them directly to his storage network. "We aren't held hostage by the vendor," he adds.

But the advantages come with some difficulties, Geis notes. "You are adding complexity to your environment. You add overhead, man-hours of labor, points of failure and so on. You have to decide if it's worth it."

How to cope: "Pick strong partners -- both vendors and implementation partners -- and make sure you are not their guinea pig," Geis advises.

Headache 3: Scheduling Maintenance/Backups

Ron Rose, CIO at travel services company Priceline.com Inc., takes a holistic view of virtualization. In fact, he speaks of a "virtualization sandwich" consisting of integrated technologies for server virtualization, storage virtualization and server provisioning. He uses 3PAR InServ S400 and E200 tiered disk arrays for storage, BladeLogic tools for provisioning, and 3PAR Thin Provisioning and other products for virtualization.
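Thin provisioning, one of the techniques Rose relies on, presents a volume at its full advertised size while consuming physical capacity only as data is actually written. Here is a minimal sketch of that idea -- not 3PAR's implementation, and every name in it is invented:

```python
# Minimal thin-provisioning sketch: the volume advertises its full
# virtual size, but physical blocks are allocated only on first write.
# Invented names; not any vendor's implementation.

BLOCK = 4096  # bytes per block

class ThinVolume:
    def __init__(self, virtual_blocks):
        self.virtual_blocks = virtual_blocks
        self.allocated = {}  # block number -> data, filled lazily

    def write(self, block_no, data):
        if not 0 <= block_no < self.virtual_blocks:
            raise IndexError("write beyond virtual size")
        self.allocated[block_no] = data  # physical allocation on demand

    def read(self, block_no):
        # Unwritten blocks read back as zeros, like a sparse file
        return self.allocated.get(block_no, b"\x00" * BLOCK)

    @property
    def physical_bytes(self):
        return len(self.allocated) * BLOCK

vol = ThinVolume(virtual_blocks=1_000_000)      # roughly 4 GB advertised
vol.write(0, b"hello".ljust(BLOCK, b"\x00"))
vol.write(42, b"world".ljust(BLOCK, b"\x00"))
print(vol.physical_bytes)  # 8192: only two blocks actually consumed
```

Because hosts see the full virtual size up front, capacity can be promised generously and purchased later -- which is also why unmonitored thin pools can fill up unexpectedly.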

Rose says most companies could reduce their server and storage footprints by 20% to 40% using a virtualization sandwich. "And not only are there cost savings; there are green benefits. It's good for the planet," he says.

But like most practitioners of storage virtualization, Rose says there is no free lunch. "You have to plan your architecture more thoroughly and look at all your applications. The more systems you have running on the box, for example, the more challenging it is to schedule maintenance. If you have 10 applications running on that chunk of infrastructure that you are going to do maintenance on, you have to schedule it and move the apps to other machines in an orderly manner."

He says 3PAR has powerful tools that can hide much of the complexity of virtualization, but the kind of maintenance scheduling needed "is not a system or tool issue; it's a process and discipline issue."

Similarly, ensuring reliability requires extra care, Rose says. "As with maintenance, you don't want to get too many eggs in each basket," he explains. Priceline keeps critical files on three machines -- what it calls "tri-dundancy."

How to cope: "Think of your entire virtual environment, not just storage," Rose advises. "You will get better ROI in aggregate if you think through all three layers of the virtual sandwich. And getting a little consulting from real experts early in the process will help you anticipate the entire environment."

Headache 4: Setting Up Management Tools

Like Rose, Jon Smith takes a very broad view of virtualization. "For me, a server is no different from a hunk of data storage, and I can move it wherever I want," says the CEO of ITonCommand, a hosted IT services provider. "Whether it's running the operating system or it's just data, it's all storage."

Smith says that eventually virtualization technology will enable any data to go anywhere -- on direct-attached storage when high performance is needed, or somewhere on a SAN when speed is less critical and a higher level of redundancy is required.

ITonCommand uses HP BladeSystem c3000 disks for direct-attached storage, and LeftHand Networks Virtual SAN Appliances and LeftHand's SAN/iQ software on an HP StorageWorks array for storage virtualization on its iSCSI SANs.

The company is now standardizing on Microsoft's Hyper-V hypervisor, part of Windows Server 2008, for server virtualization and on Microsoft's System Center Virtual Machine Manager for administration.

The glue that holds everything together, Smith says, is VMM, which provisions and manages both physical and virtual machines.

"With VMM on a display, a system admin can look at all the virtual servers' hypervisors across my whole environment, all in one spot, and adjust them," he says. "It's pretty cool stuff."

It's cool when it's set up, but getting there isn't so easy, he acknowledges. "System Center is new, and so is [Hyper-V]. It took us a while to figure out how to connect all our old virtual machines into the hypervisor. It's not the easiest setup out of the box."

Smith says continued virtualization at ITonCommand will result in a true "utility computing" model for his clients. "It will take a while, but people will stop thinking of physical boxes running one operating system. Hardware will be nonexistent to the end user. It's just going to be, 'How much horsepower and storage do you want?'"

How to cope: "Find an expert who knows virtual technology and knows Microsoft System Center," says Smith.

Headache 5: Getting the Right Gear

Babu Kudaravalli, senior director of business technology operations at National Medical Health Card Systems Inc., gives this definition of storage virtualization: "The ability to take storage and present it to any host, of any size, from any storage vendor." He's pursuing those goals with three tiers of storage, each supported by a different HP StorageWorks product. The technology used in each tier is chosen for the mix of cost, performance and availability it offers.

Kudaravalli uses high-end HP XP24000 disk arrays for the most demanding and mission-critical applications, lower-cost Enterprise Virtual Array 8000s for second-tier applications, and Modular Smart Array 1500s for archiving, test systems and the like. His five SANs hold 70TB of data, of which about 35TB in the EVA and MSA tiers is virtualized, he says.
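A tiering scheme like Kudaravalli's comes down to a placement policy: match each workload's needs to the cheapest tier that satisfies them. The tiny sketch below captures that shape; the tier labels echo the article, but the selection rules themselves are invented for illustration:

```python
# Toy tier-selection policy in the spirit of the three-tier setup
# described above. Tier labels follow the article; the rules are
# invented for illustration, not Kudaravalli's actual policy.

def pick_tier(mission_critical, io_intensive, archival):
    if mission_critical or io_intensive:
        return "tier 1: high-end array for demanding applications"
    if archival:
        return "tier 3: modular array for archives and test systems"
    return "tier 2: midrange virtualized array"

print(pick_tier(mission_critical=True, io_intensive=True, archival=False))
```

In practice the inputs would be richer (IOPS targets, RPO/RTO, cost per gigabyte), but the decision structure stays the same.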

Kudaravalli says there are several things to be careful about when buying storage virtualization products. First, be aware that vendors typically certify their products to work with the latest versions of other vendors' products. If you don't have those exact versions, your interfaces might not work. He says this is a good reason to think about replacing your old gear when you go to a heterogeneous storage environment -- or at least to keep current on the latest releases.

Second, Kudaravalli says that although virtualization should ultimately simplify storage management, setting up a virtual system is complex. Careful planning and an understanding of each product's limitations are crucial.

A few years ago, vendors had very different definitions and standards for virtualization, says Kudaravalli. "But now they seem to be coming together," he says. "They are trying to offer similar features and capabilities, but it is not completely mature."

How to cope: Although storage virtualization is often undertaken to better utilize existing resources, it may have a perverse impact, says Rick Villars, a storage analyst at IDC. "The whole point of virtualization is to make it easier to provision or move a resource, to create a new volume or another snapshot, or to migrate data from one system to another," he says. "But when you make something easy to do, people are induced to do it more often."

According to Villars, volumes, snapshots, data sets and even applications can needlessly proliferate. "You can go from being more efficient to more wasteful. It's just what can happen with virtual server sprawl." Preventing that is a matter of policies, procedures and good business practices, not technology, he says.

Users agree that there are many technical details to master when pursuing storage virtualization. But Navarro suggests starting with a basic question: Why am I doing this? "Virtualization is a hot word, a big thing. But is it really necessary? There are benefits, but ask yourself if you are doing it for the right reasons, or just because you want to be on the cutting edge. It's very easy to get swept up in these groupthink movements."

This story, "Virtual headaches" was originally published by Computerworld.


Copyright © 2008 IDG Communications, Inc.
