The best ways to tweak storage for great performance

From policy-based archiving to virtualization, smart ways you can keep enterprise storage performing at its peak

Raul Robledo, a storage specialist at the Trumbull, Conn., office of Affinion Group, recently learned firsthand how quickly storage performance problems can escalate. Earlier this year, the global marketing company's 50TB storage-area network (SAN) began experiencing severe outages that Robledo later traced to bandwidth saturation on the SAN's interswitch links (ISLs). A number of the company's external, Web-facing applications depended on the availability of data residing on that underlying SAN.

"We had too much traffic going through the ports, and that caused applications to spawn additional processes that weren't getting a response back. This started a big chain reaction that began to take some of our [Web] applications down," he says.

Affinion's SAN is a dual-fabric environment consisting of three EMC Clariion storage arrays and a 3PARdata array, connected via Brocade Communications Systems 3800 and 4100 switches. To help diagnose and correct the ISLs' bandwidth-saturation problem, one of Affinion's SAN administrators used Orca, from open source provider OrcaWare Technologies, to gather and plot performance data from the Brocade switches. (Orca, which helps plot arbitrary data from text files into a directory on a Web server, is also used by the group's Unix administrator to plot server performance.)
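
To give a concrete sense of what that kind of plotting boils down to, here is a minimal sketch of the calculation behind an ISL-saturation check: take two samples of a port's octet counters (the sort of figures standard switch interface counters expose), divide by the sample interval and the link speed, and flag any port running near line rate. The port names, counter values and 80% threshold below are illustrative assumptions, not Affinion's actual data.

```python
# Minimal sketch: flag saturated ISLs from two samples of per-port octet counters.
# Counter values and the 2Gbit/s link speed are illustrative placeholders.

SATURATION_THRESHOLD = 0.80  # flag ports running above 80% of line rate


def utilization(octets_t0: int, octets_t1: int, interval_s: float, link_bps: float) -> float:
    """Fraction of line rate used between two counter samples taken interval_s apart."""
    bits_transferred = (octets_t1 - octets_t0) * 8
    return bits_transferred / (interval_s * link_bps)


# Hypothetical 5-minute samples for two interswitch links on a 2Gbit/s fabric.
samples = {
    "switch3800_port0_ISL": (1_250_000_000_000, 1_321_000_000_000),
    "switch4100_port15_ISL": (980_000_000_000, 1_005_000_000_000),
}

for port, (t0, t1) in samples.items():
    util = utilization(t0, t1, interval_s=300, link_bps=2e9)
    status = "SATURATED" if util >= SATURATION_THRESHOLD else "ok"
    print(f"{port}: {util:.0%} of line rate ({status})")
```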

Although Orca proved important in this instance, Robledo says he realized he needs a tool specifically designed to keep the SAN performing optimally. Orca and other similar tools tend to require more manual work, knowledge and customization than products meant for real-time SAN-performance monitoring, he says.

Robledo began searching in earnest for a robust SAN performance-monitoring tool that would let him address problems before they came to users' attention. After all, his team's ability to meet ongoing service-level agreements was at stake.

He turned to Onaro, a storage service-management vendor. For the past year, Robledo's team had been using Onaro's SANscreen Foundation software to monitor and report on storage operations. After the ISL outage, he decided to see whether Affinion could benefit from Onaro's recently released Application Insight 2.0. The software offers a real-time, application-to-array picture of storage-resource use and efficiency, while providing application-centered monitoring and reporting on the performance of the storage infrastructure, Onaro says.

Having conducted proof-of-concept testing of Application Insight, Robledo believes the tool would help to head off potential performance issues before they become a problem. "By using a combination of both products -- [SANscreen] Foundation and Application Insight -- we could be alerted in real time of any performance spikes and hopefully be informed of any issues that could cause an outage, before someone calls from the business line," he says. "We wouldn't need to get inquiries or notification from individuals. We would be getting those right from a product that's monitoring our environment."

Because Insight also shows port use, his team would be able to provision storage more effectively, Robledo says. The team would be able to configure the hosts to send or receive data through specific switches and storage ports. "This would let us define a host with certain storage buckets and assign which applications those belong to. So, when we look at performance, we could then see which applications are on which switches, including the storage that is on the specific arrays," he says. "We could then see a pattern of which applications or hosts are resource intensive from a storage perspective, and maybe start to utilize storage for that application on another array."
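
As a rough illustration of the application-to-port visibility Robledo describes, the sketch below builds a hypothetical mapping of hosts to their switch ports, arrays and applications, then totals average throughput per application and array to surface candidates for rebalancing onto another array. All of the host names, fabrics and numbers are made up for the example.

```python
# Sketch of an application-to-array mapping used to spot storage-heavy applications.
# Hosts, ports, arrays and throughput figures are hypothetical.

hosts = [
    {"host": "web01", "app": "loyalty-portal", "switch": "brocade4100-1", "port": 4,
     "array": "clariion-a", "avg_mbps": 420},
    {"host": "web02", "app": "loyalty-portal", "switch": "brocade4100-2", "port": 7,
     "array": "clariion-a", "avg_mbps": 390},
    {"host": "db01", "app": "reporting", "switch": "brocade3800-1", "port": 2,
     "array": "3par-1", "avg_mbps": 150},
]

# Total average throughput per (application, array) pair.
load: dict[tuple[str, str], int] = {}
for h in hosts:
    key = (h["app"], h["array"])
    load[key] = load.get(key, 0) + h["avg_mbps"]

# Report the heaviest consumers first -- candidates to move to another array.
for (app, array), mbps in sorted(load.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{app} on {array}: {mbps} Mbit/s average")
```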

Affinion is not alone in considering real-time storage-management products to help optimize SAN performance, says Mike Karp, a senior analyst with Enterprise Management Associates (EMA). Software that performs application-centric data management and root-cause analysis, or that manages storage in the context of the networks and other systems around it, is gaining in popularity, he says. In addition to the Onaro tools, the storage-optimization category includes EMC Smarts, MonoSphere Storage Horizon, HP Storage Essentials and the HiCommand suite from Hitachi Data Systems (HDS), Karp and other analysts say.

Optimizing storage in a virtual world

Other technology options also are available, and any self-respecting storage-hardware or data-management software vendor today says its offerings help optimize storage resources. Many undoubtedly do. But two other technologies -- storage virtualization and archiving -- are generating the most user interest, experts say.

Storage virtualization, a New Data Center staple, is much touted for its ability to combine disparate physical storage systems (often from different vendors) into one logical pool whose collective resources can be managed and provisioned more easily. The technology is coming of age, as many enterprises get close to their SANs' three-year end of life, says Josh Howard, a storage specialist at reseller CDW.

"As you look toward moving into the next [storage] frame, you have data-migration issues where you may have to look at a data-migration professional-services engagement, or schedule a lot of downtime to move that data into the new frame," Howard says.

Storage virtualization products -- such as FalconStor Software's IPStor, IBM's SAN Volume Controller and the HDS TagmaStore USP -- address some of this pain by performing much of the data migration in the background, Howard says. They can even help organizations reuse some of their now end-of-life storage systems by relegating them to a new role as lower-level storage tiers or backup targets for snapshot-type data sets.

From the perspective of storage optimization, the virtualization argument becomes one of flexibility and greater utilization, Howard says. "Virtualization enables flexibility, including across different brands of storage. It gives you the ability to buy truly cheap disks, not the big vendors' version of cheap disks," he says, citing products from companies such as Nexsan Technologies. "That's inexpensive and can work as your backup target, while your production system remains an [EMC] Symmetrix or [HDS] USP," he says.
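
As a rough sketch of that flexibility argument, the example below models a virtualized pool spanning high-end production arrays and an inexpensive SATA array, and places each new volume on whichever backend in the requested tier has the most free space. The array names, tiers and capacities are hypothetical.

```python
# Sketch: a virtualized pool choosing a backend array by tier.
# Arrays, tiers and capacities are hypothetical.

pool = [
    {"array": "symmetrix-prod", "tier": "production", "free_tb": 6.0},
    {"array": "usp-prod", "tier": "production", "free_tb": 9.5},
    {"array": "nexsan-sata", "tier": "cheap-disk", "free_tb": 22.0},
]


def place_volume(size_tb: float, tier: str) -> str:
    """Return the backend array with the most free space in the requested tier."""
    candidates = [a for a in pool if a["tier"] == tier and a["free_tb"] >= size_tb]
    if not candidates:
        raise RuntimeError(f"no {tier} capacity for a {size_tb}TB volume")
    best = max(candidates, key=lambda a: a["free_tb"])
    best["free_tb"] -= size_tb
    return best["array"]


print(place_volume(2.0, "production"))   # lands on the emptiest production array
print(place_volume(5.0, "cheap-disk"))   # backup target lands on the inexpensive array
```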

Likewise, from a storage-utilization perspective, Howard says organizations with multiple storage frames from various vendors probably will see use rates jump from 40% of available disk space to as much as 70% to 80% as a result of implementing virtualization.
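
The arithmetic behind that claim is straightforward; here is a small worked example using hypothetical round numbers rather than any particular customer's figures.

```python
# Worked example of the utilization claim, with hypothetical round numbers.
frames_tb = [20, 20, 20]          # three storage frames, 60TB raw in total
raw = sum(frames_tb)

before = raw * 0.40               # siloed arrays averaging 40% utilization
after = raw * 0.75                # pooled behind a virtualization layer at 75%

print(f"Data stored before: {before:.0f}TB of {raw}TB raw")
print(f"Data stored after:  {after:.0f}TB of {raw}TB raw")
print(f"Capacity unlocked:  {after - before:.0f}TB with no new hardware")
```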

For archiving, Howard and EMA's Karp cite as examples CA's iLumin, EMC's Email Xtender and Disk Xtender, Symantec's Enterprise Vault and Zantaz's Enterprise Archive Solution. These archival applications help translate policy into computer-driven rules that automate the movement of data from high-performance production disk arrays to lower-level storage tiers.

Policy-based management tools not only help automate the environment but also capture what Karp terms "senior staff intelligence" and best practices. These are translated into policies that empower junior-level employees to perform many tasks that previously had been the domain of more experienced colleagues.
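
To make the idea concrete, here is a minimal sketch of what one such rule might look like once captured in software: files on a production share that have not been modified within the policy window get moved to an archive tier. The paths and the 180-day threshold are hypothetical placeholders; the commercial products above drive this kind of sweep from their own policy engines.

```python
# Sketch of a policy-driven archive sweep: move cold files off production disk.
# The paths and the 180-day age threshold are hypothetical placeholders.
import os
import shutil
import time

PRODUCTION_SHARE = "/mnt/production/projects"
ARCHIVE_TIER = "/mnt/archive/projects"
MAX_AGE_DAYS = 180

cutoff = time.time() - MAX_AGE_DAYS * 86400

for root, _dirs, files in os.walk(PRODUCTION_SHARE):
    for name in files:
        src = os.path.join(root, name)
        if os.path.getmtime(src) < cutoff:            # untouched longer than the policy allows
            rel = os.path.relpath(src, PRODUCTION_SHARE)
            dst = os.path.join(ARCHIVE_TIER, rel)
            os.makedirs(os.path.dirname(dst), exist_ok=True)
            shutil.move(src, dst)                     # relocate to the lower-cost tier
            print(f"archived {rel}")
```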

Data deduplication technology, which reduces the amount of redundant data and is one of the biggest draws in the data-backup market, is an optimization favorite, Howard says. He has heard of organizations using data-deduplication software from such vendors as Data Domain, Diligent Technologies, ExaGrid Systems, FalconStor and Quantum that have been able to remove duplicate data and compress remaining backup sets enough to store the equivalent of 20TB to 30TB of backup data on a 1TB disk.
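
The mechanism behind those ratios can be illustrated in a few lines: split the backup stream into chunks, hash each chunk, and store each unique chunk only once, so repeated weekly fulls collapse to a single copy. The fixed-size chunking below is a deliberately simplified sketch (commercial products typically use more sophisticated, often variable-size chunking), but the ratio arithmetic is the same.

```python
# Minimal illustration of block-level deduplication: fixed-size chunks, hashed,
# so that identical chunks are stored only once. The synthetic data stands in
# for twenty near-identical weekly full backups.
import hashlib
import os

CHUNK_SIZE = 4096  # bytes


def dedupe(stream: bytes) -> float:
    """Store each unique chunk once and return the achieved deduplication ratio."""
    store: dict[str, bytes] = {}
    for i in range(0, len(stream), CHUNK_SIZE):
        chunk = stream[i:i + CHUNK_SIZE]
        store[hashlib.sha256(chunk).hexdigest()] = chunk  # duplicates collapse here
    stored_bytes = sum(len(c) for c in store.values())
    return len(stream) / stored_bytes


full_backup = os.urandom(4 * 1024 * 1024)      # one synthetic 4MB full backup
twenty_fulls = full_backup * 20                # the same data backed up twenty times

print(f"dedup ratio: {dedupe(twenty_fulls):.0f}:1")   # roughly 20:1 for this example
```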

Optimizing storage on multiple fronts

Optimization often takes a combination of technologies. That's the case at Baylor College of Medicine in Houston. Its IT team, which recently implemented a disk-to-disk backup product from Network Appliance along with HDS storage-virtualization technology, knows something about the role technology can play in making things run better.

Baylor had been relying on tape for weekly backups of roughly 40TB of file data and another 5TB of application data, and those backups had been taking longer and longer to finish. "We'd spend countless hours backing up just the [storage] volumes," says Michael Layton, director of enterprise services and information systems at the college.

That was before Layton and Vo Tran, manager of enterprise servers and storage, began using NetApp disk storage systems and SnapVault software to replicate NetApp Snapshot data copies to separate, secondary storage. For primary storage, Baylor uses a NetApp FAS980C (two-node cluster). The secondary SnapVault backup target is a NetApp FAS6070 storage system, Tran says.

In moving from tape to disk via SnapVault, the college has shortened backup and restoration time to one-tenth of what it was before, Layton says. Plus, his team is on track to make it possible for the college's internal users to recover lost files on their own, he says. The self-service recovery server will appear as another "recovery-oriented" file share to users, with a file directory structure similar to that of their primary file share. If they inadvertently delete a file, or if a file becomes corrupted, they will be able to point to the recovery server, where they can locate and copy over the original file easily. That's huge from a self-healing perspective, Layton says.
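
For illustration, self-service recovery of that kind can be as simple as the sketch below, which assumes the recovery share exposes point-in-time copies as ordinary read-only directories (NetApp volumes, for instance, surface a .snapshot directory) and copies the newest surviving version of a lost file back to the primary share. The paths and file name are hypothetical.

```python
# Sketch of a self-service restore: find the newest snapshot copy of a lost file
# and copy it back to the primary share. Paths and layout are hypothetical.
import glob
import os
import shutil

RECOVERY_SHARE = "/mnt/recovery/.snapshot"     # e.g. nightly.0, nightly.1, ...
PRIMARY_SHARE = "/mnt/home"
LOST_FILE = "research/grant_proposal.doc"

# Search every snapshot directory for the file; keep the most recently modified copy.
candidates = glob.glob(os.path.join(RECOVERY_SHARE, "*", LOST_FILE))
if not candidates:
    raise SystemExit(f"{LOST_FILE} not found in any snapshot")

newest = max(candidates, key=os.path.getmtime)
destination = os.path.join(PRIMARY_SHARE, LOST_FILE)
os.makedirs(os.path.dirname(destination), exist_ok=True)
shutil.copy2(newest, destination)
print(f"restored {LOST_FILE} from {newest}")
```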

On another front, given the college's recent acquisition of the HDS TagmaStore USP for storage virtualization, Layton and Tran are looking forward to providing customers storage capacity on demand while consolidating from eight SAN-management interfaces to one. More important, Layton expects this move will let his dedicated storage personnel manage twice the amount of storage with no additional head count, going from what amounts to 90TB of Fibre Channel and other networked storage to as much as 170TB over the next few years.

No strangers to virtualization, Layton and Tran also took advantage of the NetApp V-Series V980C virtualization system earlier in the college's network-attached storage (NAS) consolidation effort, to help ease the pain of migrating files to the new NetApp-based NAS systems and gateways, which are also backed by HDS SAN storage.

With optimization, storage managers get more done in shorter windows, no longer fret over backups and recoverability, and don't get caught up with "putting out fires," Layton says. "They can actually do something more proactive to manage our storage environment."

Hope is a freelance writer who covers IT issues surrounding enterprise storage, networking and security. She can be reached at mhope@thestoragewriter.com.

