When virtual tape arrived on the scene a few years ago, many storage managers dismissed it as a niche product not sturdy enough for enterprise-class backups. Now this New Data Center technology, which mimics a tape backup library but uses disks as the medium, is proving itself in ever-larger organizations.
Consider CitiStreet. The joint venture between financial giants Citigroup and State Street is one of the nation's largest insurance benefits delivery providers and retirement-plan record keepers; the company reports servicing more than 9 million plan participants. CitiStreet, in Quincy, Mass., has used a 35TB Sepaton S2100-ES2 virtual tape library (VTL) in its Jacksonville, Fla., data center for more than a year, and is installing a 40TB unit in its Quincy data center. By July, that VTL will be operational and the company will retire its two aging Quantum ATL tape library units, which contain four DLT 7000 tape drives each, says Jeff Machols, systems integration manager for CitiStreet. At that time, the VTL will let the two data centers provide speedy disaster recovery for each other. Machols recently discussed CitiStreet's storage plans with Network World Executive Editor Julie Bort.
What was the impetus for moving to a virtual tape library?
The [Quantum] equipment was starting to age, and as compliance moved to the forefront, compliance-audited security started to become a major concern. Plus, we had more batch processing going on at night. We needed to shrink our backup window because our backups were a big [part] of our batch processes. Each [backup batch] stream could take anywhere from one to three hours - and each client had its own batch cycle.
The Quantum ATLs we were using were 5 to 7 years old. More importantly, the media was aging. We knew we had to make a big purchase of hundreds of tapes; it made sense to start looking at other solutions.
Did you know you wanted virtual tape?
Initially, we were going to just refresh our [tape library] hardware. About three years ago, we first saw [VTLs] . . . but they weren't mainstream yet and the ones that were out there were relatively small and not really scalable. They weren't sophisticated in terms of the software, and the road map they were on. But when we started to look seriously [to replace the Quantum library], more enterprise-class systems were available.
We looked at traditional tape backup and also things like network-attached storage and virtual tape. Network-attached storage would have changed all our backup procedures, software, scripts - everything, because it's a whole different type of storage. Virtual tape emulates the tape library. So we didn't have to change any software or update any of our backup or restore processes - our Veritas backups and our backup and restore scripts.
Did you get a faster backup with the VTL?
Much faster. We went from averaging 2M to 3M byte/sec to well over 30M byte/sec.
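The effect of that rate change on the backup window can be sketched with some simple arithmetic. Only the 3M and 30M byte/sec rates come from the interview; the 30GB stream size below is a hypothetical figure chosen to land inside the one-to-three-hour range cited earlier.

```python
# Hypothetical backup-window arithmetic. The 3 and 30 MB/sec rates are from
# the interview; the 30GB stream size is an illustrative assumption.
def backup_hours(stream_gb: float, rate_mb_per_sec: float) -> float:
    """Hours to back up a stream of the given size at the given average rate."""
    return stream_gb * 1024 / rate_mb_per_sec / 3600

old = backup_hours(30, 3)   # tape-era rate: roughly 2.8 hours per stream
new = backup_hours(30, 30)  # VTL rate: under 20 minutes for the same stream
print(f"old: {old:.1f}h, new: {new:.1f}h")
```

At a fixed stream size the window shrinks in direct proportion to throughput, which is why a 10x rate improvement matters so much when backups sit inside nightly batch cycles.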
Virtual tape is billed as being a low-cost backup method. Was that the case for you?
When you look at having to buy the actual library, the drives and the tape media, the cost per megabyte for virtual tape is about the same. For about the same cost we're getting 10 times the performance plus the added benefits of software functionality down the road.
When will the second system be live?
By the end of Q2.
What functions can you do with your virtual tape library that you couldn't do with physical tape?
The biggest is appliance-level replication. The Sepaton has the ability to talk to another Sepaton and clone the data [it stores on the entire appliance]. That gives us a secure way to transmit all our data over the wire, encrypted on our dedicated circuit, and that's how we do our disaster recovery. That's a much more effective way than using a third-party tape storage vendor that's going to bring our media on- and off-site. On top of that, there is other content-aware functionality that reduces the physical footprint we require. Sepaton uses certain types of compression and incrementals - it realizes that this data is the same as yesterday's data, backs up only the new data and reduces the capacity that we need, which reduces the cost.
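The "backs up only the new data" behavior described above is, in general terms, content-aware deduplication. Sepaton's actual implementation is proprietary; the sketch below is a minimal generic illustration using fixed 4KB chunks and SHA-256 hashes, both of which are assumptions for the example.

```python
import hashlib

# Minimal sketch of hash-based deduplication, the general technique behind
# content-aware backup. Fixed 4KB chunking and SHA-256 are illustrative
# assumptions, not Sepaton's actual design.
CHUNK = 4096

def dedup_backup(data: bytes, store: dict) -> list:
    """Store only chunks not already present; return the chunk-hash manifest."""
    manifest = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:     # new data: keep a physical copy
            store[digest] = chunk
        manifest.append(digest)     # unchanged data: keep only a reference
    return manifest

store = {}
day1 = dedup_backup(b"A" * 4096 + b"B" * 4096, store)      # 2 chunks stored
day2 = dedup_backup(b"A" * 4096 + b"B" * 4096 + b"C" * 4096, store)
print(len(store))  # 3 unique chunks hold both days' full backups
```

Because the second day's backup repeats the first day's chunks, only the one new chunk consumes additional capacity, which is the footprint reduction the interview describes.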
How does it help you enforce your policies for compliance and auditing?
In terms of compliance, it helps us because the more we can contain our own data in-house, the better off we are. Reducing reliance on third-party vendors is a good thing, especially when a lot of these high-profile data loss cases have come from tapes being lost during shipping. As well as giving us added security, it is much faster. If you reduce the amount of time it takes to get a copy from off-site, that provides us with even faster recovery times.
Once you decided you wanted virtual tape, how did you determine system requirements?
We got into the guts of this. When you look at all the different products, in reality it's all just [Serial Advanced Technology Attachment]-attached drives - and how many Serial ATA drive vendors are out there - three or four? So, in the guts they are all a lot the same. To me a 2% to 5% difference in I/O per second of access time wasn't critical. What I liked about Sepaton is that it was much further down the road than anybody else in terms of things like replication, content-aware backups to reduce capacity, that sort of thing.
Sepaton was relatively new, so there was a little bit of risk going with the company, as there is anytime you go with a new vendor, and you get away from a standard like Quantum. At the same time, when you are talking about a whole new technology, a [young vendor] is attractive because it is looking at a new paradigm of backup and recovery. Quantum was still focused on traditional tape libraries. Sepaton is looking at centralized, second-tier storage management. You're not going to put your production Oracle database on Sepaton and run it real-time. My advice is to look at the direction of the company. If VTL is an afterthought - a secondary product - and you are going with VTL as your core technology for backup, you may want to look at someone else.
What else are you doing to help with compliance?
We categorize data into two main areas: backup and archiving. [The VTL] is what I classify as backup. It is data for the business, data we need to store if a database gets corrupted or if the file system gets removed. Archiving is long term and geared for compliance. For this set of users, by law, we have to keep e-mail for seven years. Other information we need to keep for a number of years because of [Health Insurance Portability and Accountability Act] regulations. With the archiving we're looking at two solutions, one for e-mail and one for all other files. We use Network Appliance for e-mail archiving. We have set the policies that [incoming e-mail] for these users or this group of people automatically goes to a [write once, read many (WORM)] drive on NetApp. We're compliant by using this as our back end for archiving.
So are you doing information life-cycle management (ILM) with your data overall?
We've looked at it and at some of the different products. We have two main disk subsystems in our environment - HP StorageWorks XP Disk Arrays and HP StorageWorks Enterprise Virtual Array (EVA). When you look at the cost of implementing ILM and coming up with a policy that says, 'OK, after two months, I'm going to move Word documents from this storage to this storage,' to me that isn't worthwhile, because buying EVA-level storage, when you look at cost per megabyte, is not significantly different from buying some Serial ATA array. If you are talking about a petabyte of information, then it's going to be cheaper. But at around 100 terabytes, by the time I buy the software or the ILM layer, implement the policies, come up with the management of it and do the conversion, it's more cost-effective for me to have one tier of storage.
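The break-even reasoning above can be made concrete with a back-of-the-envelope model. Every dollar figure below is a hypothetical assumption, not CitiStreet's pricing; the point is only the shape of the tradeoff: a flat ILM software-and-management overhead swamps the per-terabyte savings at 100TB but not at a petabyte.

```python
# Hypothetical tiering cost model. All dollar figures and the 60% cold-data
# fraction are illustrative assumptions, not actual vendor pricing.
def tiered_cost(total_tb, cold_fraction, primary_per_tb, cheap_per_tb, ilm_overhead):
    """Two-tier storage cost plus a flat ILM software/management overhead."""
    primary = total_tb * (1 - cold_fraction) * primary_per_tb
    cheap = total_tb * cold_fraction * cheap_per_tb
    return primary + cheap + ilm_overhead

single_100tb = 100 * 5000                              # one tier at $5,000/TB
tiered_100tb = tiered_cost(100, 0.6, 5000, 4000, 150_000)
print(single_100tb, tiered_100tb)  # at 100TB, single-tier comes out cheaper
```

Rerunning the same model at 1,000TB flips the result, matching the interview's point that tiering only pays off at petabyte scale.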
Beyond e-mail, for which you have separate systems, how do you deal with the unstructured stuff, the data not in databases?
We categorize this into two main areas: things that are business-critical and things that fall under compliance. Usually those are the same things, in that anything that falls into compliance is business-critical - though we may have business-critical data that is not under compliance, like application design, diagrams, source code, that sort of thing. We have a file-level archiving solution. We have a generic policy that we keep relatively conservative using Enterprise Vault by Veritas. So we might have a policy that says, after three months, send it to the optical platters or a NetApp WORM appliance, and it's there and then it's archived. That helps keep our storage more manageable and our backups more manageable, instead of some complicated ILM system. We can set up the policies based on the areas of the storage. Client-sensitive data goes here and it can't go anywhere else, so it's pretty easy for us to maintain. We [watch] access time. If you have a spreadsheet viewed every day, we don't want to put that to optical and have to worry about it getting stuck in a case vault.
After you get your disaster recovery moved in-house, what's next for your storage systems?
Once that's in place, the next thing is more frequent backups - continuous backups or checkpoints every hour, taking us to that next level. Then we can provide a better response time. In our case, for disaster recovery, the [best in our industry] is 24 hours, and the [industry average] is 48 hours. How can we use this as a competitive advantage? What if we could get our recovery down to an hour? We can go to our business side and say: 'Here's a new selling point. We're going to guarantee a [recovery] in, say, two hours'. We want to be ahead of the curve.