Talk it up

Disaster recovery gets the floor in this ongoing roundtable discussion among IT execs.

Clustering and virtualization, continuous data protection, and business-process monitoring are becoming integral pieces of progressive business-continuity plans. Three IT executives recently gathered to discuss these new data center technologies and how they're putting them to use for disaster-recovery purposes. Participating in this installment, the second in an ongoing roundtable series, are Tony Adams, IT analyst at J.R. Simplot in Boise, Idaho; Matthew Dattilo, vice president and CIO of PerkinElmer in Wellesley, Mass.; and Rael Paster, head of collaboration services IT at Serono in Geneva.

Clustering, virtualization: What are your thoughts on how such emerging technologies can keep your servers running?

Adams: Our core disaster-recovery strategy is rapidly shifting away from tape-based restore of physical systems. Instead, we are working to virtualize as much of our x86 workload as possible and leverage long-distance [storage-area network]-to-[storage-area network] capabilities to maintain concurrent data at the disaster-recovery target site. With virtualization, we will be able to concurrently maintain exact copies of entire virtual machines at that site and be able to boot those virtual machines on arbitrary hardware. Virtualization completely eliminates the need to have a 1-to-1 inventory of identical physical hardware between data centers.

We accept that disaster-recovery operations can be degraded from a performance standpoint. Therefore we are able to budget lower total 'horsepower' for our recovery systems. This allows us to fully meet business functional requirements with lower CPU and storage costs.

Dattilo: For us, the emergence of these technologies has been exciting in that they're allowing us to significantly reduce the occurrence and severity of the hardware outages that would affect us in the 'several hours to a couple of days' range, depending on the application.

While we've had these capabilities in our Unix environment for years, extending clustering and virtualization to our Windows/Intel-based environments has given us better platform choices and better reliability there. Lagging certification by application providers has been a concern, but I don't see it as a long-term barrier.

We're bifurcating our outages into short network/software issues [typically a couple of hours or less] and the site-disaster scenario. We haven't found a cost-effective way to build in enough redundancy to eliminate the former or to harden ourselves against the latter.

Have you investigated continuous data protection (CDP) technology and, if so, what do you think about it?

Adams: We haven't investigated CDP in particular but have made recent changes to our back-up infrastructure to attain at least one of the stated benefits. We now use [Serial Advanced Technology Attachment] disk as our first-level back-up medium. This has greatly reduced our restore times because primary images are typically available without requiring offsite media retrieval. To meet offsite storage requirements, we duplicate the SATA-based images to tape on a daily basis.
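What Adams describes is a classic disk-to-disk-to-tape flow. As a rough illustration only, the following minimal Python sketch shows the daily duplication step; the back-up directory layout, image naming, and tape device are hypothetical and would differ in any real environment.

```python
#!/usr/bin/env python3
"""Minimal sketch of the disk-to-disk-to-tape step described above:
duplicate the day's disk-based back-up images to a tape device.
Paths, file naming, and the tape device are illustrative assumptions only."""

import datetime
import pathlib
import tarfile

BACKUP_ROOT = pathlib.Path("/backups/sata")   # hypothetical first-level (SATA) back-up store
TAPE_DEVICE = "/dev/nst0"                     # hypothetical non-rewinding tape device

def duplicate_to_tape(day: datetime.date) -> int:
    """Stream all of the given day's back-up images to tape as one tar archive."""
    day_dir = BACKUP_ROOT / day.isoformat()
    images = sorted(day_dir.glob("*.img"))
    if not images:
        print(f"nothing to duplicate for {day}")
        return 0
    # Writing the archive directly to the tape device keeps the disk copy
    # in place for fast restores while satisfying the offsite requirement.
    with open(TAPE_DEVICE, "wb") as tape, tarfile.open(fileobj=tape, mode="w|") as archive:
        for image in images:
            archive.add(image, arcname=image.name)
    print(f"duplicated {len(images)} images for {day} to {TAPE_DEVICE}")
    return len(images)

if __name__ == "__main__":
    duplicate_to_tape(datetime.date.today())
```

In practice this step would be scheduled daily and driven by the back-up product itself, but the sketch captures the restore-from-disk-first, copy-to-tape-for-offsite design Adams outlines.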

CDP might become more appealing if our back-up vendor were to offer it. I see this as a back-up solution, and I'd prefer to keep our entire back-up environment as integrated as possible.

Paster: We explored CDP and were quite impressed with our findings. As a result, we deployed Storactive's LiveBackup to our users last year and now include it in our standard workstation image. For servers, SANs, [network-attached storage] and the like, however, snapshot- or transaction-based back-up technologies currently remain the better fit.

Business-process monitoring is said to be the "next big thing" in IT management. How feasible is it and what sorts of IT systems would you need to make it work?

Adams: The main feasibility requirement is that the application or business process must accommodate functional real-time monitoring. This would typically involve a suite of test transactions run periodically and checked for validity. This type of monitoring is completely feasible with today's ERP systems, which centrally locate and enforce business functions. That architecture allows a common approach to monitoring the potentially multiple interfaces to those central business functions.

A well-designed centralized monitoring application, such as Nagios [an open source program], is capable of directly executing [or executing by proxy] the test transactions.
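As a rough illustration of the kind of test transaction Adams describes, here is a minimal Python sketch written as a Nagios-style check, which signals its result through the standard plugin exit codes. The endpoint URL, expected response marker, and latency thresholds are hypothetical assumptions, not details from any of the participants' environments.

```python
#!/usr/bin/env python3
"""Minimal sketch of a Nagios-style check that runs a synthetic test
transaction against a hypothetical ERP endpoint and validates the reply.
The URL, expected marker string, and thresholds are illustrative only."""

import sys
import time
import urllib.request

# Hypothetical endpoint that exercises a central business function end to end.
TEST_URL = "https://erp.example.com/health/order-entry-test"
EXPECTED_MARKER = "ORDER_TEST_OK"     # string the test transaction should return
WARN_SECONDS = 5.0                    # illustrative latency thresholds
CRIT_SECONDS = 15.0

# Standard Nagios plugin exit codes.
OK, WARNING, CRITICAL, UNKNOWN = 0, 1, 2, 3

def main() -> int:
    start = time.monotonic()
    try:
        with urllib.request.urlopen(TEST_URL, timeout=CRIT_SECONDS) as resp:
            body = resp.read().decode("utf-8", errors="replace")
    except Exception as exc:
        print(f"CRITICAL: test transaction failed: {exc}")
        return CRITICAL

    elapsed = time.monotonic() - start
    if EXPECTED_MARKER not in body:
        print(f"CRITICAL: unexpected response after {elapsed:.1f}s")
        return CRITICAL
    if elapsed > WARN_SECONDS:
        print(f"WARNING: transaction valid but slow ({elapsed:.1f}s)")
        return WARNING
    print(f"OK: transaction valid in {elapsed:.1f}s")
    return OK

if __name__ == "__main__":
    sys.exit(main())
```

The monitoring server would schedule a suite of such checks, one per business function or interface, and alert on the aggregated results.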

Paster: I believe this is quite feasible, and the business will demand it. We've already seen that our Web dashboards have become immensely popular, and for good reason. They are the 'enterprise consoles' for business performance management, which correlates disparate business events to key performance metrics and gives business managers real-time visibility.

Our existing systems will become more sophisticated and will 'naturally' evolve to become business-process-monitoring oriented. Application performance and availability can be easily assessed and managed according to their service levels and business priorities. We've been using Mercury and Tivoli for our [service-level agreement] management. The next step is to fully integrate the various tools with our business intelligence so that we may, among other things, accurately assess the business impact of application downtime and prioritize problem resolution based on that impact.

Copyright © 2005 IDG Communications, Inc.
