
Data de-duplication changes economics of backup

By Miklos Sandorfi, special to Network World
August 21, 2007 03:38 PM ET
This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter's approach.

Network World - The ability to de-duplicate backup data — that is, back up or copy only unique blocks of data — is rapidly changing the economics of data protection.

Data volumes are growing exponentially. Companies are not only generating more primary data but are also required by government regulators to back up and retain that data many times over its life cycle. With a retention period of one year for weekly full backups and 10 days for daily incremental backups, a single terabyte of data requires 53TB of storage capacity for data protection over its life cycle (roughly 52 weekly full copies plus a short tail of incrementals). Backing up, managing and storing this data is driving up labor costs as well as power, cooling and floor space costs.
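
A back-of-the-envelope check shows where a figure like 53TB comes from; the 10% daily change rate used for the incrementals below is an illustrative assumption, not a number from the retention policy itself.

    # Rough capacity estimate for protecting 1TB of primary data under the
    # retention schedule above. The 10% daily change rate is an assumption.
    primary_tb = 1.0
    weekly_fulls_retained = 52        # one year of weekly full backups
    incrementals_retained = 10        # 10 days of daily incremental backups
    assumed_daily_change = 0.10       # assumed fraction of data changed per day

    capacity_tb = (weekly_fulls_retained * primary_tb
                   + incrementals_retained * primary_tb * assumed_daily_change)
    print(f"Capacity needed to protect 1TB: ~{capacity_tb:.0f}TB")   # ~53TB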

That’s the bad news. The good news is the cost of disk storage is decreasing, making it increasingly attractive for secondary storage.

And data de-duplication technology, typically found on disk-based virtual tape libraries (VTLs), can help control data growth by backing up and storing any given piece of data only once.

VTLs are disk-based systems that emulate tape technology, so enterprises can install them in existing backup environments with minimal disruption. De-duplication software (available on some VTLs) stores a baseline data set and then checks subsequent backup sets for duplicate data. When it finds a duplicate, it stores only a small reference in place of the data, which lets the software reassemble and restore complete files as needed.
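
As a minimal sketch of that baseline-plus-reference idea (file-level, with made-up names; real products work at a much finer granularity), later backup sets keep only small references wherever the baseline already holds an identical copy, and complete files can still be reassembled on restore:

    # Minimal file-level sketch of baseline-plus-reference de-duplication.
    baseline = {}      # contents of the first (baseline) backup, keyed by path
    backups = []       # each later backup set, stored as (path, item) entries

    def seed_baseline(first_backup_set):
        baseline.update(first_backup_set)

    def run_backup(backup_set):
        entries = []
        for path, contents in backup_set.items():
            if baseline.get(path) == contents:
                entries.append((path, ("ref", "baseline")))   # small reference only
            else:
                entries.append((path, ("data", contents)))    # new or changed data
        backups.append(entries)

    def restore_file(backup_index, path):
        """Reassemble a complete file from the stored reference or data."""
        for entry_path, item in backups[backup_index]:
            if entry_path == path:
                return baseline[path] if item[0] == "ref" else item[1]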

There are two main data de-duplication methodologies: hash-based and byte-level comparison-based. The hash-based approach runs incoming data through an algorithm to create a small, unique identifier for the data called a hash. It then compares that hash with previous hashes stored in a look-up table. If a match is found, it replaces the redundant data with a pointer to the data already stored under that hash. If no match is found, the new hash is added to the look-up table and the data is stored. But using a look-up table to identify duplicate hashes can put a significant strain on performance, and the approach may require several weeks to reach optimal de-duplication efficiency.
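
A minimal sketch of the hash-based method, using SHA-256 as the hashing algorithm and an in-memory dictionary as the look-up table (both are illustrative choices; actual products vary):

    import hashlib

    CHUNK_SIZE = 8192        # illustrative fixed chunk size
    lookup_table = {}        # hash -> stored chunk; this is the look-up table

    def deduplicate(stream: bytes):
        """Return the records that would be written: unique chunks plus pointers."""
        records = []
        for i in range(0, len(stream), CHUNK_SIZE):
            chunk = stream[i:i + CHUNK_SIZE]
            digest = hashlib.sha256(chunk).hexdigest()
            if digest in lookup_table:
                records.append(("pointer", digest))     # redundant data -> pointer
            else:
                lookup_table[digest] = chunk            # no match: add hash, keep data
                records.append(("chunk", digest))
        return records

Every chunk costs a probe of that table, which is where the performance strain mentioned above comes from once the table grows large.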

The byte-level approach, which can be more efficient, compares items on an object-by-object level; for example, comparing Word documents with other Word documents. Some technologies perform this comparison using a pattern-matching algorithm. Others use intelligent processes that analyze the backup files and the reference data set to identify files that are likely to be redundant before comparing those files in more detail. By focusing on suspected duplicates, such technology can de-duplicate more thoroughly and avoid processing new files unnecessarily.
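
One way to picture that two-pass idea is a cheap metadata check that flags likely duplicates before a detailed byte-level comparison is run; the specific metadata used here (file name and size) is an illustrative assumption:

    import os
    from filecmp import cmp

    def likely_duplicates(backup_file, reference_files):
        """First pass: cheap metadata checks to flag files worth a closer look."""
        name, size = os.path.basename(backup_file), os.path.getsize(backup_file)
        return [ref for ref in reference_files
                if os.path.basename(ref) == name and os.path.getsize(ref) == size]

    def find_duplicate(backup_file, reference_files):
        """Second pass: detailed comparison, run only against suspected duplicates."""
        for candidate in likely_duplicates(backup_file, reference_files):
            if cmp(backup_file, candidate, shallow=False):   # byte-for-byte check
                return candidate        # store a reference instead of the data
        return None                     # genuinely new file: store it in full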

Some technologies perform de-duplication as the data is being backed up. This inline de-duplication slows backup performance and adds complexity to the backup process. Other technologies perform out-of-band de-duplication: they back up the data first at full wire speed and perform the de-duplication afterward.
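
The difference between the two models is mostly a question of where the de-duplication work sits relative to the backup window; a sketch, with a trivial stand-in for the actual de-duplication step:

    import hashlib

    seen_hashes = set()      # trivial stand-in for a real de-duplication index

    def deduplicate_chunk(chunk):
        seen_hashes.add(hashlib.sha256(chunk).hexdigest())

    def inline_backup(chunks):
        # Inline: every chunk pays the de-duplication cost inside the backup window.
        for chunk in chunks:
            deduplicate_chunk(chunk)

    def out_of_band_backup(chunks):
        # Out-of-band: land the data at full wire speed first, then de-duplicate
        # it in a separate pass after the backup itself has completed.
        landing_area = list(chunks)      # plain writes, no dedup in the data path
        for chunk in landing_area:
            deduplicate_chunk(chunk)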

Byte-level de-duplication can provide data-reduction ratios of up to 25:1. Combined with compression, a typical VTL feature, that lets enterprises store as much as 50 times more data in the same footprint without adding capacity. This dramatic reduction enables companies to keep more data online, and keep it online longer, leading to labor savings and the other advantages of keeping data on disk.
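
The 50x figure is just the two ratios multiplied together; a quick check, assuming 2:1 compression on top of the 25:1 de-duplication (the compression ratio is an assumption, since the article does not state one):

    dedup_ratio = 25          # up to 25:1 byte-level de-duplication
    compression_ratio = 2     # assumed 2:1 compression on the deduplicated data

    effective_ratio = dedup_ratio * compression_ratio      # 50:1
    physical_disk_tb = 10
    print(f"Effective reduction: {effective_ratio}:1")
    print(f"{physical_disk_tb}TB of disk holds ~{physical_disk_tb * effective_ratio}TB of backup data")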
