
Data Protection Manager is a decent step for Microsoft

By Tom Henderson, Network World
October 17, 2005 12:04 AM ET

Network World - Microsoft has made its first attempt at tending to enterprisewide data protection. In our Clear Choice Test of the newly released Data Protection Manager 1.0, we found that it does a respectable job at data backup/restore, archiving and management, but it's strictly limited to Microsoft-only machines and does not yet support Microsoft's full range of applications.



DPM requires a highly patched Windows 2003 server edition as its base platform. This server becomes the nexus for most storage-management operations. Each compatible component of a Microsoft-based network (meaning one that is a member of an Active Directory domain and is running the Remote Registry service) can be readily managed.

Until Microsoft releases a planned service pack, DPM cannot be used on 64-bit hardware - even hardware that can run in x86 emulation mode - either as a protected device or as a server. DPM also doesn't support numerous Windows client types, including older versions of Windows, Windows CE devices, Windows-based mobile phones and Windows Media Center.

Users of these Windows platforms must rely on their own primitive back-up facilities to store data onto a DPM-compatible platform before that data can be peripherally included in DPM's services. Windows platform compatibility is surprisingly immature for a product that would be purchased to address the critical issues of archiving and availability.

A more egregious omission is that DPM cannot fully protect Microsoft Exchange. While other Microsoft and third-party methods exist to increase Exchange availability, DPM can't perform comparatively simple tasks, such as taking a snapshot of Exchange, without shutting down the application.

The primary DPM server can't be an Active Directory domain controller; it must be a domain member. In turn, storage-area network components must have direct, fast links (nothing less than T-1 speeds) to this server.

Through experimentation, we found it best to build a DPM server from scratch, which takes about two hours, compared with four hours to take an existing Active Directory controller server and run it through the processes needed.

Immediately after media installation, it's time to allocate storage pools. These pools can be optical media, locally attached storage or a virtualized infrastructure, such as iSCSI-connected drives. DPM found all of the storage components that we used in our test without issue.

How we did it
We deployed DPM on two platforms: a NetFrame 1600 server with dual 3.06GHz Intel Xeon CPUs, 4G bytes of dynamic RAM, an Adaptec SCSI controller, two Broadcom-based Gigabit Ethernet ports and Windows 2003 Enterprise Edition, fully patched; and an HP DL140 with dual 3.06GHz Intel Xeon CPUs, 1G byte of DRAM, a Smart Array disk controller, two Broadcom-based Gigabit Ethernet ports and Windows 2003 Enterprise Edition.

We tested the platforms in conjunction with our local storage-area network (SAN) - two just-a-bunch-of-disks (JBOD) fiber disk arrays connected through a Brocade Silkworm 2G byte/sec Fibre Channel 16-port switch, with a JNI 2FC controller in the server under test - to assess storage pool management. DPM's storage pool management finds any visible locally attached storage, and it therefore found and was able to use various drive combinations in the SAN JBOD array.

We performed tests to assess maximum throughput by restoring four snapshots simultaneously through two Gigabit Ethernet ports to their destinations. Using the NetFrame server and an internal single-drive Seagate Barracuda Ultra320 SCSI drive, we measured 302M byte/sec across both Gigabit Ethernet ports at maximum read speed, and a maximum of 412M byte/sec writing, as measured by Windows PerfMon and NetMon.


The software agent is pushed to each node that will be included in the DPM archiving scheme, after which the node is rebooted. Clients or servers found in the Active Directory service can be included in scheduled full, incremental, snapshot or other back-up plans, depending on organizational needs. We tested each of these methods, both as incremental and scheduled archives, along with a number of different types of restorations. Snapshots - the originally backed-up full data set with added periodic backups - are managed most thoroughly. This snapshot method proved better than traditional file-by-file, folder-by-folder or application-by-application back-up processes, because it's easier to track, requires no user intervention, and makes the state of a server or client easier to restore.
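The baseline-plus-periodic-backup model described above can be sketched in a few lines of Python. This is a hypothetical illustration of the general technique, not DPM's actual on-disk format: a full baseline is captured once, each scheduled incremental records only what changed, and a restore replays the increments over the baseline.

```python
def restore(baseline: dict, increments: list[dict]) -> dict:
    """Rebuild the latest state by replaying incremental change
    sets, oldest first, over the full baseline capture."""
    state = dict(baseline)
    for delta in increments:
        state.update(delta)
    return state

# Full baseline taken once, then two scheduled incrementals,
# each carrying only the files that changed in that interval.
baseline = {"report.doc": "v1", "budget.xls": "v1"}
increments = [{"report.doc": "v2"}, {"budget.xls": "v3"}]

print(restore(baseline, increments))
# {'report.doc': 'v2', 'budget.xls': 'v3'}
```

Because the server holds the baseline plus every delta, any point in the schedule can be reconstructed without touching the client, which is what makes this easier to track than per-file copy jobs.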

Multiple backups, snapshot storage cycles and restore jobs can run concurrently with no detected DPM platform instability. DPM takes advantage of both hardware speed and dynamic RAM (especially in restore jobs to remote clients) and can become extremely busy. Very active organizations might consider having several DPM-based servers online if their storage activities are strenuous or frequent, as DPM can become hardware-limited in the sheer amount of work it can do. Several vendors have announced appliances based on DPM, and these might make a convenient fit for some organizations.

SQL Server 2000 aptly keeps track of what's going on and handles multiple concurrent requests for a mixture of backups and restores without errors. Performance was limited only by the number and link speed of the network cards and the speed of the SCSI drives we used. We tested DPM in two profiles, using a direct connection to emulate a local network back-up server, and via a local VPN to emulate a branch network.

Our CPU utilization tests showed that DPM can draw at very near the wire speed of the host bus adapter (SCSI or Fibre Channel) and the drives we used. When files change, only the deltas are backed up. For example, when a 2.2G byte file changes by 100M bytes, about 130M bytes of data is sent to the server in the form of a snapshot back-up job. This iterative back-up method tends to make backups faster, but might require heavy rethreading of files in the case of large file loss.
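A block-level delta scheme of the kind those numbers suggest can be sketched as follows. This is a simplified illustration, not DPM's internals; the 64K byte block size and SHA-1 hashing are assumptions. The file is split into fixed-size blocks, each block is hashed, and only blocks whose hashes differ from the previous capture are shipped - which is why a 100M byte change in a 2.2G byte file moves roughly 130M bytes (the changed blocks plus tracking overhead) rather than the whole file.

```python
import hashlib

BLOCK_SIZE = 64 * 1024  # 64K byte blocks (an assumed size for illustration)

def block_hashes(data: bytes) -> list[str]:
    """Hash each fixed-size block so changed regions can be found."""
    return [hashlib.sha1(data[i:i + BLOCK_SIZE]).hexdigest()
            for i in range(0, len(data), BLOCK_SIZE)]

def changed_blocks(old: bytes, new: bytes) -> list[int]:
    """Return indices of blocks that differ between two versions."""
    old_h, new_h = block_hashes(old), block_hashes(new)
    return [i for i, h in enumerate(new_h)
            if i >= len(old_h) or h != old_h[i]]

# A 10-block file in which only block 3 was rewritten: the back-up
# job carries that one block (plus metadata), not the whole file.
old = b"a" * (10 * BLOCK_SIZE)
new = old[:3 * BLOCK_SIZE] + b"b" * BLOCK_SIZE + old[4 * BLOCK_SIZE:]
print(changed_blocks(old, new))  # [3]
```

The "rethreading" cost the review mentions follows from this design: restoring a heavily changed file means reassembling it from many small block generations rather than copying one contiguous image.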

We made several iterative backups of slight changes to the aforementioned 2.2G byte file, and were unable to detect delays in the reformation/rethreading of the file when we restored it via snapshot. Job traffic is hashed and encrypted when traveling from the agent machine to the DPM server, and was doubly encrypted when we used our VPN as the transport.
