Everyone agrees that backups should be sent off-site, but not everyone agrees on how that should be accomplished. The method you choose will affect your recovery-time objective (RTO), recovery-point objective (RPO), risk level and cost – so the decision matters.
Factors affecting backup RTO
How you get your data off-site affects several important things. First, it determines your RTO – how long it takes to restore data that gets lost.

For example, some people use a common carrier such as FedEx to ship their tapes somewhere far away to keep their data out of harm's way. They are worried about a natural disaster large enough to take out both their organization's facilities and those of any nearby off-site storage company. While this may make sense from a risk-avoidance perspective, it also guarantees a very long RTO if the only copy of your data is a FedEx shipment away. Storing data much closer would give you a much tighter RTO.
Factors affecting backup RPO
Your off-site method also affects your RPO – the gap between your last backup and the incident that causes the loss of data. If you ship tapes with Iron Mountain and the truck comes only once a day, the best RPO you can achieve is 24 to 48 hours; worst case, it could be much longer. Make sure to take RPO into account when considering your off-site method.
Assessing backup risk
In addition to affecting your RTO and RPO, your method affects your risk level. Storing a copy of your data in a hot site immediately next door might give you a great RPO, but a single large disaster could take out both locations. There were companies that ceased to exist on 9/11 because their hot site was in the other tower.
The farther away your data is, the lower your risk. The closer it is, the better your RPO options – up to and including synchronous replication.
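To make the RPO arithmetic concrete, here is a back-of-the-envelope sketch (the function and parameter names are illustrative, not from any backup product): worst-case data loss is roughly the backup interval plus the time the newest backup can sit on-site waiting to be shipped.

```python
def worst_case_rpo_hours(backup_interval_h: float, offsite_lag_h: float) -> float:
    """Rough worst-case data loss in hours: everything since the last backup,
    plus the newest backup if it was still on-site when disaster struck."""
    return backup_interval_h + offsite_lag_h

# Nightly backups handed to a once-a-day courier pickup:
print(worst_case_rpo_hours(24, 24))   # 48 -- the upper end of the 24-to-48-hour range
# Synchronous replication to a nearby site:
print(worst_case_rpo_hours(0, 0))     # 0  -- but both sites may share disaster risk
```

The same two numbers drive the trade-off in the sections that follow: a longer shipping lag widens the loss window, while driving both terms toward zero requires keeping the copy close, which raises the shared-disaster risk.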
You will need to decide which risks you are going to mitigate and which you are willing to accept. The method you choose to get your data off-site should then be determined by those risks.
Weighing the cost of backup
Your off-site method also affects cost. Sending your only copy of last night's backup via FedEx to a subsidiary might be the least expensive option; synchronously updating a live copy nearby might be the most expensive. Each method comes with an associated RTO, RPO, risk level and cost, so your job is to weigh all of these factors together and determine which method is most appropriate for you.
Getting backup off-site
If a company is shipping tape off-site, there are two options for what to send: the original or a copy. Many companies ship the original because it is the easiest and least expensive approach, but it has several drawbacks. If your only backup is off-site, it is not available on-site for a recovery, which dictates a longer RTO.

Some companies deal with this by holding backups on-site for a week and shipping last week's backups off-site. That gives them a good RTO for operational recoveries, but a lousy RPO for disasters. This is why most experts advise against shipping the original backup off-site.
The alternative is to ship a copy off-site and leave the original on-site. In most data centers today, this is done by first backing up to a deduplicated disk system and then copying to tape. If tape is your only option for disaster recovery, this is the optimal way to use it.
Another very common way to get backups off-site is to use a deduplicated disk system that can replicate its backups off-site.
Besides reducing the amount of disk needed to store backups, deduplication also reduces the amount of bandwidth needed to replicate last night's backup.
The advantage of this method is that you have disk-based backup both on-site and off-site without ever using a man in a van. The disadvantage is cost: you have to buy two dedupe systems and pay for a lot of bandwidth. This is why many companies opt for disk-to-disk-to-tape, as discussed above. But if you can afford it, a deduplicated disk system that replicates off-site gives you both on-site and off-site backups in a completely automated fashion.
Target vs. source deduplication
With target deduplication, the deduplication is typically done by a third-party appliance near the storage destination, not by the backup software. Source deduplication is typically done by the backup software and is performed at the very beginning of the backup process, at the client being backed up – the source. Backups can then be sent directly off-site by the backup client.
The advantage of source deduplication is that it does not require an on-site appliance. The disadvantage of not having an on-site appliance is the same as shipping your original backup tape directly to your off-site storage vendor: longer RTOs. This is why some vendors capable of source deduplication offer the option of also storing a copy of the data on-site. Companies choosing this method typically use an appliance for larger data centers and opt out of one for smaller offices and laptops.
Just get it off-site
Any of these methods is better than none of them. Many years ago, a man lost all his data because he stored his backup tapes in a box on top of his server. The server caught fire and burned up the tapes with it. Based on this event, he gave a lecture arguing that tapes were bad and all backups should be done to disk.
He was ahead of his time, since practical disk-based backup was still another 10 years away.
But he missed the point of the lesson. It was not the tapes that were bad; storing your only copy of backups in the same place as the thing you are backing up is simply a recipe for disaster. If you are not currently sending your backups off-site, look into it as soon as possible. It doesn't matter how you get them there – just get them there if you can.
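For readers curious why deduplication cuts replication bandwidth so dramatically, here is a miniature sketch of the source-deduplication idea described earlier. The names are hypothetical, and it uses fixed-size chunking for simplicity where real products typically use variable-size, content-defined chunking: the client hashes each chunk and "ships" only chunks the off-site store has not already seen, so the second night's backup costs a tiny fraction of the first.

```python
import hashlib
import os

CHUNK = 4096  # fixed-size chunks for simplicity

def dedup_send(data: bytes, remote_hashes: set) -> int:
    """Pretend to replicate `data` off-site; return the bytes actually sent.
    Chunks whose SHA-256 digest the remote already holds are skipped."""
    sent = 0
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in remote_hashes:   # unseen chunk: ship the bytes
            remote_hashes.add(digest)
            sent += len(chunk)
        # seen chunk: only a tiny hash reference would cross the wire
    return sent

remote = set()
night1 = os.urandom(1024 * 1024)              # first full backup: all chunks new
night2 = night1[:-CHUNK] + os.urandom(CHUNK)  # one chunk changed overnight
print(dedup_send(night1, remote))   # 1048576 -- everything crosses the wire
print(dedup_send(night2, remote))   # 4096    -- only the changed chunk
```

This is also why the two-dedupe-system approach needs far less bandwidth than its first full backup suggests: after the initial seed, only changed chunks replicate each night.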