Whether or not you work in the IT department, you have likely experienced the pain of migrating from one system to another. When you buy a new laptop or a new phone, you face a choice: back up and replicate your old data to the new system, or start from scratch without the files you might need on your new device.
Now imagine this problem at enterprise scale. When IT has to add, upgrade, or replace a storage system, moving terabytes of data is a daunting task that requires planning and downtime. Just as with our smartphones, the old system likely still has some value, but because data can't move easily from one system to another, the equipment left behind often lingers as a backup to the backup copy.
Solving this problem in the enterprise starts with making applications aware of different storage capabilities, such as the high-, medium-, and low-performance (and correspondingly priced) tiers found in the typical storage ecosystem at many companies today. Then those varying storage resources must be made simultaneously available to applications. This can be done by virtualizing data across a global namespace that provides access to different storage systems, whether they are shared, flash, or even cloud storage resources. In other words, an application sees a logical version of the data's filename, so the data itself can be moved from one storage type to another without reconfiguring or interrupting the application.
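A minimal sketch of that idea in Python, assuming a hypothetical GlobalNamespace class (the class, storage labels, and paths are illustrative, not any particular vendor's API): applications hold a stable logical path while a mapping layer tracks where the bytes actually live.

```python
# Applications address data by a stable logical path; a mapping layer
# tracks the physical location. All names here are hypothetical.

class GlobalNamespace:
    def __init__(self):
        # logical path -> physical location (storage system + internal path)
        self._mapping = {}

    def bind(self, logical_path, physical_location):
        self._mapping[logical_path] = physical_location

    def resolve(self, logical_path):
        """What the application sees never changes; only this lookup does."""
        return self._mapping[logical_path]

    def migrate(self, logical_path, new_location):
        # Data is copied to new storage, then the mapping is updated.
        # The application keeps using the same logical path throughout.
        self._mapping[logical_path] = new_location


ns = GlobalNamespace()
ns.bind("/projects/q3-report.dat", "flash-array-1:/vol7/q3-report.dat")
print(ns.resolve("/projects/q3-report.dat"))   # flash tier

# Later, IT retires flash-array-1; the application is unaffected.
ns.migrate("/projects/q3-report.dat", "s3://cold-archive/q3-report.dat")
print(ns.resolve("/projects/q3-report.dat"))   # cloud tier, same logical name
```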
Metadata (the information about data) and management software can add even more power through automation, monitoring how applications experience storage and aligning data to the right resource to meet changing demands across performance, price, and protection. This makes it possible to move past manual storage migrations to automated data migration, with the ability to tier data as business needs evolve. Rather than being a one-off event, migrations then take place automatically, whenever data policies determine that a different storage resource would be the best asset for the job at hand. New storage can be added in minutes, and admins can trigger data to move off old storage with a simple software policy change.
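What such a policy might look like in practice: the sketch below, with made-up rule names and thresholds, maps observable file metadata to a target tier, with the first matching rule winning. It is an illustration of the concept, not any product's policy language.

```python
from datetime import datetime, timedelta

# Hypothetical tiering policy: each rule pairs a condition on a file's
# metadata with a target tier. Tier names and thresholds are illustrative.

POLICIES = [
    # (predicate on file metadata, target tier)
    (lambda md: datetime.now() - md["last_access"] > timedelta(days=90), "cloud"),
    (lambda md: md["read_latency_sensitive"], "flash"),
]
DEFAULT_TIER = "shared-nas"

def choose_tier(file_metadata):
    """Return the first tier whose rule matches, else the default."""
    for predicate, tier in POLICIES:
        if predicate(file_metadata):
            return tier
    return DEFAULT_TIER

md = {"last_access": datetime.now() - timedelta(days=200),
      "read_latency_sensitive": False}
print(choose_tier(md))  # -> "cloud": stale data is tiered off automatically
```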
Great gains: automatically tier data across different storage systems
At enterprise scale, data tiering delivers big savings. Because conventional storage migrations are so painful, IT teams are often forced to purchase capacity and performance to meet application demand projected years in advance, a practice known as overprovisioning. Studies have found that about 75 percent of stored data is typically inactive, or cold, which means IT is overspending significantly on overprovisioned storage. Storage technologies also change fast, so purchasing years in advance can leave an enterprise behind competitors who add new resources on demand. Overprovisioning makes sense when migrations are painful, but the waste becomes a weakness once software makes it easy to move data from tier to tier.
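To make the 75 percent figure concrete, here is a back-of-the-envelope comparison; the capacity and per-terabyte prices are placeholder assumptions, not quotes.

```python
# Illustrative cost comparison: keep everything on a performance tier
# versus tiering the inactive 75 percent to cheaper storage.
total_tb = 1000                  # provisioned capacity, TB (assumed)
cold_tb = int(total_tb * 0.75)   # the roughly 75 percent that sits inactive
hot_tb = total_tb - cold_tb
flash_cost_per_tb = 500          # $/TB/year on a performance tier (assumed)
cloud_cost_per_tb = 120          # $/TB/year on an archive tier (assumed)

all_flash = total_tb * flash_cost_per_tb
tiered = hot_tb * flash_cost_per_tb + cold_tb * cloud_cost_per_tb
print(f"all performance tier: ${all_flash:,}/year")   # $500,000/year
print(f"tiered by activity:   ${tiered:,}/year")      # $215,000/year
```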
A global namespace also makes it simple to add the cloud as another storage tier and, even more importantly, to move less frequently used, or cold, data to the cloud and back again as needed. Despite the hard work of cloud vendors to make their solutions easy to adopt, the big challenge for most enterprises is that they do not know which data is inactive and could move to the cloud, nor how to move that data without impacting other business operations.
Metadata — details such as when a file was last opened, which application is using it, the file size, and so on — is the key to this insight. With this information, a metadata engine can automatically move files that have been inactive to the cloud tier, freeing up capacity for active data that needs higher performance. Providing faster storage performance to an application increases computational throughput so a company can support more users, more analytics and greater business opportunities. A metadata engine can provide live data mobility to allow migrations to occur while files are open and data is actively being accessed. Data can then move at any time without taking systems down, maximizing agility without impacting business.
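As an illustration of the kind of scan a metadata engine automates, the sketch below walks a directory tree and flags files whose last-access time exceeds a threshold. The path and the commented-out move_to_cloud() helper are hypothetical, and a real engine would consult its own metadata catalog rather than walking the filesystem.

```python
import os
import time

COLD_AFTER_SECONDS = 90 * 24 * 3600  # treat 90 days without access as cold

def find_cold_files(root):
    """Yield files under root whose last access is older than the threshold."""
    now = time.time()
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                last_access = os.stat(path).st_atime
            except OSError:
                continue  # file vanished or is unreadable; skip it
            if now - last_access > COLD_AFTER_SECONDS:
                yield path

for cold_path in find_cold_files("/mnt/shared"):   # hypothetical mount point
    print("candidate for cloud tier:", cold_path)
    # move_to_cloud(cold_path)  # hypothetical migration call
```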
The ability to tier data also enables IT to ensure resources are perpetually optimized. If a noisy neighbor impacts another application’s service levels, a metadata engine can intelligently redistribute data across storage, transparently to applications. This means IT can finally say goodbye to lost nights, weekends and holidays performing emergency migration fire drills.
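A toy version of that check, with illustrative names and numbers: compare each system's observed latency against its service-level objective and nominate its busiest files for redistribution.

```python
# Hypothetical noisy-neighbor check. SLOs, observations, and file lists
# are made up for illustration.

SLO_MS = {"flash-array-1": 1.0, "shared-nas": 10.0}   # latency objectives
observed_ms = {"flash-array-1": 4.2, "shared-nas": 6.0}
busiest = {"flash-array-1": ["/projects/model.ckpt", "/projects/scratch.db"]}

for system, latency in observed_ms.items():
    if latency > SLO_MS[system]:
        for path in busiest.get(system, []):
            # A metadata engine would remap each logical path to a system
            # with headroom; applications keep the same file names.
            print(f"{system} over SLO ({latency} ms): redistribute {path}")
```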
At every company, IT teams are watching closely how quickly they will move to the cloud, and are cautious about making big purchases of traditional storage that create new silos and may prove a short-term investment. With a global namespace and a metadata engine that delivers live data mobility, IT gains the ability to automate data migration across existing and new storage. This mobility helps enterprise IT move past the struggle of storage migrations and instead automatically align data to the storage that best meets changing business needs.