Don’t keep squandering one of your greatest storage assets: metadata

Opinion
Oct 27, 2017 | 5 mins
Data Center

Metadata for every single file in your enterprise is sitting un- and underused, but it’s one of IT’s greatest assets.


Storage has long been one of the biggest line items on the IT budget. Rightly so, given that data is a valuable asset and the lifeblood of every business today. Critical applications consume data as quickly as they get it, and many companies are also using their data to find new insights that help them develop novel products and strategies.

Regardless of how hot a file is when it is created, in time, its use cools. As a business matures, more and more cool, cold and even frigid data piles up. With analytics now offering new insights on old data, however, no one wants to delete old files or send them to offline archival storage. That makes buying more and more capacity as certain as death and taxes.

The cloud is rising as a cost-effective fix for data sprawl, but the challenge remains that IT doesn’t know what data is truly inactive and a candidate for moving to a slower, cheaper store. Even then, actually moving the data without affecting the application is a whole other problem to tackle, and traditional migrations take planning, downtime, plenty of money, and time.

It’s nice to be noticed

The irony is that the information needed to identify data activity has been there all along, in the metadata, the data about your data. Metadata is created alongside a file, offering salient information about when it was last opened, by whom, its size, when it was last modified, and so on. This information is sitting latent with every single file in your enterprise, but until now, there was not a way to analyze it to better manage data across an enterprise storage ecosystem. It’s one of IT’s greatest assets, and it’s been ignored.
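
To see just how much is already there, here is a minimal sketch in Python that walks a directory tree and buckets files as hot or cold using nothing but the metadata the filesystem already keeps. The 180-day threshold and the /mnt/projects path are assumptions for illustration, and on volumes mounted with noatime the access time will not be trustworthy.

```python
import time
from pathlib import Path

# Age (in days) beyond which a file counts as "cold" -- an assumed
# threshold for illustration, not a value from the article.
COLD_AFTER_DAYS = 180

def classify_by_metadata(root: str):
    """Bucket files as hot or cold using only the metadata the
    filesystem already keeps (size, owner, timestamps)."""
    now = time.time()
    hot, cold = [], []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        st = path.stat()                      # atime, mtime, size, uid, ...
        idle_days = (now - st.st_atime) / 86400  # atime may be stale on noatime mounts
        record = (str(path), st.st_size, st.st_uid, idle_days)
        (cold if idle_days > COLD_AFTER_DAYS else hot).append(record)
    return hot, cold

if __name__ == "__main__":
    hot, cold = classify_by_metadata("/mnt/projects")   # hypothetical mount point
    print(f"{len(hot)} hot files, {len(cold)} cold files")
```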

A metadata engine gives enterprises the ability to finally put their idle metadata to use. For example, software can now scan through storage systems and surface the dormant information about which data is hot and which is cold. With data virtualization, it becomes possible to create a global namespace that makes different storage resources simultaneously available to applications. The metadata can then be used to create policies that automatically move data from a costly flash system to the cloud once it has been cold for six months or a year, and even bring it back automatically if the business needs it again.
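
As a rough illustration of what such a policy might look like, the following Python sketch encodes a single tiering rule. The tier names, the move_file hook and the six-month threshold are all placeholders, not the interface of any particular product.

```python
from dataclasses import dataclass

@dataclass
class Policy:
    """One tiering rule: demote data idle longer than `demote_after_days`
    from `fast_tier` to `slow_tier`, and promote it back on renewed access."""
    fast_tier: str
    slow_tier: str
    demote_after_days: int

def apply_policy(files, policy, move_file):
    """`files` yields (path, current_tier, idle_days) tuples; `move_file`
    stands in for whatever mechanism actually relocates the data without
    disturbing the application -- both are placeholders here."""
    for path, tier, idle_days in files:
        if tier == policy.fast_tier and idle_days > policy.demote_after_days:
            move_file(path, policy.slow_tier)   # cold: demote to cheaper storage
        elif tier == policy.slow_tier and idle_days < 1:
            move_file(path, policy.fast_tier)   # touched again: promote back

# Example rule: anything on flash left untouched for six months goes to cloud object storage.
rule = Policy(fast_tier="flash", slow_tier="cloud-object", demote_after_days=180)
sample = [("/mnt/projects/q1-report.pptx", "flash", 240),
          ("/mnt/projects/active-model.dat", "cloud-object", 0)]
apply_policy(sample, rule, lambda path, dest: print(f"move {path} -> {dest}"))
```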

Be a real American IT hero

Knowing what’s going on with your data is only half the battle. Once you have metadata insight into data activity, you need a way to align that activity with your various storage resources and act on it. Most large enterprises have numerous storage systems installed, and those systems offer diverse capabilities. Diversity is valuable, but the value of storage assets is greatly increased when they can be put to work serving data that needs their specific performance or price attributes.

Automated data mobility helps IT put painful storage migrations behind them for good. Instead of waiting for IT to plan a migration, schedule downtime and execute the move, data can move automatically to maintain performance or cost thresholds without any application impact. By creating a global namespace, it becomes possible for all storage resources to be simultaneously available to applications. With a policy engine, IT can set Service Level Objectives: how much performance your data requires and how much you want to spend to meet that requirement. Data profiling tools can even enable IT to model the cost impact of their policies, to fine-tune exactly how much they might save by automatically tiering data across their various on-premises storage resources and object or cloud storage.
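
To show the kind of modeling a data profiling tool can do, here is a back-of-the-envelope sketch. The per-gigabyte prices, tier names and data volumes are invented for illustration; substitute your own figures.

```python
# Assumed monthly list prices per GB, purely for illustration.
COST_PER_GB_MONTH = {"flash": 0.25, "nas": 0.05, "cloud-object": 0.01}

def monthly_cost(placement):
    """`placement` maps a tier name to the number of gigabytes it holds."""
    return sum(COST_PER_GB_MONTH[tier] * gb for tier, gb in placement.items())

before = {"flash": 500_000, "nas": 0, "cloud-object": 0}               # everything on flash
after  = {"flash": 100_000, "nas": 150_000, "cloud-object": 250_000}   # after a tiering policy

savings = monthly_cost(before) - monthly_cost(after)
print(f"Estimated monthly savings: ${savings:,.0f}")   # $90,000 with these assumed numbers
```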

Architecting for success at scale

Data mobility needs to be a two-way street: if a file gets hot again, it can move right back to a high-performance tier, minimizing fire drills and the number of performance complaints IT has to service. With global awareness of available resources, IT can be notified when capacity thresholds are reached, helping them stay far ahead of any unwelcome surprises.
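
A capacity check of that sort can be as simple as the following sketch, where the 85 percent warning threshold and the per-tier figures are assumptions for illustration.

```python
def check_capacity(tiers, warn_at=0.85):
    """Flag any tier whose utilization crosses the warning threshold so IT
    hears about it before applications do. `tiers` maps a tier name to
    (used_bytes, total_bytes); the 85% threshold is an assumption."""
    alerts = []
    for name, (used, total) in tiers.items():
        utilization = used / total
        if utilization >= warn_at:
            alerts.append(f"{name}: {utilization:.0%} full")
    return alerts

print(check_capacity({"flash": (8.6e12, 10e12), "cloud-object": (40e12, 500e12)}))
# -> ['flash: 86% full']
```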

Advanced data management solutions are architected to stay out of the data path and can move data across different storage types without requiring IT to make any changes to applications. Over time, some software systems can even begin to predict strategies to help IT optimize savings, bringing machine learning intelligence into data management.

Storage has long been the most complex and costly element of IT infrastructure. Today, data is too massive and too diverse, and its demands change too quickly, for it to be confined to a single storage silo any longer. Solving storage waste starts with putting your metadata to use and setting your data free from storage silos. IDC reports that the majority of enterprises are already looking into management automation tools, so try not to wait too long before evaluating which tool will help you modernize data management for your business.

Lance Smith

Primary Data CEO Lance Smith is a strategic industry visionary who has architected and executed growth strategies for disruptive technologies throughout his career. Prior to Primary Data, Lance served as Senior Vice President and General Manager of SanDisk Corporation’s IO Memory Solutions, following the SanDisk acquisition of Fusion-io in 2014. He served as Chief Operating Officer of Fusion-io from April 2010.

Lance has held senior executive positions at companies that transacted for billions of dollars, holds patents in microprocessor bus architectures, and received a Bachelor of Science degree in Electrical Engineering from Santa Clara University.

The opinions expressed in this blog are those of Lance L. Smith and do not necessarily represent those of IDG Communications, Inc., its parent, subsidiary or affiliated companies.