If you’re a storage admin, it might seem like there’s a new flash storage system being pitched at your inbox every week. Maybe a few times a week, in fact. Perhaps you’re also investigating the cloud, and whether your enterprise would want to go with a hybrid, private or public cloud implementation. Chances are you already have a lot of storage in your infrastructure from past purchases, and when you add it all up, you could be sitting on quite a diverse collection of resources—and those resources may be significantly underutilized today.
This diversity of storage types presents many options, and that creates a real challenge for admins, largely because those different resources could not be seamlessly connected until now. With all-flash arrays delivering ultra-fast performance, lower-cost systems offering bulk capacity, cloud storage holding cold (inactive) data, and numerous other shared resources, most enterprise IT teams already own the right mix to serve a wide variety of data demands. The challenge is knowing which data needs which resource, then continually realigning data to the right resource as its needs change over time.
For example, data related to a weekend sale is likely hot throughout the event and through the next week, when it undergoes business review or planning for the next sales promotion. But as the next sale approaches, that data cools, and if it is stored on high-value flash, it begins to take up space needed for the next round of promotions. That data will still be needed for sales and corporate reporting, and it has value for long-term big data analysis. IT can't delete it, but it would also prefer not to waste money keeping it on expensive flash for years to come.
This is where storage is ripe for some innovation. If we can separate the logical view of data from where it is actually stored, it becomes possible to abstract and connect different types of storage across a global namespace. Advanced data management capabilities can automatically place data on the right storage at the right time to meet and maintain service-level objectives. This finally makes efficient use of the capabilities inherent in your storage investments, making sure critical data gets high performance and availability, and colder data gets archived automatically. This has been called “the Holy Grail” of data management, but it has taken several new breakthroughs to finally make it possible.
Metadata’s (Linux) kernel of truth
Knowing anything about the data used in an enterprise starts with metadata (the data about the data), which records important details such as when a file was last opened, how often it has been accessed, who accessed it, its size, its location, and so on. IT can learn a lot from this rich resource. One of the key breakthroughs that make it possible to access and manage data using this information is the new Network File System version 4.2 (NFS v4.2) protocol specification.
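To make that concrete, here is a minimal Python sketch of the metadata a POSIX filesystem already keeps for every file. The dictionary keys are my own labels for illustration; the underlying `stat` fields (size, access time, modification time, owner, mode) are standard.

```python
import os
import stat
import tempfile
import time

def file_metadata(path):
    """Collect the basic metadata the filesystem keeps for one file."""
    st = os.stat(path)
    return {
        "size_bytes": st.st_size,                  # how big the file is
        "last_accessed": time.ctime(st.st_atime),  # when it was last opened
        "last_modified": time.ctime(st.st_mtime),  # when its contents changed
        "owner_uid": st.st_uid,                    # who owns it
        "permissions": stat.filemode(st.st_mode),  # e.g. '-rw-r--r--'
    }

# Demo on a throwaway file
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"weekend-sale figures")
    demo_path = f.name

print(file_metadata(demo_path))
os.remove(demo_path)
```

Data-management software aggregates exactly these per-file details, at scale, to decide which data is hot and which has gone cold.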
NFS v4.2 includes Parallel Network File System (pNFS) Flex File layout enhancements that allow clients to report statistics on how data is being used and on the performance delivered by the storage resources serving that data. These advanced features are already being implemented across the industry; the most recent release of Red Hat Enterprise Linux, 7.3, includes Flex Files support.
Not only does NFS v4.2 give you insight into data across different types of storage, but it also ensures that clients running an up-to-date Linux OS already have native support to access all storage resources in a single global namespace. This means IT does not have to download and install proprietary drivers or agents on thousands of clients; the support is already there, running natively in the OS. That simplicity matters at the scale of petabyte-class enterprises, with thousands of clients running 24/7.
Enterprises can use analytics software that mines these metadata insights and compares them against objectives or policies to align storage resources with data demands. For example, IT can detect cyclic events in data, such as payroll getting hot once or twice a month, and automatically move it to high-performance storage in anticipation of payroll work. It can then move the data back to lower-cost storage once payroll completes, freeing more expensive capacity for more active data.
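As a rough sketch of such a policy (the tier names and the 30-day threshold below are hypothetical, not taken from any particular product), a rule like "demote data untouched for a month" might reduce to comparing a file's last-access time against the policy window:

```python
import os
import time

# Hypothetical tier labels; a real system would map these to actual
# flash, disk, or cloud storage targets.
HOT_TIER = "flash"
COLD_TIER = "cloud-archive"
COLD_AFTER_DAYS = 30  # assumed policy threshold, tuned per workload

def choose_tier(path, now=None):
    """Pick a tier for a file based on how long ago it was last accessed."""
    now = time.time() if now is None else now
    idle_days = (now - os.stat(path).st_atime) / 86400.0
    return COLD_TIER if idle_days > COLD_AFTER_DAYS else HOT_TIER
```

A scheduler could evaluate this across the namespace nightly and move files whose tier choice has changed; the payroll example would simply add a calendar rule that promotes the relevant dataset ahead of each run.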
Another benefit of abstracting data is that enterprises can optimize the use of both existing and new storage. Rather than ripping and replacing existing storage, IT can use software to extract full value from what it already paid for, and easily adopt new resources, such as the cloud. Existing resources enjoy a longer service life, and the investment in storage for critical data is well spent serving the hottest data.
If you’re looking for a way to truly transform storage in your enterprise, start with harnessing the collective power sitting in your data center and give yourself an easy on-ramp to any innovation coming next.
This article is published as part of the IDG Contributor Network.