Storage is a fast-evolving industry. Groundbreaking hardware technologies quickly become commoditized, which is a challenge for vendors but a great benefit to customers. Today’s shiny new array is soon matched by a similarly capable JBOD (Just a Bunch of Disks) product that may not have equally robust vendor support, but costs the enterprise far less than a brand-name system. This commoditization extends to flash as well. While flash is still growing in enterprise adoption, Gartner already sees JBOF (Just a Bunch of Flash) products on the horizon in this segment. Cloud storage is on the rise in tandem with flash, and smart data management software can help enterprises overcome the complexity of cloud adoption and easily integrate JBOC (Just a Bunch of Cloud) with their existing architectures.
Gartner calls the cloud “one of the most disruptive forces of IT spending since the early days of the digital age,” noting that more than $1 trillion in IT spending will be directly or indirectly affected by the shift to cloud during the next five years. As part of the infrastructure shift ahead, Gartner’s 2017 Strategic Roadmap for Storage predicts that “by 2021, more than 80% of enterprise unstructured data will be stored in scale-out file system and object storage systems in enterprise and cloud data centers, an increase from 30% today.”
Indeed, IT leaders are already hard at work rolling out cloud adoption strategies. Many of the enterprises we speak with are building private clouds internally to mitigate risk before migrating to public cloud services from providers like AWS, Google and Microsoft Azure. It’s a strategy that makes sense for many IT teams, and it enables even greater agility as enterprises expand by integrating both private and public clouds. This shift aligns with what 451 Research sees ahead: the firm reported that one-third of enterprise data was stored off-premises in 2016, and expects enterprises to continue reducing their ownership of data centers as we approach 2020.
Software makes it simple: scale one or multiple clouds
One of the challenges with cloud adoption starts with simply knowing what data can be moved from primary storage to a slower, lower-cost cloud resource. Unless they plan to run the entire application in the cloud, IT certainly does not want to move active data that applications demand and that is essential to current business operations. However, most storage systems offer no insight into whether data has been recently active or has gone cold, because they lack metadata intelligence. A metadata engine is the first step in achieving in-depth visibility of file activity, including a file’s size, location, when it was last accessed or written, by whom, and so on.
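To make the idea concrete, here is a minimal Python sketch that flags cold files using nothing more than ordinary filesystem metadata (last access time and size). It is an illustration only, not a metadata engine: the /mnt/primary path and the 90-day threshold are placeholders, and access times may not be reliably tracked on every filesystem.

```python
import time
from pathlib import Path

COLD_THRESHOLD_DAYS = 90  # hypothetical "unopened for a quarter" cutoff


def find_cold_files(root: str, threshold_days: int = COLD_THRESHOLD_DAYS):
    """Walk a directory tree and flag files whose last access time is older
    than the threshold -- candidates for moving to a cloud tier."""
    cutoff = time.time() - threshold_days * 86400
    cold = []
    for path in Path(root).rglob("*"):
        if path.is_file():
            st = path.stat()
            if st.st_atime < cutoff:  # last access time (may be approximate)
                cold.append((str(path), st.st_size, st.st_atime))
    return cold


if __name__ == "__main__":
    for name, size, atime in find_cold_files("/mnt/primary"):
        print(f"{name}\t{size} bytes\tlast accessed {time.ctime(atime)}")
```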
Once IT knows what data is really cold and thus a candidate for moving to the cloud, it’s time for a migration plan. Traditionally, migrations require downtime and are usually scheduled outside of business hours. Because many businesses operate 24/7 around the globe, this can be an extremely disruptive event. A metadata engine simplifies this process by virtualizing data into a global namespace that provides applications with simultaneous access to all storage integrated into the namespace. IT can then set objectives to move data to the cloud automatically as soon as it crosses an objective’s threshold, such as being unopened for a quarter, a year, or whatever time frame makes the most sense for the business. If the data becomes active again in the future, and objectives permit, a metadata engine can move it from the cloud back to a primary store that meets the application’s needs. With these capabilities, manual migrations become obsolete, as data can move automatically to ensure objectives are continually met throughout its life cycle.
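The sketch below illustrates what such objectives might look like as a simple placement policy. The objective names, tier names and idle thresholds are hypothetical and chosen only for this example; a real metadata engine would express and enforce objectives through its own interface.

```python
from dataclasses import dataclass


@dataclass
class Objective:
    """A hypothetical placement objective: data idle longer than
    max_idle_days should live on the named target tier."""
    name: str
    max_idle_days: int
    target_tier: str


# Illustrative objectives, ordered from most to least active.
OBJECTIVES = [
    Objective("keep-hot-local", max_idle_days=0, target_tier="primary-flash"),
    Objective("archive-cold", max_idle_days=90, target_tier="cloud-object"),
]


def choose_tier(idle_days: int) -> str:
    """Pick the tier whose idle threshold the file has crossed.
    Active data stays on primary storage; data idle past 90 days tiers to
    the cloud, and would move back if it crossed the threshold again."""
    placement = OBJECTIVES[0].target_tier
    for obj in OBJECTIVES:
        if idle_days >= obj.max_idle_days:
            placement = obj.target_tier
    return placement


print(choose_tier(5))    # -> primary-flash
print(choose_tier(200))  # -> cloud-object
```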
From one to many
Integrating different types of storage into a global namespace is key to easily adopting a cloud-to-cloud or multi-cloud strategy. As cloud adoption grows, it is likely that more enterprises will be willing to pay for premium cloud performance while still realizing savings on both capital and operational expenditures. The result will be that enterprises have silos of data in the cloud, just as the average enterprise today has silos of data stored on flash appliances, NAS systems and SANs. Silos create complexity and isolate data, which is an increasing problem for enterprises that want to use big data applications and data mining to gain additional intelligence from their data.
With a global namespace, moving data from on-premises storage and between clouds is as simple as integrating the new resource into the namespace and identifying its attributes, such as performance, latency, cost and protection. Once this is complete, data that aligns with the features provided by the new tier of storage can automatically load balance to the new resource, with no impact to applications. Data can then easily tier from on-premises to the cloud, cloud to cloud, or cloud back to on-premises. With a data management system that manages data down to the file level, enterprises can optimize cost of ownership by retrieving and re-hydrating only the requested files (rather than an entire LUN or directory).
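As a rough illustration, the following sketch shows how storage resources described by a few attributes (latency, cost, protection) might be registered in a namespace and selected for file placement. The tier names, attribute values and selection rule are assumptions made for this example, not any vendor’s API; a real system would weigh many more attributes.

```python
from dataclasses import dataclass


@dataclass
class StorageTier:
    """Attributes describing a resource added to the namespace.
    Field names and values are illustrative, not a real interface."""
    name: str
    latency_ms: float   # typical access latency
    cost_per_gb: float  # monthly cost
    protected: bool     # replicated or erasure-coded


# A hypothetical namespace: on-prem flash, on-prem NAS, and a cloud object store.
NAMESPACE = [
    StorageTier("flash-array", latency_ms=0.5, cost_per_gb=0.30, protected=True),
    StorageTier("nas-filer", latency_ms=5.0, cost_per_gb=0.10, protected=True),
    StorageTier("cloud-object", latency_ms=50.0, cost_per_gb=0.02, protected=True),
]


def place_file(needs_low_latency: bool) -> StorageTier:
    """Pick the cheapest protected tier that still meets the file's latency need."""
    candidates = [
        t for t in NAMESPACE
        if t.protected and (not needs_low_latency or t.latency_ms <= 1.0)
    ]
    return min(candidates, key=lambda t: t.cost_per_gb)


print(place_file(needs_low_latency=True).name)   # -> flash-array
print(place_file(needs_low_latency=False).name)  # -> cloud-object
```

Adding a new cloud to the namespace in this model is just another entry in the list; files whose needs match its attributes can then rebalance onto it without application changes.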
Storage grows more complex every day, but the cloud is poised to simplify many storage challenges and reduce the cost of keeping less frequently used data. It gives enterprises the opportunity to focus now on data management rather than storage management, and smart software can make it simple to transition to a much more intelligent data center, one that leverages data to meet business needs by optimizing the use of all available storage resources, from the enterprise into the cloud.