How to use Cloud Integrated Storage to support the distributed enterprise

The battle to deliver storage across the distributed enterprise is one of scale waged across two fronts: capacity and the number of locations where infrastructure is needed. Global organizations that must handle heavy workloads, with users operating on large data sets, stand to benefit from cloud-based storage technologies.

Cloud-based or Cloud Integrated Storage (CIS) shifts the actual data storage away from the hardware to the cloud where it can be accessed by a new class of storage controller or cloud gateway. These gateways are essentially local storage caches with built-in encryption and WAN optimization functions.
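A cloud gateway's core behaviors – chunking data, deduplicating and compressing it before upload, and serving reads from a local cache – can be sketched in a few lines. This is an illustrative model only: the class and method names are invented, the in-memory dictionary stands in for a real object store, and a production gateway would also encrypt each chunk before it leaves the premises.

```python
import hashlib
import zlib

CHUNK_SIZE = 4096  # fixed-size chunking; real gateways often use variable-size chunks


class CloudObjectStore:
    """Stand-in for a cloud object store (S3-style key/value blobs)."""

    def __init__(self):
        self._blobs = {}

    def put(self, key, blob):
        self._blobs[key] = blob

    def get(self, key):
        return self._blobs[key]

    def __contains__(self, key):
        return key in self._blobs

    def __len__(self):
        return len(self._blobs)


class CloudGateway:
    """Local cache in front of the cloud: dedupes and compresses chunks
    before upload, and serves reads from cache when possible."""

    def __init__(self, store):
        self.store = store
        self.cache = {}      # chunk hash -> raw chunk (the local working set)
        self.manifests = {}  # filename -> ordered list of chunk hashes

    def write(self, name, data):
        hashes = []
        for i in range(0, len(data), CHUNK_SIZE):
            chunk = data[i:i + CHUNK_SIZE]
            h = hashlib.sha256(chunk).hexdigest()
            self.cache[h] = chunk
            if h not in self.store:  # dedup: upload each unique chunk only once
                # A real gateway would also encrypt the chunk at this point.
                self.store.put(h, zlib.compress(chunk))
            hashes.append(h)
        self.manifests[name] = hashes

    def read(self, name):
        out = bytearray()
        for h in self.manifests[name]:
            if h in self.cache:          # cache hit: local performance
                out += self.cache[h]
            else:                        # cache miss: fetch from the cloud
                chunk = zlib.decompress(self.store.get(h))
                self.cache[h] = chunk
                out += chunk
        return bytes(out)
```

Because every chunk is addressed by its content hash, two files (or two offices) writing the same data upload it only once – this is the WAN-optimization half of the gateway's job.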

To the systems inside the data center, cloud gateways look just like traditional storage controllers and can offer the performance of a typical mid-range array. Gateways support the same file- and block-level protocols and integrate with the same directory and name services. Their aim is to fit neatly into the slot occupied by traditional storage controllers while bringing cloud-based scalability to the data center.

Unlike traditional storage controllers, the gateways have no capacity limits and require no ancillary backup or DR systems, making them ideal for rapid deployment to many locations. A data set or volume stored in the cloud can be accessed from multiple locations simultaneously, creating truly global file systems that multiple groups can use to collaborate. Ease of deployment across multiple locations, combined with the ability to absorb capacity as needed, makes CIS a compelling alternative to traditional hardware-based storage.

The end of the hardware universe

CIS shifts not only storage but also control from the hardware in your data center to the cloud. Each hardware unit becomes an access point to the system. The data is completely decoupled from the hardware: instead of residing on a particular device, data lives in the cloud. Hardware is no longer at the center of the storage universe.

This shift has already happened for use cases that do not require heavy workloads at the edge. In the rich ecosystem of Software as a Service (SaaS), companies such as Salesforce, Workday and Marketo aim to move the complete software and hardware stack off customer premises. Organizations should be ready to push everything that can be handled with SaaS outside of their infrastructure.

Outsourcing complex but ultimately common software functions – like sales management, human resources or marketing automation – allows the organization to focus resources on the specialized business functions that deliver the most value.

While data-intensive applications tend to be industry-specific, the infrastructure required to drive them – provisioning capacity, backup, and replication – is much the same across industries. This makes IT infrastructure an obvious next step in the march towards leaner data centers.

Hardware is still necessary to deliver cloud-based storage, but data is no longer bound to it. The hardware needs to be powerful enough to handle heavy workloads (IOPS) and have enough local storage to hold the working set – the data that is actually changing – but not the complete data set.

Because addressable capacity can exceed the physical storage of the hardware, administrators can provision capacity on demand to any location. The cloud serves as a scalable backend, turning the cloud gateway into what looks like a bottomless storage system. A data volume in the cloud has no practical limits on capacity, number of files or even number of snapshots. Administrators can dial up hundreds of additional terabytes of storage without changing the physical end of the system.
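That "bottomless" behavior comes from treating local storage as a bounded cache over an unbounded cloud backend. The sketch below – with invented names and a plain dictionary standing in for cloud object storage – shows one way an LRU eviction policy can keep the hot working set local while addressable capacity grows well beyond physical capacity.

```python
from collections import OrderedDict


class BottomlessVolume:
    """Local cache bounded at `local_capacity` chunks; the authoritative copy
    of everything lives in the cloud, so addressable capacity is not limited
    by the hardware."""

    def __init__(self, cloud, local_capacity):
        self.cloud = cloud             # dict stand-in for cloud object storage
        self.capacity = local_capacity
        self.cache = OrderedDict()     # LRU cache of hot chunks

    def write(self, key, chunk):
        self.cloud[key] = chunk        # authoritative copy goes to the cloud
        self._cache(key, chunk)

    def read(self, key):
        if key in self.cache:
            self.cache.move_to_end(key)  # mark as recently used
            return self.cache[key]
        chunk = self.cloud[key]          # cold data fetched on demand
        self._cache(key, chunk)
        return chunk

    def _cache(self, key, chunk):
        self.cache[key] = chunk
        self.cache.move_to_end(key)
        while len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least-recently-used chunk
```

However many chunks are written, the local footprint never exceeds `local_capacity` – the rest of the volume remains addressable but lives only in the cloud until read.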

Thanks to de-duplication and the sheer scale of the backend cloud storage, CIS can retain an unlimited number of snapshots without any impact on storage performance or utilization. This means that snapshots can finally eliminate the need for backup and, because access to the data can be re-established from different locations, including the cloud itself, CIS enables a simpler, more robust disaster recovery plan.
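Deduplication is what makes unlimited snapshot retention cheap: if each snapshot is just a manifest of content hashes, unchanged chunks are shared across every snapshot, so a new snapshot costs only the chunks that actually changed. A minimal, hypothetical sketch (not any vendor's actual implementation):

```python
import hashlib


class SnapshotStore:
    """Content-addressed store: snapshots share unchanged chunks, so
    retaining many snapshots costs little more than one full copy plus
    the changed data."""

    def __init__(self):
        self.chunks = {}     # content hash -> chunk bytes (deduplicated)
        self.snapshots = []  # each snapshot is just an ordered list of hashes

    def snapshot(self, volume_chunks):
        manifest = []
        for chunk in volume_chunks:
            h = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(h, chunk)  # store each unique chunk once
            manifest.append(h)
        self.snapshots.append(manifest)
        return len(self.snapshots) - 1        # snapshot id

    def restore(self, snap_id):
        return [self.chunks[h] for h in self.snapshots[snap_id]]
```

Snapshotting a volume in which one chunk changed adds exactly one new chunk to the store; the manifest itself is tiny metadata, which is why retention can be effectively unlimited.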

The risk of data loss is limited to the time it takes to synchronize changes to the cloud; in most systems the synchronization interval can be set as low as one minute. CIS thus evolves the traditional storage controller into a deep storage pool capable of absorbing an organization's complete data set into a single namespace, with no limits on the number of files, terabytes or file system snapshots.

Storage continues to be a challenge because data will not stop growing. The need to support storage in many locations compounds the problem. Relentless data growth in the distributed enterprise has a way of overrunning the best laid plans to simplify and contain storage costs.

Unlike traditional hosting models, CIS leverages the software functionality and the vast economies of scale of the large public cloud storage providers to reduce costs. Not only can usable storage capacity costs shrink by 30% to 50%, but unlimited on-demand capacity makes budgets more predictable and ties spending closely to actual storage utilization. CIS gets at the root cause of the big storage challenge by severing the need to add more hardware in order to store more data. A new economic model with a lower base price and more predictability puts the CIO back in control of the IT budget.

Enlightened IT organizations stay strategic by leading the charge to thin out the data center. Every application that can be outsourced outright is being pushed out of the data center to a pure SaaS model. Storage infrastructure is the next logical step.

CIS uses hardware to support heavy workloads but moves the storage infrastructure functions outside the data center. Once hardware is eliminated as the bottleneck, the challenges of scale fade into the background, freeing organizations to focus on the decisions that truly impact the business.

The logical location of the data never changes even when data needs to be accessed from new locations, or during hardware refreshes of the end points. This is the fundamental difference between CIS and traditional storage. CIOs gain complete central control over a highly distributed infrastructure and a predictable cost model that fits the one thing that matters: actual usable storage.

Nasuni is an enterprise storage company that provides globally-distributed organizations with a simple, unified storage solution that includes mobile access for all of their remote and branch offices. By combining on-premise hardware with cloud storage, Nasuni delivers a secure, all-in-one data storage solution that provides local performance for users, simplified and centralized management for IT, and an easily scalable, complete remote office storage solution for the enterprise.
