
Getting the most out of flash storage

By Gary Orenstein, vice president of product and technical marketing, Fusion-io, special to Network World
September 13, 2011 04:54 PM ET

Network World - This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter's approach.

Over the past few years mainstream enterprises have been turning to NAND flash storage to boost speed and decrease latency, but some vendors still produce products that inhibit customers from achieving flash's full potential.

Solid-state storage offerings that integrate NAND flash the same way they would a traditional disk system put data far away from the CPU, often behind an outdated storage controller. No matter how fast the NAND is, this setup creates latency, ensuring the application sees only small improvements in actual throughput.

Let's take a step back and look at the pain of disk storage, the pitfalls of applying conventional architectures to flash, and how to achieve the full potential of NAND flash.

The pain

The speed limitations of disk drives compared to CPUs are well known. Less well known are the disk acrobatics administrators must endure to configure drives for performance. These include buying expensive Fibre Channel disk drives and arranging them in complex schemes that use only a portion of each drive platter to boost performance, which means stacking up disks with largely unused capacity that administrators must monitor for failure -- not to mention paying for the power, cooling and space to house them.
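
To see why this approach gets expensive, here is a rough back-of-the-envelope sketch in Python. Every figure in it -- the IOPS target, per-drive IOPS, drive capacity and short-stroke fraction -- is an illustrative assumption, not a measurement.

# Back-of-the-envelope math (all figures are assumptions, not measurements):
# hitting an IOPS target with short-stroked drives strands most of the capacity.

TARGET_IOPS = 20_000          # assumed workload requirement
IOPS_PER_15K_DRIVE = 180      # rough figure for a 15K RPM Fibre Channel drive
DRIVE_CAPACITY_GB = 600       # assumed raw capacity per drive
SHORT_STROKE_FRACTION = 0.25  # only the outer portion of each platter is used

drives_needed = -(-TARGET_IOPS // IOPS_PER_15K_DRIVE)   # ceiling division
usable_gb = drives_needed * DRIVE_CAPACITY_GB * SHORT_STROKE_FRACTION
stranded_gb = drives_needed * DRIVE_CAPACITY_GB - usable_gb

print(f"Drives needed just to reach {TARGET_IOPS} IOPS: {drives_needed}")
print(f"Capacity actually used: {usable_gb:,.0f} GB")
print(f"Capacity stranded (bought, powered, cooled, unused): {stranded_gb:,.0f} GB")

With these assumed numbers, more than a hundred drives are required for performance alone, and roughly three-quarters of the capacity purchased sits idle.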

But even with these acrobatics, disks often struggle to meet required performance levels because external disk storage systems sit so far from the CPU, as shown in Figure 1. While CPUs and memory operate in microseconds, access to external disk-based systems happens in milliseconds -- a thousandfold difference. Even when a disk system can pull data quickly, moving that data to and from the CPU incurs a long delay, so CPUs spend much of their time waiting for data. This drags down application and database performance.
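
A small sketch makes the scale of that gap concrete. The latency and clock figures below are order-of-magnitude assumptions chosen only to illustrate the microsecond-versus-millisecond difference described above.

# Order-of-magnitude assumptions, purely for illustration.
DRAM_ACCESS_NS = 100            # ~0.1 microsecond to main memory
DISK_IO_MS = 5                  # ~5 ms for a random read on a spinning disk
CPU_CLOCK_GHZ = 3.0             # assumed CPU clock speed

disk_io_ns = DISK_IO_MS * 1_000_000
cycles_wasted = disk_io_ns * CPU_CLOCK_GHZ          # 3 cycles per nanosecond at 3 GHz
memory_accesses_missed = disk_io_ns / DRAM_ACCESS_NS

print(f"One disk I/O is roughly {cycles_wasted:,.0f} CPU cycles of waiting")
print(f"...or about {memory_accesses_missed:,.0f} DRAM accesses the CPU could have made")

Under these assumptions, a single disk I/O costs the CPU on the order of 15 million clock cycles of idle waiting.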

The pitfall

If you treat flash as just another form of media, the way tape and disk drives are media, then implementing it the same way you implemented those earlier technologies gets you only a small part of the way forward.

By itself, flash removes the part of the latency bottleneck caused by slow spinning disk drives, but it does nothing to resolve the delay in getting process-critical data to and from the CPU.

Storing data in a flash array puts process-critical data on the wrong side of the storage channel, far away from the server CPU that is processing application and database requests.

The result is a minimal performance gain. And in addition to adding more hardware, organizations must also implement complex and costly storage area network infrastructure, including host bus adapters, switches and monolithic arrays.

But most importantly, these architectures retain the traditional implementations of storage -- RAID and SATA/SAS controllers -- all optimized for spinning drives, not NAND flash silicon. Figure 2 shows the layers still present in this legacy approach.
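
One way to picture those layers is to add up a plausible latency budget for each path. The sketch below does exactly that, using purely assumed, order-of-magnitude per-hop figures; the point is not the precise numbers but that the legacy controller and fabric hops can dwarf the NAND access itself, while a path that keeps flash close to the CPU removes most of them.

# Assumed per-hop latencies in microseconds, for illustration only.
san_attached_flash_us = {
    "host OS / block layer": 20,
    "HBA + SAN fabric hops": 50,
    "array controller (RAID, cache, SAS)": 100,
    "NAND read itself": 75,
}

server_side_flash_us = {
    "host OS / block layer": 20,
    "PCIe transfer + device controller": 15,
    "NAND read itself": 75,
}

for name, path in (("SAN-attached flash array", san_attached_flash_us),
                   ("Server-side PCIe flash", server_side_flash_us)):
    total = sum(path.values())
    print(f"{name}: ~{total} microseconds round trip")
    for hop, us in path.items():
        print(f"  {hop}: {us} us")

With these assumed figures, the legacy path spends more time in controllers and fabric than in the flash itself, which is the gap the next generation of flash architectures aims to close.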
