
Can flash live up to the hype?

By Doug Rainbolt, vice president of marketing, Alacritech, special to Network World
June 26, 2012 10:07 AM ET

Network World - This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter's approach.

The popularity of flash memory has soared over the last year because flash has definite advantages over conventional media. It often isn't clear, however, what distinguishes one flash offering from another. Here is a review of four common flash design implementations, each of which has strengths and weaknesses.

Let's start with the use of PCIe flash memory cards in servers, coupled with software that treats flash as an extension of system memory. Applications that depend on high-performance database access, where low latency is critical, can benefit from these cards.


Data is generally moved as blocks closer to the application, given the need for very high performance. Compared to traditional disk I/O, latency is far lower and the cost per IOPS is low. Because NFS is not the primary protocol used for data access, customers who prefer this option are primarily SAN-minded shops that are very sensitive to latency.
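To make the latency comparison concrete, here is a minimal sketch, in Python, of the kind of 4KB random-read measurement behind such claims. The device path and sample count are assumptions, and a real benchmark would use O_DIRECT or a tool such as fio to bypass the page cache; this is only an illustration, not a rigorous test, and reading a raw device typically requires root privileges.

    import os
    import random
    import time

    DEVICE = "/dev/nvme0n1"   # hypothetical PCIe flash block device
    BLOCK_SIZE = 4096         # 4KB, a common database page size
    SAMPLES = 10000

    fd = os.open(DEVICE, os.O_RDONLY)
    size = os.lseek(fd, 0, os.SEEK_END)   # device capacity in bytes

    latencies = []
    for _ in range(SAMPLES):
        # Pick a random, block-aligned offset somewhere on the device.
        offset = random.randrange(size // BLOCK_SIZE) * BLOCK_SIZE
        start = time.perf_counter()
        os.pread(fd, BLOCK_SIZE, offset)
        latencies.append(time.perf_counter() - start)
    os.close(fd)

    avg = sum(latencies) / len(latencies)
    print("average 4KB random-read latency: %.1f microseconds" % (avg * 1e6))
    print("single-threaded IOPS: %.0f" % (1.0 / avg))

The same loop pointed at a spinning disk, with caching defeated, is what turns the "far lower latency" claim into a number you can compare.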

The cons of this approach: first, it's not a shared storage model; every server that is to benefit must be furnished with its own flash cards. Second, it consumes an inordinate amount of host CPU, because the wear-leveling and grooming algorithms require significant processor cycles. Third, for some customers, consuming PCIe slots is a concern. All of these factors need to be weighed when servers are provisioned, to assure adequate processor and PCIe slot headroom.

The second design approach is to build storage arrays purely from flash memory. These constitute shared storage targets that often sit on a SAN. You wouldn't purchase these systems to accelerate or displace NAS, but NFS caching can be supported so long as a flash memory array sits alongside an NFS gateway server. The added latency of such a gateway makes it less than ideal in performance-sensitive environments. The pure SAN model has gained significant traction displacing conventional storage from incumbent suppliers in latency-sensitive environments, such as the financial markets.

Despite the raw performance, the storage management tools tend to lag. One of the major disadvantages of these systems is processor utilization in the storage array, which will likely be the bottleneck that limits scalability. Once the array's processors hit 100%, it doesn't matter how much more flash memory is installed; the system cannot deliver any incremental I/O. A better approach might be to apply flash to the data that needs it and use less expensive media for the data that doesn't. Aged or less important data doesn't require the same IOPS as hot data.
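As a rough illustration of that hot/cold split, the Python sketch below makes a simple tiering decision based on access frequency and age. The thresholds, field names, and tier labels are assumptions chosen for illustration, not any vendor's actual policy.

    import time
    from dataclasses import dataclass

    HOT_ACCESSES_PER_DAY = 100          # assumed threshold for "hot" data
    COLD_AGE_SECONDS = 30 * 24 * 3600   # untouched for 30 days counts as aged

    @dataclass
    class Extent:
        name: str
        accesses_per_day: float
        last_access: float              # epoch seconds

    def choose_tier(extent, now=None):
        """Place hot data on flash, aged or idle data on cheaper disk."""
        now = now if now is not None else time.time()
        if now - extent.last_access > COLD_AGE_SECONDS:
            return "disk"               # aged data doesn't need flash IOPS
        if extent.accesses_per_day >= HOT_ACCESSES_PER_DAY:
            return "flash"              # hot data earns the low-latency tier
        return "disk"

    now = time.time()
    print(choose_tier(Extent("db-index", 5000, now)))                     # flash
    print(choose_tier(Extent("2011-archive", 0.1, now - 90 * 24 * 3600))) # disk

The point isn't the specific thresholds; it's that only the data that actually needs flash IOPS pays for flash.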

The third design approach has taken on chameleon-like qualities. It can function either as a write-through caching appliance that offloads NAS or file servers, or simply as a file server. As a file server, it is positioned as an edge NAS that delivers performance to users. There is still a back-end NAS behind this device where everything is stored. Active data isn't moved to the edge NAS; it's copied to it, and this option uses faster media to increase performance for users.
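That copy-not-move behavior is essentially a write-through cache. The toy Python sketch below captures the idea, with in-memory dictionaries standing in for the flash edge tier and the back-end NAS; the class names are hypothetical, and a real appliance operates on NFS files rather than Python objects.

    class BackEndNAS:
        """Stands in for the back-end NAS where everything is ultimately stored."""
        def __init__(self):
            self._files = {}
        def write(self, path, data):
            self._files[path] = data
        def read(self, path):
            return self._files[path]

    class EdgeCache:
        """Write-through edge tier: active data is copied here, never moved here."""
        def __init__(self, backend):
            self._backend = backend
            self._cache = {}                  # flash-backed in a real appliance
        def write(self, path, data):
            self._backend.write(path, data)   # write-through: back end stays current
            self._cache[path] = data          # keep a copy on the fast tier
        def read(self, path):
            if path in self._cache:           # hot data served from the edge
                return self._cache[path]
            data = self._backend.read(path)   # miss: fetch and copy, don't move
            self._cache[path] = data
            return data

    nas = BackEndNAS()
    edge = EdgeCache(nas)
    edge.write("/projects/report.doc", b"draft 1")
    assert nas.read("/projects/report.doc") == b"draft 1"   # back end has the data
    assert edge.read("/projects/report.doc") == b"draft 1"  # served from the edge copy

Because every write lands on the back-end NAS immediately, losing the edge device costs performance but never data.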
