
SSDs promise to enhance storage performance, but a new host-interface standard holds the key

By Kam Eshghi, special to Network World
August 08, 2011 02:51 PM ET

Network World - This vendor-written tech primer was submitted by the senior director of marketing in Integrated Device Technology's Enterprise Computing Division on behalf of the NVMe Promoter Group. NVMe is backed by Cisco, Dell, EMC, IDT, Intel, Micron, NetApp, Oracle, SandForce and STEC. Readers should note it favors NVMe's approach.

Flash-memory-based solid-state disks (SSDs) provide faster random access and higher data transfer rates than electromechanical drives, and today they can often serve as drop-in replacements for rotating disks, but the host interface to SSDs remains a performance bottleneck. PCI Express (PCIe)-based SSDs, together with an emerging standard called NVMe (Non-Volatile Memory Express), promise to remove that interface bottleneck.

SSDs are proving useful today, but they will find far broader usage once the new NVMe standard matures and companies deliver integrated circuits that enable closer coupling of the SSD to the host processor.


The real issue at hand is the need for storage technology that can match the exponential ramp in processor performance over the past two decades. Chip makers have continued to increase the performance of individual processor cores, to combine multiple cores on one IC, and to develop technologies that closely couple multiple ICs in multiprocessor systems. Ultimately, all of the cores in such a system need access to the same storage subsystem.

Enterprise IT managers are eager to deploy these multiprocessor systems because they can boost both the number of I/O operations per second (IOPS) a system can process and the number of IOPS per watt (IOPS/W) of power consumed. New processors offer better IOPS relative to cost and power consumption -- assuming the processing elements can get access to the data in a timely fashion. Active processors waiting on data waste time and money.
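To make these metrics concrete, here is a minimal Python sketch of how IOPS and IOPS/W relate for a single drive; the access-time and power figures are assumptions chosen for illustration, not measurements from any product.

# Hypothetical figures chosen to show how the two metrics relate;
# neither number comes from the article.
avg_access_time_ms = 13.0   # assumed average time to complete one random I/O
power_w = 8.0               # assumed drive power draw under load

# With one outstanding request, a drive completes roughly
# 1000 / access-time I/O operations per second.
iops = 1000.0 / avg_access_time_ms
iops_per_watt = iops / power_w

print(f"IOPS: {iops:.0f}, IOPS/W: {iops_per_watt:.1f}")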

Storage hierarchy

There are of course multiple levels of storage technology in a system that ultimately feed code and data to each processor core. Generally, each core includes local cache memory that operates at core speed. Multiple cores in a chip share a second-level and sometimes a third-level cache, and DRAM feeds the caches. DRAM and cache access times and data-transfer performance have scaled to match processor performance.

The disconnect comes from the performance gap that exists between DRAM and rotating storage in terms of access time and data rate. Disk-drive vendors have done a great job of designing and manufacturing higher-capacity, lower-cost-per-gigabyte disk drives. But the drives inherently have limitations in how fast they can access data and then how fast they can transfer that data into DRAM.
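The scale of that gap is easiest to see with rough numbers. The short Python sketch below uses widely cited order-of-magnitude latencies; treat the figures as illustrative assumptions rather than measured values.

# Rough order-of-magnitude access latencies, in nanoseconds.
# These are illustrative assumptions, not benchmark results.
latency_ns = {
    "L1 cache": 1,
    "L2/L3 cache": 10,
    "DRAM": 100,
    "Rotating disk": 10_000_000,   # roughly 10 ms average access
}

for tier, ns in latency_ns.items():
    print(f"{tier:<14} {ns:>12,} ns")

# The DRAM-to-disk gap is what the storage subsystem must bridge.
gap = latency_ns["Rotating disk"] / latency_ns["DRAM"]
print(f"DRAM-to-disk access-time gap: ~{gap:,.0f}x")

A five-order-of-magnitude jump between DRAM and the disk is the gap the rest of this discussion is about.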

Access time depends on how fast a hard drive can move the read head over the required data track on a disk, plus the rotational latency while the sector holding the data moves under the head. The maximum transfer rate is dictated by the rotational speed of the disk and the data-encoding scheme, which together determine the number of bytes per second read from the disk.
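As a back-of-the-envelope illustration, the following Python sketch works through both calculations for a hypothetical 7,200 RPM drive; the seek time and per-track capacity are assumed values, not vendor specifications.

# Illustrative figures for a hypothetical 7,200 RPM drive; the seek
# time and bytes-per-track values are assumptions for this example.
rpm = 7200
avg_seek_ms = 9.0             # assumed average seek time
bytes_per_track = 1_000_000   # assumed formatted capacity per track

# One revolution takes 60/rpm seconds; on average the target sector
# is half a revolution away when the head settles on the track.
rev_time_ms = 60_000.0 / rpm               # ~8.33 ms per revolution
avg_rot_latency_ms = rev_time_ms / 2       # ~4.17 ms
avg_access_time_ms = avg_seek_ms + avg_rot_latency_ms

# Peak media transfer rate: one full track passes under the head
# per revolution.
transfer_rate_mb_s = bytes_per_track / (rev_time_ms / 1000.0) / 1e6

print(f"Average access time: {avg_access_time_ms:.1f} ms")          # ~13.2 ms
print(f"Peak media transfer rate: {transfer_rate_mb_s:.0f} MB/s")   # ~120 MB/s

At roughly 13 ms per random access, a single drive of this sort tops out near 75 random IOPS, which is the kind of figure behind the IOPS arithmetic above.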
