SOCs aim to speed up throughput for storage systems

May 27, 2004 · 3 mins
Data Center


Storage system designers always face the challenge of providing faster throughput. Compared with other parts of computing systems, I/O rates for getting data on and off storage devices have not kept pace. Set I/O bus rates against the progress made in faster CPUs, faster and larger memory, and increased bandwidth, and you will see what I mean. Complicating this is the expectation that “faster” should frequently be accompanied by “cheaper.”

Lots of new technology is on the way, however, and some of it addresses this issue.

One such trend likely to be appearing in local computer rooms in about three quarters is a type of storage processor called storage “system on a chip” (SOC). While you are never likely to buy one of these things directly, the vendors that sell you your systems may soon be purchasing them by the boxcar load, and the systems you buy are likely to contain one or more of them. Let’s see why there is going to be a SOC in your future.

A storage system on a chip will contain several components that previously had been available only as discrete, separate parts. Thus, functionality that in the last generation of storage arrays was spread across a board on several general-purpose processors and custom ASICs – and was managed by software – is now crammed onto a single integrated piece of silicon, which may be cheaper to buy and which will almost certainly require less power to run. Lower power requirements and a smaller footprint on the board translate into cheaper manufacturing and lower operating costs.

The contents of these devices will vary from manufacturer to manufacturer, and will of course depend on the product’s purpose, but within each will be a collection of features and functions that formerly were available only as discrete components. The product from Atlanta start-up iVivity, for example, is aimed at supporting storage-area networks. It contains support for both iSCSI and Fibre Channel at one end and for 10G bit/sec line connectivity at the other (with a TCP offload engine – a TOE), and it is designed to work with Fibre Channel, SATA and serial-attached SCSI storage devices.

Storage hardware vendors typically compete with one another on price, performance, features, support and reliability. For systems and components alike, at the high end performance is king. At the low end, price is usually most important. And in the middle, most of us tend to think in terms of a price-performance ratio and categorize equipment by placing it on a price-performance curve. SOCs will appear in each segment.
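That mid-market price-performance comparison can be made concrete with a small sketch. The system names, prices and throughput figures below are invented for illustration; the point is only the ratio itself (dollars per MB/sec of throughput, lower being better):

```python
# Hypothetical storage systems: (name, price in USD, throughput in MB/sec).
# All numbers are illustrative, not taken from any real product.
systems = [
    ("array-a", 20000, 400),
    ("array-b", 8000, 250),
    ("array-c", 45000, 700),
]

def price_performance(price, throughput):
    """Dollars per MB/sec of throughput -- lower is better."""
    return price / throughput

# Rank the systems along the price-performance curve.
for name, price, mbps in sorted(systems, key=lambda s: price_performance(s[1], s[2])):
    print(f"{name}: ${price_performance(price, mbps):.0f} per MB/sec")
```

Run as-is, this ranks the cheap mid-range box first and the high-end array last – which is exactly why a cheaper, lower-power SOC that holds throughput steady moves a product toward the attractive end of the curve.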

Why are system builders interested? 

A system on a chip helps them on price: it will be cheaper to build products that use smaller, integrated components in place of multiple processors and protocol ASICs, because less real estate will be required on each board and the new components will draw less power. Depending on what is integrated within each part, system vendors may also gain in performance, features and reliability.

SOCs will appear on host bus adapters, in arrays and on switches. Who is building these chips? Companies you may not have heard of yet, such as Aristos Logic, Astute, iVivity, and NetCell. Each has its own market focus, and a visit to their web sites might be interesting.

When it comes to storage devices, each of these companies is more than willing to explain why you should stuff a SOC in one.