Dell, LeftHand, NetApp score highest in performance testing

There is no messier can of worms to open than one containing a disk I/O performance benchmark. Performance is difficult to measure, because there are few trustworthy tools. Performance is also difficult to characterize, because every application (and even versions of the same application) uses the file system differently. And small configuration changes within either the test tool or the iSCSI server can lead to substantial changes in performance.

Rather than trying to identify the fastest iSCSI subsystem, we focused on four sets of performance-related questions:

• First, how does iSCSI compare to locally attached disk storage? Is iSCSI fast enough to replace internal disks, or does the network act as a bottleneck?

• Second, how does iSCSI over Gigabit Ethernet perform? Are multiple connections from each iSCSI initiator required? Is Ethernet a bottleneck? Does iSCSI need to be shelved until 10G Ethernet is widely available?

• Third, is there reason to pay extra money for Serial-Attached SCSI (SAS) drives compared with Serial ATA (SATA) drives?

• Finally, are there some general observations we can make about which iSCSI servers are faster than others?

To get a baseline, we ran basic benchmarks on locally connected SCSI drives. Each network and application server in our test bed had two internal 10K RPM SCSI drives, and we ran the same benchmarks on four servers against their local drives that we would later run across the LAN against the iSCSI servers under test (see How we tested).

Results show that a single server talking to local disks generally saw lower performance than the same server running the same benchmark against an iSCSI SAN device. If you simply replaced the internal disks on a single server with an iSCSI SAN server, even the slowest device we tested would be faster than the local disks.

Tracking performance of iSCSI SAN servers with SAS drives

Of course, these iSCSI servers are designed to serve many servers at once, and you wouldn't buy a whole iSCSI server for a single application server. Because our test bed had four servers, we ran the same test with four servers hitting the iSCSI array at once. In that case, the sum of all eight local drives across the four servers turned in more respectable performance compared with iSCSI over Gigabit Ethernet, because local disk performance scaled linearly. When four servers with local drives were compared with the iSCSI servers we tested, they placed in the bottom third of our results: nine of the iSCSI SAN servers were still faster, but five were slower.

Tracking performance of iSCSI SAN servers with SATA drives

It's impossible to guess exactly where the limits of the iSCSI SANs are without a lot more servers (and time) to test, but a simple linear extrapolation indicates that for the test loads we used, you'd need about 15 servers, all running flat out with disk I/O, before the top three performing iSCSI SANs we tested (from Dell, LeftHand Networks and NetApp) would be slower than locally attached disks. And at that point, you'd have spent about $45,000 on disks and RAID controllers for the 15 servers, compared with $55,000 to $96,000 for one of the iSCSI servers we tested. That doesn't necessarily make the iSCSI SAN servers we tested a bargain, but the costs are not too far off, especially if you factor in the other value propositions these virtualized storage systems bring.
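For readers who want to play with that break-even math, here is a minimal sketch in Python. The per-server figure is an assumption inferred from the $45,000 total across 15 servers quoted above, and the SAN price range is the one cited for the top three arrays; treat this as back-of-the-envelope arithmetic, not a pricing model.

```python
# Back-of-the-envelope sketch of the break-even comparison described above.
# The roughly $3,000-per-server figure for local disks and a RAID controller
# is an assumption inferred from the article's $45,000 total across 15
# servers; the SAN price range covers the top three SAS-based arrays tested.

LOCAL_COST_PER_SERVER = 45_000 / 15          # ~$3,000 in disks plus RAID controller
ISCSI_SAN_PRICE_RANGE = (55_000, 96_000)     # top-three performing arrays tested


def local_storage_cost(servers: int) -> float:
    """Cost of giving every application server its own disks and RAID controller."""
    return servers * LOCAL_COST_PER_SERVER


for servers in (4, 10, 15, 20):
    local = local_storage_cost(servers)
    low, high = ISCSI_SAN_PRICE_RANGE
    print(f"{servers:2d} servers: local disks ~ ${local:,.0f}  vs. one iSCSI SAN at ${low:,}-${high:,}")
```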

We conclude that iSCSI competes well with locally attached storage. If you've been buying inexpensive SATA drives for your servers, you can throw one of the SAS-based iSCSI SAN servers at your network and you'll probably see a big jump in performance. If, on the other hand, you've been building individual SCSI-based arrays in each of your servers and you then pile a dozen heavily used servers onto one of the SATA-based iSCSI servers, you're more likely to be disappointed with performance.

Cheaper than Fibre Channel?

Our second finding was that standard Gigabit Ethernet doesn't appear to be a bottleneck for SAN performance. Instead, the iSCSI server itself, most likely its internal disks, is the actual bottleneck. In our testing, every iSCSI SAN server had a minimum of two Gigabit Ethernet ports, with most having four. Yet, with four clients pounding on the servers, only one test out of 48 (the LeftHand Networks system during our simulated Web server test) exceeded a total aggregate bandwidth of 2Gbps.

While we wouldn't suggest using a single Gigabit Ethernet port for your SAN connection, in only eight tests (again, out of 48) did the four servers exceed a total aggregate bandwidth of 1Gbps. Those cases were LeftHand Networks and NetApp in the file server and Web server tests, and Compellent, Dell and StoneFly in the Web server test.

These results suggest that four Gigabit Ethernet connections should be sufficient to saturate the capability of most iSCSI storage systems under normal traffic, and that the main reason to run dual connections from an iSCSI initiator would be high availability rather than raw performance. That's good news for network managers, because it means the additional cost and complexity required to use Fibre Channel as the interconnect for SANs does not pay off in measurably higher performance.
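The underlying check is simple: add up what the clients actually pull and compare it against port capacity. Here is a minimal sketch of that aggregate-bandwidth test; the per-client throughput numbers are placeholders for illustration, not measured results from our review.

```python
# Minimal sketch of the aggregate-bandwidth check behind this conclusion: sum
# the per-client throughput observed in a test run and compare it against
# Gigabit Ethernet port capacity. Per-client figures are placeholders only.

import math

GIGABIT_PORT_CAPACITY_GBPS = 1.0

# Hypothetical throughput (Gbps) seen by each of the four test clients.
client_throughput_gbps = [0.35, 0.30, 0.28, 0.32]

aggregate = sum(client_throughput_gbps)
ports_needed = math.ceil(aggregate / GIGABIT_PORT_CAPACITY_GBPS)

print(f"Aggregate load: {aggregate:.2f} Gbps")
print(f"Gigabit Ethernet ports needed for bandwidth alone: {ports_needed}")
# With aggregates mostly under 2Gbps in our tests, extra ports buy
# availability rather than throughput.
```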

With Fibre Channel ports coming in at about $1,000 per port (Gigabit Ethernet is about $100 per port) and Fibre Channel switches at about $10,000 per switch (Gigabit Ethernet is about $1,000 per switch), the infrastructure price difference between Gigabit Ethernet and Fibre Channel for a SAN with 20 dual-connected servers would be more than $50,000, enough to buy a spare storage server or a lot of extra capacity for your existing one.
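To make that comparison concrete, here is a quick sketch of the arithmetic using the approximate street prices quoted above; the two-ports-per-server and two-switch assumptions mirror the dual-connection, redundant-switch setup the comparison describes.

```python
# Quick sketch of the infrastructure cost arithmetic, using the article's
# approximate street prices. Assumes two ports per server and a redundant
# pair of switches in each fabric.

SERVERS = 20
PORTS_PER_SERVER = 2      # dual connections per server for availability
SWITCHES = 2              # redundant switch pair

prices = {
    "Fibre Channel":    {"port": 1_000, "switch": 10_000},
    "Gigabit Ethernet": {"port": 100,   "switch": 1_000},
}


def fabric_cost(port_price: int, switch_price: int) -> int:
    """Interconnect cost: a port for every server connection plus the switches."""
    return SERVERS * PORTS_PER_SERVER * port_price + SWITCHES * switch_price


costs = {name: fabric_cost(p["port"], p["switch"]) for name, p in prices.items()}
for name, cost in costs.items():
    print(f"{name}: ${cost:,}")
print(f"Difference: ${costs['Fibre Channel'] - costs['Gigabit Ethernet']:,}")
# Works out to $60,000 vs. $6,000, a difference of $54,000.
```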

SAS vs. SATA

When we looked at the performance of SAS drives compared with SATA drives, the results weren't entirely conclusive, but the trend was clear: SAS drives will give you lower latency, more I/O operations per second and higher throughput than SATA drives. They should, though, given the cost differential.

The cost of a SAS drive can be 10 times the cost of a SATA drive of the same capacity. For example, a 450GB 15K RPM SAS drive has a street price of about $1,000, if you can find one, because these drives have only just started shipping. A 500GB 7.2K RPM SATA drive, by contrast, runs about $100, if you can find one, because that capacity is considered nearly obsolete and has largely been replaced by 750GB and 1TB drives. SAS drives carry another cost as well: they aren't available in very high capacities, so you use up more of the precious drive slots in an iSCSI server to reach the same total capacity.
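The per-gigabyte gap is easy to work out from those street prices; here is the arithmetic as a short sketch, using the capacities and prices quoted above.

```python
# Cost-per-gigabyte arithmetic behind the "10 times the cost" comparison,
# using the street prices and capacities quoted above.

drives = {
    "450GB 15K RPM SAS":   {"capacity_gb": 450, "price": 1_000},
    "500GB 7.2K RPM SATA": {"capacity_gb": 500, "price": 100},
}

for name, spec in drives.items():
    per_gb = spec["price"] / spec["capacity_gb"]
    print(f"{name}: ${per_gb:.2f} per GB")
# Roughly $2.22/GB for SAS vs. $0.20/GB for SATA at these prices.
```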

In this test, six vendors sent us solutions using 15K RPM SAS drives and seven vendors sent solutions using 7.2K RPM SATA drives. Celeros's EzSANFiler XD34S and HP's StorageWorks 2012i can mix SATA and SAS drives, so we benchmarked each set separately. We also had one vendor (Compellent) send us a shelf of 10K RPM Fibre Channel drives. No matter how we looked at the statistics, as raw throughput, I/O operations per second or system latency, the top three performers were always the same: Dell, LeftHand Networks and NetApp, all using SAS drives.

However, the fourth-place performer in all three categories was the Nexsan SATABeast, with its load of 1TB 7.2K RPM SATA drives. On the slow side of our testing, SATA-based iSCSI servers dominated: in each category, the bottom scores always came from SATA-based arrays.

Raw rankings, though, don't show the real difference in speeds. To compare SAS with SATA speed, we looked at the improvement in throughput of the top three SAS arrays compared with the top three SATA arrays. The average increase in performance of SAS over SATA across all four scenarios we tested was 221%, while individual results ranged from 157% (in the file server simulation test) to 270% (in the Web server simulation test). This suggests that for normal read-write operations, especially in heavy, random-access environments such as e-mail, SAS-based iSCSI will turn in better than twice the throughput of SATA-based iSCSI SAN servers. In environments that are largely read-only, such as a Web server offering HTML documents, the performance difference is less significant, but still fairly obvious.
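For readers who want to run a similar comparison on their own numbers, here is a sketch of one way such per-scenario figures can be computed, assuming each figure expresses the average throughput of the top three SAS arrays as a percentage of the average for the top three SATA arrays. The throughput values are placeholders, not our measured data.

```python
# Sketch of a SAS-versus-SATA comparison, assuming each scenario's figure is
# the average top-three SAS throughput as a percentage of the average
# top-three SATA throughput. Throughput values below are placeholders only.

from statistics import mean

# Hypothetical per-scenario throughput (MBps) for the top three arrays of each type.
results = {
    "file server": {"sas": [310, 295, 280], "sata": [190, 185, 180]},
    "web server":  {"sas": [540, 520, 500], "sata": [200, 195, 190]},
}

for scenario, data in results.items():
    ratio = mean(data["sas"]) / mean(data["sata"]) * 100
    print(f"{scenario}: SAS throughput at {ratio:.0f}% of SATA")
```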

Which iSCSI SAN is fastest?

Finally, of course, we had to look at absolute scores and see which products were fast and which ones were slow. The usual cautions apply: our testing is based on simulated workloads generated by simulation tools, run against iSCSI servers in out-of-the-box configurations, not necessarily tuned for each application. Your mileage may vary, and past results are no guarantee of future returns.

That being said, we divided the iSCSI servers up by disk technology, because it doesn't seem fair to compare SATA-based iSCSI with SAS-based iSCSI. We grouped the Compellent StorageCenter with the SAS-based iSCSI servers because the Fibre Channel drives it uses are sold as high-performance devices that compete with SAS drives, not as replacements for SATA drives.

It's clear that in the SAS bucket, the Dell, LeftHand Networks and NetApp SAN servers were head and shoulders above the other devices in performance. Their aggregate throughput, low latency and high I/O-operations-per-second rates were dramatically better than those of the other devices we tested. This performance comes at a price: cost per gigabyte of throughput ranged from $8.88 (LeftHand Networks) to $24.29 (NetApp). Next in line among the SAS-based servers were the products from HP and Reldata.

Where budget intrudes, or greater capacity is needed, the SATA-based iSCSI storage systems we tested can still hold up their end of the bargain, at a fraction of the cost of the SAS-based systems. These arrays offered a cost per gigabyte of throughput ranging from a low of $0.77 (D-Link) to a high of $3.58 (FalconStor). In this bucket, two systems stood out above the other SATA-based arrays: the Nexsan SATABeast and the StoneFly Storage Concentrator, both scoring similarly on all our performance tests. Next in line were products from FalconStor and Kano Technologies.
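As a rough guide to how a cost-per-gigabyte-of-throughput figure can be derived, here is a hedged sketch that simply divides a system's street price by the gigabytes of throughput it delivered during a benchmark run; both the prices and throughput totals are placeholders, and this is only one plausible reading of the metric, not our exact methodology.

```python
# One plausible way to derive a cost-per-gigabyte-of-throughput figure:
# divide a system's street price by the gigabytes of throughput delivered
# during the benchmark. Prices and throughput totals are placeholders only.

systems = {
    # name: (street price in dollars, benchmark throughput in gigabytes)
    "hypothetical SAS array":  (80_000, 8_000),
    "hypothetical SATA array": (20_000, 10_000),
}

for name, (price, throughput_gb) in systems.items():
    print(f"{name}: ${price / throughput_gb:.2f} per gigabyte of throughput")
```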
