Products take multiple paths to interoperability

Simple OS and storage connectivity isn't an issue; support for load-balancing measures is

With iSCSI SAN servers called upon to serve many different operating systems at the same time, an initial concern for any deployment is simple interoperability: Do these products work with different operating systems?

We tested each iSCSI SAN server with three very common use cases. We deployed them in a test environment with software initiators for Microsoft Windows Server 2008 and CentOS 5 Linux, and a hardware initiator from QLogic. In iSCSI terminology, "initiator" is the network equivalent of a client, while "target" is the network equivalent of a server. We'll use "virtual disk" instead of target to help keep things clear. Our results show many different models for how iSCSI initiators and virtual disk targets authenticate, load balance and interact.
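For readers who haven't set up an iSCSI initiator before, attaching a virtual disk from the Linux side is a short exercise with the open-iscsi tools that ship with CentOS 5. The sketch below uses a placeholder portal address and target name rather than values from our test network:

    # Discover the targets advertised by the array's iSCSI portal
    iscsiadm -m discovery -t sendtargets -p 192.168.10.20
    # Log in to one of the discovered targets; the new disk then appears as an ordinary SCSI device
    iscsiadm -m node -T iqn.2008-01.com.example:array.disk1 -p 192.168.10.20 --login

The Microsoft and QLogic initiators handle the same discovery and login steps through their own management interfaces.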

Basic operating system interoperability was fine for all products tested, and we were able to connect and exchange data with all three initiators -- eventually. Three products gave us some headaches along the way, though. The Nexsan SATABeast and the Kano NetCOR 7500 systems both required software upgrades before they worked properly. The Kano NetCOR 7500 reacted to our QLogic hardware initiators by actually crashing the controllers. Because the Kano NetCOR 7500 has dual controllers acting as a high-availability pair, the whole system never completely stopped working: one controller would crash, the other would take over, and the crash-takeover-reboot cycle kept repeating itself. The Nexsan SATABeast didn't work well with Windows Server 2008 when we ran it with multiple data paths, refusing to connect to Windows some of the time. A firmware upgrade downloaded from the Nexsan Web site solved that problem.

We encountered a more subtle problem working with the Compellent StorageCenter on Windows Server 2008. Compellent uses a model for disk space in which its products transparently allocate storage from various parts of the subsystem based on performance. When a new virtual disk is created and quickly filled with data, there may be system delays while the Compellent controller gathers array space. Compellent engineers advised us to edit the Windows registry settings to increase disk timeouts to resolve the problem.
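The setting involved is the standard Windows SCSI disk timeout, the TimeOutValue entry under the Disk service key. A typical adjustment looks something like the following; the 120-second figure is illustrative rather than the specific number Compellent recommends, and a reboot is generally needed before the change takes effect:

    reg add HKLM\SYSTEM\CurrentControlSet\Services\Disk /v TimeOutValue /t REG_DWORD /d 120 /f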

The remaining interoperability challenges we encountered all focused on a single area: making the connection between the iSCSI initiator and iSCSI target fast and reliable.

Although a single 1Gbps link turns out to be more bandwidth than most of these storage arrays can use while serving the storage workloads of applications such as Microsoft Exchange 2007, having more than one link can still help performance and reliability.

In the world of networking, we'd immediately turn to link aggregation (sometimes called bonding), an IEEE standard supported by all enterprise switches that lets you combine multiple links into a single "superlink" with higher throughput and resiliency. Because every system we tested was outfitted with two to 12 Gigabit Ethernet ports per chassis, we thought that link aggregation would be a no-brainer. We were wrong. Five of the systems -- the Compellent StorageCenter, Dell PS5000XV, HP StorageWorks 2012i MSA, Kano NetCOR and Nexsan SATABeast -- don't even support link aggregation. This seems a curious and unfortunate omission, especially as iSCSI competes with 2Gbps and 4Gbps Fibre Channel storage systems. As our testing of the seven systems that did support it demonstrated, link aggregation is an inexpensive, uncomplicated way to add both burst bandwidth capability and high reliability.
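On the initiator side, aggregation is not much work to set up. Here is a minimal sketch of an 802.3ad (LACP) bond on a CentOS 5 host; the interface names and address are placeholders, and the corresponding switch ports must be configured as an LACP group as well:

    # /etc/modprobe.conf -- load the bonding driver for bond0 in 802.3ad (LACP) mode
    alias bond0 bonding
    options bond0 mode=802.3ad miimon=100

    # /etc/sysconfig/network-scripts/ifcfg-bond0 -- the aggregated interface carries the IP address
    DEVICE=bond0
    IPADDR=192.168.10.5
    NETMASK=255.255.255.0
    ONBOOT=yes
    BOOTPROTO=none

    # /etc/sysconfig/network-scripts/ifcfg-eth0 (and ifcfg-eth1) -- physical ports become bond members
    DEVICE=eth0
    MASTER=bond0
    SLAVE=yes
    ONBOOT=yes
    BOOTPROTO=none

On the storage arrays that do support aggregation, the equivalent setting is usually little more than a checkbox or two in the management interface.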

A different technology commonly used in iSCSI systems, with goals similar to link aggregation, is multipath. The idea behind multipath is that a storage system target and a client initiator can have multiple simultaneous TCP/IP connections. Although the initiator ends up with two (or more) views of the same target device, multipath support allows the initiator to reconcile those views and use the two connections for load sharing or high availability. Multipath can provide higher performance (if the client is smart enough to load balance traffic over multiple links) and high availability (if the two TCP/IP connections are across different paths or to different controllers).

There's a slight technical difference between multipath and bonding as well, because iSCSI uses TCP for transport: multipath always has multiple TCP connections active at once, while bonding would usually use a single connection. Depending on target and initiator support for certain advanced TCP options (especially an increased TCP window size) and the latency between initiator and target, two TCP connections using multipath could behave very differently from two physical connections bonded together in a bandwidth-intensive environment.

Multipath is more complicated than link aggregation because it requires close coordination between the iSCSI initiator and the iSCSI target. For Windows Server 2003, vendors who supported multipath had to ship a product-specific plug-in, called a device-specific module (DSM). We were lucky in choosing Windows Server 2008, because storage vendors seem to have thrown in the towel on writing their own DSMs and all worked to be compatible with the multipath support built into Windows Server 2008.
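Hooking into that built-in support is mostly a matter of enabling the operating system's MPIO feature and telling it to claim iSCSI-attached disks. The following is a rough sketch of the command-line route on Windows Server 2008 (the same steps can generally be done through the GUI as well); the mpclaim step schedules a reboot:

    rem Install the Multipath I/O feature
    servermanagercmd -install Multipath-IO
    rem Claim all iSCSI-attached disks for MPIO (-r reboots when done, -i adds support for the given device string)
    mpclaim -r -i -d "MSFT2005iSCSIBusType_0x9"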

On Linux, our testing of the multipath features was far less successful, as not every vendor supported it, and those that did seemed to require an inordinate amount of installation and reconfiguration to make it work.
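For a sense of what that reconfiguration involves, here is a rough outline of the device-mapper multipath setup on CentOS 5, again with placeholder target names and portal addresses; the contents of /etc/multipath.conf (device settings, path checkers and so on) vary by vendor:

    # Install the device-mapper multipath tools
    yum install device-mapper-multipath
    # Log the initiator in to the same target through both portals
    iscsiadm -m node -T iqn.2008-01.com.example:array.disk1 -p 192.168.10.20 --login
    iscsiadm -m node -T iqn.2008-01.com.example:array.disk1 -p 192.168.11.20 --login
    # Edit /etc/multipath.conf to remove the default blacklist, then start the daemon
    chkconfig multipathd on
    service multipathd start
    # Both paths should now appear under a single multipath device
    multipath -ll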

The most common reason that vendors gave us for using multipath was distribution of load across controllers. Five of the systems we tested had two internal controllers: the Dell PS5000XV, the HP StorageWorks 2012i, the Kano NetCOR 7500, the Nexsan SATABeast and the NetApp FAS2050. Also in this camp was the LeftHand NSM 2120, with multiple controllers cooperating to form a single iSCSI target. Each of these six products used multipath to spread the load, although some made that process more difficult than others. With the Dell PS5000XV, the Kano NetCOR and the LeftHand NSM 2120, the storage system made it easy on us by presenting a single IP address and then taking care of the load balancing and failover automatically. In contrast, the multi-controller systems from HP, Nexsan and NetApp all made configuration of multipath a manual operation and required us to explicitly connect each iSCSI initiator to the multiple IP addresses and targets presented by the storage server.

When connecting to any individual controller within a storage system, multipath offers no advantages -- and some distinct disadvantages, including lower performance, complexity and management overhead -- compared with link aggregation. For example, we tested the NetApp system in both multipath-only and link aggregation configurations, and found that link aggregation offered a 10% to 36% improvement in performance over "load sharing" using multipath.

Although we didn't have any complete interoperability failures for the products we tested, we did find that some products have design details that make life harder for the network manager trying to build high-performance, reliable storage networks.


