HP StorageWorks sets the bar for iSCSI SAN server security

But products were generally disappointing

Our testing of iSCSI SAN servers shows they all handle basic functions as advertised. But we had to dig deeper into the other enterprise features offered, such as security, high availability and expandability, to find bigger differences among the products.

We looked for the same level of security management and secure design in iSCSI storage systems as we would for any other critical network component. But we were sorely disappointed.

Something as simple as a strict separation between management and data planes would be basic to these products, we thought, but fewer than half of the products tested (D-Link's DSN-3200-10, HP's StorageWorks 2012i, NetApp's FAS2050, Nexsan's SATABeast and StoneFly's Storage Concentrator) can completely separate data and management.

How about encrypted management traffic? Most products supported that, although some left SSL as an option while others made it very difficult to enable at all. Kano's NetCOR 7500 and Nexsan's SATABeast simply don't support SSL. The SATABeast was even more frightening: by default, it runs without requiring any username or password for management.

And in another security faux pas, we found many products listening on Telnet ports that couldn't be disabled.

Our general conclusion is that the storage industry somehow thinks that because its products sit inside the corporate firewall, they're safe. Vendors should rethink that point very carefully.

We did run into a few occasional security high points, though, such as the delegated levels of system management offered in the Compellent StorageCenter and the NetApp FAS2050. But the only product in our test that easily met basic requirements for control security (a separate control plane, the ability to enable and disable management services, and encrypted management) was the HP StorageWorks 2012i. Next up was the NetApp FAS2050, which had many of the same features but made them so difficult to use that many administrators would not bother to use them correctly. For example, controlling SSL and Secure Shell access, something HP handles with two check boxes on its Network Management GUI screen, takes NetApp 20 pages of documentation to describe.

On the data security side we thankfully had a better experience, in spite of the fact that the iSCSI protocol is a particularly dangerous one for most enterprises because of its "discovery" mechanism. Discovery is the way an iSCSI initiator learns about all of the virtual disks an iSCSI target is advertising, and it makes it easy for an inattentive administrator to accidentally (or purposefully) attach a server to a virtual disk it shouldn't touch, potentially causing data corruption or information leakage. An iSCSI storage system must have a clear security model that makes it easy for the storage administrator to unambiguously control which systems can connect to which virtual disks.
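
To see how little discovery asks of a curious (or malicious) host, here's a minimal sketch of a SendTargets query from a Linux machine, assuming the standard open-iscsi iscsiadm tool and a made-up portal address; an unrestricted target will happily enumerate every volume it advertises:

    import subprocess

    # Hypothetical portal address; substitute the array's iSCSI data port.
    PORTAL = "192.168.10.50:3260"

    # SendTargets discovery: the target returns every IQN it advertises to
    # this initiator. On an unrestricted array, that can be every volume.
    result = subprocess.run(
        ["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL],
        capture_output=True, text=True, check=True,
    )
    for line in result.stdout.splitlines():
        print(line)  # e.g. "192.168.10.50:3260,1 iqn.2008-01.com.example:vol0"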

We were looking for products that would let us restrict volumes based on iSCSI initiator name, IP address or a username/password pair. Our testing showed that the Dell PS5000XV and Reldata Unified Storage Gateway, followed by the StoneFly Storage Concentrator, had the cleanest and most complete data security implementations, making it easy (more or less) to apply any protection needed.
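
The model we wanted is simple enough to sketch in a few lines. The following is our own illustration, not any vendor's code, with all names invented: a volume's access list should be able to require a matching initiator name, a permitted source address and a successful CHAP login, in any combination:

    import ipaddress

    # Invented ACL for one virtual disk, combining all three restriction types.
    ACL = {
        "allowed_iqns": {"iqn.1991-05.com.microsoft:sqlserver01"},
        "allowed_nets": [ipaddress.ip_network("192.168.10.0/24")],
        "chap_users": {"sqlserver01"},
    }

    def may_attach(iqn: str, src_ip: str, chap_user: str, chap_ok: bool) -> bool:
        """Permit a session only if every configured restriction passes."""
        if iqn not in ACL["allowed_iqns"]:
            return False
        if not any(ipaddress.ip_address(src_ip) in net for net in ACL["allowed_nets"]):
            return False
        return chap_user in ACL["chap_users"] and chap_ok

    print(may_attach("iqn.1991-05.com.microsoft:sqlserver01",
                     "192.168.10.21", "sqlserver01", True))   # True
    print(may_attach("iqn.2008-01.com.example:rogue",
                     "192.168.10.21", "sqlserver01", True))   # False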

The only product that failed our basic requirements for data security was the Nexsan SATABeast, because it did not support any sort of initiator/target authentication. While the other products made authentication variously hard, confusing or cumbersome to use (CHAP is the authentication protocol commonly used in iSCSI), we did manage to make them all work sooner or later.
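
For those unfamiliar with CHAP, the math itself is trivial, which makes the usability problems we hit all the more puzzling. Per RFC 1994, the response to a challenge is just an MD5 hash over the message identifier, the shared secret and the challenge; a bare-bones sketch:

    import hashlib
    import os

    def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
        """RFC 1994: response = MD5(identifier || secret || challenge)."""
        return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

    # The target issues a random challenge; the initiator answers with a hash
    # proving it knows the shared secret, which never crosses the wire.
    challenge = os.urandom(16)
    secret = b"example-chap-secret"  # configured out of band on both sides

    answer = chap_response(1, secret, challenge)           # initiator side
    assert answer == chap_response(1, secret, challenge)   # target verifies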

We also looked for encryption on the data plane, although most storage managers will likely depend on a separate data network, rather than encryption, to help assure privacy. NetApp's FAS2050 had it, and we were able to make it work. The Celeros EzSANFiler claimed to have IPSec encryption, but we couldn't make it work. Reldata's Unified Storage Gateway supports only manual key sharing rather than Internet Key Exchange, an approach that won't pass muster in the real world.

We found an interesting feature (which we didn't test) on the StoneFly Storage Concentrator: on-disk encryption. With StoneFly's implementation, encryption keying information is loaded on a USB memory card that must be present when the system is booted.
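
We didn't see StoneFly's internals, so the following is only a plausible sketch of the general technique: read a raw key from the USB device at boot and use it for sector-level AES-XTS encryption. The key path, key format and cipher choice here are all our assumptions:

    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    # Assumed location and format: a raw 64-byte AES-XTS key on the USB stick.
    # Without the stick present at boot, this read fails and the disks stay dark.
    with open("/media/usbkey/disk.key", "rb") as f:
        key = f.read(64)

    def encrypt_sector(sector_no: int, plaintext: bytes) -> bytes:
        """Encrypt one 512-byte sector, tweaked by its sector number."""
        tweak = sector_no.to_bytes(16, "little")
        enc = Cipher(algorithms.AES(key), modes.XTS(tweak)).encryptor()
        return enc.update(plaintext) + enc.finalize()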

RAID level differentiation

Another difference between the iSCSI SAN servers tested is the number and types of devices and RAID levels supported. Different RAID types usually represent tradeoffs among availability (the ability to survive a drive failure), performance (read and write speed) and capacity (the amount of space 'wasted' on redundant storage). In larger storage systems that can mix expensive high-speed/low-capacity drives with less expensive low-speed/high-capacity devices, there's also a cost tradeoff to factor in.
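
The capacity side of that tradeoff is easy to put numbers on. Here's a quick worked example, assuming a 12-drive shelf of 1TB disks:

    def usable_tb(raid: str, drives: int, size_tb: float) -> float:
        """Usable capacity for common RAID levels; redundancy eats the rest."""
        if raid == "0":
            return drives * size_tb          # striping only, no redundancy
        if raid == "1+0":
            return drives * size_tb / 2      # every block mirrored once
        if raid == "5":
            return (drives - 1) * size_tb    # one drive's worth of parity
        if raid == "6":
            return (drives - 2) * size_tb    # two drives' worth of parity
        raise ValueError(raid)

    for level in ("0", "1+0", "5", "6"):
        print(f"RAID {level}: {usable_tb(level, 12, 1.0):.0f}TB usable of 12TB raw")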

In a traditional single system RAID environment, system managers are accustomed to having a lot of choice and a lot of control: disk striping (RAID 0), disk mirroring (RAID 1), striped mirrors (RAID 1+0), and distributed parity (RAID 5) are all commonly available for any set of disks, along with other variations (RAID 4, RAID 5+0, and more). System managers pick one or the other based on their requirements for efficiency, data integrity and performance. When dealing with eight or so identical disks, and a single application or two, it's easy to make these choices.

In the iSCSI storage systems we tested, the minimum number of drives is 12, with most systems offering expansion far beyond that. (Only the D-Link DSN-3200-10, FalconStor NSS-S12 and Nexsan SATABeast did not allow expansion, although with a 42-drive capacity in the Nexsan SATABeast, it's hard to call that a lack of expansion capability.)

Choosing RAID levels in an environment with dozens of drives and applications, multiple drive speeds and capacities, as well as features such as virtual drive expansion (supported in all the devices we tested) and snapshots, may be more than even a storage genius can intelligently handle.

The most innovative approach to the plethora of RAID choices comes from Compellent, with its dynamic, tiered storage system. In a Compellent storage system, the responsibility for managing the performance/cost tradeoff falls on the Compellent controller rather than the system manager. A Compellent system (like many we tested) can combine high-speed but expensive drives with low-speed, higher-capacity, less-expensive drives, all into RAID 0, RAID 1+0, RAID 5 and a double mirror stripe, RAID 1+1+0. Rather than lock a particular virtual drive into one set of physical disks and one RAID topology, Compellent's software (if you let it) will automatically migrate heavily used data to faster storage and less-used data to slower storage, based on up to three tiers that the system manager identifies. We tested this and watched as heavily used data during performance testing made our "tier 1" disk drive lights blink, while the data we wrote once and touched only at the end of the test week got pushed to "tier 2" physical drives, all within the same virtual disk.
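
Stripped of the vendor specifics, the idea is easy to illustrate. This sketch is our own invention, with made-up thresholds and field names, not Compellent's algorithm: periodically score each block by recent I/O and move it to the tier its activity deserves:

    # Invented two-tier policy: tier 1 = fast/expensive, tier 2 = slow/cheap.
    HOT_IOS_PER_DAY = 100

    def retier(blocks: list) -> None:
        """Promote busy blocks to tier 1, demote idle blocks to tier 2."""
        for blk in blocks:
            blk["tier"] = 1 if blk["daily_ios"] >= HOT_IOS_PER_DAY else 2

    blocks = [
        {"lba": 0, "daily_ios": 5000, "tier": 2},   # hammered during tests
        {"lba": 8, "daily_ios": 2, "tier": 1},      # written once, then idle
    ]
    retier(blocks)
    print(blocks)   # the busy block moves up a tier, the idle one moves down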

No other vendor claimed to have anything like this automatic RAID migration, although we were pleased to see the Celeros EzSANFiler XD34S and HP StorageWorks 2012i systems support a mix of Serial Attached SCSI (SAS) drives (high speed, high cost, low capacity) and SATA drives (low speed, low cost, high capacity) in the same shelf, giving the system manager more concrete control over the cost/performance/capacity tradeoff even in fairly small deployments. All the other expandable systems require each storage shelf (typically 12 to 20 drives) to hold a single type of device, either SAS or SATA. It should be noted, however, that the Celeros and HP arrays delivered lower performance with both SAS and SATA than the other servers we tested, so this flexibility within the same chassis does come at a cost.

When it comes to simple enumeration of RAID levels, we resisted turning this part of our evaluation into a "more is better" list. Every device offered the choice to trade off reliability (by selecting RAID 1+0 or RAID 6) against efficiency (typically by opting for RAID 5), which seemed sufficient for most requirements.

One difference between the storage subsystems we tested and a typical RAID controller is that most of the systems we tested support RAID 6 (only the Compellent StorageCenter, Dell PS5000XV, D-Link DSN-3200-10 and Reldata Unified Storage Gateway don't). RAID 6 is not as standardized as the other RAID levels, but it refers to a parity-redundant storage technique similar to RAID 5, except that it can survive the loss of any two drives without failure.
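
The most common way to get that double-failure tolerance is a P+Q scheme: P is the familiar RAID 5 XOR parity, while Q is a second, Reed-Solomon-style parity computed in the finite field GF(2^8). A per-byte sketch of the arithmetic (our illustration, not any vendor's implementation):

    def gf_mul(a: int, b: int) -> int:
        """Multiply in GF(2^8) modulo the polynomial x^8+x^4+x^3+x^2+1."""
        p = 0
        for _ in range(8):
            if b & 1:
                p ^= a
            carry = a & 0x80
            a = (a << 1) & 0xFF
            if carry:
                a ^= 0x1D
            b >>= 1
        return p

    def pq_parity(data: list) -> tuple:
        """One byte of P (plain XOR) and Q (GF-weighted sum) per data drive."""
        p = q = 0
        for d in reversed(data):     # Horner's rule: q = sum of d_i * 2^i
            p ^= d
            q = gf_mul(q, 2) ^ d
        return p, q

    # Two independent parity bytes mean any two of the blocks (data or
    # parity) can be lost and the survivors still determine the rest.
    print(pq_parity([0x11, 0x22, 0x33]))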

Storage load balancing

The final areas we evaluated for enterprise features are load balancing and high availability, which often go hand-in-hand. Although we didn't ask for high-availability configurations, all of the systems came with multiple power supplies and five of the systems came with dual controllers integrated into their iSCSI servers, so we peeked at the high-availability capabilities anyway. The Dell PS5000XV, HP StorageWorks 2012i, Kano NetCOR 7500, NetApp FAS2050 and Nexsan SATABeast all shipped with two controllers integrated into their basic iSCSI storage systems. We tested each one and had only a single failure: the Nexsan SATABeast would not fail over properly when we were using QLogic iSCSI initiators, although it did work using the integrated Microsoft Windows 2008 iSCSI initiator. We tracked this down to an incompatibility between the MPIO feature set in the QLogic initiator and the MPIO software in Windows 2008, which highlighted a fairly unusual high-availability strategy in the Nexsan that requires full MPIO support.

The Dell, HP, Kano and NetApp servers use a more traditional system in which one controller takes over the IP address(es) of the other controller when it crashes. In our tests, these failovers all worked flawlessly.
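
On a Linux box you can approximate that takeover step with two commands: claim the peer's address, then send gratuitous ARP so switches and initiators relearn where it lives. A sketch using the standard iproute2 and arping tools (the address and interface are made up; the shipping products do all of this internally):

    import subprocess

    PEER_IP, IFACE = "192.168.10.51", "eth0"   # invented address and interface

    # Claim the failed controller's IP address on our own data port ...
    subprocess.run(["ip", "addr", "add", f"{PEER_IP}/24", "dev", IFACE], check=True)

    # ... then broadcast gratuitous ARP so the network maps the IP to our MAC.
    subprocess.run(["arping", "-A", "-c", "3", "-I", IFACE, PEER_IP], check=True)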

When investigating high-availability features, we also ran into some load-balancing issues. In the world of storage systems, "active/active" load balancing means something very different than it does in the world of networking appliances. Storage servers, at least the ones we tested, don't actually balance load across internal controllers. Instead, each controller takes primary responsibility for a set of virtual disks, and it's up to the system manager to make sure that each controller has a balanced load. In the world of networking, we'd call that "active/passive," but storage vendors prefer "active/active" to indicate that each controller is taking some load, even if they're not sharing load on a single virtual disk.
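
In other words, the balancing happens at the granularity of whole volumes. A toy sketch of what the system manager (or a helpful array) is really doing, with invented volume names and loads: hand each volume, busiest first, to whichever controller currently carries the least load:

    # Estimated load per volume (invented numbers, e.g. average IOPS).
    volumes = {"vol-db": 900, "vol-mail": 600, "vol-web": 300, "vol-log": 250}

    owners = {"ctrl-a": [], "ctrl-b": []}
    load = {"ctrl-a": 0, "ctrl-b": 0}

    # Greedy assignment: busiest volume first, to the least-loaded controller.
    for vol, iops in sorted(volumes.items(), key=lambda kv: -kv[1]):
        ctrl = min(load, key=load.get)
        owners[ctrl].append(vol)
        load[ctrl] += iops

    print(owners)   # {'ctrl-a': ['vol-db', 'vol-log'], 'ctrl-b': ['vol-mail', 'vol-web']}
    print(load)     # {'ctrl-a': 1150, 'ctrl-b': 900}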

The easiest load balancing to manage was in the Dell PS5000XV, Kano NetCOR 7500 and LeftHand Networks NSM 2120 devices. In each implementation, the iSCSI servers present themselves to the network as a single IP address, even though multiple controllers and multiple IP addresses are in place, which dramatically reduces the workload, as well as the potential for error, when connecting an iSCSI initiator to a virtual disk. To handle load balancing, the devices transparently redirect iSCSI initiators to other controllers. The other devices we tested with load-balancing capabilities require the system manager to be aware of the different IP addresses used by each controller and manually configure connections to each, an unnecessary complication that had us making phone calls to technical support to get things straightened out, especially after we had simulated device failures.
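
The redirect trick rides on a standard piece of the protocol: an iSCSI login response with status class 1 means "redirection" and carries a TargetAddress key pointing at the portal the initiator should use instead (RFC 3720). A schematic of the target-side decision, with invented controller addresses and a deliberately naive policy:

    # Portals for the individual controllers behind the single group address.
    CONTROLLERS = ["192.168.10.61:3260", "192.168.10.62:3260"]
    _next = 0

    def login_response(initiator_iqn: str) -> dict:
        """Answer a login at the group IP by bouncing it to a real controller."""
        global _next
        portal = CONTROLLERS[_next % len(CONTROLLERS)]   # naive round robin
        _next += 1
        return {
            "Status-Class": 0x01,        # RFC 3720: redirection
            "Status-Detail": 0x01,       # target moved temporarily
            "TargetAddress": portal,     # where the initiator reconnects
        }

    print(login_response("iqn.1991-05.com.microsoft:sqlserver01"))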

It's worth noting that the Celeros EzSANFiler, FalconStor NSS-S12 and D-Link DSN-3200-10 cannot have dual controllers talking to the same disk array, so if you're looking for an iSCSI server that can survive the loss of something more sophisticated than a power supply, those might not be appropriate (FalconStor has other models that support multiple controllers).

We didn't test the high-availability capabilities of the Compellent, Reldata or StoneFly solutions because they require additional external controllers. In each case, these iSCSI SAN servers consist of a controller in a separate box from the disk drives, so high availability requires adding a separate controller to the iSCSI system. We also didn't investigate the high-availability capabilities of the LeftHand Networks NSM 2120, which uses an unusual architecture of independent disk-plus-controller "storage nodes" to provide high availability. LeftHand initially sent us six storage nodes to highlight its high-availability capabilities, but we elected to evaluate the system with only three storage nodes to make it more equivalent in price and capabilities to the other devices tested. The LeftHand solution is intriguing, but gaining efficient high availability can be quite expensive unless you really need 18TB of high-speed SAS-based storage.

