Either because server disks are full or because virtualization is a natural growth path, organizations large and small are moving toward shared storage. For large enterprises, high-capacity storage-area networks make sense, but what about small or mid-sized enterprises new to shared storage?
Netgear's ReadyNAS appliances offer a simple and effective way to get started with network-attached storage (NAS). The ReadyNAS 3100 we evaluated in this Clear Choice test was a snap to set up, and it proved a capable performer in our NFS and iSCSI tests.
The ReadyNAS 3100 is delivered on a 1U SuperMicro server with four SATA disks. We tested the $4,799 8-Tbyte version, while Netgear also sells a $3,699 4-Tbyte model. While it's possible to build a lower-cost NAS device on similar hardware using Linux or FreeBSD, it wouldn't be as fast to set up or as easy to manage.
Netgear supplied the system already formatted with its proprietary X-RAID2 technology. X-RAID2 is similar to RAID5 in terms of storage capacity, with the system reporting about 5.5Tbytes of usable space from its 8Tbytes of disks. However, unlike RAID5, the Netgear method can expand volume sizes without replacing all drives at once, and without backing up data first.
It took us less than five minutes to do initial configuration on the system and start sharing Windows and NFS drives. Creating a 1-Tbyte iSCSI target for use in our VMware cluster took only another two minutes. That's about as close to plug and play as it gets with storage devices.
The appliance supports many features found in much larger storage systems. For security, there's SSL access for management traffic. For performance, its two gigabit Ethernet interfaces support jumbo frames, and they can be bonded using link aggregation. For backup, a wizard makes it simple to schedule jobs. For supporting mixed-client environments, the appliance offers lots of access methods: Windows networking; Network File System (NFS); Apple Filing Protocol (AFP); FTP; Web; SSL; rsync; and, new to this version, iSCSI.
Management features found in some larger storage products are absent. The system has just one administrator account, for example, so different tiers of administrative rights can't be defined (however, passwords can be assigned to users and groups for file shares).
Also, the appliance can be tied into an uninterruptible power supply (UPS), but only if it directly monitors the UPS using NUT, an open-source UPS monitoring tool. That was a problem on our test bed, where we use apcupsd, another open-source UPS monitor, to watch over our UPSs and every host attached to them. There is a workaround, though: Netgear's support site has add-on packages that allow SSH and root access to the device, making it possible to install apcupsd on its underlying Linux operating system.
Sharing files over NFS, a common task for NAS devices, served as the focus for much of our performance testing.
We constructed a six-step scenario in which a client mounted an NFS drive on the ReadyNAS; created a directory; wrote a 10-kbyte file in the new directory; deleted the file; deleted the directory; and then unmounted the drive (see "How We Did It").
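For readers who want to reproduce the sequence, the six steps map onto ordinary shell commands. This is our own sketch rather than the exact test script; the server name and export path are hypothetical.

```shell
# Steps 2-5 of the scenario (directory and file operations), run against
# an already-mounted NFS path.
nfs_scenario() {
    mnt=$1
    mkdir "$mnt/testdir"                          # create a directory
    dd if=/dev/zero of="$mnt/testdir/test.dat" \
       bs=1024 count=10 2>/dev/null               # write a 10-kbyte file
    rm "$mnt/testdir/test.dat"                    # delete the file
    rmdir "$mnt/testdir"                          # delete the directory
}

# Steps 1 and 6 bracket the file operations (hypothetical server and export):
#   mount -t nfs readynas:/data /mnt/nas
#   nfs_scenario /mnt/nas
#   umount /mnt/nas
```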
Even though that's only six steps for the user, a sample packet capture showed 118 unique NFS and related calls on the wire. Working with the Mu Test Suite from Mu Dynamics, we created a scenario that replayed this sequence, but substituted unique port numbers, file names and other attributes each time. That's a big improvement over simple capture/replay testing, and a much better predictor of NFS performance in production. (The capture we used can be downloaded from Mu's pcapr community site.)
The goal of the NFS tests was to plot response time against the number of concurrent sessions. Each "session" included all the steps named, and ran repeatedly for five minutes. At the end of each iteration, the Mu Test Suite reported response time statistics as well as the number of transactions and errors.
The ReadyNAS proved a capable performer for up to 128 concurrent NFS users. Average response times remained very low – less than 20 milliseconds for up to 16 concurrent sessions and less than 200 milliseconds for up to 128 sessions. Maximum response times also scaled gracefully, with worst-case times of less than 200 milliseconds for 32 sessions or fewer, and less than 500 milliseconds for 128 concurrent sessions.
Errors began to occur in tests with 256 and 512 concurrent sessions, meaning one or more NFS sequences were unable to complete successfully. Still, 128 concurrent sessions is a relatively large number, especially for small- and mid-sized organizations whose NFS traffic isn't likely to be anywhere near as stressful. Even in an organization with thousands rather than hundreds of users, it's unlikely all users would concurrently exercise NFS as rigorously as the Mu Test Suite did here.
iSCSI support is a major new feature in this release, making ReadyNAS a candidate for storing virtual machines created with VMware and other virtualization products. This ReadyNAS device carries VMware certification, and we verified it works well when using VMware's vMotion to move virtual machines between hosts.
A central question when moving to any sort of shared storage is what performance penalty, if any, is involved. To answer that question, we compared disk I/O performance for virtual machines using local and iSCSI storage. We compared performance using Windows Server 2008 R2 and CentOS 5.5 virtual machines, measuring I/O performance first on a local datastore, and then using the ReadyNAS datastore over iSCSI. We used the open-source IOzone tool to measure I/O performance.
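An IOzone run covering the file sizes used in this test can be launched with the tool's standard options; the invocation below is our sketch, not the exact command line used, and it runs only if iozone is installed.

```shell
# -a       automatic mode: sweep record and file sizes
# -n/-g    minimum (64 Kbytes) and maximum (4 Gbytes) file sizes
# -i 0/-i 1  run the write/rewrite and read/reread tests
IOZONE_FLAGS="-a -n 64k -g 4g -i 0 -i 1"

if command -v iozone >/dev/null 2>&1; then
    iozone $IOZONE_FLAGS
else
    echo "iozone not installed; would run: iozone $IOZONE_FLAGS"
fi
```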
Perhaps the most striking thing about the test results is that local and iSCSI rates are virtually identical, both for Windows and Linux. Truly, there's no performance penalty for using the ReadyNAS as an iSCSI datastore. In fact, with the benefits iSCSI delivers, such as vMotion support, there's no reason not to move to shared storage.
As expected in filesystem testing, reread and rewrite operations are faster than initial reads and writes. That's because filesystems cache information for reuse.
What's more surprising is that I/O rates are relatively low – at best, around 2.5Gbytes/sec for a local Linux server doing rereads. Since local and iSCSI rates are quite similar, the ReadyNAS clearly isn't the bottleneck.
A more likely culprit is the amount of data each virtual machine can read at a time. The IOzone results presented here are the averages of all results for file sizes ranging from 64 kbytes to 4 Gbytes, and rates fall off sharply for file sizes of 8 Mbytes and larger. (In fact, when we looked only at file sizes of 4 Mbytes and smaller, overall average rates nearly doubled, to a maximum of nearly 5 Gbytes/sec.) This shouldn't be taken as a knock on the ReadyNAS, but rather an indication that virtual machine setup parameters can have an effect on I/O performance.
With its combination of simple setup, easy management and good performance, we found the ReadyNAS 3100 to be a competent and capable NAS device. For small and mid-sized organizations looking to get into shared storage, this is a great place to start.
Newman is a member of the Network World Lab Alliance and president of Network Test, an independent test lab and engineering services consultancy. He can be reached at firstname.lastname@example.org.
Network World gratefully acknowledges the support of test bed infrastructure vendors who made this project possible. Mu Dynamics supplied its Mu Test Suite and engineering support to assess NFS scalability. VMware supplied its vSphere 4.0 virtualization platform, including ESX 4.0 host software.