It takes something different to stand out in the crowded network-attached storage market. How does free, as in free beer and free speech, sound?
That's the premise behind FreeNAS, the open-source storage software that supports every major file-sharing protocol out there. FreeNAS can look like a Windows server or an iSCSI target, among other server types. It's managed by a Web interface that's more intuitive than some commercial storage appliances we've used. And FreeNAS offers the innovative ZFS file system, with built-in integrity checks, flexible and virtually unlimited scalability, and good performance.
In this Clear Choice test, we evaluated FreeNAS on an iX-2212 server supplied by server vendor iXsystems, a major supporter of the FreeNAS project. While iXsystems sells commercially supported TrueNAS systems built on FreeNAS, the company made clear that the software package is free, and can be installed on any PC hardware, 32- or 64-bit.
Installation is fast and straightforward. Once the system is set up, it's managed by either a well-designed Web interface or the command-line interface (CLI). Even allowing for our strong CLI bias, we could achieve virtually every task from the Web UI as well, right down to setting low-level parameters in the FreeBSD operating system on which FreeNAS is based. (For those new to FreeBSD, the default parameters worked fine in our testing; there's no need to change OS parameters, or know anything about FreeBSD, for that matter.)
FreeNAS supports multiple file-sharing protocols, including CIFS, NFS, and iSCSI, making it suitable as a file-sharing device for Windows, Mac, and Unix/Linux clients. And iSCSI support makes FreeNAS a good choice for shared storage of virtual machines. FreeNAS also can act as an FTP and TFTP server, and it supports rsync for backup to and from the appliance. And it can be configured as a backup server for Windows Shadow Copy and Apple Time Machine.
Thanks to its ZFS support, FreeNAS performs "snapshots" of its file systems for local and remote backups, similar to Windows Restore Points. FreeNAS can send snapshots incrementally, reducing backup sizes. Even if all the redundancy features in a FreeNAS system were to fail, the data would still be recoverable by restoring a backed-up snapshot to a new system.
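On the command line, that snapshot-and-replicate workflow looks like the sketch below. The pool, dataset, and host names (tank/data, backup/data, backuphost) are hypothetical examples, not part of the reviewed configuration:

```shell
# Take a point-in-time snapshot of a dataset
zfs snapshot tank/data@monday

# A day later, take another
zfs snapshot tank/data@tuesday

# Send only the blocks that changed between the two snapshots
# to a second machine, which replays them into its own pool
zfs send -i tank/data@monday tank/data@tuesday | \
    ssh backuphost zfs receive backup/data
```

Because the incremental stream contains only changed blocks, regular replication stays small even for large datasets.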
A FreeNAS appliance can act as an iTunes streaming media server, a Universal Plug and Play (UPnP) server, or a Web server, all using available plug-ins. The plug-ins run in FreeBSD's virtual "jails," which means a problem with one plug-in won't affect anything in the rest of the system.
Perhaps the best single feature in FreeNAS is its optional use of the zettabyte file system (ZFS), first developed by Sun and now actively maintained as a FreeBSD project.
A single ZFS file system can hold a 16-exabyte file (about 18 million terabytes), with up to 2^48 entries - roughly 281 trillion files - in a directory. Even in a Big Data world, capacity isn't going to be a problem with ZFS.
ZFS is a speedy performer, as we'll show with test results, but it's also extremely flexible and easy to manage. It supports up to 18.4 quintillion snapshots for a virtually unlimited amount of rolling backward and forward.
Data integrity is a ZFS hallmark. Instead of relying on the underlying hardware to detect errors, every block in a ZFS system uses a 256-bit checksum to validate data. In a redundant system using mirroring or RAID, ZFS automatically reconstructs any corrupted blocks without user intervention. Because ZFS continually validates data integrity on disk, a FreeNAS appliance can survive loss of power without the need to run the Unix fsck command on each volume afterward.
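Administrators can also trigger this checking on demand with a "scrub," which reads every block in a pool, verifies it against its checksum, and repairs corrupt blocks from redundant copies where possible. A minimal sketch, with tank as an example pool name:

```shell
# Verify every block in the pool against its 256-bit checksum;
# in a mirrored or raidz pool, bad blocks are rebuilt automatically
zpool scrub tank

# Report scrub progress and any checksum errors found or repaired
zpool status tank
```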
And ZFS is really a RAID controller, volume manager, and file system rolled into one. There's no need for separate management tools for each, as in many other enterprise storage products.
On the RAID front, FreeNAS offers lots of choices for how volumes are assembled. In addition to conventional RAID levels (RAID 0, 1, 5, 6, 10, 50, and 60), ZFS has two parity schemes of its own, called raidz1 and raidz2. The raidz1 option is similar to RAID5: it uses single parity and can tolerate the loss of one disk, but its copy-on-write design avoids RAID5's "write hole" problem, in which a power failure mid-write can leave parity inconsistent with data. The raidz2 option is similar to RAID6, using double parity so the pool survives the loss of any two disks.
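Creating either layout is a one-line operation. A sketch, assuming disks named da0 through da5 (device names will vary):

```shell
# Single-parity raidz1 across four disks: survives one disk failure
zpool create tank raidz1 da0 da1 da2 da3

# Or double-parity raidz2 across six: survives any two disk failures
# (these are alternatives; a pool name can only be used once)
zpool create tank raidz2 da0 da1 da2 da3 da4 da5
```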
Unlike conventional volume managers and file systems, ZFS doesn't use fixed-size partitions or volumes. If current volumes don't offer enough capacity, ZFS makes it easy to add more - to a live production system, with zero downtime. During testing, we expanded a ZFS storage pool using one command, with no need to take devices or file systems offline. This expandability even extends to adding different-size disks into a storage pool (though the usual size rules with RAID still apply).
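The expansion we performed was a single command along these lines (pool and device names are illustrative):

```shell
# Add a new raidz1 group of four disks to a live pool;
# capacity grows immediately, with no downtime or offline step
zpool add tank raidz1 da4 da5 da6 da7

# Confirm the new, larger pool size
zpool list tank
```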
ZFS also offers optional compression of selected storage pools. This can actually improve performance, since compressing data often takes less time than reading and writing the extra uncompressed bytes to disk. Compression is a natural fit for storage pools with lots of text files, such as logs.
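Compression is set as a per-dataset property. A sketch, with tank/logs as a hypothetical dataset:

```shell
# Create a dataset for logs and turn on transparent compression
zfs create tank/logs
zfs set compression=on tank/logs

# Later, check how well the data is compressing
zfs get compressratio tank/logs
```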
The drawbacks with using ZFS are minor, and might not even be considered drawbacks in many cases. First, because ZFS owes much of its performance to caching, it's best installed on servers with lots of RAM. While 6GB will suffice in theory, in practice ZFS systems should have more - lots more. The system iXsystems supplied had 48GB of RAM, though we've also run FreeNAS on systems with 16GB and 24GB of RAM with good results. In general, though, the more ZFS can cache, the faster its I/O performance.
If available RAM is really an issue, FreeNAS also can be installed using FreeBSD's regular UFS file system with as little as 2GB of RAM.
Second, due to licensing issues ZFS runs mainly on BSD systems, though there is a Linux port available (of ZFS only, not the entire FreeNAS system). The choice of operating system is a nonissue with FreeNAS, which is a turnkey distribution built on FreeBSD. Even users unfamiliar with FreeBSD should be fine with FreeNAS, since it's managed through an intuitive and powerful Web interface.
Licensing is really only an issue for developers. ZFS's Common Development and Distribution License (CDDL) is a file-based copyleft: changes to CDDL-licensed files must remain open source, but those files can be combined with closed-source code. The GNU GPL 2.0 and 3.0 licenses common in the Linux world go further, requiring derivative works as a whole to be distributed under the same open-source terms - the incompatibility that keeps ZFS out of the mainline Linux kernel.
Storage performance benchmarking is a complex topic, with many variables involved. To determine how FreeNAS would handle the most common types of operations, we set up a 10-gigabit test bed and used the open-source iozone benchmarking tool.
The key variables in I/O performance involve the kinds of operations a storage device will handle. Devices may move data in small or large blocks - think of a database handling small transactions, vs. a file manager moving large virtual machine images around. The type of operation also is important; writing to a disk tends to take longer than reading from it. Due to caching, an initial read or write operation probably will take longer than a re-read or re-write. And operations that use sequential blocks on a disk will outperform random reads and writes, since in the latter case, the disk head moves around a lot.
We configured the iozone tool to measure I/O performance for six test cases, each with the FreeNAS appliance acting as a Network File System (NFS) server for two NFS clients, also equipped with 10-gigabit Ethernet adapters. We ran all six sets of tests twice, using small and large record sizes.
One thing we did not do was allow FreeNAS to use all 48GB of RAM in the server supplied by iXsystems. Like any modern operating system, FreeBSD puts as much data as possible into RAM before having to swap out to disk. Serving data from RAM means much higher performance for relatively small reads and writes, but it's not representative of the performance users would see in production. This is especially true when many users are involved; then, reading and writing from disk becomes inevitable.
To ensure a balance of disk I/O and caching performance, we configured the FreeNAS server to use only 6GB of RAM, the minimum supported with ZFS, and then we read or wrote 64GB in each test - well in excess of the available RAM. We also configured both NFS client machines to use 6GB of RAM, even though both had 16GB available.
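On FreeBSD, one way to impose such a memory cap is a boot-time tunable; we show it here to illustrate the approach, not as a record of our exact configuration:

```shell
# /boot/loader.conf
# Limit the physical memory FreeBSD will use to 6GB
hw.physmem="6G"
```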
FreeNAS performance is fast, especially with sequential reads and re-reads (see the figure, below). Storage performance tests usually measure I/O in bytes per second; when expressed in bits, FreeNAS read and re-read data at rates at or above 6Gbps.
That 6Gbps top speed also includes several other factors: The 6Gbps speed limit of SATA3 disks; the overhead added by the NFS protocol; the contention among multiple TCP flows (there were 16 threads active during these tests); and the amount of disk I/O relative to data read from RAM. The top speeds achieved here are about as fast as the hardware could possibly go under these test conditions.
Write and rewrite performance was slower than reads, as is usual in I/O benchmarking. With sequential rewrites, FreeNAS moved traffic at around 280MBps. Curiously, sequential rewrites went about twice as fast with 4-kbyte records as with 64-kbyte ones. The most likely explanation is that disk-write time dominated these tests, and that favored the smaller record size.
Sequential write and read tests are meaningful when writing or reading large amounts of data on a relatively empty disk. Once the disk fills up, or if the application involves reading from different parts of a database, then random read and write tests become more important.
Results are much slower for random read and write tests. That's not surprising considering that disk heads move around a lot more in a random test than they would with sequential operations. Here, the larger 64-kbyte records help, since there's more time spent writing or reading relative to disk seek time. Still, both 4- and 64-kbyte read and write times are just a fraction of the sequential times.
In the worst case, writes of 4-kbyte records ran at just 3MBps, compared with 276MBps for sequential writes. In fairness, though, any storage system would do far worse in random tests than in sequential ones. These results aren't a reflection on FreeNAS or ZFS.
Overall, FreeNAS offers a very positive story, with flexibility, ease of management, good performance - and a price that can't be beat.
Thanks to Arista Networks for supplying a 7124S 10G top-of-rack switch that tied together all systems on the test bed.
Newman is a member of the Network World Lab Alliance and president of Network Test, an independent test lab and engineering services consultancy. He can be reached at firstname.lastname@example.org.
How We Did It
We assessed FreeNAS in terms of usability, features, and NFS I/O performance. In performance tests, the device under test was an iX-2212 server supplied by iXsystems; as part of usability testing, we also installed the FreeNAS software on an older SuperMicro server and as a virtual machine running under VMware vSphere 5. We used FreeNAS version 8.3.0-RELEASE-x64 (r12701M) in testing.
Usability and features testing consisted of setting up the device to function as an NFS server, and then again as an iSCSI target. In the iSCSI case, we created FreeBSD virtual machines using VMware vSphere 5 on VMware ESX 5.0 hosts, and used a FreeNAS iSCSI volume as the datastore.
We also assessed FreeNAS's support for other common management tasks, such as configuration of administrator rights, software upgrades, and setup of link aggregation groups using two 10G Ethernet interfaces.
For NFS I/O performance testing, we used iozone, an open-source file system benchmarking tool. The goal of these I/O tests was to compare client performance under six common scenarios: initial sequential writes; sequential rewrites; initial sequential reads; sequential rereads; and random reads and writes. Each of two NFS client machines ran FreeBSD 8.3 and ran iozone with file sizes of 32GB and eight threads apiece, for a total of 64GB and 16 threads per test. We repeated the iozone tests with 4- and 64-kbyte record sizes.
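A reconstruction of that iozone invocation might look like the following; the exact flags we used aren't listed above, so treat this as an illustrative sketch. Note that in iozone's throughput mode, -s gives the per-thread file size, so eight threads at 4GB each yield 32GB per client:

```shell
# -t 8      eight worker threads on this client
# -s 4g     4GB file per thread (8 x 4GB = 32GB per client)
# -r 4k     4-kbyte records (repeated with -r 64k)
# -i 0/1/2  sequential write+rewrite, read+reread, random read/write
# -e -c     include fsync() and close() times in the results
iozone -t 8 -s 4g -r 4k -i 0 -i 1 -i 2 -e -c \
    -F /mnt/nfs/f1 /mnt/nfs/f2 /mnt/nfs/f3 /mnt/nfs/f4 \
       /mnt/nfs/f5 /mnt/nfs/f6 /mnt/nfs/f7 /mnt/nfs/f8
```

The -F option names one scratch file per thread on the NFS mount under test.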
To get a sense of combined disk I/O and caching performance, we deliberately constrained both the FreeNAS server and the client machines to use 6GB of RAM, much less than the hardware RAM installed in server or clients. This forced a larger number of disk I/O operations in testing, as might be the case with larger numbers of users and/or files in enterprise settings.