Developments of the week in storage
We have been reporting on high performance file systems for the past few weeks, which quite naturally led us to look at Linux cluster file systems. Over the last three weeks, we have gone over high performance Linux solutions from Cluster File Systems, HP and IBM. I expect to look at several other high performance file systems in the near future, but for now let's take a break from that and look at what is available for Linux users who don't want to deal with third-party providers.
I wrote a rationale for going with "proprietary" implementations of the Linux file system in my previous newsletter. But what is available for IT managers who have to support high performance database, file and compute serving, or who simply want to use management and hardware more efficiently by eliminating the duplicate copies of data that accumulate when the same file is written to multiple file paths?
And what about those who want to try clustered Linux "on the cheap"? For that, we have to look at the components of the standard Linux distribution, and the answer to what is available for free is... nothing.
However, if you are willing to spend $2,200 in addition to the cost of your enterprise Linux license, Red Hat has something worth looking at.
Red Hat's Global File System (GFS) is an open source, POSIX-compliant cluster file system and volume manager that sits on top of any of Red Hat's enterprise Linux products. The company claims that GFS will allow a cluster of several hundred servers to share a common file system, providing higher throughput and availability than is available with non-clustered configurations. GFS has been tested with a large number of storage-area network (SAN) products, and works equally well with both Fibre Channel and iSCSI devices.
In addition to increased throughput, another key value of clustering your hardware is that it helps eliminate single points of failure. GFS provides for this, supplying enough redundancy among storage components that operations can continue even if individual nodes go down. Its dynamic multipathing capability routes data around failed components on the SAN.
GFS offers storage capacity management via cluster-wide quotas. File migration, file system reconfiguration and volume resizing can be done on the fly.
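To give a feel for what that day-to-day management looks like, here is a sketch of creating, mounting, quota-limiting and growing a GFS volume. The command names (gfs_mkfs, gfs_quota, gfs_grow) and their flags follow Red Hat's GFS documentation of that era, but exact syntax varies by release, and the cluster name, user name and device paths below are invented for illustration, so treat this as an outline rather than a recipe:

```shell
# Create a GFS file system on a clustered logical volume:
# -p selects the lock manager, -t is cluster:fsname, -j is the journal
# count (one per node that will mount it).
gfs_mkfs -p lock_dlm -t mycluster:mydata -j 4 /dev/vg01/lvol0

# Mount it like any other file system; every node in the cluster can
# mount the same device concurrently.
mount -t gfs /dev/vg01/lvol0 /mnt/gfs

# Set a cluster-wide quota for a user (limit expressed in megabytes).
gfs_quota limit -u jsmith -l 500 -f /mnt/gfs

# Grow the file system on the fly after extending the underlying
# volume -- no unmount required.
lvextend -L +100G /dev/vg01/lvol0
gfs_grow /mnt/gfs
```

The point to notice is that the quota and grow operations run against a mounted, in-use file system, which is what "on the fly" means in practice.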
Red Hat's clustered file capability seems much more limited in capacity than the products we looked at earlier in this series (see the archives at http://www.nwfusion.com/newsletters/stor/index.html for the full list), which scale into the petabyte range and beyond. But if you are looking to test the waters with clustered file systems, and if GFS' apparent 2TB limit is not a problem, it is probably worth checking out.
Read more about data centers in Network World's Data Center section.