HP develops high-performance file system for Linux clusters

Opinion
Aug 10, 2004
Data Center, Servers

A look at HP's StorageWorks Scalable File Share

In the past two weeks, we have looked at Lustre, the high-performance file system for Linux clusters from Cluster File Systems, plus the high-performance SAN Volume Manager and SAN File System from IBM (see editorial links below).  Today, we look at HP’s StorageWorks Scalable File Share (HP SFS), and how it is likely to be delivered.

HP SFS is HP’s implementation of Lustre and is on track for delivery in the third quarter of this year. Like all Lustre implementations, it is fully POSIX-compliant and is designed to serve Linux clusters running applications that require fast data access across a large number of nodes. It should be viewed as a strategically important part of HP’s overall vision for a “storage grid,” along with HP’s StorageWorks Reference Information Storage System (RISS), announced a few months ago.
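To make “fully POSIX-compliant” concrete: an application written against ordinary Unix file-system calls should run unmodified against an HP SFS mount, with no cluster-specific client API. A minimal sketch in Python, assuming a hypothetical mount point of /mnt/sfs (the real path is whatever the cluster administrator chooses):

    import os

    # Hypothetical mount point for the shared HP SFS/Lustre filesystem.
    MOUNT_POINT = "/mnt/sfs"

    def write_and_read(path, payload):
        """Plain POSIX open/write/read calls -- no special API needed,
        which is exactly what POSIX compliance buys you here."""
        fd = os.open(path, os.O_CREAT | os.O_WRONLY | os.O_TRUNC, 0o644)
        try:
            os.write(fd, payload)
            os.fsync(fd)  # flush through to the storage servers
        finally:
            os.close(fd)
        fd = os.open(path, os.O_RDONLY)
        try:
            return os.read(fd, len(payload))
        finally:
            os.close(fd)

    if __name__ == "__main__":
        data = write_and_read(os.path.join(MOUNT_POINT, "probe.dat"), b"hello, cluster")
        assert data == b"hello, cluster"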

The target market for Lustre (and hence for HP SFS) is clustering environments with I/O bandwidth and/or total storage requirements larger than an NFS server can easily supply. Generally speaking, that rules out smaller clusters, as well as supercomputing environments in which high-speed data access is not an issue.

The real targets will be clusters that are starting to be thought of as grids: environments with more than 100 terabytes of storage in a single filesystem, where the demand for I/O bandwidth runs to tens of gigabytes per second. Requirements like these can be expected in some areas of the biosciences, in digital content creation, in oil and gas exploration, and at national laboratories, for example. It is also a fair bet that, because the need for computing expands like gas to fill all available space, these filesystems will eventually find plenty of uses in commercial environments as well.
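A quick back-of-the-envelope calculation shows why a single NFS server falls short at this scale. Every figure below is an illustrative assumption, not a published HP or Lustre number:

    import math

    # Illustrative assumptions only -- not HP SFS specifications.
    required_gbps = 10.0   # aggregate demand: tens of gigabytes per second
    nfs_server_gbps = 0.1  # one 2004-era NFS server, roughly 100M byte/sec
    per_server_gbps = 0.4  # one storage server in the grid, assumed

    servers = math.ceil(required_gbps / per_server_gbps)
    print(f"One NFS server covers {nfs_server_gbps / required_gbps:.0%} of demand.")
    print(f"Striping files across ~{servers} storage servers reaches {required_gbps} GB/s.")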

Because of the high-performance computing environments in which HP SFS and other Lustre implementations will be used, I can safely say that these filesystems are not intended for the computationally faint of heart. But just because a filesystem supports the most stringent computing demands does not mean that storage management has to be another challenge.

Fortunately, HP SFS is designed both to scale up, with its single sharable filesystem, and to scale out in terms of both bandwidth and capacity. 

The entire filesystem, virtualized across a grid of (theoretically) almost any size, can be managed from a single management console.  As the system grows, the new capacity is incorporated within the existing filesystem.

HP scales out capacity and bandwidth with a grid built in increments it calls “smart cells.” Smart cells are clusters of standardized hardware – a mix of HP’s ProLiant servers and StorageWorks arrays – that are likely to use a mix of serial ATA (SATA), Fibre Channel, SCSI, and HP’s experimental Fibre ATA drives in a RAID 5 configuration.
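Conceptually, the scaling model is simple: every smart cell added to the grid contributes both capacity and bandwidth to the one shared filesystem. A toy sketch of that aggregation, with per-cell figures invented purely for illustration:

    from dataclasses import dataclass

    @dataclass
    class SmartCell:
        # Per-cell figures are invented for illustration; real cells mix
        # ProLiant servers and StorageWorks arrays in varying sizes.
        capacity_tb: float = 8.0
        bandwidth_gbps: float = 0.4

    def grid_totals(cells):
        """Capacity and bandwidth both grow with each added cell, while
        the namespace remains a single shared filesystem."""
        return (sum(c.capacity_tb for c in cells),
                sum(c.bandwidth_gbps for c in cells))

    grid = [SmartCell() for _ in range(16)]
    capacity, bandwidth = grid_totals(grid)
    print(f"{len(grid)} cells -> {capacity} TB and {bandwidth} GB/s aggregate")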

Of course, Linux source code is open and is worked on by contributors all over the world. Even though a couple of companies (Red Hat and SuSE) handle the lion’s share of distribution, and several vendors (IBM and Sun, for example) ship Linux on their machines, the intent of the Linux community is to keep the operating system open, non-proprietary and available to all.

Why then, we should ask, would anyone want to buy Linux-based storage from a single vendor and risk the possibility of a proprietary implementation? We will save that discussion for next time.