Network World - Solid-state drives offer substantial benefits over traditional hard drives – they are faster, more reliable, use less energy and are quieter.
On the negative side, they have lifespans that are limited to an average number of writes per cell, and they can cost up to 70 times as much per gigabyte as standard hard drives.
So, where do SSDs fit in an enterprise network? In servers? In storage systems? Somewhere else?
To address those questions, we reviewed a variety of SSD-based products from seven vendors. Three of the products were Peripheral Component Interconnect Express (PCIe) boards – an Adaptec MaxIQ 5805/512 controller, two Apricorn PCIe Drive Arrays, and a FusionIO ioDrive.
In addition, we tested two SAN systems, a Compellent Storage Center 030, and a Dot Hill AssuredSAN 3730. Plus, we tested an HP BladeSystem c-class chassis with two server blades, each equipped with a 160GB StorageWorks IO accelerator module. And we looked at a Ritek 128GB SSD.
First, some definitions: There are two types of SSDs – single-level cell (SLC) and multi-level cell (MLC). SLC drives are faster, have longer life spans (about 100,000 writes per cell) and cost more.
MLC drives are less expensive, but have typical life spans of only about 10,000 writes per cell, making them generally inappropriate for write-intensive enterprise applications.
MLC drives can have a place in the enterprise for read-intensive applications such as serving videos or database lookups. They can speed throughput and access times at a lower cost than SLC drives.
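The write-cycle figures above translate directly into drive lifetime. A rough sketch of that arithmetic, using the article's 100,000 (SLC) and 10,000 (MLC) cycle counts – the capacity, daily write volume and write-amplification factor here are hypothetical assumptions, not measured values:

```python
# Back-of-the-envelope SSD endurance estimate, assuming ideal wear
# leveling (writes spread evenly across every cell). Cycle counts are
# from the article; the other inputs are illustrative assumptions.

def endurance_days(capacity_gb, cycles_per_cell, daily_writes_gb,
                   write_amplification=2.0):
    """Approximate days until the drive's cells reach their rated
    write-cycle limit. Write amplification accounts for the controller
    writing more flash than the host requests."""
    total_writable_gb = capacity_gb * cycles_per_cell / write_amplification
    return total_writable_gb / daily_writes_gb

# A hypothetical 128GB drive absorbing 500GB of writes per day:
slc = endurance_days(128, 100_000, 500)   # SLC: 12,800 days
mlc = endurance_days(128, 10_000, 500)    # MLC: 1,280 days
print(f"SLC: {slc:,.0f} days, MLC: {mlc:,.0f} days")
```

Under this workload the tenfold difference in cycle counts becomes a tenfold difference in lifetime, which is why the SLC/MLC split maps so cleanly onto write-intensive versus read-intensive roles.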
SSDs are being used to replace standard hard drives in servers, but this is not typically the most effective way to use the drives. SLC-based SSDs are so much faster than standard hard drives that even a handful of them can saturate a standard storage controller.
Also, because SSDs are typically both more reliable and more expensive than regular drives, dedicating them to redundancy in a RAID configuration may not be the best use of the drives.
These issues are leading to new and different applications for SSDs. Some manufacturers are shipping PCI-X or PCIe boards
that can either have SSDs (or discrete flash memory) directly mounted on them or attached via standard SAS or SATA cables.
Other vendors have created appliances that are placed between servers and storage, operating as cache to speed up access to the storage without having to add SSDs to specific storage arrays.
And some vendors have added SSDs to their existing SAN storage systems, either as cache or as another storage tier (often called tier 0).
This test covers all the categories of storage using SSDs except the appliances that sit between servers and storage. A number of vendors in that category were invited, including Atrato, Dataram, IBM, Schooner Information Technology, Solid Access Technologies, Storspeed, Teradata and Violin Memory, but none were able to get product to us in time for the review.
Our test bed included an HP ML370G5 server running Windows Server 2003 with external storage connected via Fibre Channel through a 2Gbps HP FC switch. Storage performance was tested with IOmeter running a mix of tests intended to show overall improvements in throughput, IOps and latency.
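The three metrics IOmeter reports – throughput, IOps and latency – are tied together by the I/O size and the number of outstanding requests. A minimal sketch of those relationships (the sample figures are hypothetical, not results from this test):

```python
# How throughput, IOps and latency relate in an IOmeter-style workload.
# The relationships are standard; the sample numbers are illustrative.

def iops(throughput_mb_s, io_size_kb):
    """IOps implied by a given throughput at a fixed I/O size."""
    return throughput_mb_s * 1024 / io_size_kb

def avg_latency_ms(queue_depth, iops_value):
    """Average latency via Little's law:
    outstanding I/Os = IOps x latency."""
    return queue_depth / iops_value * 1000

# Hypothetical example: 100MB/sec of 4KB random I/O at queue depth 32.
ops = iops(100, 4)                 # 25,600 IOps
lat = avg_latency_ms(32, ops)      # 1.25ms average latency
print(f"{ops:,.0f} IOps, {lat:.2f}ms latency")
```

This is why small-block tests emphasize IOps and latency while large-block tests emphasize raw throughput: at a fixed queue depth, a drive can only improve one metric by improving the others.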