Network World - This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter's approach.
Enhancements to server and storage technology created an I/O performance gap in the enterprise storage network, a gap that has since been addressed by SSD-based caches and PCIe-based flash cards. There are, however, several ways to deploy SSDs. This article compares the main SSD caching approaches and suggests a new one that combines their relative advantages while overcoming their individual drawbacks.
The purpose of SSD-based caching is to close the I/O performance gap by reducing I/O latency and increasing IOPS. Any enterprise caching solution under consideration should be easy to deploy, transparent to the OS and applications, and able to cache for individual servers as well as multi-server clusters, including highly virtualized environments and clustered applications. It should also preserve existing SAN data protection and compliance policies, and deliver benefits across the widest possible range of enterprise applications.
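At its core, any of these products implements the same idea: keep recently used blocks on fast media and fall through to the primary array only on a miss. The sketch below is a minimal, hypothetical illustration of that read-caching behavior using an LRU policy; it is not taken from any vendor's implementation.

```python
from collections import OrderedDict

class ReadCache:
    """Minimal LRU read cache, sketching what an SSD caching layer does:
    hot blocks are served from fast media; misses fall through to the array."""

    def __init__(self, capacity):
        self.capacity = capacity      # number of blocks the "SSD" can hold
        self.blocks = OrderedDict()   # block_id -> data, kept in LRU order
        self.hits = 0
        self.misses = 0

    def read(self, block_id, backend):
        if block_id in self.blocks:
            self.hits += 1
            self.blocks.move_to_end(block_id)   # mark as most recently used
            return self.blocks[block_id]
        self.misses += 1
        data = backend(block_id)                # slow path: primary array
        self.blocks[block_id] = data
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)     # evict least recently used
        return data

# Simulated backend standing in for a read from the primary storage array.
backend = lambda n: f"block-{n}"

cache = ReadCache(capacity=2)
cache.read(1, backend)   # miss: fetched from the array, now cached
cache.read(1, backend)   # hit: served from the cache
cache.read(2, backend)   # miss
cache.read(3, backend)   # miss: cache is full, block 1 is evicted
```

The placement question the rest of this article addresses is simply where this cache lives: inside the array, in an appliance on the SAN, or in the server itself.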
Today, there are three main approaches to SSD caching in networks: array-based caching, caching appliances and server-based caching.
* Storage array-based SSD caching. Initial deployments of SSD caches involved installing SSDs/PCI-flash cards, along with the required software and firmware functionality, within shared storage arrays. Due to the plug-compatibility of early SSDs, these initial implementations did not require extensive modifications to existing array hardware or software and, in many cases, were available as upgrades to existing equipment.
Applying SSD caching inside storage arrays offers several advantages that closely parallel the fundamental advantages of centralized network-attached storage arrays: efficient sharing of a valuable resource, preservation of existing data protection regimes, and a single point of change, since network topologies and related procedures need not be altered.
However, adding SSD caching to storage arrays requires upgrading and, in some cases, replacing existing arrays, with the data migration effort and risk that entails. Even if all of the disk drives are upgraded to SSDs, the expected performance benefit is not fully realized, because contention at over-subscribed network and array ports introduces congestion latency. As a result, the performance benefits of SSD caching in storage arrays may be short-lived, and caching may not scale smoothly: the initial per-server improvements are likely to erode as workloads grow and as more physical and virtual servers attach to the same arrays and storage networks. [Also see: "Can flash live up to the hype?"]
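The oversubscription problem is easy to see with a back-of-the-envelope model. The numbers below are hypothetical, but they show why faster media behind a shared port does not translate into faster per-server I/O once attach rates climb: the port, not the SSD, becomes the ceiling.

```python
# Toy oversubscription model with assumed numbers: however fast the SSDs
# inside the array are, servers sharing one array port split its bandwidth.
PORT_GBPS = 16.0   # hypothetical array port bandwidth (e.g., 16Gb Fibre Channel)

def per_server_share(servers_per_port):
    """Effective bandwidth each server sees on an evenly shared, saturated port."""
    return PORT_GBPS / servers_per_port

# With one server per port, the full port bandwidth is available;
# at an 8:1 attach rate, each server's share drops to one-eighth of it,
# regardless of how much faster SSDs made the array internally.
for n in (1, 4, 8):
    print(f"{n} servers/port -> {per_server_share(n):.1f} Gb/s each")
```

This is the sense in which array-based caching gains are "short-lived": the cache speeds up the array's internals, while growing attach rates steadily shrink each server's slice of the shared ports in front of it.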
* Caching appliances. Caching appliances are network-attached devices inserted into the data path between servers and the primary storage arrays connected to a SAN switch. Like array-based caches, appliances share a relatively expensive and limited resource, but they do not require upgrades to existing arrays. Because these devices are independent of the primary storage arrays, they can be distributed to multiple locations within a storage network to optimize performance for specific servers or classes of servers.