When three major vendors all make similar product announcements, you know things are cooking in that space. In this case, Hitachi Vantara, Hewlett Packard Enterprise (HPE), and IBM all made news around SSD-based storage, much of it related to deduplication and other ways to get control over data creep.

With users generating gigabytes of data every week, the solution for many enterprises has been to throw storage at the problem. That can get expensive, especially with SSD: SSD storage averages about 40 cents per gigabyte, while HDD storage averages about 5 cents per gigabyte.

To get control over data sprawl, storage vendors are offering deduplication, or in the case of Hitachi Vantara, better deduplication in their new systems. We'll run down the news alphabetically.

Hitachi updates its Virtual Storage Platform

Hitachi Vantara unifies three Hitachi companies under one umbrella. It's not exactly up there with Dell EMC in terms of sales, but it nonetheless has competitive products and continues to plug away at the U.S. market.

Hitachi has updated its Virtual Storage Platform (VSP) all-flash and hybrid storage arrays, as well as its SVOS operating system. The arrays fall into two product lines: the all-flash F-series and the hybrid flash/hard disk G-series. The F-series got a significant capacity upgrade, from 3.84TB in the old version to 15TB now.

Both systems received significant performance upgrades, with Hitachi boasting up to 70 percent more IOPS per core, three times the IOPS, and 2.5 times the scalability of older VSP systems. Hitachi also says the new systems offer up to 3.4 times faster deduplication and five-fold SVOS-based compression.

Curiously, the F-series and G-series support Fibre Channel and SCSI connections but not NVMe over Fabrics, which is the real game-changer for high-performance storage.
Every other storage vendor is falling all over itself to declare support for NVMe over Fabrics, but there's no word from Hitachi yet.

The SVOS operating system has also been upgraded with new AI-based operations plus cloud and container integration, which supports new workloads. The company introduced Hitachi Infrastructure Analytics Advisor (HIAA), a so-called AI-powered "brain," to analyze data center operations across virtual machines, servers, networks, and storage. It uses machine learning to optimize, troubleshoot, and predict data center needs more efficiently.

HPE upgrades its Nimble storage line

We've already covered the HPE news in a separate piece, so I'll keep it brief here. HPE has given its Nimble storage line a significant upgrade and product line consolidation, similar to Hitachi. Nimble sits one step below the company's top-of-the-line 3PAR and XP storage arrays, but it is getting some 3PAR features.

Nimble breaks down into three product lines: the all-flash AF series, the hybrid disk-flash HF series, and the Secondary Flash (SF) line. The AF line goes from five products to four, with capacity upgrades and support for storage-class memory (SCM) and NVMe over Fabrics interconnects.

SCM is a hybrid memory of sorts that fits somewhere between flash and DRAM. It's not as fast as DRAM, but it has much higher read and write performance than an SSD. It is memory designed from the ground up to improve storage performance.
3PAR already has it; now Nimble does, too.

The HF arrays, with one exception, now support inline, variable block size deduplication, which HPE claims makes them "the most efficient hybrid arrays in the industry by a wide margin."

IBM upgrades Storwize arrays

IBM is upgrading its Storwize arrays for the first time in two years, adding improved overall performance, cloud integration, and some seriously enhanced deduplication performance.

Storwize is an all-flash array that provides block storage and runs Spectrum Virtualize software, part of IBM's Spectrum Storage software-defined storage suite. The update adds deduplication support to IBM's VersaStack converged infrastructure and its FlashSystem V9000.

The company claims up to 5:1 data reduction and 100 percent data availability, so long as you use IBM HyperSwap and it is deployed by IBM Lab Services.

IBM is putting some numbers behind its dedupe claims, saying that combined with existing data reduction functions, the new systems can cut storage management and OPEX costs by more than $2.8 million and CAPEX costs by as much as $600,000 over a three-year period. This is based on a particular configuration: an all-flash Storwize V7000F system with approximately 700TB of usable space using 7.68TB flash drives.

IBM also announced Spectrum Virtualize for Public Cloud, which connects on-premises storage systems to the IBM Cloud service. This release doubles scalability by expanding support from four nodes to eight, making it easier to use lower-cost cloud data centers as a target for disaster recovery (DR).

There's a lot more to all three announcements, but you get the point. The storage market is in significant flux as SSD opens up new possibilities and new advancements in SSD technology open up new markets. NVMe over Fabrics is particularly big because it lets remote systems read from and write to SSDs.
Up to this point, the only computer that could access an SSD was the one it was plugged into.

SSDs open up all kinds of performance potential with faster controllers, which are still coming to market. The SSD in your PC uses the SATA interface, which was designed for hard drives. SSDs can transfer data in parallel where HDDs cannot, so with the right controller the speed potential is tremendous.

It's rather amazing that SSD storage is still advancing and showing no signs of slowing down. All of these nifty storage arrays from Hitachi, HPE, and IBM will be replaced with something much better in just a few years.
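To see why deduplication is the common thread in all three announcements, it helps to run the numbers. The sketch below is a toy fixed-block deduplicator in Python; the shipping arrays use inline, variable-block-size dedup, and the block size and price figures here are just the article's round numbers, not vendor specifications:

```python
import hashlib

def dedup_ratio(data: bytes, block_size: int = 4096) -> float:
    """Toy fixed-block deduplication: hash each block, count uniques.
    Real arrays dedupe inline with variable block sizes; this only
    illustrates the idea of storing each unique block once."""
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    unique = {hashlib.sha256(b).digest() for b in blocks}
    return len(blocks) / len(unique)

# Cost figures from the article: ~$0.40/GB for SSD, ~$0.05/GB for HDD.
SSD_PER_GB, HDD_PER_GB = 0.40, 0.05

# At IBM's claimed 5:1 reduction, the effective SSD cost per logical
# gigabyte drops to 0.40 / 5 = $0.08 -- within reach of raw HDD pricing.
effective_ssd = SSD_PER_GB / 5
print(f"Effective SSD cost at 5:1 reduction: ${effective_ssd:.2f}/GB")

# Highly repetitive data dedupes well: 10 blocks, only 2 unique.
payload = b"A" * 4096 * 8 + b"B" * 4096 * 2
print(f"Toy dedup ratio: {dedup_ratio(payload):.0f}:1")
```

The arithmetic, not the hashing, is the point: every multiple of data reduction divides the effective per-gigabyte price of flash, which is how the vendors argue all-flash arrays into HDD territory.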