In my earlier blog post on SSD storage news from HPE, Hitachi and IBM, I touched on the significance of NVMe over Fabric (NoF). But not wanting to distract from the main storage news, I didn't go into detail. I will do so with this blog post.

Hitachi Vantara goes all in on NVMe over Fabric

First, though, an update on the news from Hitachi Vantara, which I initially said had not commented yet on NoF. It turns out they are all in.

"Hitachi Vantara currently offers, and continues to expand support for, NVMe in our hyperconverged UCP HC line. As NVMe matures over the next year, we see opportunities to introduce NVMe into new software-defined and enterprise storage solutions. More will follow, but it confuses the conversation to pre-announce things that customers cannot implement today," said Bob Madaio, vice president, Infrastructure Solutions Group at Hitachi Vantara, in an email to me.

Hitachi has good reason to get NoF religion like everyone else: NoF is a game-changer. There are two primary interfaces for SSDs, SATA and PCI Express. There's also Serial Attached SCSI (SAS), but for the most part, people used SATA.

SATA is a legacy hard-drive interface dating back to 2001 that even a cheap consumer SSD can easily max out. For a while I did SSD reviews for a consumer-oriented enthusiast site, and pretty much every SSD maxed out at the same level of read and write performance. It didn't matter whether it was a "high-end" drive or a "midrange" one; read/write performance always fell within a narrow range. SSD chips were getting faster, but the SATA bus was a huge bottleneck.

The fact is, the SATA bus is stuck at revision 3.0. Even though the working group has bumped the spec to version 3.3, most motherboards and SSDs stay at rev 3.0 for maximum compatibility, and that's a 6Gbit/sec interface from 2009. Great for a laptop.
Not so great for a server.

The Power of NVMe

For the best throughput, you need a PCIe-based card, which has much greater bandwidth than SATA. NVMe is a data transfer protocol designed to work with PCI Express and to exploit the massively parallel transfer capabilities of SSD memory that SATA just can't handle. NVMe can handle up to 64,000 data queues, and each queue can process 64,000 commands at the same time. SATA, by contrast, offers a single queue that holds just 32 commands.

The NVM Express 1.3 spec introduced last year adds support for NVMe over Fabric, which extends NVMe to transports other than PCI Express, such as InfiniBand. Up until now, PCIe SSDs worked only in the physical server in which they were installed. One server couldn't see a PCIe card in another server because PCIe is a point-to-point transport never intended for storage networking; it was designed for devices with high throughput requirements, such as GPUs and network cards.

On top of that, every PCI Express-based SSD shipped with a custom driver slightly different from the rest, so you could not build a storage array with a mix of PCIe cards; you had to buy them all from one vendor (a problem NVMe's standard driver interface solves).

In short, PCIe SSDs were a real headache.

In addition to NoF, NVMe 1.3 added support for virtualization namespaces, so now you can build an all-flash storage array for a virtualized system, something not possible before. Up to now, you had to run a virtualized environment on an HDD-based array instead of flash. So your virtualized systems are going to get a lot faster and support a lot more throughput.

So you can see why all of the hardware OEMs have gotten the NVMe over Fabric religion, and why you should make sure it's on your shopping checklist as well.
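The bandwidth and queue numbers above are easy to sanity-check with a little arithmetic. Here's a quick sketch in Python; the 8b/10b encoding overhead for SATA and the queue-depth figures quoted earlier are the standard published values, and the variable names are mine:

```python
# Effective SATA 3.0 throughput: the 6 Gbit/s link uses 8b/10b encoding,
# so every 10 bits on the wire carry only 8 bits of data.
sata_line_rate_bits = 6.0e9
sata_effective_mb_s = sata_line_rate_bits * (8 / 10) / 8 / 1e6
print(f"SATA 3.0 effective throughput: {sata_effective_mb_s:.0f} MB/s")

# Outstanding-command capacity: NVMe allows up to 64,000 queues of
# 64,000 commands each, versus SATA/AHCI's single queue of 32 commands.
nvme_slots = 64_000 * 64_000
sata_slots = 1 * 32
print(f"NVMe command slots: {nvme_slots:,}")
print(f"SATA command slots: {sata_slots}")
print(f"Ratio: {nvme_slots // sata_slots:,}x")
```

That works out to roughly 600 MB/s of usable SATA bandwidth no matter how fast the flash behind it gets, and a command-slot gap of eight orders of magnitude, which is why the protocol, not the media, became the bottleneck.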