NVMe over TCP: How it supercharges SSD storage over IP networks

For maximum storage performance, NVMe/TCP marks the next step forward in SSD networking.

Soon after data centers began transitioning from hard drives to solid-state drives (SSDs), the NVMe protocol arrived to support high-performance, direct-attached PCIe SSDs. NVMe was followed by NVMe over Fabrics (NVMe-oF), designed to efficiently support hyperscale remote SSD pools; it effectively replaced direct-attached storage (DAS) to become the default protocol for disaggregated storage in cloud infrastructure.

Most recently, NVMe over TCP (NVMe/TCP) has arrived as a new NVMe-oF transport, promising high performance with lower deployment costs and reduced design complexity, since it runs over standard Ethernet and TCP/IP rather than requiring specialized RDMA-capable networks. In essence, NVMe over TCP extends NVMe across the entire data center using the simple and efficient TCP/IP fabric.
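
From a Linux host's point of view, attaching NVMe/TCP storage looks much like adding a local SSD. The sketch below, which assumes the standard nvme-cli tool is installed and the nvme-tcp kernel module is loaded (for example via modprobe nvme-tcp), walks through the basic discover-and-connect workflow from Python; the target address 10.0.0.10, the port 4420 (the IANA-assigned NVMe-oF port), and the subsystem NQN are placeholder values for illustration only.

    """Minimal sketch: attach NVMe/TCP storage on a Linux host via nvme-cli."""
    import subprocess

    TARGET_ADDR = "10.0.0.10"                      # hypothetical storage target IP
    TARGET_PORT = "4420"                           # IANA-assigned NVMe-oF TCP port
    SUBSYS_NQN = "nqn.2024-01.example.com:pool01"  # hypothetical NVMe Qualified Name


    def nvme(*args: str) -> str:
        """Run an nvme-cli command and return its standard output."""
        result = subprocess.run(["nvme", *args], check=True,
                                capture_output=True, text=True)
        return result.stdout


    # Ask the target which NVMe subsystems it exports over TCP.
    print(nvme("discover", "-t", "tcp", "-a", TARGET_ADDR, "-s", TARGET_PORT))

    # Attach one subsystem; its namespaces then show up as ordinary local
    # block devices (e.g. /dev/nvme1n1) that can be formatted and mounted.
    nvme("connect", "-t", "tcp", "-a", TARGET_ADDR, "-s", TARGET_PORT, "-n", SUBSYS_NQN)
    print(nvme("list"))

Once connected, the remote namespace appears to the host as an ordinary NVMe block device, which is what lets NVMe/TCP slot into existing servers and switched Ethernet networks without specialized adapters.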

"Having the ability to communicate at high bandwidth with low latency, while gaining physical separation between storage arrays, and then adding a normal switched network incorporating the TCP protocol for transport, is a game changer," says Eric Killinger, IT director at business and technology advisory firm Capgemini North America. "Cloud hyperscalers are already adopting this technology, replacing formerly new two- and three-year-old SSD technologies to enable greater query access for data analytics and IoT," he says.

Background: Emergence of NVMe and NVMe-oF

Storage received a massive speed boost when the first arrays built with NVMe SSDs arrived, but the devices still talked to servers over a SCSI-based host connection, which added a layer of protocol translation. NVMe-oF removes that layer by carrying native NVMe commands over the network: deployments can use remote direct memory access (RDMA) to reach NVMe block storage devices across switched fabrics.
