For network managers who’ve outfitted their servers with Gigabit Ethernet network interface cards in recent years, the results definitely have been of the “glass half-full” variety. While Gigabit Ethernet NICs have allowed servers to deliver more throughput than their Fast Ethernet predecessors, other system bottlenecks usually have combined to keep overall throughput at half, or less, of Gigabit Ethernet’s potential maximum. Will Remote Direct Memory Access (RDMA)-based NICs change all this?

Vendor efforts to push the benefits of Gigabit Ethernet notwithstanding, the fact remains that while it is no big deal for Layer 2/3 infrastructure to run at wire speed, actual end-to-end communication is another matter. In short, going up the stack almost always slows you down.

The Tolly Group has studied this issue for years as part of its ITclarity hands-on research program. Let me summarize what we’ve found to give you an idea of the opportunities and challenges RDMA vendors face.

In our initial study, “Gigabit Ethernet to the Desktop – A Reality Check on the Benefits and Burdens of Gigabit Ethernet over Copper,” published in 2002, we focused on determining the maximum throughput achievable between a pair of high-end machines running IxChariot, a standard network benchmarking tool. Even using machines outfitted with a high-performance, 64-bit, 66-MHz PCI bus architecture, throughput for the most highly optimized bidirectional “file transfer” application topped out at around 750M bit/sec – out of a possible 2G bit/sec (1G bit/sec each way).

So while this is far better than what Fast Ethernet could offer, it is less than 50% of the theoretical maximum – and, worse, the test application deliberately only simulates file transfer (to isolate network performance). Because no data actually is read from or written to disk, performance with real applications likely would be worse. And it is. Last year, we extended our study to benchmark the effective throughput when running actual applications.
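To make the methodology concrete, here is a minimal sketch of what that kind of “file transfer” benchmark measures: data is streamed memory-to-memory over a TCP socket (no disk reads or writes), and effective throughput is reported as bits delivered divided by elapsed time. This is only an illustrative stand-in, not IxChariot itself; the payload and chunk sizes are arbitrary choices.

```python
import socket
import threading
import time

# Illustrative mini-benchmark (not IxChariot): stream an in-memory payload
# over TCP and compute effective throughput as data delivered / time.
PAYLOAD = b"\x00" * (4 * 1024 * 1024)   # 4MB of in-memory data, no disk I/O
CHUNK = 64 * 1024                        # send/receive in 64KB chunks

def run_sender(host, port, payload):
    with socket.create_connection((host, port)) as s:
        for i in range(0, len(payload), CHUNK):
            s.sendall(payload[i:i + CHUNK])

def measure_throughput():
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))           # loopback, ephemeral port
    srv.listen(1)
    port = srv.getsockname()[1]

    sender = threading.Thread(target=run_sender,
                              args=("127.0.0.1", port, PAYLOAD))
    start = time.perf_counter()
    sender.start()
    conn, _ = srv.accept()
    received = 0
    while received < len(PAYLOAD):
        data = conn.recv(CHUNK)
        if not data:
            break
        received += len(data)
    elapsed = time.perf_counter() - start
    sender.join()
    conn.close()
    srv.close()
    # Effective throughput in Mbit/sec
    return (received * 8) / elapsed / 1e6

if __name__ == "__main__":
    print(f"Effective throughput: {measure_throughput():.0f} Mbit/sec")
```

Over loopback this measures mostly the host’s stack and memory speed, which is exactly the point: even with the network removed, the software path above the wire imposes its own ceiling.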
Given that our focus was Gigabit to the desktop (rather than back-end, server-to-server use) and that streaming applications tended to provide the best throughput, we sought out desktop back-up applications that effectively would upload data from client to server.

The results, published in “Gigabit Ethernet to the Desktop: An Evaluation of Back-up Utilities over GbE and Fast Ethernet Networks,” were sobering – which is a nice way of saying they were terrible.

We ran tests using multiple products from Dantz and, when the tests started, Veritas Software (it has since sold the product). With Fast Ethernet as the transport, the effective throughput (data delivered/time) was 60M to 70M bit/sec. Not bad, given that packet overhead is always present.

When run again using Gigabit Ethernet, the results always went up – but only marginally. The best result observed never broke 115M bit/sec – or 11.5% of the theoretical maximum for unidirectional traffic.

Analyses of the traces show massive inefficiencies in how the back-up application moves data. Implemented “transparently” using the Common Internet File System protocol (aka Server Message Block), the application lets the Gigabit Ethernet link spend most of its time simply waiting to do something. The lower layers, where RDMA is focused primarily on helping, were never the bottleneck in our tests.

Granted, things might be different in server-to-server applications, but my experience has shown that too many programmers seem to be blissfully unaware of how to write code that can take advantage of the underlying transport. RDMA optimizes the process at the bottom of the stack. It is more efficient, reduces latency and offloads the server CPU. But will the presence of massive bottlenecks higher up in the stack make all of that irrelevant? The answer to that question might make all the difference in the world to vendors of RDMA products.
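A simple back-of-envelope model shows why a link that waits between transfers behaves this way. If an application sends one block, then stalls for a fixed wait (request/response turnaround, server processing) before sending the next – a rough caricature of a synchronous protocol such as CIFS/SMB – then effective throughput is the block size divided by serialization time plus wait time. The block size and 4-ms wait below are assumed, illustrative figures, not measurements from our tests; the point is the shape of the result, not the exact numbers.

```python
def effective_throughput_mbps(block_bytes, link_mbps, wait_ms):
    """Effective throughput when the sender transmits one block, then
    waits (e.g., for a synchronous acknowledgement) before sending the
    next block. A rough model of request/response protocols like SMB."""
    serialize_s = (block_bytes * 8) / (link_mbps * 1e6)  # time on the wire
    wait_s = wait_ms / 1e3                               # idle time per block
    return (block_bytes * 8) / (serialize_s + wait_s) / 1e6

if __name__ == "__main__":
    # Hypothetical: 64KB blocks, 4 ms of per-block wait on both networks.
    for name, link in [("Fast Ethernet", 100), ("Gigabit Ethernet", 1000)]:
        print(f"{name}: {effective_throughput_mbps(65536, link, 4.0):.1f} Mbit/sec")
```

With those assumed inputs the model yields roughly 57M bit/sec over Fast Ethernet and 116M bit/sec over Gigabit Ethernet: a tenfold faster link produces only about a twofold gain, because the fixed per-block wait, not the wire, dominates. That is precisely the kind of upper-layer stall that a faster bottom of the stack cannot fix.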