TCP/IP has long been the common language for network traffic, and recent initiatives such as Internet SCSI (iSCSI) and Remote Direct Memory Access (RDMA) are making it the protocol of choice for storage and clustering.
However, processing TCP/IP traffic requires significant server resources. TCP Offload Engine (TOE) technology, a combination of specialized software and integrated hardware, removes this server-processing bottleneck.
TOE technology consists of software extensions to existing TCP/IP stacks that enable the use of hardware data planes implemented on specialized TOE network interface cards (TNICs).
This hardware/software combination lets operating systems offload all TCP/IP traffic to the specialized hardware on the TNIC, leaving TCP/IP control decisions on the server. Most operating system vendors prefer this approach, which is based on a data-path offload architecture.
With a standard NIC, the server processes TCP/IP operations in software, which creates substantial system overhead. The three largest sources of that overhead are data copies, protocol processing and interrupt processing.
The sheer number of packet transactions generated per application network I/O places a heavy interrupt load on servers, because a hardware interrupt line is activated to signal each event.
For example, a typical 64KB application write to the network results in 60 or more interrupt-generating events between the system and a generic NIC as the data is segmented into Ethernet packets and the incoming acknowledgements are processed. This creates significant protocol-processing overhead and high interrupt rates. While operating system features such as interrupt aggregation can reduce the number of interrupts, the event processing for each server-to-NIC transaction is not eliminated.
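The per-packet event count behind that figure can be sketched with back-of-the-envelope arithmetic (a simplified model, assuming a standard 1460-byte TCP maximum segment size over Ethernet and delayed ACKs that acknowledge roughly every other segment):

```python
import math

MSS = 1460               # typical TCP maximum segment size over Ethernet (bytes)
WRITE_SIZE = 64 * 1024   # one 64KB application write

# Transmit side: the write is segmented into MSS-sized packets,
# each a separate event between the host and a generic NIC.
tx_segments = math.ceil(WRITE_SIZE / MSS)

# Receive side: with delayed ACKs, the peer acknowledges roughly
# every other segment, and each ACK is another host/NIC event.
rx_acks = math.ceil(tx_segments / 2)

total_events = tx_segments + rx_acks
print(tx_segments, rx_acks, total_events)  # 45 23 68
```

Under these assumptions a single 64KB write generates 68 host-side events, consistent with the "60 or more" figure above.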
A TNIC dramatically reduces the network transaction load on the system by changing the transaction model from one event per Ethernet packet to one event per application network I/O. The 64KB application write becomes a single data-path offload event, moving all packet processing to the TNIC and eliminating the interrupt load on the host. A TNIC provides maximum benefit when each application network I/O translates to multiple packets on the wire, which is a common traffic pattern.
Standard NICs incorporate hardware checksum support and software enhancements to eliminate transmit-data copies, but they can't eliminate receive-data copies, which consume significant processor cycles. A NIC must buffer received packets in system memory so they can be processed and their data matched to a TCP connection. The receiving system must then associate that unsolicited TCP data with the appropriate application and copy it from system buffers to its final destination in memory.
Because a TNIC performs protocol processing locally before placing data on a system, it can use zero-copy algorithms to place data directly in application buffers, avoiding intermediate host-side buffering and the associated expensive receive-data copies.
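A rough analogy to this zero-copy placement exists at the sockets API level (an illustration of the principle only, not the TNIC's actual DMA mechanism): `recv()` allocates a fresh buffer and copies data into it, while `recv_into()` lets received data be delivered directly into a buffer the application already owns.

```python
import socket

# A connected pair of sockets stands in for a real network peer.
peer, app = socket.socketpair()
peer.sendall(b"data destined for the application")

# Zero-copy-style receive: the payload lands directly in a buffer
# the application allocated, with no intermediate bytes object.
app_buffer = bytearray(64)
n = app.recv_into(app_buffer)
print(app_buffer[:n].decode())  # data destined for the application

peer.close()
app.close()
```

A TNIC takes the same idea further: because protocol processing happens on the card, data can be placed in the application's buffers without ever being staged in host-side system buffers.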
TNICs dramatically reduce the system overhead associated with moving data. Recent benchmarks have shown that replacing a NIC with a TNIC delivers the equivalent of doubling the number of processors in file servers and systems with heavy content-delivery demands. For footprint- and power-conscious systems, a TNIC uses a fraction of the power that a NIC plus a microprocessor would need to fill a Gigabit Ethernet pipe.
TOE analysis tools are available to help administrators evaluate the system-level benefits of transitioning from the NIC I/O model to the TNIC I/O model.
IT managers are deploying TNICs to give servers the hardware processing needed to handle increasing data-delivery demands. As TNIC functionality is integrated into blade servers, embedded systems and eventually desktop machines, OEMs and end users will benefit from the efficiency of TNICs in offloading all TCP-based networking and storage traffic.
Gervais is the director of product marketing at Alacritech. He can be reached at firstname.lastname@example.org.