Latency and jitter: Cut-through design pays off for Arista, Blade

In many data centers, latency and jitter are the most important metrics. Even small amounts of delay (latency) or delay variation (jitter) introduced by a switch can have a profound impact on application performance. This differs from general enterprise switching and routing, where speed-of-light propagation times dwarf the latency added by any given switch or router.

In contrast, devices in the data center are only a few meters apart, or less, so every microsecond counts. Also, there's often a simple business driver involved: The more transactions an organization can process in a given unit of time, the more revenue it can expect to realize.

To help reduce latency and jitter, some switches (the Arista, Blade and Cisco entries) use so-called cut-through switching. A cut-through device begins forwarding a frame after examining only the first 12 bytes of the Ethernet header – the destination and source MAC addresses. In contrast, a store-and-forward switch buffers the entire frame before making a forwarding decision.
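
The practical difference is easiest to see as serialization delay. Here's a back-of-the-envelope sketch in Python – our own illustration, not part of the test methodology – comparing how long each design must wait before it can begin forwarding a frame on a 10G link:

```python
# Minimal sketch: a store-and-forward switch must receive the whole frame
# before forwarding (frame_size / line_rate); a cut-through switch needs
# only the leading 12 header bytes, so its wait is flat across frame sizes.

LINE_RATE_BPS = 10e9        # 10G Ethernet
HEADER_BYTES = 12           # destination + source MAC addresses

def serialization_delay_ns(num_bytes: int) -> float:
    """Time to clock num_bytes off a 10G link, in nanoseconds."""
    return num_bytes * 8 / LINE_RATE_BPS * 1e9

for frame_size in (64, 512, 1518, 9216):
    print(f"{frame_size:>5}-byte frame: store-and-forward waits "
          f"{serialization_delay_ns(frame_size):7.1f} ns, "
          f"cut-through waits {serialization_delay_ns(HEADER_BYTES):.1f} ns")
```

That receive-the-whole-frame wait grows with frame size, which is consistent with the store-and-forward results below: latency climbs noticeably with jumbo frames, while the cut-through switches stay nearly flat.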

The Dell, Extreme and HP switches all used store-and-forward mode in these tests. Extreme's Summit X650 can be configured in either mode; Extreme's engineers opted for store-and-forward switching in this test.

Cut-through designs typically deliver lower latency, but there are tradeoffs. The biggest issue is that cut-through switches will forward corrupted frames, since they don't wait to verify the frame check sequence – the checksum at the end of each frame. A router or other store-and-forward device will keep corrupted frames from leaving the data center, but such traffic could be a problem inside the data center, especially in large broadcast domains.
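
The reason is where the check lives: the frame check sequence is a CRC-32 computed over the frame and appended at its tail, so a switch that forwards after the first dozen bytes never gets a chance to verify it. A quick Python illustration, using the standard library's zlib.crc32 (the same CRC-32 polynomial Ethernet's FCS uses):

```python
import zlib

# The FCS is a CRC-32 over the whole frame, appended at the end by the sender.
frame = bytes(range(60))                # dummy 60-byte frame body
fcs = zlib.crc32(frame)                 # checksum the sender appends

# Corruption in transit flips bits somewhere in the middle of the frame.
damaged = bytearray(frame)
damaged[30] ^= 0xFF

# A store-and-forward switch recomputes the CRC and drops the bad frame;
# a cut-through switch has already forwarded most of it by this point.
print("intact frame passes FCS check:   ", zlib.crc32(frame) == fcs)           # True
print("corrupted frame passes FCS check:", zlib.crc32(bytes(damaged)) == fcs)  # False
```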

Another possible concern is that relatively low latency often means relatively small buffers. This isn't a problem when moving traffic between pairs of ports operating at the same speed, but speed mismatches between ports (say, gigabit and 10G Ethernet) or congestion from many-to-one traffic patterns could cause frame loss earlier than with store-and-forward devices.
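
To put rough numbers on the buffering concern, here's a hypothetical sketch – the buffer sizes below are assumptions for illustration, not figures from any switch we tested. In a 2-to-1 congestion pattern, two 10G senders target one 10G output, so excess traffic piles up at a full 10Gbps:

```python
# Hypothetical: two 10G ports sending flat-out to one 10G output port.
# The output can drain only half the offered load, so the other half
# (10 Gbps, or 1.25 GB/s) accumulates in the output buffer.
EXCESS_BYTES_PER_SEC = 10e9 / 8

# Assumed per-port buffer sizes, chosen only to illustrate the scale.
for buffer_kb in (128, 512, 2048):
    time_to_fill_us = buffer_kb * 1024 / EXCESS_BYTES_PER_SEC * 1e6
    print(f"a {buffer_kb:>4} KB buffer absorbs the burst for about "
          f"{time_to_fill_us:6.1f} microseconds before frames are lost")
```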

Some vendor data sheets claim lower latencies than those we measured. Those claims may be valid, but they're not necessarily the most meaningful numbers for end-users (see related story "Lies, Damned Lies and Latency").

[Chart: unicast latency results]

A cut-through design clearly paid off for the Arista and Blade switches, which delivered far lower latency across all frame sizes than their competitors. Blade's G8124 wins bragging rights for the lowest unicast latency – 750 nanoseconds with 64-byte frames – but both the Arista and Blade devices consistently posted numbers around 800 nanoseconds in other tests.

Cut-through doesn't automatically translate to low latency, as the numbers from Cisco's Nexus 5010 make amply clear. When handling small frames, the Cisco switch delivered average latency that was 20 or more times higher than some other switches' maximum delays. Moreover, its maximum latency with 64-byte frames was a staggering 181 microseconds. At 10G line rate, that means nearly 2,700 minimum-size frames were in flight. That would be a high delay in a gigabit Ethernet switch, let alone a 10G Ethernet device intended for data center service.
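
That frames-in-flight figure is simple arithmetic: on 10G Ethernet, a minimum-size frame plus its 8-byte preamble and 12-byte inter-frame gap occupies the wire for 84 byte-times, about 67 nanoseconds. Checking the math in Python:

```python
LINE_RATE_BPS = 10e9                      # 10G Ethernet
WIRE_BYTES = 64 + 8 + 12                  # frame + preamble + inter-frame gap

frame_time_ns = WIRE_BYTES * 8 / LINE_RATE_BPS * 1e9   # 67.2 ns per frame
max_latency_ns = 181_000                  # the measured 181-microsecond maximum

print(f"frames in flight: {max_latency_ns / frame_time_ns:.0f}")   # ~2693
```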

Cisco attributes the latency numbers to superframing, a technique the Nexus 5010 uses to aggregate many frames into a single large unit for switching across the switch's crossbar. The result, Cisco says, is a design that handles both Ethernet and storage traffic (in the form of FCoE), while accommodating FCoE's need for deterministic delay (that is, delay that's predictable and with little variation, or jitter).

Certainly it's true that the Nexus 5010's large-frame latency is far lower, around 3.37 microseconds, and that jitter is also much smaller. With large frames, the Cisco switch's maximum latency was 3.45 microseconds, only 80 nanoseconds higher than the average. This validates Cisco's claim of delivering deterministic delay and jitter.

On the other hand, the best-case latency for the Nexus 5010 is still more than four times higher than that of the Arista or Blade switches – and maximum delays for both those switches were around 80 nanoseconds higher than their averages, regardless of frame size. With the Cisco switch, in contrast, superframing limits jitter only with large frames. There's a bit of apples-and-oranges in this comparison; the Nexus switch handles Fibre Channel and FCoE in addition to Ethernet, and the other switches don't. Still, the simpler Ethernet devices clearly delivered lower latency and jitter across all frame sizes.
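
For reference, the jitter figures in this comparison are just the spread between maximum and average latency. Recomputing them from the numbers quoted above (the Arista and Blade averages are the approximate 800-nanosecond figures cited earlier):

```python
# (average, maximum) latency in microseconds, from the results above
results_us = {
    "Cisco Nexus 5010, large frames": (3.37, 3.45),
    "Arista/Blade, typical":          (0.80, 0.88),   # ~80 ns spread, per text
}

for switch, (avg, peak) in results_us.items():
    print(f"{switch}: jitter ~ {(peak - avg) * 1000:.0f} ns")
```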

The Extreme and HP switches both exhibited significantly higher latency with large unicast frames, especially jumbo frames. Curiously, this was not the case with Dell's PowerConnect switch, even though it uses the same store-and-forward technique as the Extreme and HP devices. Dell's switch delivered very predictable latency for unicast traffic across all frame sizes, with average delays of less than 2 microseconds and maximum delays around 200 nanoseconds higher.

[Chart: multicast latency results]

Multicast latency and jitter were virtually identical to unicast for the Arista, Blade and Dell switches. The Arista and Blade switches both achieved the lowest latencies we saw in the entire test, around 740 nanoseconds when handling 64-byte frames. Dell's numbers were also very consistent across all multicast frame sizes, and very similar to its unicast results.

Multicast latency for Cisco's Nexus 5010, while certainly lower than in the unicast tests, was still high vis-à-vis most other switches. Apparently superframing does not play a major role in multicast switching, since we observed relatively flat average and maximum delays across all frame sizes.

HP's ProCurve switch also turned in very different multicast latency and jitter compared with its unicast results, with higher average and maximum delays for jumbo multicast frames. This is surprising considering that the HP switch's multicast throughput was substantially lower than its unicast throughput; because latency is measured at the throughput rate, the switch carried a lighter load in the multicast tests, which ordinarily would favor lower delays.

See next part: Link aggregation: Arista, Blade and Cisco fare best

Return to main test.

