Neal Weinberg
Contributing writer, Foundry

3Com’s XRN architecture

Opinion
Mar 25, 2003 | 3 mins
Network Switches | Networking

* Using XRN to interconnect backbone switches to get full redundancy at Layer 2 and Layer 3

Look who’s targeting the high-end enterprise market with a new interconnect technology that allows customers to lash together multiple workgroup switches. Yes, it’s 3Com.

The Reviewmeister was curious, so we tested 3Com’s XRN interconnect technology.

The basic idea behind XRN is to interconnect backbone switches so they offer full redundancy at Layer 2 and Layer 3. We tested a 3Com SuperStack 4050 and SuperStack 4060 connected via the XRN Interconnect Kit, and this redundancy worked well, with subsecond failover in all cases.

To create redundancy, users first connect core switches to create an XRN stack, and then dual-attach workgroup switches or computers to the stack. In a Layer 3 environment, the XRN stack appears as one device with a single IP and media access control (MAC) address, even though each element in the stack contains its own routing table. In testing, this design worked as intended. The XRN stack offered line-rate throughput for Layer 2 traffic and throughput equivalent to around 95% of line rate for Layer 3 traffic.

If one element in the XRN stack fails, the routing table of another element in the stack takes over. 3Com says it keeps the different routing tables synchronized through periodic triggered updates using a feature called Distributed Resilient Routing (DRR).
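3Com describes DRR only at this high level. A minimal, hypothetical sketch of how triggered updates could keep two stack members' routing tables in step (the class, method names, and addresses below are illustrative, not 3Com's implementation):

```python
class StackElement:
    """Hypothetical model of an XRN stack member with its own
    routing table, synchronized to peers by triggered updates."""

    def __init__(self):
        self.routes = {}   # prefix -> next hop
        self.peers = []    # other elements in the stack

    def learn(self, prefix, next_hop):
        # A change to the local table immediately triggers an
        # update to every peer, rather than waiting for a timer.
        if self.routes.get(prefix) != next_hop:
            self.routes[prefix] = next_hop
            for peer in self.peers:
                peer.routes[prefix] = next_hop


a, b = StackElement(), StackElement()
a.peers.append(b)
b.peers.append(a)
a.learn("10.1.0.0/16", "192.168.0.2")

# Both tables now hold the route, so if 'a' fails,
# 'b' can take over with an identical routing table.
assert b.routes["10.1.0.0/16"] == "192.168.0.2"
```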

A major differentiator of the XRN approach is its use of active-active load sharing. To increase available bandwidth, XRN also supports 802.3ad link aggregation for dual-homed connections. For example, two switches could be interconnected with two Gigabit Ethernet links, which appear as one virtual circuit with 2G bit/sec of capacity in either direction.
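The 2G bit/sec figure comes from both 1G bit/sec members carrying traffic simultaneously. 802.3ad implementations typically keep each conversation on one physical link by hashing frame addresses, so frames within a flow stay in order; the XOR hash below is a common illustrative choice, not necessarily the algorithm 3Com uses:

```python
def choose_link(src_mac: str, dst_mac: str, n_links: int) -> int:
    """Pick one member link of an aggregated bundle per
    src/dst address pair, 802.3ad-style. XOR of the two MAC
    addresses is a typical (illustrative) hash function."""
    h = int(src_mac.replace(":", ""), 16) ^ int(dst_mac.replace(":", ""), 16)
    return h % n_links


# Every frame between this pair of hosts uses the same physical
# link of the two-link bundle, preserving frame order per flow.
link = choose_link("00:10:4b:aa:bb:01", "00:10:4b:cc:dd:02", 2)
assert link == choose_link("00:10:4b:aa:bb:01", "00:10:4b:cc:dd:02", 2)
```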

One downside of the XRN approach is that it’s proprietary. It’s not possible to build an XRN stack with switches from 3Com and other vendors.

We conducted two sets of tests: First, we evaluated XRN’s resiliency by measuring recovery times when we severed a link or disconnected the power to one of the XRN stack’s components. Second, we ran Layer 2 and Layer 3 stress tests to determine the devices’ forwarding and delay characteristics.

In five trials, Layer 3 failover always occurred in less than 1 second. On average, it took 822 millisec for all traffic to be redirected onto the remaining uplink to the XRN core stack. We got even better results when disconnecting power from one switch in the XRN stack during the test. It took only 438 millisec to fail over all traffic to the remaining switch.

In the second test, we used the Spirent SmartBits analyzer/generator to offer traffic to 20 Fast Ethernet ports on each of the SuperStack 4400 devices, with traffic destined to all 20 Fast Ethernet ports on the other SuperStack 4400 in a partial mesh pattern.

For Layer 2 forwarding, the 3Com switches performed perfectly. We measured line-rate throughput with 64-, 256- and 1,518-byte frames.

Latency was also low, with four switches adding average delay of just 31 microsec for 64-byte frames, 120 microsec for 256-byte frames, and 478 microsec for 1,518-byte frames. These numbers don’t come anywhere close to the point where application performance would be degraded.

The XRN switches didn’t quite run at line rate in our Layer 3 tests. The switches achieved throughput equivalent to around 94% of line rate when routing 64-byte IP packets, 97.5% of line rate when routing 256-byte IP packets, and around 95% of line rate when routing 1,518-byte IP packets. 3Com officials say the slight differences in Layer 2 and Layer 3 results are because of the hashing algorithm used.
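Line rate on an Ethernet port counts the 8-byte preamble and 12-byte inter-frame gap along with each frame, which is why the theoretical maximum frame rate varies so sharply with frame size. A quick calculation of those maximums for a 100M bit/sec Fast Ethernet port (the link speed is the only assumption):

```python
PREAMBLE = 8   # bytes, including start-of-frame delimiter
IFG = 12       # inter-frame gap, in byte times

def line_rate_fps(frame_bytes: int, link_bps: int = 100_000_000) -> float:
    """Maximum frames/sec on the wire: each frame also occupies
    preamble and inter-frame gap time on the link."""
    bits_per_frame = (frame_bytes + PREAMBLE + IFG) * 8
    return link_bps / bits_per_frame


for size in (64, 256, 1518):
    fps = line_rate_fps(size)
    print(f"{size}-byte frames: {fps:,.0f} frames/sec max per port")
```

At 64 bytes this works out to roughly 148,810 frames/sec per port, so a switch routing at 94% of line rate is still forwarding nearly 140,000 frames/sec on each Fast Ethernet port.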

Then again, even though the switches’ Layer 3 throughput was lower, latency was still well below the point where it would disrupt applications. For the full report, go to https://www.nwfusion.com/reviews/2003/0303xrnrev.html