by David Newman, Network World Global Test Alliance

XRN Interconnect architecture

Mar 03, 2003 | 8 mins

New XRN architecture offers high availability, low cost

3Com is targeting the enterprise backbone with its new Expandable Resilient Networking architecture, and our tests support 3Com’s claim that its XRN interconnect technology combines high availability with excellent performance.


3Com, long a player in workgroup switching, is adding redundancy features to top-end models to compete with established enterprise backbone vendors, including Cisco, Extreme Networks and Foundry Networks. By using the XRN Interconnect Kit to aggregate multiple switches as one virtual unit, 3Com says it is matching competitors’ availability and performance – at a significantly lower price. For example, 3Com says two of its 4060 switches connected in an XRN stack would cost $1,061 per Gigabit Ethernet port, vs. $1,859 per port for Cisco’s Catalyst 4500 and $1,914 per port for Extreme’s Alpine 3800 in equivalent configurations.


Furthermore, 3Com says the stackable nature of the XRN approach lets users take a pay-as-you-go approach, purchasing new capacity as needed.

The basic idea behind XRN is to interconnect backbone switches so they offer full redundancy at Layer 2 and Layer 3. Our tests of a 3Com SuperStack 4050 and SuperStack 4060 connected via the XRN Interconnect Kit showed that this redundancy worked well, with subsecond failover in all cases. To create redundancy, users first connect core switches to create an XRN stack, and then dual-attach workgroup switches or computers to the stack. In a Layer 3 environment, the XRN stack appears as one device with a single IP and media access control (MAC) address, even though each element in the stack contains its own routing table. In testing, this design worked as intended. The XRN stack offered line-rate throughput for Layer 2 traffic and throughput equivalent to around 95% of line rate for Layer 3 traffic.

If one element in the XRN stack fails, the routing table of another element in the stack takes over. 3Com says it keeps the different routing tables synchronized through periodic triggered updates using a feature called Distributed Resilient Routing (DRR).

A major differentiator of the XRN approach is its use of active-active load sharing. Competitors’ switches allow Layer 2 redundancy via spanning-tree bridging, but this is an active-passive approach. With spanning tree, one switch sits idle until a link, interface or switch fails. With XRN, more bandwidth is available because all switches in the XRN stack share the load until a failure occurs.

Similarly, competitors’ switches can add Layer 3 redundancy through protocols such as the Virtual Router Redundancy Protocol (VRRP) or proprietary variations from Cisco, Extreme and Foundry. Only one router is active at a time with these protocols, whereas all components in an XRN stack will forward traffic until a failure occurs.

In theory, users could achieve Layer 3 redundancy and active-active availability with any Layer 3 switch by using a combination of VRRP and the equal-cost multipath feature of Open Shortest Path First (OSPF) routing. However, 3Com says this approach requires more configuration and management with no gain in performance – an assertion supported by our test results.

3Com says XRN’s ease of configuration and expandability are other key features. Because the XRN stack appears to other devices as a single router, only one routing table needs to be administered.

To increase available bandwidth, XRN also supports 802.3ad link aggregation for dual-homed connections. For example, two switches could be interconnected with two Gigabit Ethernet links, which appear as one virtual circuit with 2G bit/sec of capacity in either direction.
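Conceptually, an 802.3ad group distributes traffic by hashing frame headers so that a given conversation always takes the same member link, which preserves frame ordering. A minimal sketch of that idea (the CRC32 hash and the `pick_link` helper are our own illustration, not 3Com's actual distribution algorithm):

```python
import zlib

def pick_link(src_mac: str, dst_mac: str, n_links: int) -> int:
    """Choose a member link for a frame by hashing its address pair.

    Real 802.3ad implementations use vendor-specific hash inputs;
    CRC32 over the MAC pair here is purely illustrative.
    """
    key = f"{src_mac}->{dst_mac}".encode()
    return zlib.crc32(key) % n_links

# The same conversation always maps to the same link (keeping frames
# in order), while different conversations spread across the group.
link = pick_link("00:01:02:03:04:05", "00:0a:0b:0c:0d:0e", 2)
```

Because the hash is deterministic per conversation, a two-link group offers 2G bit/sec in aggregate, but any single flow is still limited to one link's 1G bit/sec.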

One downside of the XRN approach is that it’s proprietary. It’s not possible to build an XRN stack with switches from 3Com and other vendors. Then again, even though VRRP is an open standard, most companies tend to implement it with matched pairs of devices from one vendor.

Put to the test

To assess XRN performance, we conducted two sets of tests: First, we evaluated XRN’s resiliency by measuring recovery times when we severed a link or disconnected the power to one of the XRN stack’s components. Second, we ran Layer 2 and Layer 3 stress tests to determine the devices’ forwarding and delay characteristics.

For all tests, we set up the XRN switches the way they’re most commonly used: as highly redundant backbone switches handling traffic from Layer 2 workgroup devices. In this case, the redundancy came by dual-attaching 3Com SuperStack 4400 workgroup switches to each of two XRN devices: a SuperStack 4050 and 4060. Both switches in the XRN stack appeared to the workgroup switches as a single IP and MAC address.

The devices we tested use character-based menus for configuration. While the menu layout was intuitive, we wouldn’t want to have to wade through menus when configuring dozens of ports. 3Com says a command-line interface with text upload and download is in the works.

In the resiliency and failover tests, we configured a Spirent SmartBits generator/analyzer to offer traffic to one of the SuperStack 4400 workgroup switches, all destined to the other workgroup switch across the XRN stack.

Around 10 seconds into the test, we physically disconnected one of the two cables connecting the first SuperStack 4400 to the XRN stack. Some frame loss is inevitable while the XRN stack redirects all traffic over the remaining interface. We derived failover time from the number of frames dropped.
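The derivation is simple division: at a constant offered load, each dropped frame corresponds to a fixed slice of outage time. A sketch of the arithmetic (the function name and the sample numbers are ours, for illustration only):

```python
def failover_time_ms(frames_lost: int, offered_rate_fps: float) -> float:
    """Estimate failover time from frames dropped during a link cut.

    Assumes traffic is offered at a constant rate, so the outage
    window equals frames lost divided by the offered frame rate.
    """
    return frames_lost * 1000.0 / offered_rate_fps

# Illustrative: at 10,000 frames/sec offered, 8,220 lost frames
# implies an 822-millisec failover window.
print(failover_time_ms(8220, 10000))  # 822.0
```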

In five trials, Layer 3 failover always occurred in less than 1 second. On average, it took 822 millisec for all traffic to be redirected onto the remaining uplink to the XRN core stack. We got even better results when disconnecting power from one switch in the XRN stack during the test: It took only 438 millisec to fail over all traffic to the remaining switch.

These results are roughly comparable with failover times for other enterprise backbone switches using OSPF equal-cost multipath (see “Testing 10 GBE Switches”). There are some differences from the earlier results, however: The 3Com switches offered full redundancy of routers, not just links, using 3Com’s proprietary DRR protocol – protecting against failure of a whole device, not just a single link. Furthermore, the 3Com approach required configuration and management of only one router per XRN stack, while OSPF equal-cost multipath requires configuration and monitoring of at least two routers, so the 3Com approach simplifies configuration and management.

We used the Spirent SmartBits analyzer/generator to offer traffic to 20 Fast Ethernet ports on each of the SuperStack 4400 devices, with traffic destined to all 20 Fast Ethernet ports on the other SuperStack 4400 in a partial mesh pattern.

For Layer 2 forwarding, the 3Com switches performed perfectly. We measured line-rate throughput with 64-, 256- and 1,518-byte frames.

Latency was also low, with four switches adding average delay of just 31 microsec for 64-byte frames, 120 microsec for 256-byte frames, and 478 microsec for 1,518-byte frames. These numbers don’t come anywhere close to the point where application performance would be degraded.

The XRN switches didn’t quite run at line rate in our Layer 3 tests. The switches achieved throughput equivalent to around 94% of line rate when routing 64-byte IP packets, 97.5% of line rate when routing 256-byte IP packets, and around 95% of line rate when routing 1,518-byte IP packets. 3Com officials say the slight differences in Layer 2 and Layer 3 results are because of the hashing algorithm used.
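For reference, “line rate” on Ethernet accounts for per-frame wire overhead: every frame is preceded by an 8-byte preamble and followed by a 12-byte interframe gap. A quick calculator for the theoretical maximum frame rate (our own helper, using these standard Ethernet framing figures):

```python
def line_rate_fps(link_bps: float, frame_bytes: int) -> float:
    """Theoretical maximum frame rate for an Ethernet link.

    Each frame occupies (frame_bytes + 20) bytes of wire time:
    8 bytes of preamble plus a 12-byte interframe gap.
    """
    return link_bps / ((frame_bytes + 20) * 8)

# Fast Ethernet: ~148,810 fps at 64 bytes, ~8,127 fps at 1,518 bytes.
# 94% of line rate with 64-byte packets is therefore ~139,880 fps.
fps_64 = line_rate_fps(100e6, 64)
```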

Then again, even though the switches’ Layer 3 throughput was lower, latency still was well below the point where it would disrupt applications. We recorded average delays of 46 microsec, 112 microsec and 395 microsec for 64-, 256- and 1,518-byte packets, respectively. The delays are not only very respectable compared with other 100Base-T devices but also far too small to have any appreciable effect on application performance.

XRN Interconnect architecture

Company: 3Com, (800) 638-3266
Cost: $40,585*
Pros: High availability; low cost; excellent performance
Cons: Proprietary; no OSPF for a few more months; only two switches supported in first release

Resiliency features (25%)
Forwarding performance (25%): 4.5
Price (25%): 5
IP routing support (15%): 2.5
Ease of use (10%): 4


Individual category scores are based on a scale of 1 to 5. Percentages are the weight given each category in determining the total score. Scoring Key: 5: Exceptional showing in this category. Defines the standard of excellence; 4: Very good showing. Although there may be room for improvement, this product was much better than the average; 3: Average showing in this category. Product was neither especially good nor exceptionally bad; 2: Below average. Lacked some features or lower performance than other products or than expected; 1: Consistently subpar, or lacking features being reviewed.