How we tested the Foundry BigIron MG8 switch

We asked Foundry to supply two switch chassis (priced at $38,000), a management module ($6,500), four 10G Ethernet interfaces ($50,000, plus $5,000 for each of four 1,310-nanometer LR Xenpak transceivers), and two of its new 40-port Gigabit Ethernet blades ($50,000 each, plus $445 for each of 40 850-nanometer SFP transceivers). Foundry also supplied 850-nanometer SR Xenpak transceivers ($3,500) for interconnecting the chassis because of the lower cost of the SR option.

As in earlier reviews, we assessed device performance in terms of pure 10G bit/sec throughput, delay and jitter; 1G bit/sec throughput, delay and jitter across a 10 Gigabit backbone; failover times; and quality-of-service (QoS) enforcement.

Our primary test instrument was the SmartBits performance analysis system from Spirent Communications, equipped with XLW-3721 TeraMetrics 10G Ethernet cards and LAN-3325 TeraMetrics XD 10/100/1000 Ethernet cards. We used Spirent's SAI, SmartFlow and TeraRouting applications to generate traffic. For the 10G Ethernet and backbone tests, the test traffic consisted of 64-, 256- and 1,518-byte Ethernet frames. All tests ran for 60 seconds, and the timestamp resolution of the SmartBits was plus or minus 100 nanoseconds.

In the 10G Ethernet tests, we asked Foundry to assign a different IP subnet to each of four 10G interfaces in one chassis. We configured the SmartBits to offer traffic from 510 virtual hosts per interface in a fully meshed pattern (meaning traffic from each interface was destined for all other interfaces). We measured throughput, latency and jitter.

In the backbone tests, we asked vendors to set up two chassis, each equipped with one 10G Ethernet interface and 10 Gigabit Ethernet edge interfaces. We again asked vendors to assign a different IP subnet to each edge interface, and we configured the SmartBits to offer traffic from 510 virtual hosts per interface.
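For context on what "line rate" means at these frame sizes, the theoretical maximum Ethernet frame rate follows from each frame carrying 20 bytes of overhead on the wire (8-byte preamble plus 12-byte interframe gap). A minimal sketch of that arithmetic (the Spirent tools compute this internally; this is illustrative only):

```python
# Theoretical maximum frame rates at 10G bit/sec line rate for the
# frame sizes used in these tests (64, 256 and 1,518 bytes).
LINE_RATE_BPS = 10_000_000_000   # 10G bit/sec
OVERHEAD_BYTES = 20              # 8-byte preamble + 12-byte interframe gap

def max_frame_rate(frame_size: int, line_rate: int = LINE_RATE_BPS) -> float:
    """Frames per second at full line rate for a given Ethernet frame size."""
    return line_rate / ((frame_size + OVERHEAD_BYTES) * 8)

for size in (64, 256, 1518):
    print(f"{size:>5}-byte frames: {max_frame_rate(size):>13,.0f} frames/sec")
# 64-byte frames work out to roughly 14.88 million frames/sec per 10G port.
```

The 64-byte case is the stress test: the switch must make a forwarding decision roughly every 67 nanoseconds per 10G port to sustain line rate.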
This time, we offered traffic in a partially meshed multiple-device pattern; as defined in RFC 2889, that means the traffic we offered to one chassis was destined for all interfaces on the other chassis, and vice versa. Once again, the metrics were throughput, latency and jitter.

In the failover tests, we set up two chassis, each equipped with one Gigabit Ethernet and two 10G Ethernet interfaces. We asked vendors to configure Open Shortest Path First metrics so that one 10G Ethernet interface would act as the primary route and the other 10G Ethernet interface would function as the secondary. We offered a single flow of 64-byte frames to one Gigabit Ethernet interface at a rate of 1 million frames/sec; thus, we transmitted one frame every microsecond. Approximately 10 seconds into the test, we physically disconnected the primary link, forcing the switch to reroute traffic onto the secondary path. We derived failover time from frame loss. We attempted to repeat the same test with 2 million flows, forcing 1 million to fail over, but were unable to complete this event because of design issues with the Foundry device.

In the QoS enforcement tests, we set up two chassis, each equipped with 12 Gigabit Ethernet interfaces and one 10G Ethernet backbone interface. Because we offered 128-byte frames to all 24 edge interfaces at line rate in a partially meshed pattern, we congested the switches by a 12-to-10 ratio. For this test we offered three classes of traffic in a 1-to-7-to-4 ratio.

We asked Foundry to configure its switches to enforce four conditions. First, they would have to mark incoming frames using specified Differentiated Services code points, something we verified by capturing and decoding traffic. Second, of the three traffic classes we offered, the switches should have delivered all high-priority traffic without loss. Third, the switches should have limited the rate of low-priority traffic so that it would not consume more than 2G bit/sec of backbone capacity.
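Deriving failover time from frame loss works because the offered rate is constant: each lost frame corresponds to a known slice of time, and at 1 million frames/sec every lost frame represents exactly 1 microsecond of outage. A sketch of the arithmetic (the loss count below is hypothetical, not a measured result):

```python
def failover_time_ms(frames_lost: int, offered_rate_fps: float) -> float:
    """Outage duration, in milliseconds, implied by frame loss at a
    constant offered rate: each lost frame = 1/rate seconds of downtime."""
    return frames_lost / offered_rate_fps * 1_000

# Hypothetical example: if 250,000 frames were lost at 1,000,000 frames/sec,
# the failover took 250 ms.
print(failover_time_ms(250_000, 1_000_000))  # → 250.0
```

The finer the frame interval, the finer the resolution of the measurement; at one frame per microsecond, failover time can be resolved to within a microsecond.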
Finally, the switches should have allocated any remaining bandwidth to medium-priority traffic.

As a check against allocating a fixed amount of bandwidth to high-priority traffic, we reran the tests with only medium- and low-priority traffic present, in a 9-to-3 ratio. Foundry was not allowed to reconfigure its devices between the first and second tests, and we expected the switches to allocate the bandwidth previously used by high-priority traffic to the other classes.
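The expected outcomes of both QoS runs can be worked out from the ratios. In the first run, the 12G bit/sec of offered edge traffic splits 1-to-7-to-4 into high, medium and low classes; high (1G) must pass losslessly, low is capped at 2G, and medium takes whatever remains of the 10G backbone. A sketch of that arithmetic, assuming ideal enforcement (these are expected values, not measured results):

```python
BACKBONE_GBPS = 10.0  # 10G Ethernet backbone capacity

def expected_allocation(high: float, medium: float, low_offered: float,
                        low_cap: float = 2.0) -> dict:
    """Expected per-class delivery in G bit/sec, assuming ideal enforcement:
    high-priority passes losslessly, low-priority is rate-limited to its cap,
    and medium-priority takes the remaining backbone capacity."""
    low = min(low_offered, low_cap)
    medium_delivered = min(medium, BACKBONE_GBPS - high - low)
    return {"high": high, "medium": medium_delivered, "low": low}

# First run: 12G offered in a 1-to-7-to-4 ratio.
print(expected_allocation(high=1.0, medium=7.0, low_offered=4.0))
# → high 1G, medium 7G, low 2G (low loses 2G to the rate limit)

# Second run: no high-priority traffic, 9-to-3 medium-to-low.
print(expected_allocation(high=0.0, medium=9.0, low_offered=3.0))
# → medium 8G, low 2G: medium absorbs the 1G that high-priority vacated
```

The second run is the tell: if the switch had simply reserved a fixed 1G for high-priority traffic, medium would stay at 7G instead of climbing to 8G.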