StarFabric is a multiprotocol switched interconnect technology for board-to-board and chassis-to-chassis connectivity that gives designers of next-generation data, voice and video equipment more scalability and flexibility than parallel bus architectures.

StarFabric was designed to support seven traffic classes, including asynchronous, isochronous, multicast and high-priority provisioning. In the past, system designers needed a separate in-system interconnect for each traffic type.

A voice-over-IP gateway, for example, would need three independent buses: one for TDM traffic, one for packet traffic and one for control traffic. With StarFabric, each blade in the system carries one or more bridge devices, depending on the traffic types it supports.

These bridge devices, such as a PCI-to-StarFabric bridge or an H.110-to-StarFabric bridge, then connect to two redundant switch blades in a dual point-to-point fashion over the backplane.

The PCI Industrial Computer Manufacturers Group recently ratified the PICMG 2.17 CompactPCI StarFabric Specification, which defines how to implement StarFabric.

StarFabric provides a simple migration path from existing open-platform architectures based on parallel buses. It is 100% backward compatible with PCI, H.110 and Utopia. The StarFabric architecture supports 2.5G bit/sec point-to-point links and allows the use of standard cabling and connector technology, such as RJ-45 connectors and Category 5 cabling.

Consider an ATM device such as an access concentrator or edge switch. At the board level, many of these systems put a network processor on every blade. This decentralized architecture requires expensive line cards. With StarFabric, line cards can be made dumb and inexpensive.
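The idea of one fabric carrying traffic types that once required separate buses can be sketched as a class-based scheduler on a single link. This is a minimal illustration, not StarFabric's actual arbitration scheme: the class names and priorities below are assumptions chosen to mirror the VoIP-gateway example (TDM, packet and control traffic).

```python
import heapq

# Illustrative traffic classes (assumed names and priorities, not the
# StarFabric spec's class definitions): lower number = higher priority.
PRIORITY = {"isochronous": 0, "control": 1, "asynchronous": 2}

class FabricLink:
    """Toy model of one switched link multiplexing the three traffic
    types that previously needed three independent buses."""

    def __init__(self):
        self._queue = []
        self._seq = 0  # tie-breaker keeps arrival order within a class

    def enqueue(self, traffic_class, frame):
        heapq.heappush(self._queue,
                       (PRIORITY[traffic_class], self._seq, frame))
        self._seq += 1

    def transmit(self):
        """Yield frames in class-priority order, FIFO within a class."""
        while self._queue:
            _, _, frame = heapq.heappop(self._queue)
            yield frame

link = FabricLink()
link.enqueue("asynchronous", "packet-1")   # data-path traffic
link.enqueue("isochronous", "tdm-slot-1")  # delay-sensitive TDM traffic
link.enqueue("control", "ctrl-1")          # control-plane traffic
link.enqueue("isochronous", "tdm-slot-2")

order = list(link.transmit())
print(order)  # TDM slots drain first, then control, then packet data
```

The point of the sketch is only that class-aware scheduling lets delay-sensitive isochronous traffic share a link with bursty asynchronous traffic, which is what removes the need for per-type buses on each blade.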
The data that enters the system on the line card is encapsulated and switched through the system via StarFabric to centralized processing resources.

In a traditional ATM edge switch architecture, every line card is burdened with a Layer 2 network processor and associated memory. That burden is felt in cost, power and density. If a new service had to be added and the Layer 2 network processors were not powerful enough, all the line cards would have to be replaced.

In an ATM edge switch using StarFabric, the line cards are high-density and require no local intelligence. The network processing unit (NPU) cards are high-performance centralized resource cards that connect to the line cards via the switch fabric. Traffic enters the system on the line cards, goes to the NPU cards and exits via the WAN cards.

StarFabric also can be used to solve chassis-to-chassis connectivity problems. In many communications systems a single shelf cannot meet the mission parameters. The solution is to connect additional chassis full of line cards to share expensive processing resources or a WAN uplink circuit. Many existing solutions are too costly in terms of price, power, board space and protocol overhead.

In the simplest case, each chassis would contain a blade with a StarFabric bridge device. With two ports per bridge, each chassis could connect to two other chassis at 2G bit/sec via Category 5 cables with RJ-45 connectors. If the system requires more than three chassis, designers have several options: they can add a switch blade to the master chassis, or they can add a 1U switch box to one of the racks.

Today's market has many interconnects vying for dominance. HyperTransport and RapidIO are high-speed chip-to-chip interconnects that compete directly with PCI Express.
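The chassis-count limit above follows from simple port arithmetic. The sketch below assumes the article's reading that each chassis links directly to every other (so a two-port bridge supports at most three fully interconnected chassis); the function names and the full-mesh interpretation are illustrative assumptions, not part of the PICMG 2.17 specification.

```python
def max_directly_meshed(ports_per_bridge: int) -> int:
    """Largest group of chassis that can interconnect with no switch:
    each chassis needs one bridge port per direct link to every other
    chassis, so N chassis require N-1 ports apiece."""
    return ports_per_bridge + 1

def needs_switch(num_chassis: int, ports_per_bridge: int = 2) -> bool:
    """True when the design must add a switch blade to the master
    chassis or a 1U switch box to one of the racks."""
    return num_chassis > max_directly_meshed(ports_per_bridge)

print(needs_switch(3))  # False: three chassis link directly (a triangle)
print(needs_switch(4))  # True: a fourth chassis exceeds two ports
```

With the two ports the article describes, three chassis form a closed triangle with every pair directly connected; a fourth chassis is what forces the move to a switched topology.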
Because StarFabric is a board-to-board and chassis-to-chassis interconnect, it does not compete directly with them.

However, future serial versions of RapidIO would compete directly with StarFabric and the Advanced Switching extensions to PCI Express.

Whelan is the director of product marketing at StarGen. He can be contacted at Whelan@stargen.com.