
The data center fabric checklist you should know

Users share experiences in converging operations with three different implementations

Network World
July 15, 2013 01:08 PM ET

Network World - Like anything in IT, there are several considerations to mull over before employing fabric technology to converge data center operations.

In its purest form, a switching fabric is a network topology in which nodes connect to each other through switches over multiple active links. This contrasts with a broadcast medium like traditional Ethernet, where only one path is active at a time. Ethernet is evolving, though, through standards bodies like the IEEE and IETF to support multiple active paths and a link-state routing protocol that replaces Spanning Tree in data center fabric deployments.
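
To make the distinction concrete, here is a minimal sketch (in Python, using the networkx graph library; both the library and the two-spine, three-leaf topology are assumptions of this illustration, not anything from the article) contrasting the two forwarding models: a spanning tree leaves exactly one active path between any pair of switches, while a link-state fabric with equal-cost multipath keeps every equal-cost path active.

```python
# Sketch: single active path (Spanning Tree) vs. multiple active paths
# (link-state fabric with ECMP) on a hypothetical leaf/spine topology.
import networkx as nx

# Hypothetical two-spine, three-leaf topology: each leaf uplinks to
# both spines, so every leaf pair has two equal-cost paths.
fabric = nx.Graph()
for leaf in ("leaf1", "leaf2", "leaf3"):
    for spine in ("spine1", "spine2"):
        fabric.add_edge(leaf, spine)

# Classic Spanning Tree behavior: block links until a single loop-free
# tree remains, leaving one forwarding path between any two switches.
stp_tree = nx.minimum_spanning_tree(fabric)
stp_paths = list(nx.all_shortest_paths(stp_tree, "leaf1", "leaf2"))

# Link-state fabric behavior (TRILL/SPB-style): compute shortest paths
# over the full topology and keep all equal-cost paths active.
ecmp_paths = list(nx.all_shortest_paths(fabric, "leaf1", "leaf2"))

print(f"Spanning Tree active paths leaf1->leaf2: {len(stp_paths)}")   # 1
print(f"Link-state fabric active paths leaf1->leaf2: {len(ecmp_paths)}")  # 2
```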

But there are several checklist items to comb through before deciding if a fabric, and which fabric technology, is right for your environment:

  • Are you a greenfield or brownfield shop?
  • Which fabric – Ethernet? Infiniband? Fibre Channel?
  • What do you want it to do?
  • How will you design your fabric architecture?
  • Should your environment be purely one or the other, or a hybrid?
  • Should you route or switch between the different fabrics in a hybrid environment?
  • What are the factors to consider when converging data and storage?

[CHOOSING THE RIGHT ONE: What to look for in network fabrics]

[MORE: Epic Interconnect Clash! InfiniBand vs. Gigabit Ethernet]

PayPal is a Mellanox Infiniband shop. It has more than 300 servers and about 12 storage arrays across three Infiniband hypercube clusters, with converged storage and network transport. The fabric has been in place since 2008, and PayPal is migrating from a 16Gbps Double Data Rate (DDR) environment to a 56Gbps Fourteen Data Rate (FDR) infrastructure.
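
For reference, the DDR, QDR and FDR figures quoted in this article all follow from per-lane signaling rates and line encoding over a standard 4x link. The short Python sketch below reproduces them; it is an illustration of where the numbers come from, not anything PayPal runs.

```python
# Effective 4x-link data rates for the Infiniband generations mentioned
# in the article, derived from per-lane signaling rate and line encoding.
# Marketing names round up (FDR is sold as "56Gbps"; usable is ~54.5Gbps).

GENERATIONS = {
    # name: (per-lane signaling rate in Gbps, encoding efficiency)
    "DDR": (5.0, 8 / 10),       # 8b/10b encoding
    "QDR": (10.0, 8 / 10),      # 8b/10b encoding
    "FDR": (14.0625, 64 / 66),  # 64b/66b encoding
}

LANES = 4  # standard 4x link width behind the article's figures

for name, (signal_gbps, efficiency) in GENERATIONS.items():
    usable = signal_gbps * LANES * efficiency
    print(f"{name}: {signal_gbps * LANES:.1f}Gbps signaling -> "
          f"{usable:.1f}Gbps usable over a {LANES}x link")
```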

PayPal looked at 10G Ethernet but “we knew we would (overrun) that with storage,” says Ryan Quick, principal architect. Infiniband provides better bandwidth and latency than both Ethernet and Fibre Channel, Quick found.

“IB brings a lot to the table, especially for storage,” Quick says. It has a 64K packet size vs. Ethernet’s 9K; wire-speeds are much higher; there are lots of different learning and path calculation capabilities, including dynamic routing at the fabric level; and multipathing works “right out of the box.”
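
A rough back-of-the-envelope calculation, sketched in Python below, shows why the larger packet size matters: fewer packets means fewer per-packet interrupts and header-processing events for the same payload. The MTU figures (a 65520-byte Infiniband-style packet vs. a 9000-byte Ethernet jumbo frame) and the 1GiB transfer are assumptions for illustration, not PayPal's measurements.

```python
# Illustration: packets needed to move the same payload at a ~64K
# packet size vs. a 9K Ethernet jumbo frame.

TRANSFER_BYTES = 1 * 1024**3   # e.g., a 1GiB storage read
IB_MTU = 65520                 # ~64K packet size cited for Infiniband
ETH_JUMBO_MTU = 9000           # common Ethernet jumbo-frame size

ib_packets = -(-TRANSFER_BYTES // IB_MTU)          # ceiling division
eth_packets = -(-TRANSFER_BYTES // ETH_JUMBO_MTU)

print(f"~64K packets needed: {ib_packets:>8,}")
print(f"9K frames needed:    {eth_packets:>8,}")
print(f"Per-packet events cut by ~{eth_packets / ib_packets:.1f}x")
```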

“It had one big negative,” Quick says of the Infiniband fabric. “No one’s using it in the enterprise yet. But it had an awful lot of positives.”

PayPal was a greenfield fabric deployment in 2008 with convergence as its target. The company has a hybrid Infiniband/Ethernet environment with an internally-developed router connecting the two.

“It’s easier to inspect packets at Layer 3,” Quick says. “But none of the vendors are offering a Layer 3 router for IB-to-something-else. We had to build our own.”

The router has two to four 10G network interface cards (NICs) for gateways, and a pair of Infiniband Quad Data Rate (8/32Gbps) NICs on the other side. Hypervisors are configured to create a virtual switched network with “pseudo Ethernet,” Quick says.

“The guests think they’re using Ethernet but it’s really [Infiniband] on the other side,” he says.  

Storage is directly cabled into the hypercube via the SCSI RDMA Protocol with “dual rails” configured for failover, Quick says.
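
As a conceptual illustration of the dual-rail idea (not PayPal's actual SRP multipath configuration), the short Python sketch below shows storage I/O preferring a primary rail and falling back to the second rail when the first fails.

```python
# Sketch of dual-rail failover: use the primary Infiniband rail while it
# is healthy, fall back to the secondary rail when it is not.
from dataclasses import dataclass

@dataclass
class Rail:
    name: str
    healthy: bool = True

def pick_rail(primary: Rail, secondary: Rail) -> Rail:
    """Return the rail storage I/O should use right now."""
    if primary.healthy:
        return primary
    if secondary.healthy:
        return secondary
    raise RuntimeError("both rails down: storage path lost")

rail_a, rail_b = Rail("ib0"), Rail("ib1")
print(pick_rail(rail_a, rail_b).name)   # ib0 while both rails are up
rail_a.healthy = False                  # simulate a failed rail
print(pick_rail(rail_a, rail_b).name)   # ib1 after failover
```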
