Several factors have to be weighed, such as synchronizing switch clocks for the higher speeds, especially among multivendor equipment; ensuring latency remains at acceptable levels; keeping the network design and architecture optimal for 40/100G; and making sure the existing cabling infrastructure can accommodate the 4x to 10x increase in bandwidth.
[A LOOK BACK: 20 milestones in Ethernet's first 40 years]
[PREPARE FOR IT: 100G, SDN leaving older switches behind]
One of the caveats that users should be aware of as they migrate from 10G to 40/100G Ethernet is the need to ensure precise clocking synchronization between systems – especially between equipment from different vendors. Imprecise clocking between systems at 40/100G – even at 10G – can increase latency and packet loss.
The latency issue is a bigger problem than most people anticipate, industry experts say. At 10G, especially at high densities, even a slight clock difference between ports can cause high latency and packet loss. At 40G, the effect is an order of magnitude more pronounced than at 10G.
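A back-of-the-envelope calculation (an illustration of this point, not a figure from the article) shows why the same frequency offset hurts more at higher line rates: the amount of data a receiver drifts out of step per second scales linearly with line rate. The 100 ppm tolerance below is an assumed oscillator spec.

```python
# Hypothetical sketch: how a fixed clock offset between two ports translates
# into buffer drift as line rate grows. The 100 ppm figure is an assumed
# oscillator tolerance, not taken from the article.

def drift_bytes_per_sec(line_rate_bps, ppm_offset):
    """Bytes per second by which sender and receiver drift apart when
    the receiver's clock is off by ppm_offset parts per million."""
    return line_rate_bps * ppm_offset / 1_000_000 / 8

for rate_g in (10, 40, 100):
    drift = drift_bytes_per_sec(rate_g * 1e9, 100)
    print(f"{rate_g}G at 100 ppm offset: {drift / 1024:.0f} KiB/s of drift")
```

At a constant parts-per-million offset, moving from 10G to 40G quadruples the drift the elastic buffers have to absorb, which is why clocking tolerances that were survivable at 10G start causing packet loss at 40G.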
This is a critical requirement in today's data centers because many of the newer technologies deployed there are designed specifically to deliver lower latency.
“Where you’re going to have the biggest challenges will be different latency configurations if RDMA (remote direct memory access) is used,” says Shaun Walsh, Emulex senior vice president of marketing and corporate development. RDMA is a low-latency, high-throughput data transfer capability in which data moves directly between application memory and the network adapter, bypassing operating system buffers.
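The performance point behind RDMA is avoiding intermediate copies. A loose analogy (this is illustrative Python, not actual RDMA code, which would use an RDMA verbs library in C) is Python's `memoryview`: slicing a bytes object copies the payload, while slicing a memoryview only references the original buffer.

```python
# Loose analogy for RDMA's zero-copy idea, in miniature. Not RDMA code.

payload = bytes(1024 * 1024)     # stand-in for an application buffer

chunk_copy = payload[0:4096]     # allocates new memory and copies 4 KiB
view = memoryview(payload)
chunk_view = view[0:4096]        # no copy: a window onto the same memory

assert chunk_view.obj is payload         # view still points at the original buffer
assert bytes(chunk_view) == chunk_copy   # same contents, without the copy
```

RDMA applies the same principle across the network: the adapter reads and writes application buffers in place, so the CPU never spends cycles staging data through kernel buffers.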
“You see a lot more in-rack virtual switching, VM-based switching that is very application specific,” Walsh says. “New line cards in new backplane architectures mean different levels of oversubscription. There’ll be generational tweaks, configuration ‘worrying’ that has to occur. The biggest thing (testers) are running into is making sure you get the 40G you are paying for (with regard to) latency issues, hops, and congestion visibility.”
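The oversubscription Walsh mentions is straightforward to quantify: it is the ratio of a line card's aggregate access bandwidth to its aggregate uplink bandwidth. The port counts below are hypothetical examples, not configurations from the article.

```python
# Illustrative sketch (hypothetical port counts): a line card's
# oversubscription ratio is downstream capacity over uplink capacity.

def oversubscription(ports, port_gbps, uplinks, uplink_gbps):
    """Ratio of aggregate access bandwidth to aggregate uplink bandwidth."""
    return (ports * port_gbps) / (uplinks * uplink_gbps)

# e.g. 48 x 10G access ports fed by 4 x 40G uplinks
print(oversubscription(48, 10, 4, 40))  # -> 3.0, i.e. 3:1 oversubscribed
```

A 3:1 ratio means that if more than a third of the access ports burst at line rate toward the uplinks at once, something queues or drops, which is exactly the congestion visibility problem testers are chasing at 40G.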
Emulex late last year acquired Endace, a developer of network performance management tools. Demand for the Endace products and for the 40G capabilities of Emulex’s XE201 I/O controller is picking up as more data centers and service providers upgrade from 10G to 40G.
Walsh expects 40G Ethernet to be a $700 million market in four to five years, roughly half the time it took 10G Ethernet to reach that mark. Driving it are next-gen blade server mid-plane interfaces and architectures, big data, analytics, video and data over mobile, BYOD, and high-frequency trading, Walsh says.
Another challenge is readying the cabling infrastructure for 40/100G, experts say. Ensuring the appropriate grade and length of fiber is essential to smooth operation.
This is a big consideration for users because it could mean re-wiring a significant portion of their physical plant, if not all of it. That could be an expensive and disruptive undertaking.
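As a concrete instance of the grade-and-length constraint, the IEEE 802.3ba limits for 40GBASE-SR4 multimode optics are roughly 100 m over OM3 fiber and 150 m over OM4. The sketch below encodes those two figures as a quick audit check; treat them as indicative and verify reach against the transceiver vendor's datasheet for your specific optics.

```python
# Sanity-check sketch for 40GBASE-SR4 multimode reach. Reach figures follow
# the IEEE 802.3ba limits (100 m over OM3, 150 m over OM4); confirm against
# your transceiver datasheet before relying on them.

REACH_40GBASE_SR4_M = {"OM3": 100, "OM4": 150}

def link_ok(fiber_grade, run_length_m):
    """True if an existing run can carry 40GBASE-SR4 at this length."""
    max_reach = REACH_40GBASE_SR4_M.get(fiber_grade)
    return max_reach is not None and run_length_m <= max_reach

print(link_ok("OM3", 85))    # within the 100 m OM3 budget
print(link_ok("OM3", 140))   # too long: needs OM4 or single-mode fiber
```

Runs that were comfortably inside the 10GBASE-SR budget on older OM3 can land outside the 40G limit, which is why an audit of installed fiber grades and run lengths belongs early in the migration plan.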