IEEE starts work on standards that could lead to 40G or even 100G Ethernet.
At the YouTube Web site, traffic at peak times is hitting 25Gbps and is expected to climb to 75Gbps soon. "User traffic to our site is continuing to grow. We add multiple 10-gig circuits a month to meet this growth," says Colin Corbett, director of networking.
The problem for large-scale data centers, high-performance computing, R&D networks, Internet exchanges and content providers is that typically it takes as long as four years for the IEEE standards process to unfold. For companies approaching the breaking point today, that seems light-years away.
Video is driving the push to 100G
"There is a dynamic happening in the marketplace, where there is a shift in how video is used and consumed by customers," says Suraj Shetty, director of Cisco's Service Provider Routing Technology Group.
Shetty points to new services -- business- and consumer-oriented -- that enable real-time global viewership. It's this move to mass consumption of video that is straining worldwide resources and sparking the need to move beyond 10G Ethernet, he says.
"Data requires about 10Kbps; a voice-over-IP call, about 50Kbps to 100Kbps; standard video, about 300Kbps to 500Kbps. But if you broadcast in high definition, you're up to about 1.5Mbps to 2Mbps of sustained utilization," Shetty says.
Households are doing a triple play of voice, video and data over televisions, computers and other devices, sending bandwidth requirements into the stratosphere, Shetty says. Simultaneous use of advanced services, such as peer-to-peer file sharing, IPTV and gaming, coupled with high-definition TV, could push bandwidth needs to 50Mbps per household.
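The per-service figures Shetty cites can be sanity-checked with simple arithmetic. The sketch below uses the article's bitrate estimates; the particular mix of simultaneous services is a hypothetical example:

```python
# Rough per-household bandwidth estimate using the per-service bitrates
# cited above (the Kbps/Mbps figures are the article's estimates; the
# mix of concurrent services is a hypothetical example).

SERVICE_KBPS = {
    "data": 10,        # basic data session, ~10Kbps
    "voip_call": 100,  # VoIP call, upper end of the 50-100Kbps range
    "sd_video": 500,   # standard-definition video, upper end of 300-500Kbps
    "hd_video": 2000,  # high-definition video, ~1.5-2Mbps sustained
}

def household_kbps(services):
    """Sum the sustained bitrates of a list of concurrent services."""
    return sum(SERVICE_KBPS[s] for s in services)

# Hypothetical triple-play household: two HD streams, one SD stream,
# one VoIP call and five background data sessions.
mix = ["hd_video", "hd_video", "sd_video", "voip_call"] + ["data"] * 5
total = household_kbps(mix)
print(f"{total / 1000:.2f} Mbps sustained")  # 4.65 Mbps for this mix
```

Even this modest mix approaches 5Mbps sustained; layering in peer-to-peer transfers, IPTV and gaming across several screens at once is how projections reach the 50Mbps-per-household mark.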
Companies such as Cisco also are seeing the bandwidth impact of using video in new ways. "When John Chambers makes speeches, it's not about classic, old videoconferencing; he wants to make you feel that the person who is in Taipei talking to him is sitting right beside him," Shetty says.
Another area seeing amazing growth in bandwidth consumption is healthcare. Advances in medical imaging that let doctors view patient results from anywhere in real time are pushing the boundaries of today's bandwidth capabilities. In fact, some companies are generating and transmitting MRI scanner data that reaches 500MB per hour and 11TB per day.
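A back-of-the-envelope calculation puts that imaging volume in perspective. The sketch below uses the 11TB-per-day figure cited above against nominal Ethernet line rates, ignoring protocol overhead:

```python
# Time needed to move one day's worth of imaging data (the 11TB figure
# cited above) at nominal Ethernet line rates, ignoring protocol overhead.

def transfer_hours(bytes_total, link_bps):
    """Hours required to push bytes_total over a link of link_bps bits/sec."""
    return bytes_total * 8 / link_bps / 3600

DAILY_BYTES = 11e12  # 11TB per day, per the article

for gbps in (10, 40, 100):
    hours = transfer_hours(DAILY_BYTES, gbps * 1e9)
    print(f"{gbps}G Ethernet: {hours:.1f} hours")
```

At 10G, shipping a single day's output ties up a link for roughly two and a half hours; at 100G the same job takes about 15 minutes, which is the kind of headroom these imaging workloads are demanding.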
Link aggregation doesn't cut it
To accommodate this tremendous growth, many organizations have created workarounds. The solutions that help organizations get to 100Gbps today, however, are costly and complex to manage. This runs contrary to most IT groups' mantra of simplifying the network and reducing operating expenses.
For instance, take link aggregation. Some content providers, such as YouTube, have used the IEEE's 802.3ad standard to perform link aggregation and pool their 10G links into a 100G structure. "We split our growth across multiple regions, multiple providers and multiple peers," Corbett says.
Aggregation has numerous drawbacks, however, including the complexity of cable and link management, difficulties in troubleshooting because of multiple links on a single logical interface, and challenges in planning for capacity and traffic engineering. In addition, the standard has severe limitations that create inefficient distribution of large traffic flows, and the introduction of load balancing requires packet inspection and other overhead-heavy mechanisms.
"It becomes untenable with link aggregation to scale more than four links together," says Bob Noseworthy, technical director and chief engineer at the University of New Hampshire Interoperability Lab in Durham. Yet some Internet exchanges are being forced to string together 16 links -- the upper limit -- to achieve the capacity they require during peak hours.
"There are many problems associated with link aggregation. Eight links suddenly become 16, and it quickly turns into a management headache. It's also a lot of money to spend on nonrevenue-producing ports. It's really just a temporary fix for increased bandwidth," says John D'Ambrosia, scientist of components technology at Force10 Networks.
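The flow-distribution limitation Noseworthy and D'Ambrosia describe stems from how link aggregation spreads traffic: a hash of packet header fields pins each flow to one physical member link, so no single flow can ever exceed one link's capacity, and a few large flows can leave the bundle badly unbalanced. The sketch below illustrates the idea; the hash function and flow sizes are illustrative, not 802.3ad's actual algorithm:

```python
# Why hash-based link aggregation distributes large flows poorly: each
# flow is pinned to one member link by a header hash, so a single
# "elephant" flow saturates one 10G link while the rest of the bundle
# sits idle. (The hash here is illustrative, not 802.3ad's algorithm.)

import hashlib

NUM_LINKS = 4  # four 10G links aggregated into a nominal 40G bundle

def pick_link(src_ip, dst_ip, src_port, dst_port):
    """Deterministically map a flow's 4-tuple to a member link."""
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    return int(hashlib.sha256(key).hexdigest(), 16) % NUM_LINKS

# One 9Gbps elephant flow plus two small flows (rates are hypothetical).
flows = [
    (("10.0.0.1", "10.0.1.1", 40000, 80), 9.0),   # elephant flow
    (("10.0.0.2", "10.0.1.2", 40001, 80), 0.5),
    (("10.0.0.3", "10.0.1.3", 40002, 80), 0.5),
]

load = [0.0] * NUM_LINKS
for tup, gbps in flows:
    load[pick_link(*tup)] += gbps

print("per-link load (Gbps):", load)
# Every packet of the 9Gbps flow lands on the same member link: the
# bundle is nominally 40G, but that flow can never use more than 10G.
```

Because the mapping is deterministic per flow, rebalancing requires deeper packet inspection or reshuffling flows, which is exactly the overhead-heavy machinery the aggregation workaround was supposed to avoid.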
The need for higher speed
Although many members of the IEEE study group agree that 100G Ethernet should be the ultimate goal, they also are considering the interim benefits of a 40G Ethernet standard, which capitalizes on the OC-768 standard. In fact, some products on the market, including those from Foundry and Cisco, already claim to support 40G networking.
"This is not a simple question. What we need is a bigger pipe. A lot of people are getting hung up on a single speed, but it's conceivable there are a lot of ways to fill up the pipe," D'Ambrosia says.
He adds that this goal has to be met without breaking the bank. "History has said you want 10 times the performance at three times the cost to have economic feasibility. With 100G Ethernet, we're being challenged because of operational costs. There's going to have to be a perceived cost identified for the applications it can support," he says.
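D'Ambrosia's rule of thumb translates directly into cost per bit. A quick sketch of the arithmetic, using a hypothetical placeholder price for a 10G port:

```python
# D'Ambrosia's economic-feasibility rule of thumb: each Ethernet
# generation should deliver ~10x the bandwidth at ~3x the cost, which
# means cost per bit falls by roughly 70%. The baseline port price
# below is a hypothetical placeholder, not a real market figure.

def cost_per_gbps(port_cost, speed_gbps):
    """Cost per Gbps of capacity for a port at a given speed."""
    return port_cost / speed_gbps

price_10g = 10_000           # hypothetical 10G port price
price_100g = 3 * price_10g   # the 3x-cost target for 10x the speed

old = cost_per_gbps(price_10g, 10)     # 1000 per Gbps
new = cost_per_gbps(price_100g, 100)   # 300 per Gbps

print(f"cost per Gbps falls from {old:.0f} to {new:.0f} "
      f"({(1 - new / old) * 100:.0f}% cheaper per bit)")
```

Until component costs let a 100G port land near that 3x multiple, buyers are left weighing it against stacking nonrevenue-producing 10G ports, which is the economic squeeze D'Ambrosia describes.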
With the aggressive timeline of fewer than four years to hammer this out, vendors, carriers and content providers are putting pressure on component makers.
Need speed 'across all aspects'
"We see a need for bandwidth increases across all aspects -- our peers, our transit providers and the end users in their homes. Overall, switch capacity and port density will need to scale to support multiple 40-gig and 100-gig ports," YouTube's Corbett says.
"A lot of things need to align," says Val Oliva, director of product management with Foundry's Enterprise Business Unit. "When you transfer packets at that speed, you need a higher-speed memory to transfer the data from the pipe into the packet processor. Then it goes into the memory, and memory speeds today are not fast enough to support 100G. We're not talking about 64-bit processing -- this has to be an order of magnitude higher."
In addition to memory, Oliva says processors also need to be faster. "You have to build them to exceed 100G, just like we built 15G processors to support 10G ports."
The last consideration is cost. "The third and most important thing here is that if you are building a 100G processor, you want to make sure the chipset is small. Otherwise, it becomes expensive and unaffordable," he says.
But Oliva is optimistic that once a demand is shown for 100G, the components will be manufactured quickly. "The technology doesn't exist yet, but it can be built quickly -- within a year or two," he says.
For Noseworthy and others, the success of getting 100G to the masses by 2010 will depend on the momentum of the IEEE study group and its ability to keep the effort standards-based. "You don't want a bunch of proprietary solutions. You don't want 20 different flavors, because it doesn't benefit from economies of scale and doesn't drive the cost down," Noseworthy says.
Gittlen is a freelance writer in Northboro, Mass. She can be reached at firstname.lastname@example.org.
Learn more about this topic
Voltaire upgrade takes aim at high-speed server clusters (11/06/06)
Deciding the future of Ethernet (02/13/06)
Energy Sciences Network adds 10G Ethernet metropolitan-area net in key cities