Lately I have seen many data center designs that contain 10 Gigabit Ethernet links at the access, distribution, and core layers of the hierarchy. Traditionally, the bandwidth increases as you reach the core of the network. Historically, networks were like trees. The access network "leaves" are smaller, the distribution network "branches" are a little bigger, and the core network "trunk" is thick. However, with the prolific use of 10GE interfaces, traditional network design oversubscription ratios are no longer achievable.
When constructing a multi-tiered network design it is important to consider the bandwidth oversubscription ratios at every layer of the Ethernet switching hierarchy. The idea is that the upstream bandwidth at each layer of the hierarchy must provide adequate bandwidth for the downstream devices. Statistically, however, not every downstream device transmits at full rate at the same time, so the uplink capacity does not need to equal the sum of the downstream link capacities. This "oversubscription" ratio of downlinks to uplinks needs to be monitored closely, so that bottlenecks do not form in the network that are difficult to detect and that provide poor connectivity for downstream devices.
Common access-downlink to access-uplink ratios are 20:1, and common distribution-downlink to distribution-uplink ratios are 4:1. The figure below illustrates this concept: a 20:1 ratio between the access ports on an Intermediate Distribution Frame (IDF) switch and its uplinks to the distribution switch, and a 4:1 ratio of distribution switch downlinks to its core uplinks. Traditionally, single Gigabit Ethernet links are used to connect servers, the uplinks are 10GE links, and the core is connected with four 10GE links.
A similar diagram can be found in the Cisco Enterprise QoS Solution Reference Network Design Guide (SRND) version 3.3.
Many newer servers and blade centers are coming with 10GE interfaces. The links between core devices are also using 10GE interfaces. Now we have a design where the leaves are as thick as the trunk of the tree. Therefore, 10GE is changing the oversubscription ratios commonly used in network designs.
For example, if an IDF has 240 ports (5 switch stacks of 48-port 10/100/1000Mbps switches), then the total downstream bandwidth is 240Gbps. Therefore, the uplink bandwidth should be one twentieth of 240Gbps, or 12Gbps. Those uplinks will probably be a pair of 10GE links. Then consider a set of distribution switches that supports only four of those IDFs. The total distribution layer downstream bandwidth would be 960Gbps, so the uplink bandwidth should be one quarter of 960Gbps, or 240Gbps. However, since that amount of uplink bandwidth cannot practically be deployed, we are probably faced with using a set of four 10GE links from each distribution switch to each of the pair of core switches.
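The arithmetic above can be sketched in a few lines of Python. This is a minimal sanity check; the port counts, link speeds, and ratios all come from the example in the text, not from any particular hardware.

```python
import math

ACCESS_PORTS = 240     # 5 stacks of 48-port 10/100/1000 switches
GBPS_PER_PORT = 1      # worst case: every access port at 1 Gbps
ACCESS_RATIO = 20      # 20:1 access oversubscription
DIST_RATIO = 4         # 4:1 distribution oversubscription
IDFS_PER_DIST = 4      # IDFs behind the set of distribution switches

# Access layer: 240 Gbps downstream -> 12 Gbps of uplink bandwidth
access_down = ACCESS_PORTS * GBPS_PER_PORT
access_uplink = access_down / ACCESS_RATIO
uplinks_10ge = math.ceil(access_uplink / 10)   # a pair of 10GE links

# Distribution layer, counting total access bandwidth as in the text:
# 4 x 240 Gbps = 960 Gbps downstream -> 240 Gbps of uplink bandwidth
dist_down = IDFS_PER_DIST * access_down
dist_uplink = dist_down / DIST_RATIO

print(access_uplink, uplinks_10ge, dist_uplink)   # 12.0 2 240.0
```

Note the last number: 240Gbps of uplink is what the 4:1 ratio calls for, which is exactly the bandwidth that cannot practically be deployed today.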
The second example involves servers with 10GE links. Let's say a Nexus switch has 32 10GE links to servers, clusters, and blade centers in the data center. The 20:1 rule would indicate 16Gbps of uplink bandwidth, which could be satisfied with a couple of 10GE uplinks to the distribution switches. Those distribution switches could support only a couple of these IDF switches downstream while still requiring only a few 10GE uplinks to the core switches.
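The same 20:1 arithmetic applies to the 10GE access example; as a sketch (the 32-port figure comes from the example above, not from a Nexus spec sheet):

```python
import math

SERVER_PORTS = 32     # 10GE server-facing ports on the access switch
GBPS_PER_PORT = 10
ACCESS_RATIO = 20     # the 20:1 rule

downstream = SERVER_PORTS * GBPS_PER_PORT      # 320 Gbps server-facing
uplink_needed = downstream / ACCESS_RATIO      # 16 Gbps
uplinks_10ge = math.ceil(uplink_needed / 10)   # 2 x 10GE uplinks

print(uplink_needed, uplinks_10ge)   # 16.0 2
```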
The distribution layer is getting squeezed out by the extensive use of 10GE interfaces within data centers, and more organizations may be looking at a 2-tier model rather than the traditional 3-tier model. In the two-tier model, the only ratio that applies is the 20:1 ratio from the access downlinks to the access uplinks that connect directly to the core.
40GE and 100GE on the Horizon:
This oversubscription ratio problem won't remain like this for long. We can already see 40 Gbps Ethernet and 100 Gbps Ethernet on the horizon. Earlier this year the NYSE announced plans to deploy 100Gbps Ethernet. Service providers like Qwest are planning early deployments of 100Gbps in their high-performance backbones. In fact, some of the first 100Gbps links have already been sold. I agree with those who propose skipping 40Gbps Ethernet and going straight to 100Gbps Ethernet. I also feel that 100Gbps Ethernet is going to gain wider industry adoption than OC-768. History has shown that you just can't beat Ethernet for simplicity, performance, and price.
The use of 10GE interfaces for access, distribution, and core will produce network architectures whose leaves have the same bandwidth as the tree's trunk. In order to maintain oversubscription ratios, the industry is looking toward using 100GE in the years to come. Network World published their "100G Ethernet cheat sheet" a few weeks ago. I encourage you to check out these articles and keep track of how 100Gbps Ethernet will affect how you design networks in 2010.