• Heavyweight protocols such as Spanning Tree Protocol (STP), required to prevent forwarding loops, have encumbered flat networks and have therefore often forced complex tiered Layer 2 and Layer 3 switch designs to support scale-out architectures; these designs impose significant latency penalties in operation and necessitate heavy bandwidth over-subscription
• Per-switch silicon imposes significant latency on each packet; compared with latency on the server side, this adds substantially to overall round-trip times in large systems
• Congestion within switches, caused by hotspots in the network, can produce a catastrophic drop-off in overall bandwidth
Conventional large-scale Ethernet deployments have relied upon three-tier architectures of access, distribution (or aggregation), and core switches to control network operations. Such designs inhibit scalability due to systemic constraints in the architecture: network resources soon become over-committed, especially in the presence of device-to-device (east-west) communication.
Such constraints have driven a move away from the three-tier model toward a flatter network based on leaf switches, which provide access to devices (storage and servers), and spine switches, which create a rich multi-path fabric in which potentially all of the available bandwidth can be used to sustain device-level communication irrespective of where those devices are located.
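The multi-path property of a leaf-spine fabric can be seen in a minimal sketch. In a two-tier design, every pair of leaf switches is connected through every spine, so the number of equal-cost paths between any two leaves equals the number of spines. The switch names and fabric size below are illustrative assumptions, not a real deployment:

```python
def leaf_spine_paths(num_spines: int, src_leaf: str, dst_leaf: str):
    """Enumerate the equal-cost two-hop paths between two leaf switches
    in a hypothetical two-tier leaf-spine fabric."""
    spines = [f"spine{i}" for i in range(num_spines)]
    # Every spine offers one independent path between the two leaves.
    return [(src_leaf, spine, dst_leaf) for spine in spines]

paths = leaf_spine_paths(4, "leaf1", "leaf2")
print(len(paths))  # 4 equal-cost paths, one per spine
```

Adding spines therefore scales cross-sectional bandwidth without adding hops, which is why the topology can, in principle, sustain full device-to-device bandwidth.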
In practice, however, such networks cannot deliver complete isotropy because congestion cannot be fully managed as transmit and receive flows change rapidly in operation. Congestion typically forms in one of two places: at an egress port, where the volume of traffic attempting to reach an attached device exceeds the available bandwidth of that interface, or within the network, where the aggregate traffic taking a particular path exceeds the bandwidth available on that path.
Both scenarios cause traffic to be buffered within the network, leading to head-of-line (HoL) blocking, in which traffic that is not contributing to the congestion is nevertheless affected. The impact shows up as additional latency, jitter, or, worse, frame loss.
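The HoL effect can be illustrated with a toy queue model (this is an assumed simplification, not any vendor's switch architecture): a single FIFO ingress queue feeds two egress ports, one of which is congested. A packet destined for the free port sits blocked behind packets for the congested one:

```python
from collections import deque

# Single FIFO ingress queue; each entry is (egress_port, packet_id).
ingress = deque([("A", "p1"), ("A", "p2"), ("B", "p3")])
# Port A is fully congested (no transmit credits); port B is idle.
credits = {"A": 0, "B": 10}

delivered = []
blocked_cycles = 0
for _ in range(5):  # run a few scheduling cycles
    port, pkt = ingress[0]
    if credits[port] > 0:
        ingress.popleft()
        credits[port] -= 1
        delivered.append(pkt)
    else:
        # Head of line cannot move, so p3 waits even though port B is free.
        blocked_cycles += 1

print(delivered)  # [] -> p3 was never delivered despite its port being idle
```

The packet for port B is innocent of the congestion yet still suffers, which is exactly the latency and loss behaviour described above.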
A separate issue exists in that the paths taken by traffic within the network are often based upon a static mapping mechanism, which is unaware of network load. Typically this is based upon a hashing mechanism that will always result in a given traffic flow following the same path regardless of congestion that may lie ahead on that path. To overcome this hurdle, network architects are often forced to over-provision bandwidth leading to under-utilization of the available resource, which is both inefficient and costly.
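The static mapping problem can be sketched in a few lines. The hash function and header fields below are illustrative assumptions, but the key behaviour is general: the chosen path is a pure function of the flow's identity, so the flow lands on the same path every time, regardless of congestion:

```python
import zlib

def pick_path(src_ip, dst_ip, src_port, dst_port, proto, num_paths):
    """Static ECMP-style path selection: hash the flow's 5-tuple and take
    it modulo the number of paths. Deterministic, load-unaware."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    return zlib.crc32(key) % num_paths

flow = ("10.0.0.1", "10.0.1.9", 49152, 443, "tcp")
first = pick_path(*flow, num_paths=4)
# Repeated lookups always land on the same path, congested or not:
assert all(pick_path(*flow, num_paths=4) == first for _ in range(100))
```

Because nothing in the function reflects current load, a heavy flow pinned to a congested path stays there, which is what drives the over-provisioning described above.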
When considering the sources of all incoming packets to a switch, conventional Ethernet allocates outgoing bandwidth fairly among the ingress ports. Critically, however, it pays no regard to the journey those packets have made: an ingress port may have a single node attached, or it may be the final hop of a large network connecting thousands of nodes. The net result is that large portions of the network's workload can be locked out for considerable periods while expensive links sit needlessly idle.