Network World - This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter's approach.
Access to data center resources needs to be fast, secure and reliable, which poses a significant challenge for the data center network infrastructure, tasked with adhering to the following principles:
* Deliver data center application resilience, high availability and fault tolerance.
* Achieve maximum data center application delivery optimization and acceleration.
Such functionality can also be, and at times is, delivered by the applications themselves without any special support from the network. Security services, for example, can take the form of SSL encryption and host-level firewalls; application resilience can be achieved through clustering; and better-written code can, to a certain extent, account for optimized and accelerated application delivery.
Why then do we need the network to play a role? First and foremost, not all applications were designed and written with security, reliability and efficiency in mind. What's more, network-delivered services scale better for larger environments and can complement server- and application-level functionality, rather than be orthogonal to it.
Let's look at how these services should be integrated into the data center fabric and some pros and cons of each approach.
The traditional model of data center service delivery relies on physical service appliances (or service modules) positioned adjacent to the data center Layer 2/Layer 3 boundary, which most often occurs in the Aggregation (aka Distribution) Layer. In fact, data center service delivery models have become so common that a new design layer simply called the Services Layer was introduced. Service appliances can operate in two main modes, routed or bridged, and while there are more variations of each, let's stick with the main ones:
In routed mode, service appliances behave like routers, with the client-facing side and the server-facing side belonging to two different IP subnets. Traffic forwarded through the service appliance gets routed based on IP reachability while service functionality is applied. Introducing routed-mode appliances into an existing topology often requires IP addressing changes to accommodate the requirement of having different IP subnets on the client- and server-facing sides.
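The routed-mode behavior described above can be sketched in a few lines of Python using the standard `ipaddress` module. The subnet values are purely illustrative assumptions, not addressing from any real deployment:

```python
import ipaddress

# Hypothetical routed-mode appliance: the client-facing and
# server-facing interfaces sit in two different IP subnets.
client_side = ipaddress.ip_network("10.1.1.0/24")   # client-facing subnet (example)
server_side = ipaddress.ip_network("10.2.2.0/24")   # server-facing subnet (example)

def crosses_appliance(src: str, dst: str) -> bool:
    """Traffic transits the appliance only when it must be routed
    between the two subnets, at which point services are applied."""
    s = ipaddress.ip_address(src)
    d = ipaddress.ip_address(dst)
    return (s in client_side and d in server_side) or \
           (s in server_side and d in client_side)

print(crosses_appliance("10.1.1.20", "10.2.2.80"))  # True: routed through the appliance
print(crosses_appliance("10.2.2.80", "10.2.2.81"))  # False: same subnet, no routed hop
```

Intra-subnet traffic in this sketch never touches the appliance, which is exactly why the routed model forces the client and server sides into separate subnets.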
Servers can be either Layer 2 or Layer 3 adjacent to the appliances themselves, although with Layer 3 they are not truly adjacent but rather reachable through routed hop(s). In the case of Layer 2 adjacency, service appliances become the default gateways for the servers, while Aggregation/Distribution devices behave purely as Layer 2 switches for those server VLANs.
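The adjacency distinction can be expressed the same way: a server is Layer 2 adjacent when it shares a subnet with the appliance's server-facing interface, and only then does the appliance serve as its default gateway. The interface address below is a hypothetical example:

```python
import ipaddress

# Hypothetical server-facing interface on the appliance (illustrative only).
appliance_inside = ipaddress.ip_interface("10.2.2.1/24")

def gateway_for(server_ip: str) -> str:
    """Layer 2 adjacent servers point their default gateway at the
    appliance; other servers reach it via intermediate routed hop(s)."""
    server = ipaddress.ip_address(server_ip)
    if server in appliance_inside.network:
        return "appliance is default gateway (L2 adjacent)"
    return "reachable via routed hop(s) (L3 adjacent)"

print(gateway_for("10.2.2.50"))   # same /24 as the appliance interface
print(gateway_for("10.3.3.50"))   # different subnet, routed path
```

In the L2-adjacent case the Aggregation/Distribution switches do no routing for that VLAN; they simply bridge frames between the servers and the appliance.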