I had a great time meeting with a variety of customers at Cisco Live in Orlando back in June. We covered a lot of different topics around data center security. One specific item that came up more than once was the use of Layer 2 versus Layer 3 firewalls in the data center. In fact, this topic comes up fairly often, so as a follow-up I thought it would be great to write a post that discussed this topic.
In my first post, I discussed the importance of establishing security as part of the data center fabric. Deploying firewalls and other security tools is vital to mitigating threats among server and application zones. However, not everyone outside of the security team is always gung-ho on deploying these services in internal data centers. They see the addition of these security devices as intrusive and as limiting the flexibility and elasticity of the data center. These objections can be addressed with data center security designs that demonstrate added security while supporting business enablement.
Being able to deploy firewalls in the data center in a model that requires minimal changes or disruption to the existing infrastructure is a significant security benefit. Deploying firewalls in Layer 2 transparent (bridge) mode is one way to accomplish this. Let me explain.
First, I do not believe there is only one model everyone should follow. There are dependencies on architecture, design, restrictions, and business objectives of the organization.
Flexibility of deployment allows one to pick and choose, and sometimes use a “mixed mode” of Layer 2 and Layer 3 in their approach. Firewalls deployed in Layer 2 mode provide the most transparent method for integrating with existing routing and IP designs as well as existing services such as load balancers. With Layer 2 firewalls, existing server gateways, IP subnets, and addressing are preserved. This, of course, does not mean there are no use cases for Layer 3 firewalls – there are plenty: multi-tenant environments, Layer 3 to the access layer, private cloud. From an ease of deployment and integration perspective, however, Layer 2 has some advantages in the data center. Let’s go through some of these.
Figure 1. Logical view of Layer 2 and Layer 3 firewall modes
Let’s take a look at the typical Layer 3 default gateway for a server. In the figure below, the server has an IP address of 192.168.100.50 and resides in VLAN 100. This could be a standalone physical server or, in this case, a virtual machine. The Layer 3 default gateway resides on the aggregation switch with an IP address of 192.168.100.1. For Layer 3 high availability you can rely on technologies like HSRP, VRRP, and GLBP.
Figure 2. Layer 3 default gateway for server
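As a sketch of what that gateway redundancy could look like on the aggregation switch, here is an illustrative HSRP configuration in IOS syntax (the physical interface address 192.168.100.2 and the HSRP group number are assumptions for illustration, not from the original):

```
! Aggregation switch: SVI for VLAN 100
! The HSRP virtual IP 192.168.100.1 is the server's default gateway
interface Vlan100
 ip address 192.168.100.2 255.255.255.0
 standby 1 ip 192.168.100.1
 standby 1 priority 110
 standby 1 preempt
```

A second aggregation switch would carry the same `standby 1 ip` with a lower priority, so the virtual gateway address survives a switch failure.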
If we insert a firewall between the server and the Layer 3 gateway on the aggregation switch, we can deploy that firewall in either Layer 2 or Layer 3 mode. If the firewall is deployed in Layer 2 transparent mode, we can preserve the current IP scheme and require no IP address changes on the gateway, the server, or the application. This is shown in Figure 3 below.
Figure 3. Inserting a Layer 2 firewall between the server and Layer 3 gateway
In this model the firewall is inserted between the server and the aggregation switch. Because the firewall is simply acting as a Layer 2 bridge, all that needs to be added is an inside VLAN. The firewall then bridges the two VLANs, which both reside in the same IP subnet. In this case the server continues to use the same IP address and Layer 3 gateway it had before the firewall deployment.
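As an illustrative sketch of this bridging on an ASA (the interface names, the inside VLAN ID 200, and the use of subinterfaces on a single trunk are assumptions for illustration, not from the original):

```
firewall transparent
!
! Outside VLAN 100 faces the aggregation switch (the Layer 3 gateway)
interface GigabitEthernet0/0.100
 vlan 100
 nameif outside
 bridge-group 1
 security-level 0
!
! New inside VLAN 200 faces the server; same IP subnet, bridged by the ASA
interface GigabitEthernet0/0.200
 vlan 200
 nameif inside
 bridge-group 1
 security-level 100
```

Traffic between the server (VLAN 200) and its unchanged default gateway (VLAN 100) now passes through the firewall's inspection policy without any re-addressing.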
On the ASA you also need a BVI (Bridged Virtual Interface) address for each Layer 2 context you create. This BVI is used for management access to the Layer 2 transparent context and must be on the same subnet as the hosts.
Figure 4. Layer 2 BVI and VLAN information on ASA
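The BVI described above amounts to a single management address for the bridge group; a minimal sketch (the .5 management address is an assumption for illustration):

```
! BVI for bridge-group 1: management address on the same
! subnet as the bridged hosts (192.168.100.0/24)
interface BVI1
 ip address 192.168.100.5 255.255.255.0
```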
There are almost always additional security and application services deployed in the data center. This transparent deployment also allows for easy integration with these existing services. Most load balancers, for example, are also capable of operating in both Layer 2 and Layer 3 modes. By combining multiple Layer 2 services, you can create a flow between services simply by bridging VLANs.
Figure 5. Multiple Layer 2 services
Of course, you could accomplish this with the firewall and other services all configured in Layer 3 mode. It just means you need more subnets and possibly have to re-address more servers and devices. In a multi-tenant or cloud environment where you have dedicated services per tenant, it may make sense to run everything in Layer 3 routed mode.
This linkage of services in the data center and cloud has been advanced through a function known as service chaining. In service chaining, virtualized services (firewalls, load balancers) are decoupled from the network topology, allowing the data path to be programmed and each service to be inserted and removed dynamically. Cisco accomplishes service chaining through the use of vPath, tied directly to the hypervisor via the Nexus 1000v virtual switch.
Because service chaining decouples services from the network topology, where the services physically reside is less important. Services can be carved up and assigned per zone, tenant, or department. The services can be either Layer 2 adjacent or one or more Layer 3 hops away. This is extremely useful in virtualized and cloud environments because it provides even more flexibility in deployment.
Figure 6. Service Chaining for virtualized services
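On the Nexus 1000v, steering traffic through a vPath service node is done by binding a virtual service to the port-profile the VMs attach to. A rough sketch (the node name, IP address, tenant org, and policy profile name are illustrative assumptions, not from the original):

```
! Define the virtual service node reachable via vPath;
! it can be Layer 2 adjacent or one or more Layer 3 hops away
vservice node VSG-1 type vsg
 ip address 10.10.10.10
 adjacency l3
!
! Bind the service to the VMs' port-profile; matching traffic is
! redirected through the service node before reaching the VMs
port-profile type vethernet WEB-SERVERS
 org root/Tenant-A
 vservice node VSG-1 profile WEB-POLICY
```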
The relevance of service chaining can also be linked to Software Defined Networking (SDN), where virtualized services are abstracted and applied to network flows via decoupled control and data planes. Service chaining allows new services to be applied dynamically and more quickly to reflect immediate business needs, at a much lower cost.
I’ll be exploring more advanced concepts of data center security service chaining and security for software defined networking in future posts.
Until next time…