Firewalls have been changing slowly over the years as network architectures have evolved. They are becoming more decentralized and increasingly virtualized. As firewalls move from being located solely at the perimeter inward toward the servers, many other changes are taking place. The pendulum of centralized versus distributed systems continues to swing back and forth as the industry searches for the optimal equilibrium for security architectures.
One of the first books I read on the subject of firewalls was "Building Internet Firewalls" by Elizabeth D. Zwicky, Simon Cooper, and D. Brent Chapman. It covered least privilege, defense in depth, the choke point, the weakest link, a fail-safe stance, universal participation, and diversity of defense. The concept of the choke point helped organizations focus their attention on defining a security perimeter and placing firewalls at that single point of entry. At the time, most organizations had a single perimeter, and many could afford only a single firewall at their Internet connection.
It is clear that firewalls have changed over the years. Many firewalls lack policy granularity, and many organizations' firewalls end up with large numbers of NAT and policy rules. Some say that firewalls do not impede the bad traffic; they just impede the good traffic. Most attacks take place at the application layer over TCP port 80 anyway. Stateful firewalls see only one aspect of the security picture by looking at the packet header. We need firewalls to perform more content filtering and deep packet inspection. Unified Threat Management (UTM) firewalls evolved as we expected more functionality at the single choke point. We now rely more on DPI/IPS, behavioral analysis, anomaly detection, Data Loss Prevention (DLP), and Web Application Firewalls (WAFs) to protect our critical systems. A firewall can define a network perimeter, but it cannot protect against the insider/malware threat. Since 1997 I have thought the end of the firewall era was right around the corner. In recent years we have seen the "erosion of the security perimeter," and our firewalls have turned into Swiss cheese. Because of all these trends, the firewall as a concept has slowly died, or at least had its role in the security architecture diminished.
The other day I was joking with someone who was complaining about their slow computer, and I flippantly suggested turning off their antivirus software. AV software can put a strain on computer resources, and running without it certainly speeds a computer up. However, you wouldn't think of running a critical computer without AV software. Likewise, running a network without a firewall can make the transmission of data very fast, yet none of us would ever consider running an Internet-connected network without one.
Years ago, firewalls were confined to the Internet perimeter to create that choke point. Now organizations use firewalls at multiple perimeters and internally. As businesses started to do more with firewalls and segment their environments into separate "enclaves," "zones," or "compartments," they moved the firewalls to the core. There are challenges with using firewalls on the interior of your network. The rule-sets either grow large or become less granular to stay manageable. Policies in these firewalls tend to have subnets as the minimum level of granularity for the source or destination address. In the end, these firewalls only delay legitimate internal traffic and do not necessarily keep out the bad guys. If you assume that the bad guys are already inside your network, you are probably on the right track.
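As a minimal sketch of that subnet-level granularity, consider a pair of hypothetical iptables rules on an internal Linux-based firewall (the addresses, port, and log prefix are illustrative, not from any real policy):

```shell
# Illustrative internal "core" firewall policy. Note the finest
# match here is a /24 subnet, not an individual host -- the
# granularity limitation described above.

# Allow the user LAN (10.1.20.0/24) to reach the server enclave
# (10.1.50.0/24), but only on HTTPS.
iptables -A FORWARD -s 10.1.20.0/24 -d 10.1.50.0/24 \
         -p tcp --dport 443 -j ACCEPT

# Everything else between enclaves is logged, then dropped.
iptables -A FORWARD -j LOG --log-prefix "CORE-FW-DROP: "
iptables -A FORWARD -j DROP
```

Any host inside 10.1.20.0/24, legitimate or compromised, is treated identically by this policy, which is exactly why subnet-granular internal firewalls delay good traffic more than they stop bad actors already inside.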
There are several reasons why firewalls are suboptimal. As policy sizes increase, so do the demands on the CPU and memory resources of the firewall. We also expect more logging from our firewalls because we want to send that data to our Security Information Management Systems (SIMs). This logging further drives down firewall performance. In the early years of firewalls we had a hard time implementing firewalls that provided redundancy and sufficient performance. As edge bandwidth increased, firewalls needed increasingly higher interface speeds. Now we have firewalls with 10 Gigabit Ethernet interfaces. This makes for an expensive firewall that needs the bandwidth and CPU resources to keep up with that amount of traffic. We are basically turning our firewalls into slow routers.
There is a distinct trend in the industry to move stateful firewalling closer to the servers within an IT environment. With server virtualization and server consolidation, we can have virtual servers with different trust levels on the same physical server. With only perimeter or core firewalls, the firewall is no longer close enough to the server. Having a firewall close to the server provides maximum security for each server and allows servers of diverse trust levels to communicate with one another only through a stateful firewall. This technique of firewalling at the hypervisor/server-virtualization layer prevents unacceptable server-to-server communications.
The following diagram shows how many virtual computers can run within one physical computer. Each may have a different level of trust or handle a different classification of data. Therefore, stateful packet filtering within the virtual environment is required to maintain separation and security. Using stateful packet filtering at this level of the architecture may also be required to meet security compliance standards.
The current trend is moving more toward host-based firewalls. Although there may be organizations out there using iptables/ip6tables on their virtualized servers, many others are looking to use a more sophisticated firewall at the hypervisor layer. A variety of companies now offer virtualized firewall products in this new area.
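For the iptables/ip6tables approach mentioned above, a host-based stateful policy might look like the following sketch (the management subnet and service port are assumptions for illustration):

```shell
# A minimal stateful host firewall using iptables with the
# conntrack match, as a virtualized server might run it.
# All addresses and ports are illustrative.

# Default deny for inbound traffic.
iptables -P INPUT DROP

# Always allow loopback traffic.
iptables -A INPUT -i lo -j ACCEPT

# Permit return traffic for connections this host initiated --
# this is what makes the filtering "stateful."
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

# Permit new SSH sessions, but only from a management subnet.
iptables -A INPUT -p tcp --dport 22 -s 10.1.99.0/24 \
         -m conntrack --ctstate NEW -j ACCEPT
```

Running a ruleset like this on every guest is workable but hard to manage at scale, which is one reason the hypervisor-layer firewall products listed below are attractive.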
Cisco Nexus 1000V Virtual Security Gateway (Virtual ASA)
Juniper Networks vGW Virtual Gateway (formerly Altor Networks)
VMware has a VMSafe Partner Program to "approve" those vendors who have solutions that work with VMware. Earlier this year Ellen Messmer wrote a good article on security in virtualized environments titled "Battle looms over securing virtualized systems".
Another issue that comes to light early in an organization's consideration of a virtual firewall system is management. Who maintains the virtualized firewall? Does responsibility for its configuration and management fall on the network team, the security team, or the system administrators? This is becoming a larger issue as more appliances also move to the virtualization layer. It is easy to predict that Server Load Balancers (SLBs) and Application Delivery Controllers (ADCs) will become virtualized and be implemented at the hypervisor layer. As systems become increasingly virtualized, the traditional lines of physical demarcation are blurring.
This is the time of year for horror movies. One of my favorite actors when I was growing up was Vincent Price, and I liked the movie "Pit and the Pendulum". It reminds me of how trends swing back and forth like a pendulum. Whether it is bell-bottom jeans or IT systems moving from centralized to distributed and back again, the pendulum is always in motion. Many years ago there were mainframe computers with centralized computing. From the 1980s through the 2000s we distributed our computing resources and made them geographically diverse. This may have supported our Disaster Recovery (DR) goals, but it made such a distributed environment difficult to manage. Then the pendulum swung the other way as companies created centralized server farms, consolidated data centers, and performed server consolidation. The pendulum has moved from mainframes to distributed servers, and now we are moving back toward larger physical servers with virtualized operating systems. This sounds remarkably like timesharing on a mainframe. As far as firewall architectures are concerned, the pendulum has swung from using centralized firewalls at the perimeter to using distributed firewalls in other areas, as the picture below illustrates.
We are also witnessing the pendulum swing in the core routing/switching realm. For almost 20 years we have had a widely distributed set of routers performing distributed packet forwarding with a distributed control plane. Routers have distributed intelligence and use routing protocols to share reachability information; each router operates autonomously. Now we may be moving back toward a centralized control plane with technologies like OpenFlow. OpenFlow centralizes the control plane but leaves the forwarding and data planes distributed across the network topology.
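That split between a centralized control plane and distributed forwarding can be sketched with Open vSwitch, a common OpenFlow-capable switch (the bridge name, controller address, and flow entries here are illustrative assumptions):

```shell
# Sketch of centrally-programmed forwarding with Open vSwitch.
# Bridge name, controller IP, and flows are all illustrative.

# Point the switch's control plane at a central OpenFlow controller;
# the controller decides, the switch merely forwards.
ovs-vsctl set-controller br0 tcp:192.0.2.10:6633

# Flow entries can also be installed directly for testing.
# Forwarding happens locally on each switch even though the
# forwarding decision was made centrally.
ovs-ofctl add-flow br0 "priority=100,ip,nw_dst=10.1.50.0/24,actions=output:2"
ovs-ofctl add-flow br0 "priority=0,actions=drop"
```

The contrast with traditional routing is that no protocol exchange between peers computes these paths; a single central point programs every switch, which is the pendulum swinging back toward centralization.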
Firewall architectures have changed over the past 20 years. Firewalls have moved from the perimeter to the network core, toward the server edge, and to the virtualization layer. We have seen computers move from centralized to distributed and back again with server consolidation and virtualization. These pendulums will continue to swing over the years until the industry matures and discovers the best equilibrium to support our businesses at the lowest cost. As Heraclitus said, "nothing endures but change."