Endpoints and Network Integration

The network and the devices and people who connect to it must work as a team if predictable and reliable control is to be achieved. Although many of us are going to have to deal with the legacy hardware and software that has been added to our networks over the course of time, some of us are in the process of designing the networks of tomorrow.

No matter what your present network state, if you make decisions that drive your network toward a closed-loop process control (CLPC) model, you will be in a better position to address the security issues of today and those of tomorrow.

Précis

We start this chapter with a discussion of architecture and how your existing infrastructure can influence your decision on how to integrate CLPC into your enterprise. The question "Do you need a forklift?" is discussed, as are some interesting alternatives to a complete reworking of your network.

We then move to a discussion about how the vendor community supports endpoint security and integrity. We discuss who some of the players are and what their strengths and weaknesses are.

Making a decision about permitting or denying access to the network without some form of remediation is clearly a nonstarter, so we discuss vulnerabilities and some remediation technology.

Special Points of Interest

The discussion regarding what a vulnerability is seems to have sparked a vigorously contested set of definitions. Although this is a subject worthy of much discussion, we are going to take a pass on it for now. In this chapter, we use the term vulnerability to mean a way to attack the endpoint and the network.

We expand the definition of authentication a little bit to encompass the operating system and associated applications. This is key to the CLPC concept because trust is measured against a stated policy.

We've tried to solve this problem on the network and at the endpoint. Vendors pick a direction based on their products and work from it, à la Cisco and Microsoft. The plain fact of the matter is that the network needs to help the endpoint, and the endpoint needs to help the network. Only through this symbiosis can we provide the requisite controls to build a CLPC.

Architecture Is Key

When we start any design process, prudent engineering tells us that we should start with some basic assumptions that will drive our architecture. When we go down a specific architectural path, changing our minds or changing the basic assumptions usually means making compromises that reduce the effectiveness of our intended design.

There are three ways to architect our control solution. One is based on industry standards, and two are based on proprietary solutions offered by the leaders in their respective lines of business.

Cisco is a networking vendor that is trying to control and protect the endpoint, and Microsoft is an endpoint solution vendor trying to use parts of the network infrastructure to add security.

Some vendors, such as Juniper (through their Funk acquisition) and Symantec (through their Sygate acquisition), have expanded the capability of the 802.1x supplicant beyond simple authentication, and this is where we begin to see the foundation for a real proportional control standard that truly closes the loop.

These new products, such as Symantec's Enterprise Protection,1 go beyond checking that the endpoint complies with system-level policy before it is allowed to connect to the network. They also add protections that help ensure the system is capable of defending itself and the data it contains.

Basics

To make CLPC work, you need to have an infrastructure that supports it. You have to have some mechanism that enforces your policy; otherwise, it's a voluntary participation model.

You can work from two basic choices if your intent is to actively enforce your policy: 802.1x and Dynamic Host Configuration Protocol (DHCP). Of course, you can realize a passive type of policy enforcement through compartmentalization. The downside of compartmentalization is that it adds complexity to your network architecture, and complexity means the increased possibility of failure. More on compartmentalization later.
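
To make the two active choices concrete, here is a minimal sketch in Python of the DHCP-style enforcement decision. The posture check, scope names, and addresses are illustrative assumptions, not any vendor's implementation: a compliant endpoint gets a production lease, and everything else is parked in a quarantine scope that can reach only remediation servers.

    # Minimal sketch of DHCP-based policy enforcement. A compliant endpoint
    # receives a production lease; anything else is parked in a quarantine
    # scope. The scope definitions and posture check are hypothetical.
    PRODUCTION_SCOPE = {"subnet": "10.10.0.0/16", "gateway": "10.10.0.1"}
    QUARANTINE_SCOPE = {"subnet": "10.99.0.0/24", "gateway": "10.99.0.1"}

    def endpoint_is_compliant(mac_address: str) -> bool:
        """Hypothetical posture check: a real deployment would ask the
        agent/supplicant or a policy server about this endpoint."""
        known_good = {"00:11:22:33:44:55"}
        return mac_address in known_good

    def assign_scope(mac_address: str) -> dict:
        """Hand out a DHCP scope based on policy compliance."""
        if endpoint_is_compliant(mac_address):
            return PRODUCTION_SCOPE
        return QUARANTINE_SCOPE

    print(assign_scope("00:11:22:33:44:55"))  # production lease
    print(assign_scope("de:ad:be:ef:00:01"))  # quarantine lease

The same decision drives an 802.1x deployment; the difference is that the switch port itself, rather than the address lease, becomes the enforcement point.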

How Old Is Old?

So, let's assume that we're going to use 802.1x as our enforcement mechanism. We've gone out and bought a product that supports our notion of CLPC and have installed the agents and supplicants on our endpoints.

Having clients that support 802.1x is great, but you also need the network infrastructure to support it. Much, if not all, of the older infrastructure doesn't. An easy way to tell is to look at when you bought your network equipment: if you bought your gear before the days of wireless, it almost certainly can't do 802.1x. If you're lucky, you may be able to upgrade, but it's going to be expensive.

During the course of my day-to-day business, one of my clients, a major financial institution, did a survey of their equipment and determined that it would cost more than a million dollars to upgrade their network to the point where they could support 802.1x.

Oh, and there are all those pesky embedded systems that we look at in Chapter 12, "Embedded Devices," such as printers and respirators, that don't speak 802.1x (or run antivirus, for that matter) but still need to be addressed.

Compartmentalization Is Still Effective

As you work through the pains of designing a new network architecture, you realize that the real fun is going to start when you begin the cutover. Few of us have had the opportunity to build a brand new network from the ground up. You usually have to transition from the old architecture to the new one in some controlled manner. This transition represents a huge problem for mergers and acquisitions people.

This is where compartmentalization can be utilized as a migration tool as well as a security tool. By placing security gateways between your enforced network and your legacy network, you can reduce the security risk to your entire network. Notice that I didn't say "eliminate"; I said "reduce."

But how do you compartmentalize? Do you do it based on the classification of systems and data, or should you break your network into functional zones? It makes a difference, especially if access from zone to zone is managed by access control lists (ACLs) in your routers or firewalls. Making the wrong choice can impact business processes in significant ways.

For example, Figure 5-1 depicts a basic method of compartmentalization that is predicated on use. It is the classic concentric architecture. The users from the Internet have limited access to extranet services based on business needs and requirements. The dotted line surrounding them suggests a certain level of porosity that must be tolerated with such a service. However, when you get to the corporate network perimeter, you're greeted with a classic firewall, antivirus (AV), and intrusion detection. Moving to the intranet services, we encounter some access controls that limit how we interact with the supplied service. We might not have write access, and the service might not be available to all areas of the network.

The user area of the network is our next layer. It will have access to the Internet, intranet, and possibly the extranet. It will also have limited access to critical internal services. Critical internal services are protected by access controls such as ACLs, authentication, and authorization services. In the Windows world, these controls are enforced using Active Directory (AD) and groups. In non-Windows worlds, this means Remote Authentication Dial In User Service (RADIUS) and Lightweight Directory Access Protocol (LDAP).
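
As a toy illustration of that group-based model (the users, groups, and services here are invented for the example), authorization reduces to a membership lookup, whether the directory behind it is AD, RADIUS, or LDAP:

    # Hypothetical group-based authorization check. In practice the group
    # membership would come from AD, RADIUS, or LDAP rather than a dict.
    GROUP_MEMBERS = {"finance": {"alice"}, "it-admins": {"bob"}}
    SERVICE_ACCESS = {"payroll-db": {"finance"}, "core-routers": {"it-admins"}}

    def can_access(user: str, service: str) -> bool:
        """A user may reach a service if any of their groups is allowed."""
        allowed_groups = SERVICE_ACCESS.get(service, set())
        return any(user in GROUP_MEMBERS.get(g, set()) for g in allowed_groups)

    print(can_access("alice", "payroll-db"))    # True
    print(can_access("alice", "core-routers"))  # False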

The downside of this architecture is that it is fairly promiscuous, and if you're not up-to-date with your AV data files or you get hit with some zero-day exploit, you'll have quite a bit of damage to address.

At the other end of the spectrum is a method used to compartmentalize your network that is based on carving it into containment zones, as shown in Figure 5-2. By creating numerous zones and controlling all communication between the zones through the use of a firewall, you hope to prevent the spread of viruses and control the movement of data.

Figure 5-1

Classic compartmentalization by use. Core infrastructure systems are heavily protected and managed.

Unfortunately, the underlying message is that you don't trust your endpoints or the people who use them. You don't trust the endpoints to protect themselves, which implies that you don't have much faith in your security program. I can understand the "belt and suspenders" approach, but the rules, ACLs, and complex procedures that must be in place to support them begin to look like a ripe place for errors and failure.

As you can see from Figure 5-2, each zone has a firewall and therefore a firewall rule-set associated with it. You must explicitly allow traffic to leave a zone, and you must explicitly allow traffic to enter a zone. That means that if you want the folks in User Zone 1 to be able to access Corporate Services, you must tell the User Zone 1 firewall to allow the traffic outbound, and you must tell the Corporate Services firewall to allow the traffic inbound. I know what you're thinking: "The firewalls allow all outbound traffic by default." They do, unless you don't trust the systems on the inside and have configured the firewalls to stop all traffic by default. Remember that one of the purposes of such a design is to prevent the spread of viruses.
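
Here's a rough sketch of that explicit, default-deny rule model; the zone names are illustrative. Traffic passes only when the source zone's firewall allows it out and the destination zone's firewall allows it in, and anything not explicitly allowed is dropped.

    # Default-deny, zone-based rule model: both firewalls must agree.
    OUTBOUND_ALLOWED = {"user-zone-1": {"corporate-services"}}
    INBOUND_ALLOWED = {"corporate-services": {"user-zone-1"}}

    def traffic_permitted(src_zone: str, dst_zone: str) -> bool:
        """Permit only if the source zone allows the traffic out AND the
        destination zone allows it in; everything else is dropped."""
        leaves_ok = dst_zone in OUTBOUND_ALLOWED.get(src_zone, set())
        enters_ok = src_zone in INBOUND_ALLOWED.get(dst_zone, set())
        return leaves_ok and enters_ok

    print(traffic_permitted("user-zone-1", "corporate-services"))  # True
    print(traffic_permitted("user-zone-2", "corporate-services"))  # False

Every new zone multiplies the number of rule pairs that must be written, reviewed, and kept in sync, which is exactly how the bottleneck described next develops.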

It's been my experience that such a zone-based architecture can become overly complex fairly quickly if not managed properly. It's easy for this type of architecture to develop a life of its own and in the process become quite a nightmare for the users and the security team. I've seen examples of this architecture where every change to the network required security group approval because a firewall rule change was necessary. This created quite a bottleneck. A special group had to be created just to address the changes to the firewall that were required on a daily basis just to keep the business going. The result was the inability of business groups within the organization to innovate at the speed of business.

Figure 5-2

Zone-based compartmentalization can grow to be very complex.

Another victim of a complex zone-based architecture can be service level agreements (SLAs). SLAs are there to ensure that a basic level of functionality is always present so that business can be processed. SLAs ensure that file servers are always running with minimal delays and that Web services provide their prescribed function to your user community. In a complex zone-based architecture, the tools required to measure and gauge service levels can be severely hampered (if not completely cut off).

To be fair, this type of architecture usually results from numerous mergers and acquisitions and the requirement that the security people have to manage the overall security of a patched-together network. I guess that means that it's the financial guys' fault that the network is overly complex.

I will concede that compartmentalization, if done properly, is also an effective tool for controlling the spread of malware. It gets back to the question I asked before, however: Do you want to compartmentalize by data classification or by function? If you're a government agency or the military, this question already has one answer: You've compartmentalized by data classification. This seems easy enough until you need to connect services or share data between classification zones. As shown in Figure 5-3, the Bell/LaPadula rule states that you can have no "write down" and no "read up" capability.2,3 What this means is that systems with a higher classification can't write data to a lower classification system nor can a lower classification system read from a system with a higher classification. If you consider your network a "higher" classification level than the Internet, you break that rule every time your email server receives email.
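
The two Bell/LaPadula properties reduce to simple comparisons on an ordered set of classification levels. Here is a minimal sketch, with invented level names, of "no read up" (the simple security property) and "no write down" (the *-property):

    # Bell/LaPadula checks over ordered classification levels (low to high).
    LEVELS = {"internet": 0, "internal": 1, "secret": 2}

    def read_allowed(subject: str, obj: str) -> bool:
        """No read up: read only objects at or below your own level."""
        return LEVELS[subject] >= LEVELS[obj]

    def write_allowed(subject: str, obj: str) -> bool:
        """No write down: write only objects at or above your own level."""
        return LEVELS[subject] <= LEVELS[obj]

    print(read_allowed("internal", "internet"))   # True: reading down is allowed
    print(write_allowed("internal", "internet"))  # False: writing down is not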

Figure 5-3

The Bell/LaPadula rule prevents the migration of high classification data to lower classifications.

There must be some middle ground, and I believe that is to compartmentalize by function and monitor traffic patterns. We have seen this type of compartmentalization in our demilitarized zone (DMZ). We put systems that act as data bridges in our DMZ so that external users can access the data while not getting access to the internal network services. We even give it a fancy name and call it an extranet.

Some security architects take this idea a bit further by breaking up the network into smaller compartments—not as extreme as a zone-based architecture, but not quite as open as a classic security architecture. As you can see in Figure 5-4, a logical demarcation point is the difference between user endpoints and server endpoints. This segregates endpoints based on their function as sources and sinks of data.

Figure 5-4

The extranet allows Internet users to exchange data with internal users.

The added advantage to this is that you can track flow information, and when it looks like a system has changed roles (a sink becomes a source, for instance), you can take appropriate action. An example of this is a user's system spewing data to the Internet: either it's been hacked and is spamming the universe, or a large amount of data is being extracted from it.
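
A crude version of that role-change check might compare inbound and outbound byte counts per endpoint. The thresholds below are illustrative assumptions, not recommendations; a real deployment would baseline normal behavior per system.

    # Flag a presumed data sink (a user endpoint) that starts behaving
    # like a source. Threshold values are illustrative only.
    def role_change_suspected(bytes_in: int, bytes_out: int,
                              ratio: float = 5.0,
                              floor: int = 100_000_000) -> bool:
        """True when outbound volume dwarfs inbound volume."""
        if bytes_out < floor:
            return False  # ignore low-volume noise
        return bytes_out > ratio * max(bytes_in, 1)

    # A user machine pushing 2 GB out against 50 MB in gets flagged:
    print(role_change_suspected(bytes_in=50_000_000, bytes_out=2_000_000_000))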
