Chapter 4: A Virtualization Technologies Primer: Theory

Cisco Press

  • A logical router (LR) uses hardware partitioning to create multiple routing entities on a single device. An LR can run across different processors on different cards of a router. All the underlying hardware and software resources are dedicated to an LR. This includes network processors, interfaces, and routing and forwarding tables. LRs provide excellent fault isolation but do require abundant hardware to implement.

  • A virtual router (VR) uses software emulation to create multiple routing entities. The underlying hardware is shared between different router processes (note that we mean an entire instance of something like the nonkernel parts of IOS, not a single router process). In a well-implemented virtual router, users can see and change only the configuration and statistics for "their" router.


Note - The previous definitions and Figure 4-2 were derived from RST-4314 2004 Networkers "Advances in Router Architecture: The CRS-1 and IOS-XR," by David Tsiang and David Ward.


From the preceding list and Figure 4-2, which gives a pictorial idea of the difference between VRs and LRs, you can see that only the LR is completely virtualized. Because of the cost involved in having all that extra hardware and device management, LRs tend to be high-end systems. A VR is a software-based virtualization solution, where all the tasks share the same hardware resources.

In both cases, the granularity of what is virtualized can differ. Some implementations allow multiple router processes (for instance, one VR per customer domain); others allow you to allocate resources to tasks (an LR can have Border Gateway Protocol [BGP] running on one hardware subsystem and Intermediate System-to-Intermediate System [IS-IS] on another, for example).

Figure 4-2 Logical and Virtual Routers

VRF Awareness

Now that there are multiple routing and forwarding instances on a router, many of the router subsystems that use the information in these tables (and it is a long list) need to become "VRF aware." A VRF-aware feature can be configured to refer to the routing and forwarding information of a specific VRF, and it understands that only certain interfaces or subinterfaces belong to a given VRF. Without this information, the feature uses the global table. For example, to assign an interface to a VRF, IOS added the interface-level ip vrf forwarding NAME command, which is configured alongside the familiar ip address command.
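
As a rough sketch of how this looks in IOS (the VRF name, route distinguisher, addresses, and interface are invented for illustration), a VRF is defined globally, attached to an interface, and then referenced by name in VRF-aware commands:

ip vrf CUSTOMER_A
 rd 65000:1
!
interface FastEthernet0/0
 ! Place the interface (and its connected routes) into the VRF
 ip vrf forwarding CUSTOMER_A
 ip address 10.1.1.1 255.255.255.0

VRF-aware utilities then take the VRF name as a parameter, for example ping vrf CUSTOMER_A 10.1.1.2 or show ip route vrf CUSTOMER_A.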


Note - VRF awareness is an important implementation detail. Unfortunately, no canonical list exists of VRF-aware features (the list keeps growing), so the best approach is to check the latest online documentation or pester your Cisco representative.


If all features required for a particular application are VRF aware, you can use VRFs to emulate a VR and hence provide virtualized device functionality. This is the approach you will see used in the design sections of this book.

Layer 2 Again: VFIs

A virtual forwarding instance (VFI) is a service-specific partition on a switch that associates attachment circuits, in the form of VLANs, with virtual switched interfaces (VSIs).

If that did not make much sense, it is useful to have some background on the service itself, namely Virtual Private LAN Services (VPLS), to understand VFIs.

VPLS is a Layer 2 LAN service offered by service providers (SPs) to connect Ethernet devices over a WAN. The customer devices (call them customer edges [CEs] for now; we review this in more detail in Chapter 5, "Infrastructure Segmentation Architectures") are all Ethernet switches. However, the SP uses a Layer 3 network running Multiprotocol Label Switching (MPLS) to provide this service. The device on the edge of the SP network is called a provider edge (PE). Its role is to map Ethernet traffic from the customer LAN to MPLS tunnels that connect to all the other PEs that are part of the same service instance. The PEs are connected with a full mesh of tunnels and behave as a logical switch, called a VSI. Another way to think about this is to see the VPLS service as a collection of Ethernet ports connected across a WAN. A VSI is a set of ports that forms a single broadcast domain.

In many ways, a VSI behaves just as you would expect a regular switch to. When a PE receives an Ethernet frame from a customer device, it first learns the source address, as would any switch, before looking at the destination MAC address and forwarding the frame. If the destination MAC address is unknown, or the frame is a broadcast, the frame is flooded to all PEs that are part of the VSI. The PEs use split horizon to avoid creating loops, which in turn means that no spanning tree is needed across the SP network.

Obviously, the previous explanation hides a fair amount of detail, but it should be enough to give a high-level view of what is going on.

Once again, there is a need to define and manage groups of isolated ports and tunnels on a switch. The VLAN construct is too limited, and a VRF is strictly a Layer 3 affair, so it is necessary to come up with a new virtual device structure for VPLS, called a VFI.

The VFI lists the addresses of all the PEs that form a VSI. Recall that VPLS uses a full mesh of point-to-point tunnels for inter-PE connectivity, so there will be a connection to each PE listed. The customer-facing ports map VLANs to a VFI name. Example 4-6 shows a short configuration extract that will make this clearer. Figure 4-3 shows the corresponding network topology. The thick line represents the VLAN that runs across the MPLS backbone and connects the VSIs on the PE devices. The CE switches "think" they are connected by an 802.1q trunk on VLAN 100. The thin lines between each PE are the actual pseudowires defined in the l2 vfi statement of Example 4-6.


Note - A pseudowire is a tunnel used to emulate a point-to-point connection. The term is most often used in the context of a Layer 2 service.


Figure 4-3 VPLS Topology

Example 4-6 VFI Configuration

! Define the VFI: a full mesh of MPLS pseudowires to the other PEs
l2 vfi VPLSA manual
 vpn id 100
 neighbor 13.13.13.13 encapsulation mpls
 neighbor 12.12.12.12 encapsulation mpls
!
! Loopback that provides this PE's own pseudowire endpoint address
interface loopback 1
 ip address 11.11.11.11 255.255.255.255
!
! Customer-facing port: customer traffic is tunneled into access VLAN 100
interface fastethernet1/0
 switchport
 switchport mode dot1q-tunnel
 switchport access vlan 100
!
! Bind VLAN 100 to the VFI (and hence to the VSI)
interface vlan 100
 no ip address
 xconnect vfi VPLSA

VPLS configuration has two components. The first, which we have already referred to, defines the mesh of pseudowires that together act as a virtual switch. The second maps the VLAN trunk port to a VSI using the xconnect command. This appears at the end of Example 4-6.

Virtual Firewall Contexts

Device virtualization is not limited to switches and routers. As a final example, consider a firewall device. For essentially economic reasons, you might want to share a single firewall between multiple different customers or network segments. Each logical firewall needs to have a complete set of policies, dedicated interfaces for incoming and outgoing traffic, and users authorized to manage the firewall.

Many vendors provide this capability today and undoubtedly have their own, well-chosen name for it, but on Cisco firewalls the term context is used to refer to a virtual firewall. Unlike VRFs, VFIs, or VLANs, a context is an emulation of a device (so an example of the VR concept discussed earlier in this chapter).

Firewall contexts are a little unusual in the way they assign a packet to a context. All the partitions we have seen up to now have static assignment of interfaces (IP packets can also be assigned to a VRF dynamically, as we cover later). A firewall module looks at an incoming packet's destination IP address or Ethernet VLAN tag to decide which context the packet belongs to. All the firewall needs is for one of the two fields to be unique. So, either each context has a unique IP address space on its interfaces, or the address space is shared but each context is in a different VLAN.

Figure 4-4 shows a simple setup with an Ethernet switch connected to firewall contexts using two VLANs. The switch binds the VLANs to VRF BLUE (at the top) and VRF RED. The firewall has two different contexts. The blue one receives all frames on VLAN 101, and the red one gets VLAN 102. In this way, packets from the outside (on the right side of the figure) that belong to VLAN 101 go through a different set of firewall rules than those belonging to VLAN 102.
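
To make this concrete, the following is a rough sketch of what the context definitions might look like in the system configuration of a Cisco firewall; the exact syntax varies by platform and software release, and the context names, VLAN numbers, and configuration URLs are purely illustrative:

context BLUE
 ! Frames arriving on VLAN 101 are classified into this context
 allocate-interface Vlan101
 config-url disk:/BLUE.cfg
!
context RED
 ! Frames arriving on VLAN 102 are classified into this context
 allocate-interface Vlan102
 config-url disk:/RED.cfg

Each context then has its own interface settings, policies, and administrators, kept in its own configuration file.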

Figure 4-4 VRF on Switch Connected to Firewall Contexts Across VLANs

Network Device Virtualization Summary

True device virtualization allows resources to be allocated to tasks, or applications. We looked at four different primitives that virtualize the forwarding paths on switches or routers: VLAN and VFI for Layer 2, VRF for Layer 3, and contexts for firewalls. Each of these works slightly differently. VRFs have the most extensive tie-ins with other features, and we rely on them heavily in the design sections. Before covering data-path virtualization, a word about data center designs: we are focusing on network devices exclusively in this book and do not address the details of server and storage virtualization, which are two important topics in their own right.

Data-Path Virtualization

Connecting devices with private paths over a shared infrastructure is a well-known problem. SPs have solved it with different iterations of VPN solutions over the years. Not surprisingly, we can use and adapt many of these same protocols in enterprise networks to create virtualized Layer 2 and Layer 3 connections using a common switched infrastructure. The focus in this section is on the more relevant entries in the rather overwhelming menu of protocols available to build a VPN. Some of this section is a review for many readers, especially the material on 802.1q, generic routing encapsulation (GRE), and IPsec, and we do not devote much space to these topics. However, we also include label switching (a.k.a. MPLS) and Layer 2 Tunnel Protocol Version 3 (L2TPv3), which are probably less familiar and which consequently are covered in more detail.


Note - In addition to the references listed at the end of the book, we refer interested readers to Appendix A, "L2TPv3 Expanded Coverage," for more detail about L2TPv3.


Layer 2: 802.1q Trunking

You probably do not think of 802.1q as a data-path virtualization protocol. But the 802.1q protocol, which inserts a VLAN tag on Ethernet links, has the vital attribute of guaranteeing address space separation on network interfaces.

Obviously, this is a Layer 2 solution, and each hop must be configured separately to allow 802.1q connectivity across a network. Because a VLAN is synonymous with a broadcast domain, end-to-end VLANs are generally avoided.
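
For completeness, here is a minimal sketch of an 802.1q trunk configuration on an IOS switch; the interface and VLAN numbers are invented, and the switchport trunk encapsulation dot1q command is needed only on platforms that also support ISL:

interface GigabitEthernet0/1
 switchport
 switchport trunk encapsulation dot1q
 switchport mode trunk
 ! Carry only the VLANs belonging to the groups being kept separate
 switchport trunk allowed vlan 100,200

Every frame leaving this port carries a tag identifying its VLAN, so traffic from different groups remains separated across the shared link.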

Generic Routing Encapsulation

GRE provides a method of encapsulating arbitrary packets of one protocol type in packets of another type (the RFC uses the expression X over Y, which is an accurate portrayal of the problem being solved). The data from the top layer is referred to as the payload. The bottom layer is called the delivery protocol. GRE allows private network data to be transported across shared, possibly public infrastructure, usually using point-to-point tunnels.

Although GRE is a generic X over Y solution, it is mostly used to transport IP over IP (a lightly modified version was used in the Microsoft Point-to-Point Tunneling Protocol [PPTP], and, more recently, we are seeing GRE used to transport MPLS). GRE is also used to transport legacy protocols, such as Internetwork Packet Exchange (IPX) and AppleTalk, over an IP network, and it can carry Layer 2 frames as well.

GRE, defined in RFC 2784, has a simple header, as you can see in Figure 4-5.

Figure 4-5 GRE Header

The second 2 octets of the header contain the payload protocol type, encoded using Internet Assigned Numbers Authority (IANA) Ethernet numbers (you can find the most recent version at http://www.iana.org/assignments/ethernet-numbers). IP is encoded as 0x0800.

The simplest possible expression of a GRE header is a Protocol Type field: all the preceding fields are typically 0, and the subsequent optional ones can be omitted. You can find freeware implementations that work only with the first 2 octets, but all 4 octets of the basic header should be supported.

GRE is purely an encapsulation mechanism. How packets arrive at tunnel endpoints is left entirely up to the user. There is no control protocol, no session state to maintain, no accounting records, and so forth. This conciseness and simplicity allow GRE to be easily implemented in hardware on high-end systems. The concomitant disadvantage is that GRE endpoints have no knowledge of what is happening at the other end of the tunnel, or even whether it is reachable.

The time-honored mechanism for detecting tunnel reachability problems is to run a dynamic routing protocol across the tunnel. Routing Protocol (RP) keepalives are dropped if the tunnel is down, and the RP itself will declare the neighbor as unreachable and attempt to route around it. You can lose a lot of data waiting for an RP to detect a problem in this way and reconverge. Cisco added a keepalive option to its GRE implementation. This option sends a packet through the tunnel at a configurable period. After a certain number of missed keepalives (the number is configurable), the router declares the tunnel interface as down. A routing protocol would detect the interface down event and react accordingly.
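
As a sketch, enabling GRE keepalives is a one-line addition under the tunnel interface; the 5-second period and 3-retry count shown here are arbitrary values chosen for illustration:

interface Tunnel0
 ! Send a keepalive every 5 seconds; declare the interface down after 3 misses
 keepalive 5 3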

GRE's lack of a control protocol also means that there is essentially no cost to keeping a quiescent tunnel active. The peers exchange no state information and simply encapsulate packets as they arrive. Furthermore, like all the data-path virtualization mechanisms we discuss, the core network is oblivious to the number of tunnels traversing it. All the work is done on the edge.

We do not want to suggest that GRE is the VPN equivalent of a universal solvent. There is a cost to processing GRE (encapsulation and decapsulation, route lookup, and so forth), but that cost is incurred in the data path.

GRE IOS Configuration

On Cisco devices, GRE endpoints are regular interfaces. This seemingly innocuous statement is replete with meaning, because anything in Cisco IOS that needs to see an interface (routing protocols, access lists, and many more) will work automatically on a GRE tunnel.
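
For example, an access list or a routing protocol can be applied to a tunnel interface exactly as to a physical one. The following sketch is illustrative only; the ACL number and OSPF process ID are invented, and Tunnel0 refers ahead to Example 4-7:

access-list 101 permit ip 40.0.0.0 0.0.0.255 any
!
interface Tunnel0
 ! Filter traffic arriving through the tunnel
 ip access-group 101 in
!
router ospf 1
 ! Run OSPF across the tunnel subnet
 network 40.0.0.0 0.0.0.255 area 0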

Example 4-7 shows a GRE endpoint configuration, corresponding to the R103 router of Figure 4-6.

Figure 4-6 GRE Topology

Example 4-7  R103 GRE Configuration

interface Tunnel0
 ! Overlay address used inside the tunnel
 ip address 40.0.0.1 255.255.255.0
 ! Encapsulated packets are sourced from Serial1/0's address
 tunnel source Serial1/0
 ! Remote tunnel endpoint, reachable through the underlying network
 tunnel destination 192.168.2.1