VMware Brings Networking Into the Server

As I discussed in my last blog, VMware brings significant changes to data center networking. The biggest change is that VMware extends the network into the ESX server itself. The VMware kernel runs one or more layer-2 virtual switches that have almost all the properties of a typical layer-2 switch. On one side, the virtual switch provides network connectivity to the virtual NICs of each virtual machine (Windows, Linux, etc.). On the other side, it provides connectivity to the physical network via the server's physical NICs.

[Image: virtual switch]
Each VM (Windows, Linux, etc.) can have up to four virtual NICs; this is a virtual PCI-slot limitation per VM. Sending more than four VLANs to a single VM therefore requires a particular type of trunking on the ESX server (see below). Normally, these four vNICs connect to four individual VLANs on the vSwitch. Each vNIC is created inside the VM by a virtual NIC driver that the ESX server presents to the guest. The vNIC appears to the guest as a 100/Full connection, but in reality the interface is not limited to 100 Mbps, or any speed at all; throughput is bounded only by the server's memory and bus.

Each vNIC connects to a port on a virtual switch. Ports are assigned to a port group, which gives the port its characteristics (VLAN, load balancing, trunking, etc.). An ESX server can actually run multiple virtual switches (as seen in the graphic above). The one restriction is that a physical NIC, which provides the connection to the physical network, can belong to only one virtual switch; physical NICs cannot be shared between virtual switches. This restriction can also be a benefit: a virtual switch with no external connections can carry traffic between two VMs without ever using the physical network. For example, a web server that communicates only with firewall VMs can be confined to a virtual switch with no external connectivity.
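The port-group and VLAN behavior described above can be captured in a toy model. This is a hypothetical Python sketch for illustration only (the class, method, and VM names are mine, not VMware's), but it shows why an internal-only vSwitch isolates traffic from the physical network:

```python
# Toy model of an ESX-style virtual switch (illustration only, not VMware code).
# A vSwitch holds vNIC ports, each pinned to one VLAN by its port group.
# Two vNICs can reach each other only on the same vSwitch and same VLAN.

class VSwitch:
    def __init__(self, name, uplinks=None):
        self.name = name
        self.uplinks = uplinks or []   # physical NICs; empty = internal-only switch
        self.ports = {}                # vNIC name -> VLAN ID

    def connect(self, vnic, vlan):
        self.ports[vnic] = vlan

    def can_forward(self, src, dst):
        # Same-VLAN traffic is switched locally, never touching a physical NIC.
        return (src in self.ports and dst in self.ports
                and self.ports[src] == self.ports[dst])

# Internal-only vSwitch: the web VM reaches the firewall VM with no uplinks at all.
dmz = VSwitch("vSwitch1")              # no physical NICs attached
dmz.connect("web-vm", vlan=10)
dmz.connect("fw-vm", vlan=10)
dmz.connect("db-vm", vlan=20)

print(dmz.can_forward("web-vm", "fw-vm"))  # same VLAN, same vSwitch
print(dmz.can_forward("web-vm", "db-vm"))  # different VLANs: no forwarding
```

The real vSwitch does far more (MAC learning, port groups as named objects, uplink teaming), but the containment property is the same: no uplink, no path to the physical network.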
[Image: multiple virtual switches]
This introduces some nice network design options for controlling traffic flow. VMs on the same VLAN communicate directly through the vSwitch, so that traffic is never sent to the physical network. Packet forwarding inside a virtual switch resembles a cut-through method: the vSwitch examines the packet header and hands the destination VM a memory location where the packet contents are stored; the destination VM then reads the data from that memory. There is no concept of store-and-forward buffers or QoS, because there are no bottlenecks.

The physical interfaces on a server running VMware connect it to the physical network. These NICs can be single interfaces, teamed for redundancy, trunked, and/or channeled, all of which is configured in the ESX server. Load balancing is also configurable, with options based on virtual port ID, source MAC address hash, or IP address hash. The first two always send a given vNIC's traffic out the same physical interface, while IP hash balances individual sessions across all physical NICs. The trick with the physical interfaces is getting enough aggregate and per-session bandwidth out of the server: a gig or two was fine for a single server, but with 15-20 VMs sharing those NICs, 1 Gbps may no longer be enough. Proper channeling and load balancing will be key, along with a proactive network capacity management program.

Trunking is the final piece of the design. On the physical interfaces, the ESX server can sit on a single VLAN ("External Switch Tagging"), VLANs can be trunked to the vSwitch ("Virtual Switch Tagging"), or VLANs can be trunked through the vSwitch all the way to the VMs ("Virtual Guest Tagging"). With the numerous VLANs that typically make up a data center, the most practical choice is Virtual Switch Tagging: the VLANs are delivered to the vSwitch as an 802.1Q trunk on the physical NICs, and the vSwitch then uses them.
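The difference between the three load-balancing policies can be sketched in a few lines. This is a hypothetical illustration of the selection logic, assuming two uplinks named `vmnic0` and `vmnic1`; real ESX hashing internals differ, but the behavioral contrast is the point: port-ID and MAC-hash pin each vNIC to one uplink, while IP hash can spread one VM's sessions across the channel.

```python
# Sketch of the three uplink load-balancing policies (illustration only;
# the actual ESX hash functions are not documented here and will differ).
import hashlib

UPLINKS = ["vmnic0", "vmnic1"]

def pick_by_port_id(port_id):
    # Virtual port ID: every frame from a given vNIC uses the same uplink.
    return UPLINKS[port_id % len(UPLINKS)]

def pick_by_mac(src_mac):
    # Source-MAC hash: still one uplink per vNIC, since a vNIC's MAC is fixed.
    digest = hashlib.md5(src_mac.encode()).digest()
    return UPLINKS[digest[0] % len(UPLINKS)]

def pick_by_ip_hash(src_ip, dst_ip):
    # IP hash: the uplink depends on the src/dst pair, so different sessions
    # from the same VM can land on different physical NICs.
    digest = hashlib.md5((src_ip + "," + dst_ip).encode()).digest()
    return UPLINKS[digest[0] % len(UPLINKS)]

# One VM talking to two destinations: port-ID pins both flows to one uplink,
# while IP hash chooses per destination pair.
print(pick_by_port_id(7), pick_by_port_id(7))
print(pick_by_ip_hash("10.0.0.5", "10.0.1.1"),
      pick_by_ip_hash("10.0.0.5", "10.0.2.9"))
```

Note that IP hash only pays off when the physical switch side is configured as a matching channel (EtherChannel/static link aggregation); the other two policies work with plain teamed ports.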
Each vNIC on a VM is then placed in a single VLAN. This is a typical design for a layer-2 switch; the concept is simply extended into the ESX server.
As you can see, just getting the proper design for the layer-2 switch inside the ESX server will take considerable skill, thought, and testing. I haven't even discussed day-to-day management of the vSwitch (does it belong to the server team or the network team?). These are just some of the big issues that must be addressed as you begin using VMware.


Copyright © 2008 IDG Communications, Inc.
