| Device Type | Device Driver Vendor | Device Driver Name | Notes |
|---|---|---|---|
| Network | Broadcom | bcm5700 | |
| | Broadcom | bcm5721 | |
| | Intel | e1000 | Quad-port MT is supported on ESX >= 2.5.2 |
| | Intel | e100 | |
| | Nvidia | forcedeth | ESX >= 3.0.2 only |
| | 3Com | 3c90x | ESX <= 2.5.x only |
| | AceNIC | acenic | ESX <= 2.5.x only |
| Fibre Channel | Emulex | lpfcdd | Dual/single ports |
| | Qlogic | qla2x00 | Dual/single ports |
| SCSI | Adaptec | aic7xxx | Supported for external devices |
| | Adaptec | aic79xx | Supported for external devices |
| | Adaptec | adp94xx | Supported for external devices |
| | LSI Logic | ncr53c8xx | ESX <= 2.5.x only |
| | LSI Logic | sym53c8xx | ESX <= 2.5.x only |
| | LSI Logic | mptscsi | |
| RAID array | Adaptec | dpt_i2o | ESX <= 2.5.x only |
| | HP | cpqarray | External SCSI is for disk arrays only. ESX <= 2.5.x only |
| | HP | cciss | External SCSI is for disk arrays only |
| | Dell | aacraid | |
| | Dell | megaraid | |
| | IBM/Adaptec | ips | |
| | IBM/Adaptec | aacraid | |
| | Intel | gdth | ESX <= 2.5.x only |
| | LSI | megaraid | |
| | Mylex | DAC960 | |
| iSCSI | Qlogic 4010 | qla4010 | ESX version 3 only |
If the driver in question supports a device, in most cases the device will work in ESX. However, if the device requires a modern device driver, do not expect it to be part of ESX, because ESX by its very nature does not support the most current devices. ESX is designed to be stable, and that often precludes modern devices. For example, Serial Advanced Technology Attachment (SATA) devices are not a part of ESX version 2.5, yet they are a part of ESX version 3.5 (soon to be available). Another commonly requested but missing device is the TCP Offload Engine NIC (TOE card), and the jury is still out on its benefit given the network-sharing design of ESX. As noted in the table, various SCSI adapters have limitations. The key limitation is that an Adaptec card is required for external tape drives or libraries, whereas any other type of card is usable with external disk arrays.
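To check how Table 1.1 maps to a given server, a few ESX version 3 service console commands list the detected devices and the drivers bound to them. A quick sketch follows; output formats vary by release.

```bash
# List physical NICs with the VMkernel driver each one uses (e.g., e1000)
esxcfg-nics -l

# List the VMkernel modules (device drivers) currently loaded
vmkload_mod -l

# Dump the full hardware and driver view (verbose; pipe through a pager)
esxcfg-info | less
```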
Best Practice Regarding I/O Cards - If the card you desire to use is not on the HCL, do not use it. The HCL is definitive from a support perspective. Although a vendor may produce a card and self-check it, if it is not on the HCL, VMware will not support the configuration.
Table 1.1 refers specifically to the devices that the VMkernel can access, and not necessarily the devices that the COS installs for ESX versions earlier than 3.0. There are quite a few devices for which the COS has a driver but that the VMs cannot use. Two examples come to mind. The first is NICs not listed in Table 1.1 that nonetheless have a COS driver; Kingston or old Digital NICs fall into this category. The second is the IDE driver. It is possible to install the COS onto an Intelligent Drive Electronics (IDE) drive for versions of ESX earlier than version 3, or onto SATA/IDE drives for ESX version 3. However, these devices cannot host a Virtual Machine File System (VMFS), so a storage area network (SAN) or other external storage is necessary to hold the VM disk files and any VMkernel swap files for each VM.
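As a minimal sketch of placing VM storage on a SAN from the ESX version 3 service console, the following commands map VMkernel storage adapters to devices and label a LUN with VMFS-3. The vmhba path, block size, and volume label are placeholders; confirm the target LUN is empty before formatting it.

```bash
# Show which SAN LUNs the VMkernel sees (vmhba adapter:target:LUN mapping)
esxcfg-vmhbadevs

# Label an empty SAN LUN with VMFS-3 to hold VM disk files
# (vmhba1:0:0:1 and the label san_vmfs_01 are placeholders)
vmkfstools -C vmfs3 -b 1m -S san_vmfs_01 vmhba1:0:0:1
```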
For ESX to run, it needs at a minimum two NICs (yes, it is possible to use one NIC, but this is never recommended for production servers) and one SCSI storage device. One NIC is for the service console and the other is for the VMs. Although it is possible to share these so that only one NIC is required, VMware does not recommend this except in extreme cases, because it leads to possible performance and security issues. The best practice for ESX is to provide redundancy for everything so that all your VMs stay running even if a network or Fibre Channel path is lost. Doing so requires some thought about network and Fibre Channel configurations, and perhaps more I/O devices. The minimum best practice for network card configuration is four ports: the first for the SC; the second and third teamed together for the VMs, to provide redundancy; and the fourth for VMotion via the VMkernel interface on its own private network. For full redundancy and performance, six NIC ports are recommended, with the extra NICs assigned to the service console and VMotion. If another network is available to the VMs, either use 802.1q virtual LAN (VLAN) tagging or add a pair of NIC ports for redundancy. Add a pair of Fibre Channel adapters and you gain failover for your SAN fabric. If there is a need for a tape library, pick an Adaptec SCSI adapter to gain access to this all-important backup device.
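On ESX version 3, this four-port layout can be sketched from the service console as follows. The vmnic numbering, port group names, and VMotion address are placeholders; ESX versions earlier than 3 configure networking through the management interface instead, and the VMotion port group must still be flagged for VMotion through VirtualCenter.

```bash
# vSwitch0: service console on vmnic0 (the installer usually creates this)
esxcfg-vswitch -L vmnic0 vSwitch0

# vSwitch1: VM traffic, with vmnic1 and vmnic2 teamed for redundancy
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic1 vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -A "VM Network" vSwitch1

# vSwitch2: VMotion via a VMkernel interface on its own private network
esxcfg-vswitch -a vSwitch2
esxcfg-vswitch -L vmnic3 vSwitch2
esxcfg-vswitch -A VMotion vSwitch2
esxcfg-vmknic -a -i 192.168.100.1 -n 255.255.255.0 VMotion
```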
Best Practice - Four NIC ports for performance, security, and redundancy and two Fibre Channel ports for redundancy are the best practice for ESX versions earlier than version 3. For ESX version 3, six NIC ports are recommended for performance, security, and redundancy.
If adding more networks for use by the VMs, either use 802.1q VLAN tagging to run them over the existing pair of NICs associated with the VMs or add a new pair of NIC ports for the VMs.
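As a minimal sketch of the VLAN-tagging approach on ESX version 3 (the port group name and VLAN ID 105 are placeholders, and the physical switch ports must be configured to trunk the tagged VLANs):

```bash
# Add a tagged port group to the existing teamed VM vSwitch
esxcfg-vswitch -A "VM Network VLAN105" vSwitch1
esxcfg-vswitch -v 105 -p "VM Network VLAN105" vSwitch1
```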
When using iSCSI with ESX version 3, add another NIC port to the service console for performance, security, and redundancy.
When using Network File System (NFS) via network-attached storage (NAS) with ESX version 3, add another pair of NIC ports to provide performance and redundancy.
If you are using locally attached tape drives or libraries, use an Adaptec SCSI adapter. No other adapter will work properly. However, the best practice for tape drives or libraries is to use a remote archive server.
For ESX version 3, iSCSI and NAS support is available, and it is set up quite differently than in ESX version 2.5.x and earlier. iSCSI and NFS-based NAS are accessed using their own network connection assigned to the VMkernel, similar to the way VMotion works or how a standard VMFS-3 is accessed via Fibre. Although NAS and iSCSI access can share bandwidth with other networks, keeping them separate could be better for performance. The iSCSI VMkernel device must share the same subnet as the COS for authentication reasons, regardless of whether Challenge Handshake Authentication Protocol (CHAP) is enabled, whereas an NFS-based NAS would be on its own network. Before ESX version 3, an NFS-based NAS was available only via the COS, and iSCSI was not available when those earlier versions were released. Chapter 8, "Configuring ESX from a Host Connection," discusses this new networking possibility in detail.
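As a minimal sketch of such a dedicated IP-storage network on ESX version 3, the commands below create the VMkernel connection and attach both storage types; the uplink, port group name, addresses, NFS server, and export path are all placeholders.

```bash
# Dedicated vSwitch and VMkernel interface for IP storage
esxcfg-vswitch -a vSwitch3
esxcfg-vswitch -L vmnic4 vSwitch3
esxcfg-vswitch -A "IP Storage" vSwitch3
esxcfg-vmknic -a -i 192.168.50.10 -n 255.255.255.0 "IP Storage"

# Software iSCSI: enable the initiator; the COS needs a connection on the
# same subnet as the iSCSI VMkernel device for authentication
esxcfg-swiscsi -e
esxcfg-vswif -a vswif1 -p "IP Storage" -i 192.168.50.11 -n 255.255.255.0

# NFS-based NAS: mount an export as a datastore
esxcfg-nas -a -o nas.example.com -s /exports/vmfs nfs_datastore1
```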