Chapter 1: System Considerations

Prentice Hall


A plan was devised that made the best use of the resources, including remote backups, backup to tape, and storage of tapes offsite. In essence, everything was thought about, including the requirement for a third site in case the impossible regional disaster hit. Little did we know....

The DR plan that was implemented made restoration much easier when the natural disaster hit. What could have taken weeks to restore took just days, because the customer had DR backups of the virtual disk files for every VM on the system. These backups happen through the console operating system (COS) and should be considered part of any ESX deployment. A backup from within the VMs, the traditional method of backing up servers, requires other data-restoration techniques that take much longer than the backup and restore of a single file.
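The file-level approach described above amounts to copying each VM's virtual disk files to a backup location from the COS. A minimal sketch in Python, assuming the VM's files live in a /vmfs-style directory (all paths here are illustrative, and in practice the VM would be suspended or given a redo log first so the disk file is in a consistent state):

```python
import shutil
from pathlib import Path

def backup_vm_disks(vm_dir: str, backup_dir: str) -> list:
    """Copy every virtual disk file (*.vmdk) of a VM to a backup location.

    This mirrors the COS-level, file-based backup described in the text:
    restoring a VM is then just restoring its disk files, rather than
    rebuilding the guest OS and its data piece by piece.
    """
    src = Path(vm_dir)
    dst = Path(backup_dir)
    dst.mkdir(parents=True, exist_ok=True)
    copied = []
    for vmdk in sorted(src.glob("*.vmdk")):
        shutil.copy2(vmdk, dst / vmdk.name)  # copy2 preserves timestamps
        copied.append(vmdk.name)
    return copied
```

Only the .vmdk files are copied here; a real backup script would also pick up the VM's configuration file so the whole VM can be re-registered after a restore.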

Hardware Checklist

Now that we have reviewed the hardware concepts and the limitations of the individual machines listed, we can devise a simple hardware checklist (see Table 1.3) that, if followed, will produce a system that adheres to best practices.

Table 1.3: Hardware Checklist

Hardware | Best Practice | Comments
Network adapters (discussed further in Chapter 8) | Two gigabit ports for the service console | Two gigabit ports can provide load balancing and failover for ESX version 3.0; for ESX version 2.5.x or earlier, a watchdog is necessary.
 | Two gigabit ports for VMotion | ESX 2.5.x: two gigabit ports can be used, but the second port is purely for failover.
 | Two gigabit ports per network available to the VMs | More than two gigabit ports in a team can cause switching issues. 802.1q VLAN tagging is also available.
 | Two or more gigabit ports for NAS | ESX version 3.0 only. Two gigabit ports provide failover and bandwidth. NFS is the only supported NAS protocol; CIFS is not supported.
iSCSI | Two gigabit ports for iSCSI, either as gigabit NICs or as an iSCSI HBA | ESX version 3.0 only. Boot from iSCSI requires an iSCSI HBA. An iSCSI HBA is a specialized TCP Offload Engine NIC.
Fibre Channel adapters (discussed further in Chapter 5) | Two 2Gb/s ports | This will provide failover and some multipath functionality with active-active styles of SAN.
 | Two 4Gb/s ports | In the future, 4Gb/s Fibre Channel ports will be supported.
Tape drives or libraries | Adaptec SCSI card | Internal and external tape drives or libraries require an Adaptec SCSI card to be of use.
CPU | Match CPUs within a host |
 | Match CPUs between hosts | Required for VMotion.
Disk (discussed further in Chapter 12) | Minimum 72GB RAID 1 for the OS |
 | Minimum 2x memory RAID 0 for virtual swap | If 2x memory is 64GB or less, only one RAID 0 volume is necessary. If 2x memory is 128GB, two 64GB RAID 0 volumes are necessary. ESX version 2.5.x or earlier only.
 | RAID 5 for local VMFS | This is mainly for DR purposes, or if you do not have SAN or iSCSI storage available.
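The virtual swap sizing rule in Table 1.3 (ESX version 2.5.x or earlier) reduces to a small calculation: the virtual swap space is twice the host memory, split across RAID 0 volumes of at most 64GB each. A sketch of that rule of thumb, with the 64GB cap taken from the table:

```python
def vswap_raid0_volumes(host_memory_gb: int, max_volume_gb: int = 64) -> list:
    """Plan RAID 0 volume sizes (in GB) for ESX <= 2.5.x virtual swap.

    Rule of thumb from Table 1.3: virtual swap is 2x host memory; when
    that exceeds 64GB, split it into multiple volumes of at most 64GB.
    """
    vswap_gb = 2 * host_memory_gb
    volumes = []
    while vswap_gb > 0:
        size = min(vswap_gb, max_volume_gb)
        volumes.append(size)
        vswap_gb -= size
    return volumes
```

For a host with 32GB of memory this yields a single 64GB volume; for a 64GB host it yields two 64GB volumes, matching the 128GB example in the table.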

Extra Hardware Considerations

All versions of ESX support connections from the VirtualCenter Management Server, and ESX version 3 adds the license server and the VMware Consolidated Backup (VCB) proxy server. Because these tools are used to manage or interact with the ESX datacenter, it might be necessary to consider specialized hardware to run them and the databases to which they connect. Although many administrators run VirtualCenter within a VM, others never run it from a VM.


Best Practices for Virtual Infrastructure non-ESX Servers

- The VCB proxy server must run from a physical server, because the LUNs attached to the ESX Servers must also be presented to the VCB proxy server.
- The VirtualCenter Management Server can run from a VM, but the best practice is to use a physical server.
- The VMware License Server should always run on a physical server. It does not need to be a large machine, and it is a good idea to keep it with the VirtualCenter Management Server.
- The Database Server used by VirtualCenter should reside on a SQL clustered set of servers. One node of the cluster could be a VM for backup functionality.


Conclusion

There is quite a bit to consider from the hardware perspective when planning a virtualization server farm. Although we touch on networking, storage, and disaster recovery in this chapter, how the hardware plays out depends on the load, utilization goals, desired consolidation ratios, and the performance gains of new hardware (which were not discussed). The recommendations in this chapter are starting points for the hardware design of a virtualization server farm. Chapter 2, "Version Comparison," delves into the details of and differences between ESX version 3.0 and earlier versions to help you better understand the impact of hardware on ESX. Understanding these differences will aid you in creating a successful design for a virtual environment.

Copyright © 2007 Pearson Education. All rights reserved.
