Review: VMware’s vSAN 6.6

vSAN is an attractive, if expensive, choice for hyper-converged storage

VMware has been shipping its vSAN (virtual storage area network) software for years, and the latest version ups the ante with improved hyper-convergence functionality and in-place encryption.

vSAN pools the available storage in physical servers into a shared datastore that supports VMware virtual machines. It rides the servers’ existing 10GbE links, eliminating the need for iSCSI or proprietary third-party software to serve as an intermediary between disk subsystems and VMs. New in this version, VMs can be encrypted in place, anywhere in the vSAN.

Upsides of vSAN

We found vSAN to be very flexible, with only a few hindrances, and live migration of VMs among storage hosts was about as fast as over an iSCSI SAN. If you’d like to decouple storage from the host on which the workload runs, this might be your ticket, provided you do the necessary homework.

The benefits of vSAN done properly are comparatively barrier-less storage, reduced dependence on host/storage stacks, and progress toward the ultimate goal of hyper-convergence, in which compute, storage, networking and virtualization are fully integrated.

We also found the VM encryption-in-place capability valuable, because the disassociated components of a hyper-converged workload are encrypted together. This adds another strong layer of security that is especially useful in establishing multi-tenancy trust.

Requirements and considerations

A number of start-ups are offering hyper-convergence through pre-configured appliances. VMware, now owned by Dell, has gone to almost extreme lengths to be hardware-agnostic with vSAN. You can use hardware from HPE, Dell, Lenovo, SuperMicro and more, as long as the specific systems and components are on VMware’s pre-approved list.

We used VMware-recommended Lenovo server hardware, along with Lenovo high-speed SSD drives (see how we tested) to perform our tests.

Running vSAN 6.6 costs you about 10% of host CPU capacity, but that is the price of extreme automation and the flexibility of hyper-convergence. Hosts need at least 32GB of memory for a fully operational vSAN; no scrimping, and more is better. Hosts also need 10GbE adapters; indeed, if a cluster uses flash, NVRAM or SSDs, then 10GbE is a must. The faster the drive the better, and dedicating at least one adapter per host to vSAN traffic is a wise investment, because large VM movements will drag down a host on slower links.

vSAN operates within a single cluster; two separate VMware host clusters can’t pool their storage together. And vSAN does not support vSphere Distributed Power Management (DPM), Storage I/O Control, SCSI reservations, RDM, VMFS (that is, you can’t use VMFS as the file system for vSAN storage), or a diagnostic partition. You must disable vSphere High Availability (HA) before you enable vSAN on the cluster; afterward you can re-enable it.

You’ll need at least three (four are recommended) similarly configured servers, either with supported hardware from VMware’s compatibility list or vSAN ready-node servers. The servers must be running vSphere 6.5 and be connected to a vCenter and configured in a cluster before vSAN can be enabled and configured. More servers can be added later to increase capacity, but three are required to start. (See the vSAN ReadyNode Configurator and the VMware Compatibility Guide.)
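The prerequisites above can be summarized in a quick sanity check. This is a hypothetical helper for planning purposes, not a VMware tool; the host attributes and thresholds (three hosts, 32GB RAM, 10GbE) come from the requirements described above.

```python
# Hypothetical pre-flight check for the vSAN cluster minimums described
# in this review: at least three similarly configured hosts, 32GB of
# memory each, and 10GbE networking. Not an official VMware utility.

def meets_vsan_minimums(hosts):
    """hosts: list of dicts with 'ram_gb' and 'nic_gbps' keys."""
    return (len(hosts) >= 3
            and all(h["ram_gb"] >= 32 for h in hosts)
            and all(h["nic_gbps"] >= 10 for h in hosts))

cluster = [{"ram_gb": 64, "nic_gbps": 10}] * 3
print(meets_vsan_minimums(cluster))  # True
```

A two-host cluster, or one with a 1GbE-only host, would fail the check, mirroring the hard requirements listed above.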

Unfortunately, none of the vSAN options can be configured, monitored or otherwise manipulated in the HTML5 vSphere client. You still need the old Flash-based client, which can at times be quite slow.

Costs of vSAN 6.6

A vSAN 6.6 license is not inexpensive, and it comes in three editions: Standard at $2,495 per CPU, Advanced at $3,995 and Enterprise at $5,495. That’s in addition to whatever vSphere licenses you need.

So, is vSAN less expensive than the alternatives? It’s a complex question with many components involved, but VMware publishes capacity calculators that can help with the cost analysis, one for hybrid deployments and one for all-flash deployments. We found them pretty realistic.
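To see why capacity planning matters for the cost analysis, here is a back-of-envelope sketch, not VMware’s calculator. It assumes the default RAID-1 mirroring policy, where tolerating n failures (FTT = n) stores n + 1 copies of each object, and reserves roughly 30% slack space, a commonly cited vSAN sizing guideline.

```python
# Back-of-envelope vSAN usable-capacity estimate (a sketch, not
# VMware's official calculator). Assumes RAID-1 mirroring, where
# FTT (failures to tolerate) = n requires n + 1 copies of the data,
# plus ~30% slack space held back for rebuilds and rebalancing.

def usable_capacity_tb(raw_tb, ftt=1, slack=0.30):
    """Estimate usable TB from raw TB under a mirrored storage policy."""
    after_mirroring = raw_tb / (ftt + 1)   # each object stored ftt+1 times
    return after_mirroring * (1 - slack)   # reserve slack space

# Example: 4 hosts x 10 TB raw each, tolerating one failure.
raw = 4 * 10.0
print(round(usable_capacity_tb(raw), 1))  # 14.0
```

Forty terabytes of raw disk yields only about 14TB of usable, protected capacity under these assumptions, which is why the per-chassis drive density discussed below has such an outsized effect on cost.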

Here’s a cost-saving tip: you add storage either by adding extra disks to your current vSAN cluster hosts or by adding an entire host with its requisite additional storage. Adding a host, however, requires additional licensing for the new CPUs. The best tactic, we found, is choosing hosts that can accommodate the largest number of hot-pluggable SSD drives per chassis; adding storage to existing servers will likely be more cost-effective until you need more hardware for additional VMs. With vSAN 6.6, dense storage is cost effective.
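The trade-off above can be made concrete with simple arithmetic. The per-CPU license price is the article’s Standard-edition figure; the drive and host prices in the example are hypothetical placeholders, so substitute your own quotes.

```python
# Rough comparison of the two ways to grow vSAN capacity described
# above. The license price comes from this review; the hardware
# prices are hypothetical examples, not real quotes.

VSAN_STANDARD_PER_CPU = 2495  # USD per CPU, Standard edition

def cost_add_disks(num_ssds, ssd_price):
    """Add SSDs to existing hosts: hardware only, no new licenses."""
    return num_ssds * ssd_price

def cost_add_host(host_price, cpus, num_ssds, ssd_price,
                  license_per_cpu=VSAN_STANDARD_PER_CPU):
    """Add a whole host: hardware plus vSAN licenses for its CPUs."""
    return host_price + cpus * license_per_cpu + num_ssds * ssd_price

# Hypothetical example: eight SSDs at $800 each, dropped into existing
# chassis, versus a $6,000 two-socket host carrying the same drives.
print(cost_add_disks(8, 800))          # 6400
print(cost_add_host(6000, 2, 8, 800))  # 17390
```

With these placeholder numbers, the per-CPU licensing alone adds $4,990 to the new-host path, which is why dense, hot-pluggable chassis win until you actually need more compute.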
