VMware has been shipping its vSAN (virtual storage area network) software for years, and the latest version ups the ante with improved hyper-convergence functionality and in-place encryption.
vSAN pools the available storage in physical servers into a shared datastore that backs VMware virtual machines. It rides on the servers' existing 10GbE links, eliminating the need for iSCSI or proprietary third-party software to act as an intermediary between disk subsystems and VMs. In this latest version, VMs can be encrypted in place, anywhere in the vSAN.
Upsides of vSAN
We found vSAN to be very flexible, with only a few hindrances, and the speed of live VM movement among storage hosts was approximately the same as with an iSCSI SAN. If you'd like to decouple storage from the host on which the workload runs, this might be your ticket, provided you do the necessary homework.
The benefit of vSAN done properly is comparatively barrier-less storage, reduced dependence on host/storage stacks, and progress toward the ultimate goal of hyper-convergence, in which compute, storage, networking and virtualization are fully integrated.
We also find the VM encryption-in-place capability valuable, as the disassociated components of a hyper-converged workload are encrypted together. This adds another strong layer of security that is especially useful in establishing multi-tenancy trust.
Requirements and considerations
A number of start-ups offer hyper-convergence through pre-configured appliances. VMware, now owned by Dell, has gone to almost extreme lengths to be hardware-agnostic with vSAN.
You can use hardware from HPE, Dell, Lenovo, SuperMicro and others, as long as the specific systems and components are on VMware's pre-approved list. We used VMware-recommended Lenovo server hardware, along with Lenovo high-speed SSD drives, to perform our tests (see how we tested).
You give up about 10% of your host CPU to run vSAN 6.6, but this is the price of extreme automation and hyper-converged flexibility. Hosts need at least 32GB of memory for a fully operational vSAN. No scrimping; more is better. Hosts also need 10GbE adapters. Indeed, if a cluster uses flash, NVRAM or SSDs, then 10GbE is a must. The faster the drives the better, and dedicating at least one adapter per host to vSAN traffic is a wise investment, because large VM movements will drag down a host limited to slower links, such as aggregated 1GbE.
Note that vSAN is enabled per cluster: two separate VMware host clusters can't pool their storage together; all participating hosts must belong to a single cluster. And vSAN does not support vSphere Distributed Power Management (DPM), Storage I/O Control, SCSI reservations, RDM, VMFS (that is, you can't use VMFS as the file system for vSAN storage), or a diagnostic partition. You must disable vSphere High Availability (HA) before you enable vSAN on the cluster; afterward you can re-enable it.
You'll need at least three (four are recommended) similarly configured servers, built from supported hardware on VMware's compatibility list or from vSAN ready-node servers. The servers must be running vSphere 6.5 and be connected to a vCenter, configured in a cluster, before vSAN can be enabled and configured. More servers can be added later to increase capacity, but three are required to start. (See the vSAN ReadyNode Configurator and the VMware Compatibility Guide.)
Unfortunately, none of the vSAN options can be configured, monitored or otherwise manipulated in the HTML5 vSphere client.
You still need the old Flash-based client, which at times can be quite slow.
Costs of vSAN 6.6
A vSAN 6.6 license is not inexpensive, coming in three varieties: Standard at $2,495 per CPU, Advanced at $3,995 and Enterprise at $5,495. That's in addition to whatever vSphere licenses you need.
So, is vSAN less expensive than the alternatives? That's a complex question with many components involved. VMware offers capacity calculators that can help with the cost analysis, one for hybrid deployments and one for all-flash deployments, and we found them fairly realistic.
Here's a cost-saving tip: You add storage either by adding extra disks to your current vSAN cluster hosts or by adding an entire host with its own additional storage. However, adding a host requires additional licensing for the new CPUs. The best tactic, we found, is choosing hosts that accommodate the largest number of hot-pluggable SSD drives per chassis. Adding storage to existing servers will likely be more cost-effective until you need more hardware for additional VMs. With vSAN 6.6, dense storage is cost-effective.
Tom Henderson
vSAN installation and configuration
We installed from media and, having read the documentation, did the initial configuration and installation. It takes only a few moments. Because VMware ESXi is already installed on the hosts, vSAN 6.6 easily picks up the hardware specifications. No RAID or JBOD associations are necessary; hardware discovery was handled by the licensed ESXi installations in our test configurations. vSAN then creates the cluster.
There are vSAN hardware compatibility checks for specific host firmware and device controllers. These are integrated into the custom ESXi distribution downloadable from each vendor, and the firmware and device drivers can be upgraded in the vSAN Updates configuration section.
We found several advantages to using vSAN.
First, there is no longer a single point of failure if the SAN goes down, unless connectivity dies, and if that happens you're likely to have even bigger problems, as it may mean a communications failure unrelated to vSAN.
In the event of a host failure, because of the distributed nature of the storage cluster, not all your VMs will go down, just the ones whose disks lived on the host server that failed. With HA enabled, the replica copy of the data stays alive, which is reassuring when there's no RAID underneath.
A vSAN configuration assistant, new in 6.6, helped us determine whether our setup was configured properly. There is also a one-click controller for firmware and driver upgrades, though no updates appeared during our testing.
Tom Henderson
It's also possible to manage vSAN via the vSAN SDK, as well as through additional VMware PowerCLI commands that view configuration and status, perform upgrades, check performance, and control iSCSI via cmdlets. We were intrigued by these but didn't have time to test this new functionality.
We could add more storage just by adding a new server to the cluster or adding extra SSDs to one of the current servers. We tried this several times, and as our Lenovo servers allow "hot" installation of drives, we simply plugged them in or removed them at will.
Administratively, there's little muss or fuss. There's none of the tedious, complicated iSCSI/Fibre Channel SAN setup on each of the ESXi servers within a cluster. We could set up vSAN on the cluster and grab the unused disks from the servers, adding them for instant readiness, or conversely removing them.
It's liberating, and even more so compared to the proprietary subsystems one could otherwise connect.
Tom Henderson
Performance test: vSAN 6.6 vs iSCSI
The speed of vSAN was about the same as a comparable iSCSI external datastore in our brief test.
We did a simple comparison of live-migrating a VM with relatively high CPU usage: one copy stored on vSAN and another on an iSCSI datastore. Both tests were done in the same cluster, migrating between the same two servers; the only difference was the back-end storage.
The vSAN live migration finished in 13 seconds, while the iSCSI one took 12 seconds. The VM had 8GB RAM, a 60GB hard disk and 40 vCPUs. The timings were based on the start and stop times recorded in the vCenter logs; however, the logs show only seconds, not milliseconds, so the difference could have been less than a second. The two migrations remained close in speed through several iterations.
Tom Henderson
Tom Henderson
vSAN health and performance monitoring
The vSAN 6.6 platform includes quite a bit of performance monitoring (see screenshots below). Metrics include the vSAN network link, resyncing activity, iSCSI messaging, and client cache efficiency and use.
Tom Henderson
Tom Henderson
Part of the memory and CPU given up for vSAN goes toward subsystem health monitoring. The screenshot below shows a variety of pass/fail health tests.
Tom Henderson/IDG
Host decommissioning has improved, for example when upgrading ESXi or entering maintenance mode on one of the hosts in a vSAN cluster. Alerts now show what will happen when maintenance mode starts, which gives an admin options for dealing with VM data in use, since hyper-convergence spreads workload data across hosts rather than keeping it on local storage. (See screenshot below.)
We were also alerted that the temporary space required during this process has been reduced.
The distributed nature of hyper-convergence means that any specific host in the cluster can affect other operations within the cluster, and a "trial" maintenance mode helps ensure that these "Siamese twins or triplets" won't affect each other, because their resources are shared rather than merely side-by-side.
Tom Henderson
There were some Enterprise-license features we couldn't test, although we found them intriguing. One is a claimed improvement in "stretched clusters," which span long distances, for example between separate datacenters.
We were also unable to test data-at-rest encryption, since it calls for a third-party KMIP-compliant key manager. With it, the entire datastore is encrypted, whereas VM encryption encrypts just the files related to a given VM. Both approaches have their use cases, though using VM encryption on an encrypted vSAN datastore is not recommended; we suspect the double encryption becomes a CPU-hogging process that cuts available workload processing power.
Downsides of vSAN
We also found some disadvantages, including a modicum of vendor lock-in. As mentioned, there is about 10% CPU overhead per server, and you will need lots of RAM (32GB minimum).
Using vSAN requires that all hosts be in a vSAN cluster, although there's an exception for an operating system using an iSCSI disk served by the vSAN cluster (e.g., Windows Server connecting to the vSAN iSCSI target). Unfortunately, the vSAN iSCSI target can't be consumed by an ESXi host, so you can't point an old ESXi machine at the vSAN via iSCSI. This implies that most vSAN installs will be new installations rather than retrofits.
Wrap-up
Organizations heavily invested in VMware will find the reduction in storage subsystems worth the price of this potentially expensive hyper-convergence.
We can't prove enormous speed benefits, lacking sufficient infrastructure to tax it. We did enjoy the fact that it reduces base costs to commodity hardware, although licensing fees could offset the lower hardware costs.
If there's a fleet of VMs to be managed, the final abstraction that hyper-convergence represents may be a logical goal for larger organizations. It's aided by a platform-agnostic recipe in which a host becomes essentially software-definable with respect to its VMs and workloads.
Done properly, vSAN delivers comparative administrative ease and makes it simple to move workloads around. For organizations using iSCSI-based piles of storage, vSAN represents a compelling alternative.
How we did the testing
We used three Lenovo x3650 M5 machines, each with 512GB of RAM, two 2.6GHz Xeon processors with 14 cores per CPU, and five SSDs (one for cache, four for capacity). We installed vSphere 6.5 with vCenter, put the machines into a cluster, and then enabled vSAN. We made sure to use the configuration assistant, a new feature in 6.6, to verify that all the settings were optimal.
We tested the clusters in a configuration where all servers were connected via 10GbE through an Extreme Networks Summit Series 24-port switch, over IPv4 on the same logical network for each test.
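As a footnote to the cost discussion earlier in the review, the tradeoff between adding SSDs to already-licensed hosts and adding an entire new host (which triggers new per-CPU vSAN licenses) can be sketched with simple arithmetic. The license prices below are the per-CPU list prices quoted above; the SSD and host prices are hypothetical placeholders, not figures from the review.

```python
# Sketch of the vSAN expansion cost tradeoff: adding disks to existing,
# already-licensed hosts vs. adding a whole new host, which requires new
# per-CPU vSAN licenses. License prices are the article's list prices;
# hardware prices are hypothetical examples.

VSAN_LICENSE_PER_CPU = {
    "standard": 2495,
    "advanced": 3995,
    "enterprise": 5495,
}

def expansion_cost(extra_ssds, ssd_price, new_host=False,
                   host_price=0, cpus_per_host=2, edition="standard"):
    """Rough cost of one expansion step. License cost applies only when
    a new host (with new CPUs) joins the cluster."""
    cost = extra_ssds * ssd_price
    if new_host:
        cost += host_price + cpus_per_host * VSAN_LICENSE_PER_CPU[edition]
    return cost

# Four hypothetical $500 SSDs hot-plugged into an existing licensed host:
disks_only = expansion_cost(extra_ssds=4, ssd_price=500)

# The same four SSDs inside a hypothetical $8,000 dual-CPU host,
# Standard edition licensing for its two new CPUs:
whole_host = expansion_cost(extra_ssds=4, ssd_price=500,
                            new_host=True, host_price=8000)

print(disks_only)   # 2000
print(whole_host)   # 2000 + 8000 + 2 * 2495 = 14990
```

The gap illustrates why the review recommends chassis that hold as many hot-pluggable SSDs as possible: disks alone carry no licensing cost, so dense storage per host defers the per-CPU license hit.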