REVIEW: Deep dive into Windows Server 2016
Microsoft delivers a boatload of new virtualization, storage and security features, along with a nod to open source.

Windows Server 2016 was officially released in September, but we waited until all of the bits were at production level before taking a deep dive into Microsoft’s flagship server operating system.
What we found is an ambitious, multi-faceted server OS that focuses much of its energy within the Microsoft-centric world of Windows/Hyper-V/Azure, but also tries to join and leverage open source developments and initiatives, such as Docker.
One item we noticed right away is that older 64-bit CPUs won’t work with Microsoft’s Hyper-V virtualization infrastructure. This meant our older Dell 1950 servers weren’t compatible with Hyper-V, and an older HP 560 Gen4 with 16 cores barely coughed into life as a Windows Server 2016 server.
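If you’re wondering whether an older box will make the cut, the Hyper-V Requirements block at the end of systeminfo output is a quick, if not exhaustive, check; a minimal sketch from PowerShell:
# All four lines under "Hyper-V Requirements" should read Yes; Second Level Address
# Translation (SLAT) is the usual failure point on older CPUs
systeminfo | Select-String "Hyper-V Requirements" -Context 0,4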
A Windows Server 2016 deployment requires plenty of thought and planning. There are two license options, Datacenter and Standard, and there are three installation choices: the regular GUI server version, the Server Core (no GUI) version and, lastly, Nano Server.
The Datacenter edition, which is the most expensive, has all the best roles and features. Those roles include: Storage Spaces Direct, Shielded Virtual Machines/Host Guardian Service, Storage Replica (synchronous replication service for disaster recovery and replication), and Network Controller (for the Azure cloud communications fabric).
The total price for Standard and Datacenter versions equals the cost of the server software plus client access licenses (CALs). Prices vary widely between list price, OEM prices, enterprise, education, and other options. Also, there is an Essentials Server version, limited to 25 users and 50 devices, that is licensed by processors instead of cores and requires no CALs.
In this review, we will go through the various new and improved features of Windows Server 2016. We found that many of them worked as advertised, while others weren’t totally baked yet.
New Nano Server option
Windows Nano Server, as the name implies, is designed for DevOps work on a minimized kernel and API surface, with lean, mean virtual machine/container deployments the ostensible result. At less than 200MB, it is what Microsoft calls just enough OS to run apps.
We built Nano Server instances from PowerShell commands, which is the only way to build them. The images are currently in VHDX format and, at press time, aren’t supported on hypervisors other than Hyper-V, although that’s likely to change.
The key roles that can be used in a Nano Server deployment include Hyper-V, storage, clustering, the all-important IIS web server, .NET Core and ASP.NET Core, and containers. All of these roles need to be set up during instantiation, not later.
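For reference, building a Nano Server image looks roughly like this, using the NanoServerImageGenerator module that ships on the installation media; the drive letters, paths, computer name and role switches below are placeholders we chose for illustration:
# D:\ is the mounted Windows Server 2016 ISO, which carries the NanoServerImageGenerator module
Import-Module D:\NanoServer\NanoServerImageGenerator -Verbose
# Build a Datacenter guest (VM) image with the Hyper-V, clustering and containers roles baked in
New-NanoServerImage -DeploymentType Guest -Edition Datacenter -MediaPath D:\ `
    -BasePath C:\NanoBase -TargetPath C:\NanoVMs\nano01.vhdx -ComputerName nano01 `
    -Compute -Clustering -Containers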
There are a number of limitations on uses for Nano Server: 64-bit apps, tools and agents are supported, but it can’t be an AD domain controller, isn’t subject to Group Policy, can’t use a proxy server, and isn’t supported by System Center Configuration Manager or Data Protection Manager. Nano also has a more limited PowerShell vocabulary.
But those same limitations may free certain Windows Server 2016 deployments to work with popular open source deployment and management frameworks. We found primitive but useful OpenStack support for Nano and potential support for VMware vSphere.
Nano is licensed under either the Datacenter or Standard edition for server roles, which might include a DNS server or IIS server. This will please many, and it can become the substrate for a lush variety of other app/server/service use cases.
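One practical note before moving on: with no local GUI or full local shell, Nano is managed over remote PowerShell (or remote tools pointed at it). A minimal sketch, where the IP address is simply an example for a workgroup Nano instance:
# Trust the Nano host for WinRM when it isn't domain-joined (example address)
Set-Item WSMan:\localhost\Client\TrustedHosts -Value "10.0.100.50" -Force
# Open an interactive remote session as the local Administrator
Enter-PSSession -ComputerName 10.0.100.50 -Credential "10.0.100.50\Administrator"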
Nested virtualization
Windows Server 2016 supports virtualizing within itself. In other words, VMs within VMs. Currently only Hyper-V under Hyper-V is officially supported, but we were able to get Hyper-V running under vSphere 6.
The use cases for this are rather limited, but it can be useful where you want to run Hyper-V containers instead of running containers directly on the host, or for lab environments used to test different scenarios.
Enabling nesting on a virtual machine under Hyper-V required us to set a flag via PowerShell, and networking requires MAC address spoofing on the VM’s network adapter:
Set-VMProcessor -VMName test-server-core -ExposeVirtualizationExtensions $true
Get-VMNetworkAdapter -VMName test-server-core | Set-VMNetworkAdapter -MacAddressSpoofing On
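Beyond those two flags, the VM has to be off when they’re set, dynamic memory has to be disabled per Microsoft’s nesting requirements, and the Hyper-V role then gets installed inside the guest; a minimal sketch of the remaining steps:
Stop-VM -Name test-server-core   # the flags can only be set while the VM is off
Set-VMMemory -VMName test-server-core -DynamicMemoryEnabled $false   # nesting does not support dynamic memory
Start-VM -Name test-server-core
# Then, inside the nested guest:
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart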
We were able to run containers, as well as other Windows Server 2016 VMs, in the nested machine. This was relatively simple to get working for us, and its convenience won’t be lost on coders and system architects.
Just for fun, we tried nesting Windows Server 2016 in vSphere 6.0. We initially set up a Windows Server 2016 VM running in ESXi; the VM’s CPU settings include an option to expose hardware-assisted virtualization to the guest OS, which enables this. We then installed Hyper-V in the VM and were able to install a nested Windows Server 2016 VM. This worked pretty well.
The increased complexity of nested VMs isn’t quite as high as we suspected, provided the hypervisor allows para-virtualization that doesn’t rob a nested VM of needed resources.
Shielded VMs
Another way to make working processes go dark or opaque is to encrypt them. Windows Server 2016 shielded VMs are virtual machines that have been encrypted, and they can live alongside unencrypted VMs. Shielding requires either modern TPM chipsets set up on the physical hardware or a one-way trust through Active Directory.
Shielded VMs are encrypted with BitLocker technology, and only Windows VMs are supported. Unfortunately, shielded VMs can only be used with the Datacenter edition of Server 2016, not the Standard one, and there are dangers.
The trade-offs have to be completely understood. As an example: The only way to connect to shielded VMs is through RDP, and we found it is not possible to connect through the console or another means. So if your VM loses network connectivity, you are totally screwed unless you’ve made other specific working arrangements to get inside the VM.
It is possible to create shielded VMs without Virtual Machine Manager (part of System Center 2016) or Azure but we couldn’t find documentation on how this is done.
Host Guardian Service
Related to encrypted VMs is the Host Guardian Service (HGS). Third-party vendors have been offering SSO, identity, and key management services for Windows server and client environments, and with Host Guardian, Microsoft delivers its own service.
HGS provides two things: key protection to store and provide BitLocker keys to shielded VMs, and attestation to allow only trusted Hyper-V hosts to run the shielded VMs.
This service must run in its own separate Active Directory forest with a one-way trust; the forest is created automatically when the role is installed.
There are two forms of attestation available for the HGS. The first is TPM-trusted attestation, which requires the physical host hardware to have TPM 2.0 enabled and configured, as well as UEFI 2.3.1+ with secure boot. Hosts are approved to run shielded VMs based on their TPM identity.
The second form is admin-trusted attestation, which supports a broader range of hardware where TPM 2.0 is not available and requires less configuration. In this mode, hosts are approved based on their membership in a designated AD Domain Services security group.
Microsoft recommended that a Host Guardian fabric be installed on a three-machine physical cluster, but it can be installed on VMs for test purposes. If you use Azure or System Center Virtual Machine Manager, it should be easier to set up the Host Guardian Service and the guarded hosts.
Also note that if the HGS becomes unavailable for whatever reason (which is why it is recommended to run it as a three-machine physical cluster), the shielded VMs on the guarded hosts will not run.
The setup is a bit complicated without System Center, so the following describes how to set it up using mostly PowerShell commands. We used “extreme2.local” as our test domain.
These are the PowerShell commands we used to stand up the Host Guardian Service after installing the role, in a script like this one:
Install-HgsServer -HgsDomainName 'hg.extreme2.local'
Restart-Computer
$signCert = New-SelfSignedCertificate -DnsName "signing.hgs.hg.extreme2.local" (one self-signed certificate for signing; the DNS names here are only examples)
$encCert = New-SelfSignedCertificate -DnsName "encryption.hgs.hg.extreme2.local" (and one for encryption)
Export-PfxCertificate -Cert $signCert -FilePath 'cert.pfx' -Password $pass (to create the files for the self-signed certificates; $pass and $pass2 are SecureString passwords, e.g. from Read-Host -AsSecureString)
Export-PfxCertificate -Cert $encCert -FilePath 'enc-cert.pfx' -Password $pass2
Initialize-HgsServer -HgsServiceName 'hgs' -SigningCertificatePath 'cert.pfx' -SigningCertificatePassword $pass -EncryptionCertificatePath 'enc-cert.pfx' -EncryptionCertificatePassword $pass2 -TrustActiveDirectory (can also use -TrustTpm)
Get-HgsTrace (to check and validate the config)
The following string of PowerShell commands was used for non-TPM servers, as TPM wasn’t initialized on our servers. This seemed the best way to test, using AD-trusted attestation; TPM servers are a bit more complicated. On the DNS server:
Add-DnsServerConditionalForwarderZone -Name "hg.extreme2.local" -ReplicationScope "Forest" -MasterServers 10.0.100.43
(on the main AD server)
netdom trust hg.extreme2.local /domain:extreme2.local /userD:extreme2.local\Administrator /passwordD:<password> /add
Next we created a new security group on the Active Directory server, added the computers we wanted trusted for the Host Guardian Service to the group, and then restarted those servers:
Get-ADGroup "guarded-hosts" (the name of our security group)
(We made note of the SID)
$SID = "S-1-5-21-2056979656-3172215525-2237764365-1118"
(now back to the host guardian service server)
Add-HgsAttestationHostGroup -Name "GuardedHosts" -Identifier $SID
Get-HgsServer (to get the values for URLs to configure the guarded hosts)
Then we ran these commands on the hosts we wanted to become guarded hosts, with the Hyper-V role and the Host Guardian Hyper-V Support feature installed:
$AttestationUrl = "http://hgs.hg.extreme2.local/Attestation"
$KeyProtectionUrl = "http://hgs.hg.extreme2.local/KeyProtection"
Set-HgsClientConfiguration -AttestationServerUrl $AttestationUrl -KeyProtectionServerUrl $KeyProtectionUrl
Now that we had that set up, we could set up shielded VMs for existing VMs:
Invoke-WebRequest 'http://hgs.hg.extreme2.local/KeyProtection/service/metadata/2014-07/metadata.xml' -OutFile '.\ExtremeGuardian.xml'
Import-HgsGuardian -Path '.\ExtremeGuardian.xml' -Name 'GuardedHosts' -AllowUntrustedRoot
$vmname = "Shielded-Server2016" (we put the VM name in $vmname for the commands that follow)
New-VM -Generation 2 -Name $vmname -NewVHDPath .\Shielded-Server2016.vhdx -NewVHDSizeBytes 20GB
$guardian = Get-HgsGuardian -Name 'GuardedHosts'
$owner = New-HgsGuardian -Name '<adminuser>' -GenerateCertificates
$keyp = New-HgsKeyProtector -Owner $owner -Guardian $guardian -AllowUntrustedRoot
Set-VMKeyProtector -VMName $vmname -KeyProtector $keyp.RawData
Set-VMSecurityPolicy -VMName $vmname -Shielded $true
Enable-VMTPM -VMName $vmname
At this point, the VM should be shielded, and we could move the VM’s VHDX and config files to a guarded host to run the VM, after enabling BitLocker on the partitions in the VHDX file.
We could also do this for the supplied template VMs. The docs were fairly clear about the process.
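As a sanity check on all of this, the HGS client and Hyper-V cmdlets can confirm that a host attested properly and that a VM really is shielded; a quick sketch using our example VM name:
Get-HgsClientConfiguration (on the guarded host; IsHostGuarded should be True)
Get-VMSecurity -VMName Shielded-Server2016 (Shielded and TpmEnabled should both be True)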
Windows containers
Containers have finally come to Windows Server 2016 using (but not necessarily limited to) Docker and Docker container components. There are currently two ways to run containers. One is directly on Windows Server 2016, called Windows Server Containers. The other one is through Hyper-V in a kind of isolation mode/sandbox, called Hyper-V containers.
The Hyper-V isolation mode requires the Hyper-V role to be installed on the server; then we could start the container (and its app payloads) using various Docker commands (an example can be seen below).
docker run --isolation=hyperv microsoft/nanoserver
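For comparison, a plain Windows Server Container uses process isolation, the default when running directly on the host, so an equivalent run looks like this; here we just echo the Windows version from the Server Core base image as an illustration:
docker run --isolation=process microsoft/windowsservercore cmd /c ver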
Docker isn’t supplied and must be installed separately, and currently there are issues running it under remote PowerShell sessions. It was very frustrating trying to work around the bugs in this build; after much testing, we recommend waiting until the kinks are worked out, and production use is currently dubious.
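For completeness, the separate install itself is short; the route documented at the time goes through the DockerMsftProvider package provider from the PowerShell Gallery, roughly as follows:
# Pull the Docker package provider from the PowerShell Gallery
Install-Module -Name DockerMsftProvider -Repository PSGallery -Force
# Install the Docker engine for Windows and reboot to finish
Install-Package -Name docker -ProviderName DockerMsftProvider -Force
Restart-Computer -Force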
NET RESULTS
First, you can’t use Linux containers unless they’re specially built to run in a highly confined context. Developers who are already building containers then end up maintaining two sets: one that runs on Linux/elsewhere and another that’s Windows-specific.
Second, the number of “off the shelf” Windows containers is dramatically small; a quick check at press time put the ratio of Linux to Windows-capable containers at perhaps 100:1 or higher in the repositories we examined.
Third, we found a dearth of PowerShell commands for container management work, which forced us to use Docker specifically. Not that we minded, but Docker remains an island here, almost a curiosity; a cohesive management plane for the Windows container environment doesn’t quite exist yet, as far as we could find.
Finally: We cratered our specially ordered Lenovo ThinkServers numerous times doing things by the book in our attempts to run just the supplied sample of a simple .NET server. Kaboom. Even running as admin, with the latest updates, we ended up with just a smoking hole in our rack at Expedient. It did not give us confidence. We tried Server Core and Nano Server and still were not pleased.
Inexplicably, we were subsequently able to get the VM running without crashing on another physical, non-hypervised host.