These new and improved 'supersaver' features offer the biggest return on your Windows Server 2012 investment
Windows Server 2012 is a monumental release packed with new features that touch every facet of the operating system. You'll see changes ranging from how data is stored on disk to the protocol for moving data between client and server and much more in between. The major design themes of the new server OS, which center on continuous availability, reduced cost, and lower management overhead, show up in many ways.
One of the principal architects on the Windows Server 2012 team is Jeffrey Snover, who made an observation about this version of Windows Server that's worth repeating: "Microsoft has traditionally taken a few versions of major software projects to get them to the point of full maturity. It's significant that Windows Server 2012 has quite a few version 3.0 pieces including Hyper-V, PowerShell, SMB, and more."
Sure enough, these 3.0 charms alone are worth the price of admission, but a few newer features further sweeten Microsoft's server offering. In fact, I've identified seven Windows Server 2012 features you might call "supersavers." Listed in order of impact below, these features commodify high-end functionality, eliminate the need to purchase third-party software, reduce OS care and feeding, or in the case of PowerShell, offer the potential to save vast numbers of man-hours.
Any one of these features could make Windows Server 2012 a compelling upgrade for you. Perhaps the best part is they're all available in the Standard edition.
Windows Server 2012 supersaver No. 1: Storage Spaces

One of the main themes of Windows Server 2012 is the resiliency of all resources. For disk-related resources, the two new features are the Resilient File System (ReFS) and Storage Spaces. ReFS is the heir apparent to the venerable NTFS, originally introduced with the release of Windows NT 3.1 in 1993. NTFS has obviously stood the test of time over the last 19 years, with untold numbers of systems still using it today. Windows Server 2012 continues to support NTFS and undoubtedly will for years to come.
ReFS changes the way data gets written to disk. NTFS was susceptible to corruption of the file metadata -- the information the operating system uses to retrieve a file. ReFS uses an allocate-on-write method whenever any updates occur to prevent in-place corruption issues. It also uses checksums for metadata as another measure of validating saved data; you have the ability to enable checksums for the data as well. Microsoft calls this use of checksums Integrity Streams. It's a way to provide a measure of file protection even when the underlying disk system does not.
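Turning on integrity streams for file data as well as metadata is a format-time option. A minimal sketch, assuming an existing volume (the drive letter is an example):

```powershell
# Format a volume as ReFS with integrity streams enabled for file data,
# not just metadata. E: is a hypothetical drive letter.
Format-Volume -DriveLetter E -FileSystem ReFS -SetIntegrityStreams $true
```

With integrity streams on, ReFS checksums file contents on write and validates them on read, catching silent corruption the disk hardware misses.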
Storage Spaces has the potential to save you significant dollars over a typical RAID-based disk array. That's because Storage Spaces works with raw disk drives arranged as JBOD, or "just a bunch of disks." Storage Spaces doesn't require any special (meaning expensive) disk controller unless you're building a cluster. Physical storage is allocated to a storage pool, from which virtual disks, or spaces, are created. Virtual disks are, in turn, formatted with either NTFS or ReFS.
When you create a storage volume, Storage Spaces offers three different layout options -- simple, mirror, and parity -- that roughly equate to RAID 0, 1, and 5, although the algorithms used for distributing the data are totally different. Storage Spaces also provides the ability to "thin provision" volumes, which means you can create volumes of a virtual size larger than what is actually available in terms of physical capacity. More physical storage can be added to the pool to increase the physical capacity without affecting the virtual volume. This ability to add storage without incurring downtime is obviously a significant advantage when high-availability applications are involved.
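The pool-then-space workflow can be sketched in a few cmdlets. This assumes a server with unallocated local disks; the friendly names and the 10TB thin-provisioned size are examples:

```powershell
# Gather the physical disks that are eligible for pooling,
# create a storage pool from them, then carve out a
# thin-provisioned mirror space. Names are hypothetical.
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "Pool01" `
    -StorageSubSystemFriendlyName "Storage Spaces*" `
    -PhysicalDisks $disks
New-VirtualDisk -StoragePoolFriendlyName "Pool01" -FriendlyName "Data01" `
    -ResiliencySettingName Mirror -ProvisioningType Thin -Size 10TB
```

The resulting virtual disk is then initialized, partitioned, and formatted like any physical disk, and more drives can be added to Pool01 later without touching the volume.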
The venerable CHKDSK utility is a major beneficiary of file system improvements. A new disk corruption scanner runs in the background on NTFS volumes, identifying correctable errors and data corruption. Most data corruption issues can be handled without the need to reboot the system and run CHKDSK to repair. If CHKDSK does become necessary, it can complete all operations in a matter of seconds -- versus the many minutes, or even hours in the case of large RAID disks, that it took in previous Windows Server versions.
There are a few gotchas with ReFS, though none are showstoppers. You can't boot from a disk formatted with ReFS, nor is ReFS supported for removable media. More significant, you cannot convert an NTFS volume to ReFS in place, meaning you must copy the data from an NTFS volume to an ReFS volume.
Windows Server 2012 supersaver No. 2: Hyper-V 3.0

Microsoft has been chasing VMware in the virtualization market ever since Hyper-V was introduced. Microsoft made inroads with the version released in conjunction with Windows Server 2008 R2, which delivered many features considered "must haves" to serious virtualization users. Hyper-V 3.0 raises that bar even further and in many ways reaches parity with the lower end of the VMware spectrum. At the high end, Microsoft still has some work to do -- primarily in the area of storage service levels and what VMware calls the "software-defined data center."
Hyper-V 3.0 extends many of the specs from the previous version, pushing the limits to 4TB of RAM and 320 logical processors per host, 64 nodes and 8,000 virtual machines per cluster, and up to 1,024 running virtual machines per host. Hyper-V now supports SMB (Server Message Block) for file-level storage, along with the previously supported iSCSI and Fibre Channel. Other new features include a new extensible virtual switch and a virtual SAN. The virtual SAN includes a virtual Fibre Channel capability to connect a VM directly to a physical host bus adapter (HBA) for improved performance.
One of the most significant improvements in Hyper-V 3.0 has to be in the area of live migration. This feature supports migrating both the virtual machine and its underlying storage. Storage migration can take place as long as an SMB shared folder on a Windows Server 2012 system is visible to both the source and destination Hyper-V hosts. You can even move a running virtual machine between hosts that aren't part of the same cluster and share no storage -- so-called "shared-nothing" live migration.
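A shared-nothing move reduces to a single cmdlet. A minimal sketch, where the VM name, destination host, and path are all examples:

```powershell
# Live-migrate a running VM and its storage to another Hyper-V host
# that shares no storage with the source. Names/paths are hypothetical.
Move-VM -Name "WebVM01" -DestinationHost "HyperV02" `
    -IncludeStorage -DestinationStoragePath "D:\VMs\WebVM01"
```

The VM keeps running throughout; only a brief switchover pause occurs at the end of the copy.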
Hyper-V Replica is a new capability in Hyper-V 3.0 providing an out-of-the-box failure recovery solution covering everything from an entire storage system down to a single virtual machine. Under the hood it delivers asynchronous, unlimited replication of virtual machines from one Hyper-V host to another without the need for storage arrays or other third-party tools. That's another cost savings, or cost avoidance, with a capability you get as a part of the OS.
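Setting up replication for a single VM is a two-step affair once the replica host has been configured to accept incoming replication (via Set-VMReplicationServer). The VM and server names below are examples:

```powershell
# Point a VM at a prepared replica server, then kick off the
# initial full copy. Subsequent changes replicate asynchronously.
Enable-VMReplication -VMName "WebVM01" -ReplicaServerName "DRHost01" `
    -ReplicaServerPort 80 -AuthenticationType Kerberos
Start-VMInitialReplication -VMName "WebVM01"
```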
Microsoft believes that Hyper-V 3.0 can handle any workload you want to throw at it, especially if it's a Microsoft application such as Exchange, SQL Server, or SharePoint. With that in mind, you will definitely save money on hardware by consolidating those types of applications onto a beefy server or cluster. And you don't have to purchase any VMware software to make it happen.
Windows Server 2012 supersaver No. 3: PowerShell 3.0

Automating the management of everything related to Windows Server 2012 is the key driver behind PowerShell 3.0. There is no management task in Windows Server 2012 that can't be accomplished using PowerShell. Add PowerShell remoting, and you can run any PowerShell script on any server you have rights to access. While the new Server Manager, with its slick graphical user interface (GUI), may be the pretty face of systems management, PowerShell is the workhorse that gets the job done.
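Remoting is enabled by default on Windows Server 2012, so fanning a command out to a group of servers is a one-liner. The server names here are examples:

```powershell
# Run the same command on several servers at once via PowerShell
# remoting. Server names are hypothetical.
Invoke-Command -ComputerName Server01, Server02, Server03 -ScriptBlock {
    Get-Service -Name W3SVC | Select-Object MachineName, Status
}
```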
Windows Server 2012 includes on the order of 2,430 cmdlets. Add the ability to create workflows, built on Windows Workflow Foundation (WF), and you have a totally new dimension to systems management. For time-based or scheduled jobs, there is direct integration with Task Scheduler and a number of PowerShell job scheduling cmdlets. To see a list of these commands, type the following into a PowerShell command window:
PS> Get-Command -Module PSScheduledJob
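Putting those cmdlets to work is straightforward. A sketch of a nightly maintenance job, where the job name, schedule, and script block are all examples:

```powershell
# Register a scheduled job that purges a temp directory every night
# at 2 AM. The job name and path are hypothetical.
$trigger = New-JobTrigger -Daily -At "2:00 AM"
Register-ScheduledJob -Name "NightlyCleanup" -Trigger $trigger `
    -ScriptBlock { Get-ChildItem C:\Temp -Recurse | Remove-Item -Force }
```

The job shows up in Task Scheduler under the PowerShell scheduled jobs path, and its results can be retrieved later with Receive-Job.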
For some of the administration tools, like the new Active Directory Administrative Center, you get a PowerShell history window in which you can see the exact commands executed to accomplish your tasks. You can save these commands for later use to automate repetitive tasks and build a library of Active Directory scripts tailored to your specific environment.
It's no coincidence that Microsoft's recommended installation for unattended servers is to use Server Core. In fact, it's the default installation method unless you specifically change it. The idea here is to deploy only the functionality necessary to implement your server roles and remove any and all extraneous code that could pose a potential risk to security or availability. All management is then accomplished remotely using either the Server Manager GUI or through PowerShell automation. That represents another cost savings from both a security and patching perspective.
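Unlike previous versions, Windows Server 2012 lets you move between the full GUI and Server Core after installation, so committing to Server Core is no longer a reinstall decision:

```powershell
# Remove the GUI layers from a full installation, converting it to
# Server Core. The server reboots to complete the change.
Uninstall-WindowsFeature Server-Gui-Mgmt-Infra, Server-Gui-Shell -Restart
```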
The PowerShell Integrated Scripting Environment (ISE) in Windows Server 2012 is a tool for developing and testing PowerShell scripts. It includes comprehensive, fill-in-the-blank help for creating and testing new automation scripts. You can filter through the long list of available cmdlets, then use the -WhatIf parameter to see what a command would do without actually executing it.
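The -WhatIf safety net works on any cmdlet that supports it. A trivial example (the path is hypothetical):

```powershell
# Preview a destructive command without executing it. PowerShell
# describes each delete that would have occurred, but touches nothing.
Remove-Item -Path C:\Logs\*.old -WhatIf
```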
Windows Server 2012 supersaver No. 4: Failover clusters

With previous versions of Windows Server, clustering was confined primarily to the realms of high-performance computing and high-availability services such as SQL Server. It required the pricier Enterprise edition and a separate installation of the necessary components. Windows Server 2012 includes clustering in the Standard edition, making it possible to build a fault-tolerant, two-node cluster for a very modest price.
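With the Failover Clustering feature installed on both machines, standing up that two-node cluster takes two cmdlets. The node and cluster names and the IP address are examples:

```powershell
# Validate the configuration, then create a two-node failover
# cluster. All names and the static address are hypothetical.
Test-Cluster -Node Node01, Node02
New-Cluster -Name Cluster01 -Node Node01, Node02 `
    -StaticAddress 192.168.1.50
```

Test-Cluster produces a validation report worth reading before going to production; New-Cluster refuses nothing, so an unvalidated configuration is on you.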
"Continuous availability" is Microsoft's new buzz phrase for providing fault-tolerant resources, and clustering is the key piece that makes it possible. For continuously available file resources, there's version 2 of cluster shared volumes (CSV), which defines a single namespace that presents clients with a consistent path to connect to. CSV volumes look like directories and subdirectories underneath a ClusterStorage root directory. CSV v2 includes support for the Volume Shadow Copy Service (VSS) for hardware and software snapshots of CSV volumes.
A new feature called cluster-aware updating (CAU) allows you to apply patches and updates to running cluster nodes without taking down the cluster. Each node in turn is drained of its workloads, updated, and restarted if necessary. Keep in mind that with only two nodes, the remaining node carries the entire workload while its partner updates. Either way, you'll definitely save on downtime and administration costs with the CAU feature.
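An on-demand updating run can be triggered from any machine with the failover clustering tools installed. The cluster name is an example:

```powershell
# Start a cluster-aware updating run against a cluster; CAU drains,
# patches, and restarts each node in turn. -Force skips the prompt.
Invoke-CauRun -ClusterName Cluster01 -Force
```

CAU can also be configured as a self-updating clustered role (Add-CauClusterRole), so the cluster patches itself on a schedule with no coordinator machine at all.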
Previous versions of the OS had limitations for virtualizing Domain Controllers (DCs). This issue has totally gone away with Windows Server 2012. Hyper-V 3.0 now supports the cloning of virtualized Domain Controllers. You can also do a snapshot restoration of a DC to get back to a known state. This is especially helpful in a development or lab setting where you need to build an environment from scratch or start over from a known point.
Windows Server 2012 supersaver No. 5: Data deduplication

It's easy to see how duplicate copies of the same data files can cost you time and money in backups and primary storage. Data deduplication is not a new technology, of course; it has been available from both backup and storage vendors for some time. But with Windows Server 2012, deduplication is now part of the base OS.
Heavy users of virtualization or virtual desktop infrastructure (VDI) implementations stand to see the biggest gains here. Microsoft quotes numbers of 2:1 for general file server storage and 20:1 for virtualization (VHD) libraries. Individual files are replaced with stubs pointing to data blocks stored within a common "chunk" store. Data compression can also be applied to further reduce the total storage footprint. All data processing is done in the background with a minimal impact to CPU and memory.
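Once the deduplication role service is installed, enabling it on a data volume and checking the payoff looks roughly like this (the drive letter and age threshold are examples):

```powershell
# Enable deduplication on a data volume, skip files newer than
# 5 days, and report the savings. Requires the
# FS-Data-Deduplication feature; E: is a hypothetical volume.
Enable-DedupVolume -Volume E:
Set-DedupVolume -Volume E: -MinimumFileAgeDays 5
Get-DedupStatus -Volume E:
```

Get-DedupStatus reports the space saved once the background optimization jobs have processed the volume.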
The data deduplication feature is also tightly integrated with BranchCache, helping to save on overall bandwidth consumption when distributing data over a WAN. In addition to dramatically speeding up file transfers, deduping data that travels the WAN can greatly reduce costs for dedicated or metered network circuits.