Cluster-Aware Updating Overview

This topic provides an overview of Cluster-Aware Updating (CAU), a feature for failover clusters that was introduced in Windows Server 2012. CAU automates the software updating process on clustered servers while maintaining availability. This topic describes scenarios and applications for using CAU, and provides links to content that describes how to integrate CAU into other IT automation and management processes.

Did you mean…

CAU is related to but is distinct from the following foundational technologies:

•Failover Clustering Overview

•Windows Update/Microsoft Update

•Windows Server Update Services

•Windows Management Instrumentation

•Windows Remote Management

Feature description

CAU is an automated feature that enables you to update clustered servers with little or no loss of availability during the update process. During an Updating Run, CAU transparently performs the following tasks:

•Puts each node of the cluster into node maintenance mode

•Moves the clustered roles off the node

•Installs the updates and any dependent updates

•Performs a restart if necessary

•Brings the node out of maintenance mode

•Restores the clustered roles on the node

•Moves to update the next node

For many clustered roles (formerly called clustered applications and services) in the cluster, the automatic update process triggers a planned failover, and it can cause a transient service interruption for connected clients. However, in the case of continuously available workloads such as Hyper-V with live migration or file server with SMB Transparent Failover, CAU can coordinate cluster updates with no impact to the service availability.

Note

The CAU feature is only compatible with Windows Server 2012 R2 and Windows Server 2012 failover clusters and the clustered roles that are supported on those versions.

Practical applications

•CAU reduces service outages in clustered services, reduces the need for manual updating workarounds, and makes the end-to-end cluster updating process more reliable for the administrator. When the CAU feature is used together with continuously available cluster workloads, such as continuously available file servers (file server workload with SMB Transparent Failover) or Hyper-V, the cluster updates can be performed with zero impact to service availability for clients.

•CAU facilitates the adoption of consistent IT processes across the enterprise. You can create Updating Run Profiles for different classes of failover clusters and then manage them centrally on a file share to ensure that CAU deployments throughout the IT organization apply updates consistently, even if the clusters are managed by different lines-of-business or administrators.

•CAU can schedule Updating Runs on regular daily, weekly, or monthly intervals to help coordinate cluster updates with other IT management processes.

•CAU provides an extensible architecture to update the cluster software inventory in a cluster-aware fashion. This can be used by publishers to coordinate the installation of software updates that are not published to Windows Update or Microsoft Update or that are not available from Microsoft, for example, updates for non-Microsoft device drivers.

•CAU self-updating mode enables a “cluster in a box” appliance (a set of clustered physical machines running Windows Server 2012, typically packaged in one chassis) to update itself. Typically, such appliances are deployed in branch offices with minimal local IT support to manage the clusters. Self-updating mode offers great value in these deployment scenarios.

Important functionality

Following is a description of important CAU functionality:

•A user interface (UI) and a set of Windows PowerShell cmdlets that you can use to preview, apply, monitor, and report on the updates

•An end-to-end automation of the cluster-updating operation (an Updating Run), orchestrated by one or more Update Coordinator computers

•A default plug-in that integrates with the existing Windows Update Agent (WUA) and Windows Server Update Services (WSUS) infrastructure in Windows Server 2012 R2 or Windows Server 2012 to apply important Microsoft updates

•A second plug-in that can be used to apply Microsoft hotfixes, and that can be customized to apply non-Microsoft updates

•Updating Run Profiles that you configure with settings for Updating Run options, such as the maximum number of times that the update will be retried per node. Updating Run Profiles enable you to rapidly reuse the same settings across Updating Runs and easily share the update settings with other failover clusters.

•An extensible architecture that supports new plug-in development to coordinate other node-updating tools across the cluster, such as custom software installers, BIOS updating tools, and network adapter or host bus adapter (HBA) updating tools.

CAU can coordinate the complete cluster updating operation in two modes:

•Self-updating mode For this mode, the CAU clustered role is configured as a workload on the failover cluster that is to be updated, and an associated update schedule is defined. The cluster updates itself at scheduled times by using a default or custom Updating Run profile. During the Updating Run, the CAU Update Coordinator process starts on the node that currently owns the CAU clustered role, and the process sequentially performs updates on each cluster node. To update the current cluster node, the CAU clustered role fails over to another cluster node, and a new Update Coordinator process on that node assumes control of the Updating Run. In self-updating mode, CAU can update the failover cluster by using a fully automated, end-to-end updating process. An administrator can also trigger updates on-demand in this mode, or simply use the remote-updating approach if desired. In self-updating mode, an administrator can get summary information about an Updating Run in progress by connecting to the cluster and running the Get-CauRun Windows PowerShell cmdlet.

•Remote-updating mode For this mode, a remote computer that is running Windows Server 2012 R2, Windows Server 2012, Windows 8.1 or Windows 8, which is called an Update Coordinator, is configured with the CAU tools. The Update Coordinator is not a member of the cluster that is updated during the Updating Run. From the remote computer, the administrator triggers an on-demand Updating Run by using a default or custom Updating Run profile. Remote-updating mode is useful for monitoring real-time progress during the Updating Run, and for clusters that are running on Server Core installations of Windows Server 2012 R2 or Windows Server 2012.
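
For illustration, here is a minimal Windows PowerShell sketch of a remote-updating workflow. It assumes the CAU cmdlets from the Failover Clustering Tools are installed on the Update Coordinator; the cluster name CONTOSO-FC1 is a placeholder.

# Preview the updates that would be applied to each node (no changes are made).
Invoke-CauScan -ClusterName CONTOSO-FC1 -CauPluginName Microsoft.WindowsUpdatePlugin

# Start an on-demand Updating Run against the cluster.
Invoke-CauRun -ClusterName CONTOSO-FC1 -MaxRetriesPerNode 3 -RequireAllNodesOnline -Force

# From any computer with the CAU tools, check the status of an Updating Run in progress.
Get-CauRun -ClusterName CONTOSO-FC1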

Hardware and software requirements

CAU can be used on all editions of Windows Server 2012 R2 and Windows Server 2012, including Server Core installations. For detailed requirements information, see Requirements and Best Practices for Cluster-Aware Updating.

To use CAU, you must install the Failover Clustering feature in Windows Server 2012 R2 or Windows Server 2012 and create a failover cluster. The components that support CAU functionality are automatically installed on each cluster node.

To install the Failover Clustering feature, you can use the following tools:

•Add Roles and Features Wizard in Server Manager

•Add-WindowsFeature Windows PowerShell cmdlet

•Deployment Image Servicing and Management (DISM) command-line tool

For more information, see Install or Uninstall Roles, Role Services, or Features.
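
As a minimal sketch, the feature and its management tools can also be installed from Windows PowerShell. The RSAT-Clustering feature name applies to a server-based Update Coordinator; on Windows 8.1, the tools come from the RSAT download instead.

# On each cluster node: install the Failover Clustering feature and the management tools.
Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools

# On a server that will act only as a remote Update Coordinator: install just the clustering tools.
Install-WindowsFeature -Name RSAT-Clustering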

You must also install the CAU tools, which are included in the Failover Clustering Tools (which are also part of the Remote Server Administration Tools, or RSAT). The CAU tools consist of the CAU UI and the CAU Windows PowerShell cmdlets. You must install the Failover Clustering Tools as follows to support the different CAU updating modes:

•To use CAU in self-updating mode, the Failover Clustering Tools must be installed on each cluster node. (This is the default installation.)

•To enable remote-updating mode, you must install the Failover Clustering Tools from the RSAT on a local or a remote computer that is running Windows Server 2012 R2, Windows Server 2012, Windows 8.1 or Windows 8 and that has network connectivity to the failover cluster.

Note

•You must use the Failover Clustering Tools from the Windows Server 2012 R2 RSAT to remotely manage updates for a Windows Server 2012 R2 failover cluster. You can also use the Windows Server 2012 R2 RSAT to remotely manage updates on a Windows Server 2012 failover cluster.

•To use CAU only in remote-updating mode, installation of the Failover Clustering Tools on the cluster nodes is not required. However, certain CAU features will not be available. For more information, see Requirements and Best Practices for Cluster-Aware Updating.

•Unless you are using CAU only in self-updating mode, the computer on which the CAU tools are installed and that coordinates the updates cannot be a member of the failover cluster.

For more information about installing the Failover Clustering feature, see Installing the Failover Clustering Feature and Tools.

For more information about deploying RSAT, see Deploy Remote Server Administration Tools.

To enable self-updating mode, the CAU clustered role must also be added to the failover cluster. To do this by using the CAU UI, under Cluster Actions, use the Configure Self-Updating Options action. Alternatively, run the Add-CauClusterRole Windows PowerShell cmdlet.
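
A minimal sketch of enabling self-updating mode from Windows PowerShell follows; the cluster name CONTOSO-FC1 and the monthly schedule are placeholders.

# Add the CAU clustered role and schedule an Updating Run for the third Sunday of each month.
Add-CauClusterRole -ClusterName CONTOSO-FC1 -CauPluginName Microsoft.WindowsUpdatePlugin `
    -DaysOfWeek Sunday -WeeksOfMonth 3 -MaxRetriesPerNode 3 -EnableFirewallRules -Force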

To uninstall CAU, uninstall the Failover Clustering feature or Failover Clustering Tools by using Server Manager, Windows PowerShell cmdlets, or the DISM command-line tools.

Deploy an Active Directory-Detached Cluster

In Windows Server 2012 R2, you can deploy a failover cluster without dependencies in Active Directory Domain Services (AD DS) for network names. This is referred to as an Active Directory-detached cluster. Using this deployment method enables you to create a failover cluster without the previously required permissions for creating computer objects in AD DS or the need to request that computer objects are prestaged in AD DS.

When you create an Active Directory-detached cluster, the cluster network name (also known as the administrative access point) and network names for any clustered roles with client access points are registered in Domain Name System (DNS). However, no computer objects are created for the cluster in AD DS. This includes the computer object for the cluster (also known as the cluster name object or CNO) and computer objects for any clustered roles that would typically have client access points in AD DS (also known as virtual computer objects or VCOs).

Note

This deployment method still requires that the failover cluster nodes are joined to an Active Directory domain.

Deployment considerations

An Active Directory-detached cluster uses Kerberos authentication for intracluster communication. However, when authentication against the cluster network name is required, the cluster uses NTLM authentication. Therefore, we do not recommend this deployment method for any scenario that requires Kerberos authentication.

The following table summarizes whether this deployment method is supported for a specific cluster workload.

Cluster Workload | Supported/Not Supported | More Information
SQL Server | Supported | We recommend that you use SQL Server Authentication for an Active Directory-detached cluster deployment.
File server | Supported, but not recommended | Kerberos authentication is the preferred authentication protocol for Server Message Block (SMB) traffic.
Hyper-V | Supported, but not recommended | Live migration is not supported because it has a dependency on Kerberos authentication. Quick migration is supported.
Message Queuing (also known as MSMQ) | Not supported | Message Queuing stores properties in AD DS.

In addition, be aware of the following issues for this type of cluster deployment:

•BitLocker Drive Encryption is not supported.

•Cluster-Aware Updating (CAU) in self-updating mode is not supported.

Note

CAU is supported in remote-updating mode.

•You cannot copy a clustered role between failover clusters that use different types of administrative access points.

•You can set the type of administrative access point only when you create the cluster. You cannot change it after the cluster is deployed.

•If you deploy a highly available file server by using this deployment method, you cannot use Server Manager to manage the file server. Instead, you must use Windows PowerShell or Failover Cluster Manager.

To use Failover Cluster Manager, after you deploy the highly available file server, you must add the fully qualified domain name (FQDN) of the File Server clustered role to the trusted hosts list on each node of the cluster. For example, start Windows PowerShell as an administrator, and then enter the following command, where FileServerRole1.contoso.com and FileServerRole2.contoso.com represent the names of two File Server clustered roles:

winrm set winrm/config/client '@{TrustedHosts="FileServerRole1.contoso.com,FileServerRole2.contoso.com"}'

Note

You must run this command on each node of the cluster.
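
If you prefer Windows PowerShell over winrm.cmd, a roughly equivalent sketch, run in an elevated session on each node with the same placeholder FQDNs, is:

# Replace the client TrustedHosts list with the FQDNs of the File Server clustered roles.
# Note: this overwrites any existing TrustedHosts entries on the node.
Set-Item WSMan:\localhost\Client\TrustedHosts -Value "FileServerRole1.contoso.com,FileServerRole2.contoso.com" -Force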

How to deploy an Active Directory-detached cluster

Before you create the failover cluster, make sure that all servers that you want to add as cluster nodes meet the following prerequisites:

•All servers must be running Windows Server 2012 R2.

•All servers must be joined to the same Active Directory domain.

•All servers must have the Failover Clustering feature installed.

•All servers must use supported hardware and the collection of servers must pass all cluster validation tests. For more information, see Failover Clustering Hardware Requirements and Storage Options and Validate Hardware for a Failover Cluster.

To deploy an Active Directory-detached cluster, you must use Windows PowerShell. You cannot use Failover Cluster Manager. To create the failover cluster, start Windows PowerShell as an administrator, and then use the New-Cluster cmdlet with the -AdministrativeAccessPoint parameter set to a value of Dns.

The following example creates a failover cluster (Cluster1) from two nodes (Node1 and Node2), with an administrative access point of type DNS.

New-Cluster Cluster1 -Node Node1,Node2 -StaticAddress 192.168.1.16 -NoStorage -AdministrativeAccessPoint Dns

In this case, the cluster network name Cluster1 will be created without a computer object in AD DS. In addition, all subsequent network names for clustered roles will be created without computer objects in AD DS.

You can run the following Windows PowerShell command to verify the type of administrative access point for a failover cluster:

(Get-Cluster).AdministrativeAccessPoint

For an Active Directory-detached cluster, the expected output value is Dns.

Storage Quality of Service for Hyper-V

Starting in Windows Server® 2012 R2, Hyper-V includes the ability to set certain quality-of-service (QoS) parameters for storage on the virtual machines. For more information about configuring storage QoS for Hyper-V, see Configure Storage Quality of Service.

Storage QoS provides storage performance isolation in a multitenant environment and mechanisms to notify you when the storage I/O performance does not meet the defined threshold to efficiently run your virtual machine workloads.

Key benefits

Storage QoS provides the ability to specify a maximum input/output operations per second (IOPS) value for your virtual hard disk. An administrator can throttle the storage I/O to stop a tenant from consuming excessive storage resources that may impact another tenant.

An administrator can also set a minimum IOPS value. They will be notified when the IOPS to a specified virtual hard disk is below a threshold that is needed for its optimal performance.

The virtual machine metrics infrastructure is also updated with storage-related parameters, which allow the administrator to monitor performance and chargeback-related parameters.

Maximum and minimum values are specified in terms of normalized IOPS where every 8 K of data is counted as an I/O.
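
As a sketch of how these values are typically set with the Hyper-V Windows PowerShell module, assuming a virtual machine named VM01 with a data disk on SCSI controller 0, location 0 (all placeholders):

# Cap the disk at 1000 normalized IOPS and request notifications if it falls below 200 IOPS.
Set-VMHardDiskDrive -VMName VM01 -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 0 `
    -MaximumIOPS 1000 -MinimumIOPS 200

# Confirm the configured values.
Get-VMHardDiskDrive -VMName VM01 | Select-Object VMName, Path, MinimumIOPS, MaximumIOPS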

Key features

Storage QoS allows administrators to plan for and gain acceptable performance from their investment in storage resources. Administrators can:

•Specify the maximum IOPS allowed for a virtual hard disk that is associated with a virtual machine.

•Receive a notification when the specified minimum IOPS for a virtual hard disk is not met.

•Monitor storage-related metrics through the virtual machine metrics interface.

Requirements

Storage QoS requires that the Hyper-V role is installed. The Storage QoS feature cannot be installed separately. When you install Hyper-V, the infrastructure is enabled for defining QoS parameters associated with your virtual hard disks.

Note

Storage QoS is not available if you are using shared virtual hard disks.

Technical overview

Virtual hard disk maximum IOPS

Storage QoS provides the following features for setting maximum IOPS values (or limits) on virtual hard disks for virtual machines:

•You can specify a maximum setting that is enforced on the virtual hard disks of your virtual machines. You can define a maximum setting for each virtual hard disk.

•Virtual disk maximum IOPS settings are specified in terms of normalized IOPS. IOPS are measured in 8 KB increments.

•You can use the WMI interface to control and query the maximum IOPS value you set on your virtual hard disks for each virtual machine.

•Windows PowerShell enables you to control and query the maximum IOPS values you set for the virtual hard disks in your virtual machines.

•Any virtual hard disk that does not have a maximum IOPS limit defined defaults to 0.

•The Hyper-V Manager user interface is available to configure maximum IOPS values for Storage QoS.

Virtual hard disk minimum IOPS threshold notifications

Storage QoS provides the following features for setting minimum values (or reserves) on virtual hard disks for virtual machines:

•You can define a minimum IOPS value for each virtual hard disk, and an event-based notification is generated when the minimum IOPS value is not met.

•Virtual hard disk minimum values are specified in terms of normalized IOPS. IOPS are measured in 8 KB increments.

•You can use the WMI interface to query the minimum IOPS value you set on your virtual hard disks for each virtual machine.

•Windows PowerShell enables you to control and query the minimum IOPS values you set for the virtual hard disks in your virtual machines.

•Any virtual hard disk that does not have a minimum IOPS value defined will default to 0.

•The Hyper-V Manager user interface is available to configure minimum IOPS settings for Storage QoS.

Hyper-V Automatic Virtual Machine Activation

Automatic Virtual Machine Activation (AVMA) acts as a proof-of-purchase mechanism, helping to ensure that Windows products are used in accordance with the Product Use Rights and Microsoft Software License Terms. AVMA lets you install virtual machines on a properly activated Windows server without having to manage product keys for each individual virtual machine, even in disconnected environments. AVMA binds the virtual machine activation to the licensed virtualization server and activates the virtual machine when it starts up. AVMA also provides real-time reporting on usage and historical data on the license state of the virtual machine. Reporting and tracking data is available on the virtualization server.
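
As a sketch, activation inside the guest uses the standard slmgr.vbs tooling; the product key below is a placeholder for the published AVMA key that matches the guest edition.

# Run inside the guest operating system, in an elevated PowerShell session.
# The key below is a placeholder; substitute the published AVMA key for the guest edition.
cscript.exe //B "$env:windir\System32\slmgr.vbs" /ipk XXXXX-XXXXX-XXXXX-XXXXX-XXXXX
cscript.exe //B "$env:windir\System32\slmgr.vbs" /ato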

Hyper-V Resource Metering

IT organizations need tools to charge back business units that they support while providing the business units with the right amount of resources to match their needs. For hosting providers, it is equally important to issue chargebacks based on the amount of usage by each customer.

To implement advanced billing strategies that measure both the assigned capacity of a resource and its actual usage, earlier versions of Hyper-V required users to develop their own chargeback solutions that polled and aggregated performance counters. These solutions could be expensive to develop and sometimes led to loss of historical data.

To assist with more accurate, streamlined chargebacks while protecting historical information, Hyper-V in Windows Server 2012 introduces Resource Metering, a feature that allows customers to create cost-effective, usage-based billing solutions. With this feature, service providers can choose the best billing strategy for their business model, and independent software vendors can develop more reliable, end-to-end chargeback solutions on top of Hyper-V.

Key benefits

Hyper-V Resource Metering in Windows Server 2012 allows organizations to avoid the expense and complexity associated with building in-house metering solutions to track usage within specific business units. It enables hosting providers to quickly and cost-efficiently create a more advanced, reliable, usage-based billing solution that adjusts to the provider’s business model and strategy.

Use of network metering port ACLs

Enterprises pay for the Internet traffic in and out of their data centers, but not for the network traffic within their data centers. For this reason, providers generally consider Internet and intranet traffic separately for the purposes of billing. To differentiate between Internet and intranet traffic, providers can measure incoming and outgoing network traffic for any IP address range, by using network metering port ACLs.

Virtual machine metrics

Windows Server 2012 provides two options for administrators to obtain historical data on a client’s use of virtual machine resources: Hyper-V cmdlets in Windows PowerShell and the new APIs in the Virtualization WMI provider. These tools expose the metrics for the following resources used by a virtual machine during a specific period of time:

•Average CPU usage, measured in megahertz over a period of time.

•Average physical memory usage, measured in megabytes.

•Minimum memory usage (lowest amount of physical memory).

•Maximum memory usage (highest amount of physical memory).

•Maximum amount of disk space allocated to a virtual machine.

•Total incoming network traffic, measured in megabytes, for a virtual network adapter.

•Total outgoing network traffic, measured in megabytes, for a virtual network adapter.

Movement of virtual machines between Hyper-V hosts—for example, through live, offline, or storage migrations—does not affect the collected data.
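
A minimal Windows PowerShell sketch of this workflow, assuming a virtual machine named VM01 (a placeholder) on the Hyper-V host:

# Start collecting resource data for the virtual machine.
Enable-VMResourceMetering -VMName VM01

# Meter outbound traffic to a given address range (here, all IPv4 addresses) with a metering port ACL.
Add-VMNetworkAdapterAcl -VMName VM01 -Action Meter -Direction Outbound -RemoteIPAddress 0.0.0.0/0

# Later, read the aggregated CPU, memory, disk, and network figures.
Measure-VM -VMName VM01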

New in Hyper-V for Windows Server 2012 R2

What’s New in Hyper-V for Windows Server 2012 R2

Applies To: Windows Server 2012 R2

This topic explains the new and changed functionality of the Hyper-V role on Windows Server 2012 R2. For information about Hyper-V on Windows Server® 2016 Technical Preview, see What’s new in Hyper-V on Windows Server 2016 Technical Preview.


Role description

The Hyper-V role enables you to create and manage a virtualized computing environment by using virtualization technology that is built in to Windows Server 2012 R2. Hyper-V virtualizes hardware to provide an environment in which you can run multiple operating systems at the same time on one physical computer, by running each operating system on its own virtual machine. For more information about Hyper-V, see the Hyper-V overview.

New and changed functionality

The following table lists functionality in Hyper-V that is new for this release or has been changed.

Feature or functionality | New or updated
Shared virtual hard disk | New
Resize virtual hard disk | Updated
Storage Quality of Service | New
Live migrations | Updated
Virtual machine generation | New
Integration services | Updated
Export | Updated
Failover Clustering and Hyper-V | Updated
Enhanced session mode | New
Hyper-V Replica | Updated
Linux support | Updated
Management | Updated
Automatic Virtual Machine Activation | New
Hyper-V Networking | Updated

Shared virtual hard disk

Hyper-V in Windows Server 2012 R2 enables clustering virtual machines by using shared virtual hard disk (VHDX) files.

What value does this change add?

This feature is used to build a high availability infrastructure, and it is especially important for private cloud deployments and cloud-hosted environments that manage large workloads. Shared virtual hard disks enable multiple virtual machines to access the same virtual hard disk (VHDX) file, which provides shared storage for use by Windows Failover Clustering. The shared virtual hard disk files can be hosted on Cluster Shared Volumes (CSV) or on Server Message Block (SMB)-based Scale-Out File Server file shares.

What works differently?

This feature is new in Windows Server 2012 R2. It was not possible to cluster virtual machines by using a shared virtual hard disk in previous releases of Windows Server.

For more information, see Virtual Hard Disk Sharing Overview.

Resize virtual hard disk

Hyper-V storage has been updated to support resizing virtual hard disks while the virtual machine is running.

What value does this change add?

Resizing virtual hard disks while the virtual machine is running enables an administrator to perform configuration and maintenance operations on the virtual hard disks while the associated virtual machine is online and the virtual hard disk is in use.

What works differently?

Online virtual hard disk resizing is only available for VHDX files that are attached to a SCSI controller. The virtual hard disk size can be increased or decreased through the user interface while the virtual hard disk is in use.
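
A minimal sketch of an online resize, assuming a VHDX file at D:\VMs\data01.vhdx (a placeholder) that is attached to a SCSI controller of a running virtual machine:

# Grow the virtual hard disk to 80 GB while the virtual machine stays online.
Resize-VHD -Path 'D:\VMs\data01.vhdx' -SizeBytes 80GB

# Check the new virtual size.
Get-VHD -Path 'D:\VMs\data01.vhdx' | Select-Object Path, VhdFormat, Size, FileSize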

For more information, see Online Virtual Hard Disk Resizing Overview.

Storage Quality of Service

Hyper-V in Windows Server 2012 R2 includes storage Quality of Service (QoS). Storage QoS enables you to manage storage throughput for virtual hard disks that are accessed by your virtual machines.

What value does this change add?

Storage QoS enables you to specify the maximum and minimum I/O loads in terms of I/O operations per second (IOPS) for each virtual disk in your virtual machines. Storage QoS ensures that the storage throughput of one virtual hard disk does not impact the performance of another virtual hard disk on the same host.

What works differently?

This feature is new in Windows Server 2012 R2. It was not possible to configure storage QoS parameters for your virtual hard disks in previous releases of Windows Server.

For more information, see Storage Quality of Service for Hyper-V.

Live migrations

Hyper-V live migration has been updated with the following capabilities.

Improved performance

Hyper-V live migration has been updated to allow the administrator to select the optimal performance options when moving virtual machines to a different server.

What value does this change add?

In larger scale deployments, such as private cloud deployments or cloud hosting providers, this update can reduce overhead on the network and CPU usage in addition to reducing the amount of time for a live migration. Hyper-V administrators can configure the appropriate live migration performance options based on their environment and requirements. The following live migration performance options are now available.

Option | Description
TCP/IP | The memory of the virtual machine is copied to the destination server over a TCP/IP connection. This is the same method that is used in Hyper-V in Windows Server 2012.
Compression | The memory content of the virtual machine that is being migrated is compressed and then copied to the destination server over a TCP/IP connection. This is the default setting in Hyper-V in Windows Server 2012 R2.
SMB 3.0 protocol | The memory content of the virtual machine is copied to the destination server over an SMB 3.0 connection. SMB Direct is used when the network adapters on the source and destination servers have Remote Direct Memory Access (RDMA) capabilities enabled. SMB Multichannel automatically detects and uses multiple connections when a proper SMB Multichannel configuration is identified.

For more information, see Improve Performance of a File Server with SMB Direct.
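
A sketch of selecting the performance option host-wide with Windows PowerShell, assuming the VirtualMachineMigrationPerformanceOption parameter available in the Windows Server 2012 R2 Hyper-V module:

# Use compression (the Windows Server 2012 R2 default) for live migration traffic on this host.
Set-VMHost -VirtualMachineMigrationPerformanceOption Compression

# Or prefer SMB so that SMB Direct (RDMA) and SMB Multichannel can be used when available.
Set-VMHost -VirtualMachineMigrationPerformanceOption SMB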

Cross-version live migrations

Hyper-V live migration has been updated to support migrating Hyper-V virtual machines in Windows Server 2012 to Hyper-V in Windows Server 2012 R2.

What value does this change add?

Upgrading to a new version of Windows Server no longer requires downtime to the virtual machines.

Hyper-V administrators can move Hyper-V virtual machines in Windows Server 2012 to Hyper-V in Windows Server 2012 R2. Moving a virtual machine to a down-level server running Hyper-V is not supported.

What works differently?

When moving a virtual machine, the specified destination server can now be a computer running Windows Server 2012 R2. This applies to a move that is initiated in Hyper-V Manager or when using the Move-VM Windows PowerShell cmdlet.
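
As a sketch, a cross-version move with the Move-VM cmdlet might look like the following; the virtual machine, destination host, and path names are placeholders.

# Move VM01 from a Windows Server 2012 host to a Windows Server 2012 R2 host, taking its storage along.
Move-VM -Name VM01 -DestinationHost HV2012R2-01 -IncludeStorage -DestinationStoragePath 'D:\VMs\VM01'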

Virtual machine generation

Virtual machine generation determines the virtual hardware and functionality that is presented to the virtual machine.

What value does this change add?

Hyper-V in Windows Server 2012 R2 includes two supported virtual machine generations.

•Generation 1 Provides the same virtual hardware to the virtual machine as in previous versions of Hyper-V.

•Generation 2 Provides the following new functionality on a virtual machine:

- Secure Boot (enabled by default)

- Boot from a SCSI virtual hard disk

- Boot from a SCSI virtual DVD

- PXE boot by using a standard network adapter

- UEFI firmware support

Note

IDE drives and legacy network adapter support has been removed.

The following guest operating systems are supported as a generation 2 virtual machine.

•Windows Server 2012

•Windows Server 2012 R2

•64-bit versions of Windows 8

•64-bit versions of Windows 8.1

What works differently?

When creating a new virtual machine in Hyper-V Manager or by using the New-VM Windows PowerShell cmdlet, you need to specify a virtual machine generation.

Note

After a virtual machine has been created, you cannot change its generation.
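
For example, a minimal sketch of creating a generation 2 virtual machine from Windows PowerShell; the name, memory size, and virtual hard disk path are placeholders.

# Create a generation 2 virtual machine with a new 40 GB VHDX boot disk.
New-VM -Name VM01 -Generation 2 -MemoryStartupBytes 1GB `
    -NewVHDPath 'D:\VMs\VM01\boot.vhdx' -NewVHDSizeBytes 40GB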

For more information, see Generation 2 Virtual Machine Overview.

Integration services

Hyper-V integration services are updated with a new service that allows Hyper-V administrators to copy files to the virtual machine while the virtual machine is running without using a network connection.

What value does this change add?

In previous versions of Hyper-V, a Hyper-V administrator may have needed to shut down a virtual machine to copy files to it. A new Hyper-V integration service has been added that allows the Hyper-V administrator to copy files to a running virtual machine without using a network connection.

What works differently?

A Windows PowerShell cmdlet, Copy-VMFile, also has been added for this new feature. The following services must be enabled for this feature to work.

•Guest services must be selected on the Integration Services property page of the virtual machine. By default, this setting is not selected.

Alternatively, you can enable Guest services by using the Enable-VMIntegrationService Windows PowerShell cmdlet, as shown in the sketch after the note below.

•The Hyper-V Guest Service Interface service in the guest operating system must be running.

Note

The Hyper-V Guest Service Interface service enters a running state when the Guest services service is selected on the Integration Services property page of the virtual machine. To disable this feature in the guest operating system, the guest operating system administrator can set the Hyper-V Guest Service Interface service startup type to Disabled.
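
A minimal sketch of enabling the service and copying a file into a running guest; the virtual machine name VM01 and the file paths are placeholders.

# Turn on the Guest services integration item for the virtual machine.
Enable-VMIntegrationService -VMName VM01 -Name 'Guest Service Interface'

# Copy a file from the host into the running guest without using a network connection.
Copy-VMFile -Name VM01 -SourcePath 'C:\Tools\agent.msi' -DestinationPath 'C:\Temp\agent.msi' `
    -FileSource Host -CreateFullPath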

Export

Hyper-V is updated to support exporting a virtual machine or a virtual machine checkpoint while the virtual machine is running. You no longer need to shut down a virtual machine before exporting.

What value does this change add?

Exporting a virtual machine while the virtual machine is running allows the administrator to export the virtual machine without incurring any downtime.

This assists in the following scenarios:

•Duplicating an existing production environment or part of an environment to a test lab.

•Testing a planned move to a cloud hosting provider or to a private cloud.

•Troubleshooting an application issue.

What works differently?

The Export option is now available as an action for a running virtual machine from Hyper-V Manager. The following Windows PowerShell cmdlets can be used on a running virtual machine: Export-VM and Export-VMSnapshot.
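
For example, a minimal sketch; the virtual machine name, checkpoint name, and export path are placeholders.

# Export a running virtual machine.
Export-VM -Name VM01 -Path 'E:\Exports'

# Export a specific checkpoint of a running virtual machine.
Export-VMSnapshot -VMName VM01 -Name 'Before-patch' -Path 'E:\Exports'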

Failover Clustering and Hyper-V

Using Windows Failover Clustering with Hyper-V enables virtual network adapter protection and virtual machine storage protection.

What value does this change add?

Hyper-V has been enhanced to detect physical storage failures on storage devices that are not managed by Windows Failover Clustering (SMB 3.0 file shares). Storage failure detection can detect the failure of a virtual machine boot disk or any additional data disks associated with the virtual machine. If such an event occurs, Windows Failover Clustering ensures that the virtual machine is relocated and restarted on another node in the cluster. This eliminates situations where unmanaged storage failures would not be detected and where virtual machine resources may become unavailable.

Hyper-V and Windows Failover Clustering are enhanced to detect network connectivity issues for virtual machines. If the physical network assigned to the virtual machine suffers a failure (such as a faulty switch port or network adapter, or a disconnected network cable), the Windows Failover Cluster will move the virtual machine to another node in the cluster to restore network connectivity.

Enhanced session mode

Virtual Machine Connection in Hyper-V now allows redirection of local resources in a Virtual Machine Connection session.

What value does this change add?

Virtual Machine Connection enhances the interactive session experience provided for Hyper-V administrators who want to connect to their virtual machines. It provides functionality that is similar to a remote desktop connection when you are interacting with a virtual machine.

In previous versions of Hyper-V, Virtual Machine Connection provided redirection of only the virtual machine screen, keyboard, and mouse with limited copy functionality. To get additional redirection abilities, a remote desktop connection to the virtual machine could be initiated, but this required a network path to the virtual machine.

The following local resources can be redirected when using Virtual Machine Connection.

•Display configuration

•Audio

•Printers

•Clipboard

•Smart cards

•Drives

•USB devices

•Supported Plug and Play devices

What works differently?

This feature is enabled by default in Client Hyper-V, and it is disabled by default on Hyper-V in Windows Server.

The following guest operating systems support enhanced session mode connections:

•Windows Server 2012 R2

•Windows 8.1

For additional information, see Use local resources on Hyper-V virtual machine with VMConnect.

Hyper-V Replica

Hyper-V Replica adds the following new features in Windows Server 2012 R2:

•You can configure extended replication. In extended replication, your Replica server forwards information about changes that occur on the primary virtual machines to a third server (the extended Replica server). After a planned or unplanned failover from the primary server to the Replica server, the extended Replica server provides further business continuity protection. As with ordinary replication, you configure extended replication by using Hyper-V Manager, Windows PowerShell, or WMI.

•The frequency of replication, which previously was a fixed value, is now configurable. You can also access recovery points for 24 hours. Previous versions had access to recovery points for only 15 hours.

Linux support

As part of Microsoft’s continuing commitment to making Hyper-V the best all-around virtualization platform for hosting providers, there are now more built-in Linux Integration Services for newer distributions, and more Hyper-V features are supported for Linux virtual machines.

What value does this change add?

Linux support for Hyper-V in Windows Server 2012 R2 has now been enhanced in the following ways:

•Improved video – a Hyper-V-specific video driver is now included for Linux virtual machines to provide an enhanced video experience with better mouse support.

•Dynamic Memory – Dynamic Memory is now fully supported for Linux virtual machines, including both hot-add and remove functionality. This means you can now run Windows and Linux virtual machines side-by-side on the same host machine while using Dynamic Memory to ensure fair allocation of memory resources to each virtual machine on the host.

•Online VHDX resize – virtual hard disks attached to Linux virtual machines can be resized while the virtual machine is running.

•Online backup – you can now back up running Linux virtual machines to Windows Azure using the Windows Azure Online Backup capabilities of the in-box Windows Server Backup utility, System Center Data Protection Manager, or any third-party backup solution that supports backing up Hyper-V virtual machines.

What works differently?

The Linux Integration Services are built into many distributions now, so you do not have to download and install LIS separately. For more information, see: Linux and FreeBSD Virtual Machines on Hyper-V.

Management

You can manage Hyper-V in Windows Server 2012 from a computer running Windows Server 2012 R2 or Windows 8.1. In previous releases, you could not connect to and manage a down-level version of Hyper-V. A workaround was to create a remote desktop session to a down-level server running Hyper-V and run Hyper-V Manager from within the remote desktop session. This workaround required that Remote Desktop Services was running and properly configured, and it was not viable when Hyper-V was installed on a Server Core installation.

What value does this change add?

You can manage Hyper-V in Windows Server 2012 from Hyper-V Manager in Windows Server 2012 R2 or Windows 8.1. This enables you to upgrade your management workstation to the latest version of the operating system and to connect and manage Hyper-V in Windows Server 2012.

You can deploy the latest version of Hyper-V without upgrading the management workstation immediately.

Note

When connecting to Hyper-V in Windows Server 2012 R2 from a computer running Windows Server 2012 or Windows 8, you can only perform actions that are supported by Hyper-V in Windows Server 2012.

Automatic Virtual Machine Activation

Automatic Virtual Machine Activation (AVMA) lets you install virtual machines on a computer where Windows Server 2012 R2 is properly activated without having to manage product keys for each individual virtual machine, even in disconnected environments. AVMA binds the virtual machine activation to the licensed virtualization server and activates the virtual machine when it starts. AVMA also provides real-time reporting on usage, and historical data on the license state of the virtual machine. Reporting and tracking data is available on the virtualization server.

What value does this change add?

AVMA requires a virtualization server running Windows Server 2012 R2 Datacenter. The operating system on the guest virtual machine must be Windows Server 2012 R2 Datacenter, Windows Server 2012 R2 Standard, or Windows Server 2012 R2 Essentials.

Datacenter managers can use AVMA to do the following:

•Activate virtual machines in remote locations

•Activate virtual machines with or without an Internet connection

•Track virtual machine usage and licenses from the virtualization server, without requiring any access rights on the virtual machines

What works differently?

There are no product keys to manage and no stickers to read on the servers. The virtual machine is activated and continues to work even when it is migrated across an array of virtualization servers.

Service Provider License Agreement (SPLA) partners and other hosting providers do not have to share product keys with tenants or access a tenant’s virtual machine to activate it. Virtual machine activation is transparent to the tenant when AVMA is used. Hosting providers can use the server logs to verify license compliance and to track client usage history.

For more information, see Automatic Virtual Machine Activation.

Use local resources on Hyper-V virtual machine with VMConnect

 

When you use Virtual Machine Connection (VMConnect), generation 2 virtual machines that run a Windows operating system can access a computer’s local resources, like a removable USB flash drive. To make this happen, turn on enhanced session mode on the Hyper-V host, use VMConnect to connect to the virtual machine, and before you connect, choose the local resource that you want to use. When you turn on enhanced session mode, you can also resize the VMConnect window.

Enhanced session mode isn’t available for generation 1 virtual machines or for virtual machines that run non-Windows operating systems. For virtual machines that run Ubuntu, see

Turn on enhanced session mode on Hyper-V host

If your Hyper-V host runs Windows 10, Windows 8, or Windows 8.1, you might not have to go through the following steps to turn on enhanced session mode. It's turned on by default. But if your host runs Windows Server 2016, Windows Server 2012, or Windows Server 2012 R2, you must turn on enhanced session mode to use it. It is turned off by default for those operating systems.

To turn on enhanced session mode:

  1. Connect to the computer that hosts the virtual machine.
  2. In Hyper-V Manager, select the host's computer name.
  3. Select Hyper-V Settings.
  4. Under Server, select Enhanced Session Mode Policy.
  5. Select the Allow enhanced session mode check box.
  6. Under User, select Enhanced Session Mode.
  7. Select the Allow enhanced session mode check box.
  8. Click OK.
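
If you prefer Windows PowerShell to the Hyper-V Manager steps above, a minimal sketch (run on the Hyper-V host) is:

# Allow enhanced session mode connections on this host.
Set-VMHost -EnableEnhancedSessionMode $true

# Verify the setting.
Get-VMHost | Select-Object Name, EnableEnhancedSessionMode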

Choose the local resource that you want to use

You can choose a local resource like a printer, the clipboard or a local drive that’s on the computer that you’re using to connect to the VM.

To select a local resource like a drive:

  1. Open VMConnect.
  2. Select the virtual machine that you want to connect to.
  3. Click Show options.


Requirements for resizing a virtual hard disk (Hyper-V)

• A server capable of running Hyper-V. The server must have processor support for hardware virtualization. The Hyper-V role must be installed.

• A user account that is a member of the local Hyper-V Administrators group or the Administrators group.

The following functionality is required for resizing a virtual hard disk:

• VHDX – the ability to expand and shrink virtual hard disks is exclusive to virtual hard disks that are using the .vhdx file format. Online resizing is supported for VHDX disk types, including fixed, differencing, and dynamic disks. Virtual hard disks that use the .vhd file format are not supported for resizing operations.

• SCSI controller – the ability to expand or shrink the capacity of a virtual hard disk is exclusive to .vhdx files that are attached to a SCSI controller. VHDX files that are attached to an IDE controller are not supported.

Virtual Disk Types

There are considerations for using virtual disks, and the following types of virtual disks are available:

• Fixed—The VHD image file is pre-allocated on the backing store for the maximum size requested.

• Expandable—Also known as “dynamic”, “dynamically expandable”, and “sparse”, the VHD image file uses only as much space on the backing store as needed to store the actual data the virtual disk currently contains. When creating this type of virtual disk, the VHD API does not test for free space on the physical disk based on the maximum size requested, therefore it is possible to successfully create a dynamic virtual disk with a maximum size larger than the available physical disk free space.
Note The maximum size of a dynamic virtual disk is 2,040 GB.

• Differencing—A parent virtual disk is used as the basis of this type, with any subsequent writes written to the virtual disk as differences to the new differencing VHD image file, and the parent VHD image file is not modified. For example, if you have a clean-install system boot operating system virtual disk as a parent and designate the differencing virtual disk as the current virtual disk for the system to use, then the operating system on the parent virtual disk stays in its original state for quick recovery or for quickly creating more boot images based on additional differencing virtual disks.
Note The maximum size of a differencing virtual disk is 2,040 GB. All virtual disk types have a minimum size of 3 MB.
With pass-through disks, you lose all of the benefits of VHD files, such as portability, snapshotting, and thin provisioning. Performance is marginally better than that of VHD files.

HYPER-V 2012 R2


With the release of Windows Server 2012 R2, and with it Hyper-V 2012 R2, many of you are probably wondering how to migrate your existing environments to 2012 R2.
For the first time, Microsoft has stated that migration (from 2012) can be achieved with zero downtime. While this is possible, there are certain requirements you must meet to achieve it.
There are basically three ways in which you can migrate to Hyper-V 2012 R2, so I'll outline them for you.

The first and easiest way is to live migrate the VMs to the new server. However, live migration is only possible between hosts using shared storage. You can't add a Windows Server 2012 R2 server to a 2012 cluster, so the only way you can achieve this is if you're using an SMB share as your storage.

Pros:

  • Extremely fast migration
  • No downtime

Cons:

  • Requires you to be using SMB storage

The second option is to perform a shared nothing live migration. In this scenario all VMs are basically replicated to the new server, and after replication they are “live migrated” in the background.

Pros:

  • No downtime

Cons:

  • Time consuming, as all VMs have to be replicated to the new server.
  • If using block-level replication, requires a new LUN for the new server; the existing LUN can't be connected concurrently to new and old servers running different versions of Hyper-V.
  • If using block-level shared storage (iSCSI, Fibre Channel), each VM is temporarily going to require double the space during initial replication.

The third and final option is to perform a cluster migration. This option is, of course, only valid if you have a cluster and are moving to a new cluster.
You basically perform a Copy Cluster Roles operation using Windows Failover Clustering and run the wizard.

Pros:

  • All cluster resources and settings are transferred to the new cluster.
  • Does not require additional LUNs or space provisioning.

Cons:

  • Requires downtime of the VMs while transferring the LUN to the new server.

These are the three options for migrating VMs to a new server. There is also a fourth option where you could just manually copy (or export) all VMs to the new server and then import them via Hyper-V Manager, but that isn't really a migration.

Hope this helps you understand the basics of how to migrate your VMs to Hyper-V 2012 R2, and again we see the benefits of using SMB shares as Hyper-V storage.
Happy Migrating.

WHAT IS STORAGE SPACES DIRECT IN WINDOWS 2016?

Windows Server 2016 will continue to focus on Software Defined Storage. In Windows Server 2012, Storage Spaces was introduced as a tool that allows pooling disk resources together to create a large and redundant pool of disk space (similar to RAID, but without certain limitations, such as requiring all disks to be the same size). Storage Spaces could also be used in a cluster environment as long as the storage space was based on a JBOD with direct SAS connectivity to both nodes in the cluster.

In Windows Server 2016 we're receiving Storage Spaces Direct. This technology will allow us to pool multiple local DAS disks from multiple servers into one pool. That's correct: local disks from multiple servers into one large shared pool. The pool can be used in a failover cluster for storing your Hyper-V VMs.

Just think: you can have 3 servers, each with 3 TB of local disk space, all pooled together to create a large pool of clustered disk space. Now that's COOL!
The pool will be fault tolerant, and the loss of a single server will not bring down the pool itself.

The possibilities are endless. Smaller environments will definitely be able to create clusters without purchasing expensive storage appliances, and data can be stretched to a remote site for DR scenarios. Yes, this is also totally supported.

NEW DE-DUPE FEATURES COMING TO WINDOWS 2016

In its current beta, Windows Server 2016 offers new De-Dupe features, and rumors say that more are to come.

What we currently know is the following:

1. Volume size of up to 64 TB will be supported.

In Windows Server 2012 R2 the recommended limit was 10 TB, mainly due to processing rates. The new De-Dupe engine supports multiple threads to improve performance.

2. File sizes up to 1 TB are good.

Although supported in Windows Server 2012, this again was not recommended because of performance issues. In Windows Server 2016, 1 TB file sizes are good to go for De-Dupe.

3. New type of DE-Dupe scenario – Backup.

Windows Server 2012 R2 supported general file server and virtualization (VDI) De-Dupe.

Not sure exactly what the improvement here is, but we’re promised better performance for De-Duping backup files. Can’t wait to try it out with Veeam.