
Top Five Hyper-V Best Practices

The NetApp team published a very interesting article on Hyper-V best practices (Source).

Microsoft® Hyper-V™ virtualization technology has been shipping for more than a year. Tech OnTap profiled the use of Hyper-V with NetApp® technology in several past articles, including an overview article and a detailed case study of one customer’s experiences.

NetApp has been involved with hundreds of Hyper-V deployments and has developed a detailed body of best practices for Hyper-V deployments on NetApp. Tech OnTap asked me to highlight the top five best practices for Hyper-V on NetApp, with special attention to the recently released Hyper-V Server 2008 R2.

  • Network configuration
  • Setting the correct iGroup and LUN protocol type
  • Virtual machine disk alignment
  • Using cluster shared volumes (CSVs)
  • Getting the most from NetApp storage software and tools

You can find full details on these items and much more in NetApp Storage Best Practices for Microsoft Virtualization, which has been updated to include Hyper-V R2.

BP #1: Network Configuration in Hyper-V Environments

There are two important best practices to mention when it comes to network configuration:

  • Be sure to provide the right number of physical network adapters on Hyper-V servers.
  • Take advantage of the new network features that Hyper-V R2 supports if at all possible.

Physical network adapters. Failure to configure enough network connections can make it appear as though you have a storage problem, particularly when using iSCSI. Smaller environments require a minimum of two or three network adapters, while larger environments require at least four or five. You may require far more. Here’s why:

  • Management. Microsoft recommends a dedicated network adapter for Hyper-V server management.
  • Virtual machines. Virtual network configurations of the external type require a minimum of one network adapter.
  • IP storage. Microsoft recommends that IP storage communication have a dedicated network, so one adapter is required and two or more are necessary to support multipathing.
  • Windows failover cluster. Windows® failover cluster requires a private network.
  • Live migration. This new Hyper-V R2 feature supports the migration of running virtual machines between Hyper-V servers. Microsoft recommends configuring a dedicated physical network adapter for live migration traffic.
  • Cluster shared volumes. Microsoft recommends a dedicated network to support the communications traffic created by this new Hyper-V R2 feature.
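The per-role adapter minimums listed above can be tallied to size a host. The sketch below is illustrative only: the role names and counts are assumptions drawn from this article's recommendations, not from the tables (whose images did not survive).

```python
# Illustrative tally of dedicated physical NICs per role, following the
# recommendations above. Counts are assumptions, not the official tables.
NIC_ROLES = {
    "management": 1,        # dedicated adapter for Hyper-V management
    "virtual_machines": 1,  # external virtual network needs >= 1 adapter
    "ip_storage": 2,        # dedicated network; 2+ to support multipathing
    "cluster_private": 1,   # Windows failover cluster private network
    "live_migration": 1,    # dedicated adapter for live migration traffic
    "csv": 1,               # dedicated network for CSV traffic
}

def min_adapters(roles):
    """Sum the per-role minimums for the roles a given host uses."""
    return sum(NIC_ROLES[r] for r in roles)

# A standalone host needs only management, VM, and IP storage networks.
print(min_adapters(["management", "virtual_machines", "ip_storage"]))  # 4
# A clustered R2 host using live migration and CSV needs all six roles.
print(min_adapters(NIC_ROLES))  # 7
```

This is why even a modest clustered deployment quickly needs four or more physical adapters.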

The following tables will help you choose the right number of physical adapters.

Table 1) Standalone Hyper-V servers.


Table 2) Clustered Hyper-V servers.


Table 3) Clustered Hyper-V servers using live migration.


Table 4) Clustered Hyper-V servers using live migration and CSV.


New network features. Windows Server® 2008 R2 supports a number of new networking features. NetApp recommends configuring these features on your Hyper-V servers and taking advantage of them whenever possible. Be aware that some or all of them may not be supported by your server and network hardware. (See sidebar for details.)

BP #2: Selecting the Correct iGroup and LUN Protocol Type

When provisioning a NetApp LUN for use with Hyper-V, you must select the appropriate initiator groups (iGroups) and the correct LUN type. Incorrect settings can complicate deployment and degrade performance.

Initiator groups. FCP and iSCSI storage must be masked so that the appropriate Hyper-V server and virtual machines (VMs) can connect to them. With NetApp storage, LUN masking is handled by iGroups.

  • When dealing with individual Hyper-V servers or VMs, you should create an iGroup for each system and for each protocol (FC and iSCSI) that system uses to connect to the NetApp storage system.
  • When dealing with a cluster of Hyper-V servers or VMs, you should create an individual iGroup for each protocol that the cluster of systems uses to connect to the NetApp storage system.

It’s easier to manage iGroups by using NetApp SnapDrive®. SnapDrive reduces confusion because it knows which OS you are using and automatically configures the correct OS type for your iGroups.
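The two iGroup rules above can be expressed as a simple layout plan. This is a hypothetical sketch: the names and data shapes are invented for illustration, and real iGroups are created with NetApp tools such as SnapDrive, not with this code.

```python
# Hypothetical sketch of the iGroup layout rules described above:
# standalone systems get one iGroup per system per protocol, while a
# cluster gets one iGroup per protocol containing every node.
def plan_igroups(hosts, protocols, clustered):
    if clustered:
        # One iGroup per protocol, listing every cluster node's initiators.
        return [{"igroup": f"cluster_{proto}", "members": list(hosts)}
                for proto in protocols]
    # One iGroup per host per protocol.
    return [{"igroup": f"{host}_{proto}", "members": [host]}
            for host in hosts for proto in protocols]

# Two standalone hosts using FC and iSCSI need four iGroups...
print(len(plan_igroups(["hv1", "hv2"], ["fcp", "iscsi"], clustered=False)))  # 4
# ...but the same two hosts as a cluster need only two.
print(len(plan_igroups(["hv1", "hv2"], ["fcp", "iscsi"], clustered=True)))   # 2
```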

LUN types. The LUN Protocol Type setting determines the on-disk layout of the LUN. It is important to specify the correct LUN type to make sure that the LUN aligns properly with the file system it contains. (See the following tip for an explanation.) This issue is not unique to NetApp storage. Any storage vendor or host platform may exhibit this problem.

Tip: The LUN type you specify depends on your OS, OS version, disk type, and Data ONTAP® version. For complete information on LUN types for different operating systems, refer to the Block Access Management Guide for your version of Data ONTAP.

The following tables will help you choose the correct LUN type.

Table 5) LUN types for use with Data ONTAP 7.3.1 and later.


Table 6) LUN types for use with Data ONTAP 7.2.5 through 7.3.0.


BP #3: Virtual Machine Disk Alignment

Tip: This best practice is closely tied to the previous one, because failure to set the correct LUN type will result in misalignment. The problem of virtual machine disk alignment is not unique to Hyper-V, nor is it unique to NetApp storage. It exists in any virtual environment on any storage platform.

This problem occurs because, by default, many guest operating systems, including Windows 2000 and 2003 and various Linux® distributions, start the first primary partition at sector (logical block) 63. This behavior leads to misaligned file systems because the partition does not begin at a block boundary. As a result, every time the virtual machine wants to read a block, two blocks have to be read from the underlying LUN, doubling the I/O burden.
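The arithmetic behind this is straightforward: with 512-byte sectors, a partition starting at sector 63 begins at byte 32,256, which is not a multiple of the 4 KiB block size most storage systems (including NetApp WAFL) use internally. The sketch below demonstrates the check; the 512-byte sector and 4 KiB block sizes are the standard values assumed here.

```python
# Why sector 63 misaligns: a partition is aligned only when its byte
# offset is an exact multiple of the storage system's 4 KiB block size.
SECTOR_BYTES = 512
BLOCK_BYTES = 4096  # 4 KiB storage block (e.g., NetApp WAFL)

def is_aligned(start_sector):
    """True if the partition's byte offset falls on a block boundary."""
    return (start_sector * SECTOR_BYTES) % BLOCK_BYTES == 0

print(63 * SECTOR_BYTES)  # 32256 -> not a multiple of 4096
print(is_aligned(63))     # False: the legacy default start is misaligned
print(is_aligned(2048))   # True: a 1 MiB offset is block-aligned
```

Newer operating systems such as Windows Server 2008 avoid the problem by starting the first partition at a 1 MiB offset (sector 2048) by default.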


Figure 1) Virtual disk misalignment.

The situation becomes even more complicated when virtual machines are managed as files within the Hyper-V server’s file system, because it introduces another layer that must be properly aligned. This is why selecting the LUN type is so critical.

  • NetApp strongly recommends correcting the offset for all VM templates, as well as any existing VMs that are misaligned and are experiencing an I/O performance issue. (Misaligned VMs with low I/O requirements may not benefit from the effort to correct the misalignment.)
  • When using virtual hard disks (VHDs), NetApp recommends using fixed-size VHDs in your Microsoft Hyper-V virtual environment wherever possible, especially in production environments, because proper file system alignment can be reliably achieved only on fixed-size VHDs. Avoid the use of dynamically expanding and differencing VHDs where possible, because file system alignment can never be reliably achieved with these VHD types.

The best practices guide provides complete procedures for identifying and correcting alignment problems.

BP #4: Using Cluster Shared Volumes

Cluster shared volumes are a completely new feature in Hyper-V R2. If you’re familiar with VMware®, you can think of a CSV as being somewhat akin to VMFS (although there are significant differences).

A CSV is a “disk” that is connected to the Hyper-V parent partition and shared between multiple Hyper-V server nodes configured as part of a Windows failover cluster. A CSV can be created only from shared storage, such as a LUN provisioned on a NetApp storage system. All Hyper-V server nodes in the failover cluster must be connected to the shared storage system.

CSVs have many advantages, including:

  • Shared namespace. CSVs do not need to be assigned a drive letter, reducing restrictions and eliminating the need to manage GUIDs and mount points.
  • Simplified storage management. More VMs share fewer LUNs.
  • Storage efficiency. Pooling VMs on the same LUN simplifies capacity planning and reduces the amount of space reserved for future growth, because it is no longer set aside on a per-VM basis.

CSV Dynamic I/O Redirection allows storage and network I/O to be redirected within a failover cluster if a primary pathway is interrupted. The following recommendations apply specifically to the use of CSVs and are intended to minimize the impact of I/O redirection:

  • In addition to the NICs installed in the Hyper-V server for management, VMs, IP storage, and more (see Best Practice #1), NetApp recommends that you dedicate a physical network adapter to CSV traffic only. The physical network adapter should be a gigabit Ethernet (GbE) adapter at a minimum. If you are running large servers (16+ logical CPUs, 64+ GB of RAM), plan to use CSVs extensively, plan to dynamically balance VMs across the cluster by using SCVMM, and/or plan to use live migration extensively, you should consider 10 Gigabit Ethernet for CSV traffic.
  • NetApp strongly recommends that you configure MPIO on all Hyper-V cluster nodes, to minimize the opportunity for CSV I/O redirection to occur. CSV I/O Redirection is not a substitute for multipathing or for proper planning of storage layout and networking, which will minimize single points of failure in production environments.
  • Once you recognize that I/O redirection is occurring on a CSV, you may want to live-migrate all affected VMs from the affected cluster node to another Hyper-V cluster node to restore optimal performance while the I/O pathway problems are diagnosed and repaired.

The best practices guide describes additional best practices that pertain specifically to backup and VM provisioning with CSVs.

BP #5: NetApp Storage Software and Tools

NetApp provides a variety of storage software and tools that can simplify operations in a Hyper-V environment. With the release of Hyper-V R2, minimum requirements have changed for many software elements:

  • At a minimum, NetApp recommends using Data ONTAP 7.3 or later with Hyper-V virtual environments.
  • The Windows Host Utilities Kit modifies system settings so that the Hyper-V parent or child OS operates with the highest reliability possible when connected to NetApp storage. NetApp strongly recommends that the Windows Host Utilities Kit be installed on all Hyper-V servers. Windows Server 2008 requires Windows Host Utilities Kit 5.1 or later. Windows Server 2008 R2 (Hyper-V R2) requires Windows Host Utilities Kit 5.2 or later.
  • Highly available storage configurations require the appropriate version of the Data ONTAP DSM for Windows MPIO. Windows Server 2008 requires Data ONTAP DSM 3.2R1 or later. Windows Server 2008 R2 requires Data ONTAP DSM 3.3.1 or later. You should set the least queue depth policy when using MPIO. (This is the default setting.)
  • NetApp recommends NetApp SnapDrive on all Hyper-V and SCVMM servers to enable maximum functionality and support of key features. For Microsoft Windows Server 2008 installations where the Hyper-V role is enabled and for Microsoft Hyper-V Server 2008, install NetApp SnapDrive for Windows 6.0 or later. For Microsoft Windows Server 2008 R2 installations where the Hyper-V role is enabled and for Microsoft Hyper-V Server 2008 R2 to support:
    • Existing features (no new R2 features), install NetApp SnapDrive for Windows 6.1P2 or later.
    • New features (all new R2 features), install NetApp SnapDrive for Windows 6.2 or later.
  • NetApp SnapDrive for Windows 6.0 or later can also be installed in supported child operating systems that include Microsoft Windows Server 2003, Microsoft Windows Server 2008, and Microsoft Windows Server 2008 R2.
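The version requirements above can be summarized as a small lookup table. The sketch below simply transcribes the minimums stated in this section into a data structure; version strings such as "3.2R1" and "6.1P2" are treated as opaque labels, not parsed or compared.

```python
# Minimum NetApp software versions per host OS, transcribed from the
# recommendations above.
MIN_VERSIONS = {
    "Windows Server 2008": {
        "host_utilities": "5.1",
        "dsm_mpio": "3.2R1",
        "snapdrive": "6.0",
    },
    "Windows Server 2008 R2": {
        "host_utilities": "5.2",
        "dsm_mpio": "3.3.1",
        "snapdrive_existing_features": "6.1P2",
        "snapdrive_new_r2_features": "6.2",
    },
}

def required(os_name, component):
    """Look up the minimum version for a component on a given host OS."""
    return MIN_VERSIONS[os_name][component]

print(required("Windows Server 2008 R2", "dsm_mpio"))  # 3.3.1
print(required("Windows Server 2008", "snapdrive"))    # 6.0
```

As the article notes, always confirm current support against the NetApp Interoperability Matrix rather than a static table like this one.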

For the latest information on supported software versions, refer to the NetApp Interoperability Matrix. (You must have a NOW™ (NetApp on the Web) account to access this resource.)


If you pay attention to the best practices I’ve outlined here, you can avoid most of the pitfalls of configuring your Hyper-V environment. For complete details on these procedures and much more, refer to the Hyper-V best practices guide and Hyper-V implementation guide.

