Archive

Posts Tagged ‘networking’

CTP of System Center 2012 Service Pack 1 (SP1) Documentation

Today I found that the Microsoft VMM team has published an interesting new document, “CTP of System Center 2012 Service Pack 1 (SP1) Documentation”.

This download provides technical documentation for the new VMM features in the community technology preview (CTP) of System Center 2012 SP1.

This guide provides a step-by-step walkthrough that enables you to test the new features of Virtual Machine Manager (VMM) in the community technology preview (CTP) of System Center 2012 Service Pack 1 (SP1). This CTP is designed to be used with Windows Server® "8" Beta and to take advantage of new functionality provided by Windows Server "8" Beta.

What’s New in System Center 2012 SP1 – Virtual Machine Manager
Virtual Machine Manager (VMM) in the community technology preview (CTP) of System Center 2012 SP1 provides the following new features:
•    Network virtualization
•    VHDX support
•    Support for file shares that use the Server Message Block (SMB) 2.2 protocol
•    Live migration enhancements

Network Virtualization
VMM in the CTP release of System Center 2012 SP1 provides support for the network virtualization capabilities available in Windows Server "8" Beta.
Network virtualization provides the ability to run multiple virtual network infrastructures, potentially with overlapping IP addresses, on the same physical network. With network virtualization, each virtual network infrastructure operates as if it is the only one running on the shared network infrastructure. This will allow two different business groups using VMM to use the same IP addressing scheme without conflict. In addition, network virtualization provides isolation, so that only those virtual machines on a specific virtual network infrastructure can communicate with each other.

Network virtualization in Windows Server "8" Beta is designed to remove the constraints of VLAN and hierarchical IP address assignment for virtual machine provisioning. This enables flexibility in virtual machine placement, because the virtual machine can keep its IP address regardless of which host it is placed on. Placement is not necessarily limited by physical IP subnet hierarchies or VLAN configurations.
To virtualize the network in Windows Server "8" Beta, each virtual machine is assigned two IP addresses:

•    A customer address, which is visible to the virtual machine and is used by customers to communicate with the virtual machine.
•    A provider address, which is used by the Hyper-V computer that is hosting the virtual machine, but is not visible to the virtual machine.

VMM in the CTP release of System Center 2012 SP1 creates the necessary IP address mappings for virtual machines to take advantage of the network virtualization capabilities in Windows Server "8" Beta. VMM uses an IP address pool associated with a logical network to assign provider addresses and uses an IP address pool associated with a VM network to assign customer addresses. VM networks are a new addition to VMM in the CTP release of System Center 2012 SP1.
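To make the customer address/provider address split more concrete, here is a minimal conceptual sketch in Python. This is not how VMM or Hyper-V implement network virtualization; the VM network names and IP addresses below are made up purely to illustrate how overlapping customer address spaces stay isolated when each customer address is keyed by its VM network and mapped to a provider address.

```python
# Conceptual sketch (not VMM's implementation): network virtualization pairs
# each customer address (CA) with the provider address (PA) of the Hyper-V
# host, keyed per VM network, so overlapping CA spaces do not conflict.

# Hypothetical VM networks for two business groups that both use 10.0.0.0/24.
mappings = {
    # (vm_network, customer_address) -> provider_address of the hosting node
    ("ContosoVMNet", "10.0.0.5"): "192.168.10.21",
    ("ContosoVMNet", "10.0.0.6"): "192.168.10.22",
    ("FabrikamVMNet", "10.0.0.5"): "192.168.10.23",  # same CA, different VM network
}

def lookup_pa(vm_network: str, customer_address: str) -> str:
    """Return the provider address that physically hosts the given CA."""
    return mappings[(vm_network, customer_address)]

if __name__ == "__main__":
    # The same customer address resolves to different hosts depending on
    # which virtual network infrastructure (VM network) it belongs to.
    print(lookup_pa("ContosoVMNet", "10.0.0.5"))   # 192.168.10.21
    print(lookup_pa("FabrikamVMNet", "10.0.0.5"))  # 192.168.10.23
```

Running the sketch shows the same customer address resolving to two different provider addresses, one per VM network, which is exactly the isolation property described above.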

VHDX Support
VMM in the CTP release of System Center 2012 SP1 supports the new version of the virtual hard disk (VHD) format that is introduced in Windows Server "8" Beta. This new format is referred to as VHDX. VHDX has a much larger storage capacity (up to 64 TB) than the older VHD format. It also provides data corruption protection during power failures. Additionally, it offers improved alignment of the virtual hard disk format to work well on large-sector physical disks.
By default, VMM in the CTP release of System Center 2012 SP1 uses the VHDX format when you create a new virtual machine with a blank virtual hard disk. The VMM library automatically indexes .vhdx files. In addition to the small and large blank .vhd files that were available in previous versions of VMM, the VMM library in System Center 2012 SP1 also contains both a small (16 GB) and large (60 GB) blank .vhdx file.
 

SMB 2.2 File Shares
VMM in the CTP release of System Center 2012 SP1 adds support for designating network file shares on Windows Server "8" Beta computers as the storage location for virtual machine files, such as configuration files, virtual hard disk (.vhd/.vhdx) files, and checkpoints. This functionality leverages the new 2.2 version of the Server Message Block (SMB) protocol that is introduced in Windows Server "8" Beta.
SMB 2.2 file shares provide the following benefits when used with VMM in the CTP release of System Center 2012 SP1:
•    Hyper-V over SMB supports file servers and storage at a reduced cost compared to traditional storage area networks (SANs).
•    If you use SMB 2.2 file shares as the storage location for virtual machine files, you can live migrate running virtual machines between two stand-alone Hyper-V hosts or between two stand-alone Hyper-V host clusters. Because the storage location is a shared location that is available from both source and destination hosts, only the virtual machine state must transfer between hosts.

You can create SMB 2.2 file shares on both stand-alone Windows Server "8" Beta file servers and on clustered Windows Server "8" Beta file servers. In this step-by-step guide, only SMB 2.2 file shares on a stand-alone file server are used to demonstrate the concepts. If you use a stand-alone file server, you can designate an SMB 2.2 file share as the virtual machine storage location on a Windows Server "8" Beta Hyper-V host cluster. However, this is not a highly available solution.
 

Live Migration Enhancements
VMM in the CTP release of System Center 2012 SP1 includes several live migration enhancements that enable the migration of a running virtual machine with no downtime. The available live migration options are summarized below by transfer type.

Live

During live migration, only the virtual machine state is transferred to the destination server.

VMM in System Center 2012 SP1 supports the following new live migration options:

·    Live migration between two stand-alone Windows Server "8" Beta Hyper-V hosts.

·    Live migration between two Windows Server "8" Beta Hyper-V host clusters.

Note

This includes both highly available virtual machines and non-highly available virtual machines that are running on a cluster node.

To live migrate a virtual machine between two stand-alone hosts or two separate host clusters, the virtual machine (including virtual hard disks, checkpoints, and configuration files) must reside on an SMB 2.2 file share that is accessible from both the source and destination stand-alone hosts or host clusters.

Note

VMM in System Center 2012 SP1 also supports the live migration of a highly available virtual machine between two nodes in the same host cluster. Support for this exists in System Center 2012 – Virtual Machine Manager, when the virtual machine resides on available storage or on a cluster shared volume (CSV). In System Center 2012 SP1, the virtual machine can also reside on an SMB 2.2 file share.

Live (VSM)

Live virtual machine and storage migration (live VSM) is new in System Center 2012 SP1. During live VSM, both the virtual machine state and the virtual machine storage are transferred. For the live VSM option to be available, the virtual machine must reside on storage that is not visible to the destination host.

VMM in the CTP release of System Center 2012 SP1 supports the following:

·    Live VSM between two stand-alone Windows Server "8" Beta Hyper-V hosts. This transfer can occur between local disks or SMB 2.2 file shares.

·    Live VSM between two Windows Server "8" Beta Hyper-V host clusters. The virtual machine can be transferred to either a CSV or an SMB 2.2 file share on the destination host cluster.

Live Storage

Live storage migration is new in VMM in System Center 2012 SP1. During live storage migration, only the virtual machine storage is transferred.

VMM in the CTP release of System Center 2012 SP1 supports the following:

·    Live storage migration within the same Windows Server "8" Beta stand-alone host. Storage can be transferred between two SMB 2.2 file shares, between an SMB 2.2 file share and a local disk, or between two local disk locations.

·    Live storage migration on a cluster node from a CSV or SMB 2.2 file share to a different CSV or SMB 2.2 file share that is accessible from the cluster node.

Go and check the step-by-step guide that walks you through the new Virtual Machine Manager (VMM) features in the CTP of System Center 2012 SP1.

Categories: IaaS, VMM, VMM2012, windows 8

Hyper-V tricks: Increase VMBus buffer sizes to increase network throughput to guest VMs

February 6, 2010

The Windows Server Performance team has published a really interesting post on how to optimize network performance inside virtual machines by increasing the size of the VMBus buffers used by the virtual network adapters.

You can find it here:

http://blogs.technet.com/winserverperformance/archive/2010/02/02/increase-vmbus-buffer-sizes-to-increase-network-throughput-to-guest-vms.aspx

Under load, the default buffer size used by the virtual switch may provide inadequate buffering and result in packet loss. We recommend increasing the VMBus receive buffer from 1Mb to 2Mb.

Traffic jams happen every day, all across the world. Too many vehicles competing for the same stretch of road, gated by flow control devices like stop signs and traffic lights, conspire to ensnare drivers in a vicious web of metal and plastic and cell phones. In the technology world, networking traffic is notoriously plagued by traffic jams, resulting in all sorts of havoc, including delayed web pages, slow email downloads, robotic VOIP and choppy YouTube videos. (Oh, the humanity!)

Virtualized networking can be complicated, what with the root and child partitions relaying packets across the VM bus to reach the physical NIC. The VM bus, anticipating contention, uses buffers to queue data while the recipient VM is swapped out or otherwise not keeping up with the traffic. The default buffer size for WS08 R2 is 1Mb, which provides 655 packet buffers (1,600 bytes per buffer).

The hypervisor, meanwhile, calculates a scheduling interval, or quantum, derived from the system’s interrupt rate. The hypervisor attempts to ensure every VM has a chance to run within that interval, at which time the VM wakes up and does whatever processing it needs to do (including reading packets from the VM bus). At very low interrupt rates, that quantum can be nearly 10ms.

Whereas the native system handles on the order of 260,000 packets/second, virtualized systems can, in some scenarios, begin seeing packet loss under traffic loads as low as 65,500 packets/second in the worst case. This isn't an inherent tax incurred by virtualizing or a design limit; rather, it's the result of specific characteristics of server load requiring more VM bus buffer capacity. If the logical processors hosting the guest partitions are receiving very few hardware interrupts, then the scheduling quantum grows larger, approaching 10ms. The longer scheduling quantum results in longer idle periods between VM execution slices. If the VM is going to spend almost 10ms asleep, then the VM bus' packet buffers must be able to hold 10ms worth of data. As the idle time for a VM approaches 10ms, the maximum sustainable networking speed can be calculated as:

655 default packet buffers / ~10ms idle interval = maximum 65,500 packets / second

We can increase throughput, though, by increasing the amount of memory allocated to the buffers. How much should it be increased? On paper, 4Mb is the maximum useful size; a 4Mb buffer provides about 2600 buffers, which can handle 10ms’ worth of data flowing at approximately 260,000 packets per second (the max rate sustainable by native systems). In reality, depending on the workload, the VM’s swapped-out time probably doesn’t approach the maximum 10ms quantum. Therefore, depending on how frugal you want to be with memory, increasing to 2 Mb is probably adequate for most scenarios. If you’re living large in the land of RAM, lighting your cigars by burning 4Gb memory sticks, then go for broke, cranking the buffers up to 4Mb.
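To sanity-check the arithmetic above, here is a short Python sketch that reproduces the buffer and throughput numbers. It assumes only the figures already quoted in the post: 1,600 bytes per packet buffer and a worst-case scheduling quantum of roughly 10ms (i.e. about 100 quanta per second).

```python
# Back-of-the-envelope check of the VMBus buffer math described in the post.
BYTES_PER_BUFFER = 1_600      # bytes per packet buffer (from the post)
QUANTA_PER_SECOND = 100       # a ~10 ms worst-case quantum ~= 100 quanta/second

def packet_buffers(buffer_size_kb: int) -> int:
    """Number of VMBus packet buffers provided by a buffer of the given size (KB)."""
    return (buffer_size_kb * 1024) // BYTES_PER_BUFFER

def max_sustainable_pps(buffer_size_kb: int) -> int:
    """Packets/second the buffers can absorb if the VM sleeps for a full quantum."""
    return packet_buffers(buffer_size_kb) * QUANTA_PER_SECOND

for size_kb in (1024, 2048, 4096):  # 1Mb default, 2Mb, 4Mb
    print(f"{size_kb} KB -> {packet_buffers(size_kb)} buffers, "
          f"~{max_sustainable_pps(size_kb):,} packets/sec")
# 1024 KB -> 655 buffers,  ~65,500 packets/sec  (the post's worst-case figure)
# 2048 KB -> 1310 buffers, ~131,000 packets/sec
# 4096 KB -> 2621 buffers, ~262,100 packets/sec (roughly the native 260,000 pps)
```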

The buffers are allocated from the guest partition's memory, and updating the buffer size requires adding two registry values for each guest VM. To increase the buffer size, we first need the GUID and index associated with the network adapter. In the guest VM, open Device Manager, expand Network Adapters, right-click Microsoft Virtual Machine Bus Network Adapter and choose Properties (if you have a driver marked "(emulated)", you should take a detour to install Integration Services from the VM's Action menu, then add a new synthetic network driver through the VM setup. See http://technet.microsoft.com/en-us/library/cc732470(WS.10).aspx, step 3 for instructions).

On the Network Adapter Properties dialog, select the Details tab. Select Driver Key in the Property pull-down menu as shown in figure 1:

Record the GUID\index found in the Value box, as shown in figure 1, above. Open regedit and navigate to: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{GUID}\{index} as shown in figure 2:


Right-click the index number and create two new DWORD values named ReceiveBufferSize and SendBufferSize (see figure 3). These values measure the memory allocated to buffers in 1Kb units. So, 0x400 equates to 1,024Kb of buffer space (the default, 655 buffers). In this example, we've doubled the buffer size to 0x800, or 2,048Kb of memory, as shown in figure 3:
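If you would rather script the change than edit the registry by hand, here is a minimal sketch using Python's built-in winreg module. The {GUID}\{index} path is a placeholder for the Driver Key value you recorded above, and 0x800 (2,048Kb) matches the example in this post.

```python
# Minimal sketch: set the VMBus ReceiveBufferSize/SendBufferSize DWORD values.
# Run inside the guest VM with administrative rights.
import winreg

# Placeholder - substitute the Driver Key value recorded from Device Manager.
DRIVER_KEY = r"SYSTEM\CurrentControlSet\Control\Class\{GUID}\{index}"
BUFFER_SIZE_KB = 0x800  # 2,048Kb; 0x1000 (4,096Kb) is the maximum useful size

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, DRIVER_KEY, 0,
                    winreg.KEY_SET_VALUE) as key:
    # Both values are DWORDs measured in 1Kb units.
    winreg.SetValueEx(key, "ReceiveBufferSize", 0, winreg.REG_DWORD, BUFFER_SIZE_KB)
    winreg.SetValueEx(key, "SendBufferSize", 0, winreg.REG_DWORD, BUFFER_SIZE_KB)

# A restart of the VM (or of the adapter) is presumably needed before the
# new buffer sizes take effect.
print("VMBus buffer sizes set to", BUFFER_SIZE_KB, "Kb")
```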


Your workloads and networking traffic may not need increased buffers; however, these days, 4Mb of RAM isn’t a tremendous amount of memory to invest as an insurance policy against packet loss. Now, if only I could increase a few buffers and alleviate congestion on my daily commute!

Tom Basham

Virtualization Performance PM, Windows Fundamentals Team

Windows Server 2008 R2 Live Migration – “The devil may be in the networking details.”

December 10, 2009

Source: Ask the Core Team

Windows Server 2008 R2 has been publicly available for only a short period of time, but we are already seeing a good adoption rate for the new Live Migration functionality as well as the new Cluster Shared Volumes (CSV) feature. I have personally worked enough issues where Live Migration fails that I felt a short blog on the process I have followed to work through them might have some value.

It is important to mention right up front that there is information publicly available on the Microsoft TechNet site that discusses Live Migration and Cluster Shared Volumes. This content also includes some troubleshooting information. I acknowledge that a lot of people do not like to sit in front of a computer monitor and read a lot of text to try and figure out how to resolve an issue. I am one of those people. Having said that, let’s dive in.

It has been my experience thus far that the issues that prevent Live Migration from succeeding have to do with proper network configuration. In this blog, I will address the main network-related configuration items that need to be reviewed to give Live Migration the best chance of succeeding. I begin with an initial set of assumptions: the R2 Hyper-V Failover Cluster has been properly configured and all validation tests have passed without failure, the highly available VM(s) have been created using cluster shared storage, and the virtual machine(s) are able to start on at least one node in the cluster.

I start off by identifying the virtual machines that will not Live Migrate between nodes in the cluster. While it should not be necessary in Windows Server 2008 R2, I recommend first running a ‘refresh’ process on each virtual machine experiencing an issue with Live Migration. I say it should not be necessary because a lot of work was done by the Product Group to more tightly integrate the Failover Cluster Management interface with Hyper-V. Beginning with R2, virtual machine configuration and management can be done using the Failover Cluster Management interface. Here is a sample of some of the actions that can be executed using the Actions Pane in Failover Cluster Manager.

[Screenshot: virtual machine actions in the Failover Cluster Manager Actions pane]

If virtual machine configuration and management is accomplished using the Failover Cluster Management interface, any configuration changes made to a virtual machine should be automatically synchronized across all nodes in the cluster. To ensure this has happened, I begin by selecting each virtual machine resource individually and executing a Refresh virtual machine configuration process as shown here –

[Screenshot: the Refresh virtual machine configuration action]

The process generates a report when it completes. The desired result is shown here –

[Screenshot: a refresh report that completed successfully]

If the process completes with a Warning or Failure, examine the contents of the report, fix the issue(s) that were reported, and run the process again until it completes successfully.

If the refresh process completes without Failure, try to Quick Migrate the virtual machine to each node in the cluster to see if it succeeds.

[Screenshot: the Quick Migration option for a virtual machine]

If a Quick Migration completes successfully, that confirms the Hyper-V Virtual Networks are configured correctly on each node and the processors in the Hyper-V servers themselves are compatible. The most common problem with the Hyper-V Virtual Network configuration is that the naming convention used is not the same on every node in the cluster. To determine this, open the Hyper-V Management snap-in, select the Virtual Network Manager in the Actions pane and examine the settings.

[Screenshot: Virtual Network Manager in the Hyper-V Manager Actions pane]

The information shown below (as seen in my cluster) must be the same across all the nodes in the cluster (which means each node must be checked). This includes not only spelling but ‘case’ as well (i.e. PUBLIC is not the same as Public) –

[Screenshot: virtual network names and settings on one cluster node]
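If you have more than a couple of nodes, even a trivial script can help spot a name mismatch. The sketch below is not a Hyper-V or cluster API call; it simply compares virtual network name lists that you have collected by hand from each node's Virtual Network Manager, using an exact, case-sensitive comparison. The node names and network names are made up for illustration.

```python
# Compare manually collected Hyper-V virtual network names across cluster nodes.
# Any name that is absent, or present only with different casing, is flagged.

# Hypothetical inventory typed in by hand for a three-node cluster.
node_networks = {
    "NODE1": ["PUBLIC", "PRIVATE", "ISCSI"],
    "NODE2": ["PUBLIC", "Private", "ISCSI"],   # 'Private' differs in case
    "NODE3": ["PUBLIC", "PRIVATE"],            # 'ISCSI' is missing
}

# Union of every name seen anywhere, preserved exactly as typed.
all_names = set()
for names in node_networks.values():
    all_names.update(names)

for node, names in node_networks.items():
    missing = sorted(all_names - set(names))   # exact, case-sensitive comparison
    if missing:
        print(f"{node}: missing or case-mismatched virtual network(s): {missing}")
# NODE1: ['Private']
# NODE2: ['PRIVATE']
# NODE3: ['ISCSI', 'Private']
```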

It is important to be able to successfully Quick Migrate all virtual machines that cannot be Live Migrated before moving forward in this process. If the virtual machine can Quick Migrate between all nodes in the cluster, we can begin taking a closer look at the networking piece.

Start verifying the network configuration on each node in the cluster by first making sure the network card binding order is correct. In each cluster node, the Network Interface Card (NIC) supporting access to the largest routable network should be listed first. The binding order can be accessed using the Network and Sharing Center, Change adapter settings. In the Menu bar, select Advanced and from the drop down list choose Advanced Settings. An example from one of my cluster nodes is shown here where the NIC (PUBLIC-HYPERV) that has access to the largest routable network is listed first.

[Screenshot: adapter binding order with PUBLIC-HYPERV listed first]

Note: You may also want to review all the network connections that are listed and Disable those that are not being used by either the Hyper-V server itself or the virtual machines.

On each NIC in the cluster, ensure Client for Microsoft Networks and File and Printer Sharing for Microsoft Networks are enabled (i.e. checked). This is a requirement for CSV, which requires SMB (Server Message Block).

[Screenshot: NIC properties with Client for Microsoft Networks and File and Printer Sharing enabled]

Note: Here is where people usually get into trouble, because they are familiar with clusters and have been working with them for a very long time, maybe even as far back as the NT 4.0 days. Because of that, they have developed a habit of configuring cluster networking that is basically outlined in KB 258750. That article does not apply to Windows Server 2008.

Note: If CSV is configured, all cluster nodes must reside on the same non-routable network. CSV (specifically for re-directed I/O) is not supported if cluster nodes reside on separate, routed networks.

Next, verify the local security policy and ensure NTLM security is not being restricted by a local or domain level policy. This can be determined by Start > Run > gpedit.msc > Computer Configuration > Windows Settings > Security Settings > Local Policies > Security Options. The default settings are shown here –

[Screenshot: default NTLM security policy settings]

In the virtual machine resource properties in the Failover Cluster Management snap-in, set the Network for Live Migration ordering such that the highest speed network that is enabled for cluster communications and is not a Public network is listed first. Here is an example from my cluster. I have three networks defined in my cluster –

[Screenshot: the three networks defined in the cluster]

The Public network is used for client access, management for the cluster, and for cluster communications. It is configured with a Default Gateway and has the highest metric defined in the cluster for a network the cluster is allowed to use for its own internal communications. In this example, since I am also using iSCSI, the iSCSI network has been excluded from cluster use. The corresponding listing on the virtual machine resource in the Network for live migration tab looks like this –

[Screenshot: the Network for live migration tab with the Cluster network listed first]

Here, I have unchecked the iSCSI network as I do not want Live Migration traffic being sent over the same network that is supporting the storage connection. The Cluster network is totally dedicated to cluster communications only so I have moved that to the top as I want that to be my primary Live Migration network.

Note: Once the live migration network priorities have been set on one virtual machine, they will apply to all virtual machines in the cluster (i.e. it is a Global setting).

Once all the configuration checks have been verified and changes made on all nodes in the cluster, execute a Live Migration and see if it completes successfully.

Bonus material:

There are configurations that can be put in place that can help live migrations run faster and CSV perform better. One thing that can be done is to disable NetBIOS on the NIC supporting the primary network used by CSV for redirected I/O. This should be a dedicated network and should not be supporting any traffic other than internal cluster communications, redirected I/O for CSV, and/or live migration traffic.

[Screenshot: disabling NetBIOS over TCP/IP on the NIC]
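For reference, the same NetBIOS setting can also be scripted instead of using the GUI shown above. The sketch below writes the NetbiosOptions value (2 = disable) under the NetBT interface key for the NIC in question; the interface GUID is a placeholder you would need to look up for the CSV/live migration NIC, and you should verify this registry layout on your own build rather than treat the sketch as authoritative. Run it elevated on each node.

```python
# Hedged sketch: disable NetBIOS over TCP/IP for one interface via the registry
# (NetbiosOptions: 0 = use DHCP setting, 1 = enable, 2 = disable).
import winreg

INTERFACE_GUID = "{00000000-0000-0000-0000-000000000000}"  # placeholder GUID
NETBT_KEY = (r"SYSTEM\CurrentControlSet\Services\NetBT\Parameters"
             r"\Interfaces\Tcpip_" + INTERFACE_GUID)

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, NETBT_KEY, 0,
                    winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "NetbiosOptions", 0, winreg.REG_DWORD, 2)

print("NetBIOS over TCP/IP disabled for interface", INTERFACE_GUID)
```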

Additionally, on the same network interface supporting live migration, you can enable larger packet sizes to be transmitted between all the connected nodes in the cluster.

[Screenshot: enabling larger packet sizes on the NIC]

If, after making all the changes discussed here, live migration is still not succeeding, then perhaps it is time to open a case with one of our support engineers.

Thanks again for your time, and I hope you have found this information useful. Come back again.

Additional resources:

Using Live Migration with Cluster Shared Volumes in Windows Server 2008 R2

High Availability Product Team Blog

Hyper-V and Virtualization on Microsoft TechNet

Windows Server 2008 R2 Hyper-V Forum

Windows Server 2008 R2 High Availability Forum

Chuck Timon
Senior Support Escalation Engineer
Microsoft Enterprise Platforms Support

HP ML 370 G6, Network Problem with Hyper-V Guests

June 27, 2009

This is a new problem that I faced with the HP ProLiant ML 370 G6. After installing Windows Server 2008 SP2, enabling the Hyper-V role, and finishing all the configuration, everything looked fine.

I started creating some guest machines with the Hyper-V console and joining them to the domain. We started to see unexpected behavior, such as missing PING packets and DNS lookup timeouts. This looked strange, because I have tried a lot of HP servers and they worked fine. I made sure that the HP Network Configuration Utility was uninstalled, but I was still facing the problem.

I started suspecting the NIC, as I have seen problems like this caused by NICs before. After disabling IPv4 Checksum Offload, it worked fine.

In most cases, I can see that guest machines suffer when IPv4 Checksum Offload is enabled.

 Update:

While I was searching online I found the same problem reported there, and the recommendation was to disable all of the following:

 IPv4 Checksum Offload

TCP Checksum Offload IPv4

UDP Checksum Offload IPv4

TCP Checksum Offload IPv6

UDP Checksum Offload IPv6

HP Network Teaming Software with Microsoft Windows Server 2008 Hyper-V

February 12, 2009

HP ProLiant Network Teaming Software (HP Network Configuration Utility (NCU) version 9.35 or greater) solves the problem between HP network teaming and Hyper-V networks.

Patrick Lownds published a white paper about it.

NIC teaming is the process of grouping together several physical NICs into one single logical NIC. This provides fault tolerance and load balances the traffic.

The Hyper-V role had some problems with HP networking, as I mentioned before here. Previously, I did not recommend installing HP teaming on servers with the Hyper-V role enabled.

Check the white paper. Download it here:

http://h20000.www2.hp.com/bc/docs/support/SupportManual/c01663264/c01663264.pdf

Categories: Hyper-V