Archive for the ‘Hyper-V’ Category

Linux Integration Services Version 4.1 for Hyper-V

March 22, 2016 1 comment

New with Linux Integration Services 4.1:

• Expanded Releases: now applicable to Red Hat Enterprise Linux, CentOS, and Oracle Linux with Red Hat Compatible Kernel versions 5.2, 5.3, 5.4, and 7.2.

• Hyper-V Sockets.

• Manual Memory Hot Add.

• Uninstallation scripts.

When installed in a supported Linux virtual machine running on Hyper-V, the Linux Integration Services provide:

• Driver support: Linux Integration Services supports the network controller and the IDE and SCSI storage controllers that were developed specifically for Hyper-V.

• Fastpath Boot Support for Hyper-V: Boot devices now take advantage of the block Virtualization Service Client (VSC) to provide enhanced performance.

• Time Keeping: The clock inside the virtual machine remains accurate by synchronizing to the clock on the virtualization server via the Timesync service, with the help of the pluggable time source device.

• Integrated Shutdown: Virtual machines running Linux can be shut down from either Hyper-V Manager or System Center Virtual Machine Manager by using the “Shut down” command.

• Symmetric Multi-Processing (SMP) Support: Supported Linux distributions can use multiple virtual processors per virtual machine. The actual number of virtual processors that can be allocated to a virtual machine is limited only by the underlying hypervisor.

• Heartbeat: This feature allows the virtualization server to detect whether the virtual machine is running and responsive.

• KVP (Key Value Pair) Exchange: Information about the running Linux virtual machine can be obtained by using the Key Value Pair exchange functionality on the Windows Server 2008 virtualization server.

• Integrated Mouse Support: Linux Integration Services provides full mouse support for Linux guest virtual machines.

• Live Migration: Linux virtual machines can undergo live migration for load-balancing purposes.

• Jumbo Frames: Linux virtual machines can be configured to use Ethernet frames with more than 1500 bytes of payload.

• VLAN tagging and trunking: Administrators can attach single or multiple VLAN IDs to synthetic network adapters.

• Static IP Injection: Allows migration of Linux virtual machines with static IP addresses.

• Linux VHDX resize: Allows dynamic resizing of VHDX storage attached to a Linux virtual machine.

• Synthetic Fibre Channel Support: Linux virtual machines can natively access high-performance SAN networks.

• Live Linux virtual machine backup support: Facilitates zero-downtime backup of running Linux virtual machines.

• Dynamic memory ballooning support: Improves Linux virtual machine density for a given Hyper-V host.

• Synthetic video device support: Provides improved graphics performance for Linux virtual machines.

• PAE kernel support: Provides drivers that are compatible with PAE-enabled Linux virtual machines.
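As a concrete illustration of the KVP (Key Value Pair) exchange listed above: on the guest side, the LIS KVP daemon commonly stores the exchanged pairs in /var/lib/hyperv/.kvp_pool_N files as fixed-size records, a 512-byte NUL-padded key followed by a 2048-byte NUL-padded value. Here is a minimal Python sketch of a reader, assuming that record layout:

```python
# Sketch: parse a Hyper-V KVP pool file from a Linux guest.
# Assumed layout: fixed-size records of a 512-byte NUL-padded key
# followed by a 2048-byte NUL-padded value (the limits used by the
# kernel's KVP interface). Verify against your LIS version.
KEY_SIZE, VALUE_SIZE = 512, 2048
RECORD_SIZE = KEY_SIZE + VALUE_SIZE

def parse_kvp_pool(data: bytes):
    """Yield (key, value) string pairs from raw .kvp_pool file bytes."""
    for off in range(0, len(data) - RECORD_SIZE + 1, RECORD_SIZE):
        record = data[off:off + RECORD_SIZE]
        key = record[:KEY_SIZE].split(b"\x00", 1)[0].decode("utf-8", "replace")
        value = record[KEY_SIZE:].split(b"\x00", 1)[0].decode("utf-8", "replace")
        if key:  # skip empty (deleted) slots
            yield key, value
```

A caller would feed it the raw bytes of, for example, /var/lib/hyperv/.kvp_pool_3, which typically holds host-provided values such as the host name.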

Linux Integration Services Version 3.5 for Hyper-V

December 20, 2013 Leave a comment

Hyper-V supports both emulated (“legacy”) and Hyper-V-specific (“synthetic”) devices for Linux virtual machines. When a Linux virtual machine is running with emulated devices, no additional software needs to be installed. However, emulated devices do not provide high performance and cannot leverage the rich virtual machine management infrastructure that the Hyper-V technology offers. To make full use of all the benefits that Hyper-V provides, it is best to use Hyper-V-specific devices for Linux. The collection of drivers required to run Hyper-V-specific devices is known as Linux Integration Services (LIS).
For certain older Linux distributions, Microsoft provides an ISO file containing installable LIS drivers for Linux virtual machines. For newer Linux distributions, LIS is built into the Linux operating system, and no separate download or installation is required. This guide discusses the installation and functionality of LIS drivers on older Linux distributions.
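To check whether the LIS drivers are actually in use inside a guest, one can look for the hv_* kernel modules. A minimal Python sketch (the module names listed are the common LIS ones; on newer kernels some of them may be compiled into the kernel rather than loaded as modules):

```python
import os

# Common Hyper-V (LIS) driver module names; adjust for your distribution.
LIS_MODULES = {"hv_vmbus", "hv_netvsc", "hv_storvsc", "hv_utils", "hv_balloon"}

def loaded_lis_modules(proc_modules_text):
    """Return the set of LIS module names present in /proc/modules content."""
    loaded = {line.split()[0] for line in proc_modules_text.splitlines() if line.strip()}
    return LIS_MODULES & loaded

if __name__ == "__main__" and os.path.exists("/proc/modules"):
    with open("/proc/modules") as f:
        print(sorted(loaded_lis_modules(f.read())))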

Below are the main points that come with version 3.5.

New OS with 3.5

Expands the list of supported distributions to include RHEL/CentOS 5.5-5.6.

Supported Virtualization Server Operating Systems
This version of Linux Integration Services (LIS) supports the following versions of Hyper-V:
Windows Server 2008 R2 Standard, Windows Server 2008 R2 Enterprise, and Windows Server 2008 R2 Datacenter
Microsoft Hyper-V Server 2008 R2
Windows 8 Pro
Windows 8.1 Pro
Windows Server 2012
Windows Server 2012 R2
Microsoft Hyper-V Server 2012
Microsoft Hyper-V Server 2012 R2

When installed on a virtual machine that is running a supported Linux distribution, LIS 3.5 for Hyper-V provides the functionality listed in the table below. For comparison, the features available in LIS 3.4 are also listed, so users can decide whether to upgrade from LIS 3.4 to LIS 3.5.



1.  Static IP injection might not work if Network Manager has been configured for a given Hyper-V-specific network adapter on the virtual machine. To ensure smooth functioning of static IP injection, ensure that Network Manager is either turned off completely or turned off for the specific network adapter through its ifcfg-ethX file.

2.  When you use Virtual Fibre Channel devices, ensure that logical unit number 0 (LUN 0) has been populated. If LUN 0 has not been populated, a Linux virtual machine might not be able to mount Virtual Fibre Channel devices natively.

3.  If there are open file handles during a live virtual machine backup operation, the backed-up virtual hard disks (VHDs) might have to undergo a file system consistency check (fsck) when restored.

4.  Live backup operations can fail silently if the virtual machine has an attached iSCSI device or a physical disk that is directly attached to the virtual machine (“pass-through disk”).

5.  LIS 3.5 provides only Dynamic Memory ballooning support; it does not provide hot-add support. In such a scenario, the Dynamic Memory feature can be used by setting the Startup memory parameter to a value equal to the Maximum memory parameter. This results in all the requisite memory being allocated to the virtual machine at boot time; later, depending on the memory requirements of the host, Hyper-V can freely reclaim memory from the guest. Also, ensure that Startup Memory and Minimum Memory are not configured below the distribution-recommended values.
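The Network Manager workaround from note 1 can be sketched as a small script that forces NM_CONTROLLED=no in an adapter's ifcfg file. The path and the ifcfg-eth0 file name below follow the usual RHEL/CentOS conventions and are used here as assumed examples:

```python
import os

def disable_nm(ifcfg_text: str) -> str:
    """Return ifcfg content with NM_CONTROLLED forced to 'no'."""
    # Drop any existing NM_CONTROLLED line, then append the desired one.
    lines = [l for l in ifcfg_text.splitlines() if not l.startswith("NM_CONTROLLED=")]
    lines.append("NM_CONTROLLED=no")
    return "\n".join(lines) + "\n"

if __name__ == "__main__":
    # ifcfg-eth0 is an assumed example adapter name.
    path = "/etc/sysconfig/network-scripts/ifcfg-eth0"
    if os.path.exists(path):
        with open(path) as f:
            updated = disable_nm(f.read())
        with open(path, "w") as f:
            f.write(updated)
```

After editing the file, the network service (or the adapter) has to be restarted for the change to take effect.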

Categories: Hyper-V

Book Review: Windows Server 2012 Hyper-V Deploying Hyper-V Enterprise Server Virtualization Platform

I just got the opportunity to read a new book and want to share some thoughts about it with you, so you may take a look at it. The book is well structured and provides an excellent start for anyone who wants to be enterprise-ready for Windows Server 2012 Hyper-V. I will go through the main points of the book; I hope it provides a good start for anyone who is interested.

Windows Server 2012 Hyper-V Deploying Hyper-V Enterprise Server Virtualization Platform

The book starts with the classic question: what is virtualization? That may look basic, but if you want to understand big topics like virtualization and cloud, you should go through this introduction. You will learn about server, network, and storage virtualization, and then the book makes a good jump to the main subject: Microsoft Hyper-V.

This part is well defined and targets both beginners and advanced readers, giving insight into Hyper-V architecture, networking with Hyper-V, and performance, with an overview of Hyper-V features.

After covering the fundamental concepts of Hyper-V, you are ready to start planning, designing, and implementing Microsoft Hyper-V. Chapter 2 provides the knowledge necessary to start deploying and building your virtual environment.

Chapter 3 is all about Hyper-V Replica. This is a smart choice, since it is one of the flagship features of Hyper-V in Windows Server 2012. Once you finish it, you will be able to understand, design, and build your Hyper-V Replica virtual machines.

Chapters 4, 5, and 6 cover Hyper-V networking, PowerShell, and storage, targeting more advanced users and administrators. Although the book does well in these chapters, I wish it included more information in the networking part.

Chapter 7 covers management of the Hyper-V environment with Microsoft Virtual Machine Manager. It is a very good read if you need a push to start working with VMM.

Chapters 8 and 9 are about mobility and security for Hyper-V. Personally, I liked the content on Hyper-V security.

The last chapter is about backup and restore for the virtual environment.

Overall, I found the book interesting; it would be greatly useful for anyone who decides to work with Windows Server 2012 Hyper-V. So enjoy reading it.


January 9, 2013 Leave a comment

The Hyper-V momentum continues, and Microsoft's commitment to interoperability pays off again. Today, Red Hat announced the release of RHEL 5.9, which includes the Hyper-V Linux Integration Services built in.

From the press release, Red Hat touts the included Hyper-V drivers:

New Virtualization Capabilities and Flexibility in Multi-vendor Environments. Red Hat Enterprise Linux 5.9 enhances the operating system’s usability in multi-vendor environments by introducing Microsoft Hyper-V drivers for improved performance. This enhances the usability of Red Hat Enterprise Linux 5 for guests in heterogeneous, multi-vendor virtualized environments and provides improved flexibility and interoperability for enterprises.

Great Big Hyper-V Survey is back

October 2, 2012 Leave a comment

Aidan Finn, Damian Flynn, and Hans Vredevoort, the great Hyper-V MVPs, have composed a new Great Big Hyper-V Survey of 2012, and they kindly ask your assistance in promoting the survey.

Last year's survey was really interesting: several hundred people answered the questions and gave deeper insight into how Hyper-V is doing. This year the focus is on the new features in Windows Server 2012 and System Center 2012, so it is time to collect some new data.


It takes about 15 minutes, so do not miss it.

Altaro Software: Hyper-V Guest Design: Fixed vs. Dynamic VHD

The Altaro guys have written another important post about Hyper-V guest design. Check it out:


Should you use fixed or dynamic virtual hard disks (VHDs) for your virtual machines? The basic dilemma is the balance of performance against space utilization. Sometimes, the proper choice is obvious. However, as with most decisions of this nature, there are almost always other factors to consider.

Pass-Through Disks

Although the focus of this article is on VHDs, it would be incomplete without mentioning pass-through disks. These are not virtualized at all, but hand I/O from a virtual machine directly to a disk or disk array on the host machine. This could be a disk that is internal to the host machine, or it could be a LUN on an external system connected by Fibre Channel or iSCSI. This mechanism provides the fastest possible disk performance but has some very restrictive drawbacks.

Pass-Through Benefits

  • Fastest disk system for Hyper-V guests
  • If the underlying disk storage system grows (such as by adding a drive to the array) and the virtual machine’s operating system allows for dynamic disk growth (such as Windows 7 and Server 2008 R2), the drive can be expanded within the guest without downtime.

Pass-Through Drawbacks

  • Live Migration of VMs that use pass-through disks is noticeably slower and often includes an interruption of service. Because pass-through disks are not cluster resources, they must be temporarily taken offline during transfer of ownership.
  • Hyper-V’s VSS writer cannot process a pass-through disk. That means that any VM-level backup software will have to take the virtual machine offline while backing it up.
  • Volumes on pass-through disks are non-portable. This is most easily understood by its contrast to a VHD. You can copy a VHD from one location to another and it will work exactly the same way. Data on pass-through volumes is not encapsulated in any fashion.

Continue there

Categories: Hyper-V, Hyper-V R2

VMM Tricks: VMM 2012 error when adding new Host

June 15, 2012 2 comments

We are working on a new private cloud implementation with highly restrictive security boundaries. While adding a new Hyper-V cluster (Windows 2008 R2 SP1 Core edition), we got an error.



We made sure that we were using an administrator account and that there were no firewall problems.

We have a parent domain (xxx.local) where all management servers are running (VMM, SCOM, and DPM), and we have a child domain "" where all Hyper-V servers are running (10 servers in one Hyper-V cluster using Windows 2008 R2 SP1 Server Core).
The Hyper-V hosts are HV01 through HV10, using IPs from XX.XXX.XX.51 to XX.XXX.XX.60, and the Hyper-V cluster IP is XX.XXX.XX.61.


Pinging any of the Hyper-V hosts works fine, and I am able to resolve the host names using NSLOOKUP.

When I try to add any host of the Hyper-V cluster, I get this error:
[2124] 084C.06D4::06/03-09:25:30.427#18:ServerConnection.cs(1229): Microsoft.VirtualManager.Utils.CarmineException: HV01 cannot resolve with DNS.

I tried adding the host name (NetBIOS and FQDN) to the hosts file and got the same error. The funny thing is that when I try, for example, HV04, I get the same error saying that the server HV01 cannot resolve with DNS!

VMM is trying to resolve HV01, not HV04. I tried IPs and got the same error.

After some troubleshooting, it turned out to be something wrong on the VMM server.

Using Network Monitor and Wireshark, here is what I found:

When adding the cluster by IP or FQDN, the VMM service checks with the parent DC to resolve it and manages to get the IP (it does not matter whether I provide the IP or the FQDN).

Then VMM checks with the child DC for the cluster (since the cluster and Hyper-V hosts exist in the child domain), and it resolves the cluster name.

VMM starts to query the cluster nodes and resolves all of them, but then it suddenly starts searching for a random host by NetBIOS name, assumes host.parent.domain instead of host.child.parent.domain for no apparent reason, and returns an error that it cannot resolve the name.

For the sake of troubleshooting, we added all the hosts' NetBIOS names and FQDNs to the hosts file and got the same error. Analyzing the capture, I found that the cluster name and all hosts were resolved, but then VMM did the same thing again with the cluster name (since it was not added to the hosts file).

Adding all host and cluster names to the hosts file solved the problem.
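The workaround above can be sketched as a small generator for the hosts-file entries, pinning both the FQDN and the NetBIOS name of every node plus the cluster itself. The host prefix, domain, cluster name, and 10.0.0.x addresses below are placeholders for the masked values in the post:

```python
# Sketch: build hosts-file lines for a range of cluster nodes plus the
# cluster name, so name resolution never depends on DNS suffix handling.
# All names and addresses passed in are illustrative placeholders.
def hosts_entries(prefix, first, last, base_ip, domain, cluster_name, cluster_ip):
    """Return hosts-file lines: 'IP<TAB>fqdn<TAB>netbios' per node, then the cluster."""
    net, start = base_ip.rsplit(".", 1)
    lines = []
    for i in range(first, last + 1):
        name = f"{prefix}{i:02d}"                  # e.g. HV01 .. HV10
        ip = f"{net}.{int(start) + i - first}"     # consecutive addresses
        lines.append(f"{ip}\t{name}.{domain}\t{name}")
    lines.append(f"{cluster_ip}\t{cluster_name}.{domain}\t{cluster_name}")
    return lines

if __name__ == "__main__":
    for line in hosts_entries("HV", 1, 10, "10.0.0.51", "child.parent.local",
                              "HVCLUSTER", "10.0.0.61"):
        print(line)
```

The output can be appended to the hosts file on the VMM server; in the scenario above, every node entry plus the cluster entry was needed before the add-host operation succeeded.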
