Hyper-V supports both emulated (“legacy”) and Hyper-V-specific (“synthetic”) devices for Linux virtual machines. When a Linux virtual machine is running with emulated devices, no additional software needs to be installed. However, emulated devices do not provide high performance and cannot leverage the rich virtual machine management infrastructure that the Hyper-V technology offers. To make full use of all the benefits that Hyper-V provides, it is best to use Hyper-V-specific devices for Linux. The collection of drivers required to run Hyper-V-specific devices is known as Linux Integration Services (LIS).
For certain older Linux distributions, Microsoft provides an ISO file containing installable LIS drivers for Linux virtual machines. For newer Linux distributions, LIS is built into the Linux operating system, and no separate download or installation is required. This guide discusses the installation and functionality of LIS drivers on older Linux distributions.
Below are the main points that come with version 3.5.
New operating system support in 3.5
Expands the list of supported distributions to include RHEL/CentOS 5.5-5.6.
Supported Virtualization Server Operating Systems
This version of Linux Integration Services (LIS) supports the following versions of Hyper-V:
Windows Server 2008 R2 Standard, Windows Server 2008 R2 Enterprise, and Windows Server 2008 R2 Datacenter
Microsoft Hyper-V Server 2008 R2
Windows 8 Pro
Windows 8.1 Pro
Windows Server 2012
Windows Server 2012 R2
Microsoft Hyper-V Server 2012
Microsoft Hyper-V Server 2012 R2
When installed on a virtual machine that is running a supported Linux distribution, LIS 3.5 for Hyper-V provides the functionality listed in the table below. For comparison, the table also lists the features available in LIS 3.4, so users can decide whether they want to upgrade from LIS 3.4 to LIS 3.5.
1. Static IP injection might not work if Network Manager has been configured for a given Hyper-V-specific network adapter on the virtual machine. To ensure smooth functioning of static IP injection, either turn off Network Manager completely, or turn it off for the specific network adapter through its ifcfg-ethX file.
2. When you use Virtual Fibre Channel devices, ensure that logical unit number 0 (LUN 0) has been populated. If LUN 0 has not been populated, a Linux virtual machine might not be able to mount Virtual Fibre Channel devices natively.
3. If there are open file handles during a live virtual machine backup operation, the backed-up virtual hard disks (VHDs) might have to undergo a file system consistency check (fsck) when restored.
4. Live backup operations can fail silently if the virtual machine has an attached iSCSI device or a physical disk that is directly attached to the virtual machine (“pass-through disk”).
5. LIS 3.5 provides only Dynamic Memory ballooning support—it does not provide hot-add support. In this scenario, the Dynamic Memory feature can still be used by setting the Startup memory parameter to a value equal to the Maximum memory parameter. This results in all the requisite memory being allocated to the virtual machine at boot time; later, depending on the memory requirements of the host, Hyper-V can freely reclaim memory from the guest. Also, ensure that Startup Memory and Minimum Memory are not configured below the distribution's recommended values.
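To illustrate the first note above, a per-adapter configuration file can tell Network Manager to leave a Hyper-V network adapter alone so that static IP injection works. This is a minimal sketch following the usual RHEL/CentOS ifcfg convention; the adapter name eth0 is an assumption:

```
# /etc/sysconfig/network-scripts/ifcfg-eth0 (example; adapter name assumed)
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
NM_CONTROLLED=no    # keep Network Manager off this adapter
```

After editing the file, restart the network service (or reboot) for the change to take effect.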
At a high level, here’s a list of the changes since v3.1:
- Synthetic Mouse Support: The virtualized mouse device is no longer bound to the VMConnect window, and can now be used with an RDP session.
- Merged Device Drivers: We now present a single device driver for both IDE and SCSI devices (hv_storvsc).
- Windows 8 Fix: The synthetic network device (hv_netvsc) can now be used with a Windows 8 host, eliminating the hang on boot that was previously seen.
- SCVMM Fix: This release fixes the issue as described in KB2586286.
- Improved Setup Experience: Users now only need to run install.sh (as root) to automatically detect the correct architecture and install the appropriate drivers.
In addition, note the following requirements and limitations for this integration package:
- The drivers apply to guest virtual machines running Red Hat Enterprise Linux Server 6.1 (x86 and x64) and CentOS 6.0 (x86 and x64). For earlier versions, use Integration Services version 2.1.
- These are essentially drivers backported from the Linux 3.2 kernel, but they work with the Linux 2.6.32 kernel shipped with Red Hat and CentOS.
Consider the following scenario:
- You are running Linux-based virtual machines on Hyper-V with the 2.1 version of the Linux Integration Services installed.
- You apply an updated kernel in the Linux-based virtual machine.
After applying the kernel update, the Linux-based guest operating system fails to boot with the error “Unable to mount root file system”.
This problem occurs because the Linux Integration Services must be recompiled after a kernel upgrade to function.
To prevent this issue, enable Dynamic Kernel Module Support (DKMS) before applying kernel updates.
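As a sketch of what that DKMS registration might look like on the guest: the module name `hv-lis`, the version, and the source location below are illustrative assumptions, not the package's actual identifiers.

```shell
# Hypothetical DKMS registration for the LIS drivers.
# Assumes the dkms package is installed and the driver sources live
# under /usr/src/<module>-<version>; both names are illustrative.
register_lis_with_dkms() {
    module="$1"; version="$2"
    # Register, build, and install the module set so DKMS rebuilds it
    # automatically on every kernel update.
    dkms add     -m "$module" -v "$version" &&
    dkms build   -m "$module" -v "$version" &&
    dkms install -m "$module" -v "$version"
}
# Usage on a real guest (as root):
#   register_lis_with_dkms hv-lis 2.1
```

Once the modules are under DKMS control, a kernel update triggers a rebuild, and the guest boots with working storage drivers instead of failing to mount the root file system.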
A very interesting article in Network World about Microsoft and open source.
Ballmer is still CEO of Microsoft, but that comment occurred in 2001, a lifetime ago in the technology market. While Microsoft hasn’t formally rescinded its declaration that Linux violates its patents, at least one Microsoft executive admits that the company’s earlier battle stance was a mistake. Microsoft wants the world to understand, whatever its issues with Linux, it no longer has any gripe toward open source.
It’s a big week for Microsoft Data Protection Manager 2010 … even though it’s a month or more away from general availability. At the Microsoft Management Summit this week, DPM 2010 was released to manufacturing, and i365 and Iron Mountain both made DPM 2010-related announcements that extend its capabilities.
Microsoft continues to make strides since joining the disk-based backup and recovery space with DPM 2006, adding features that have increased its appeal to Microsoft-centric buyers. Among other things, DPM 2010 promises to:
- Increase scale. A single DPM server can protect 100 production servers (up from 30-40) and 80 TB of data, 1000 Windows clients, 2000 SQL databases, 40 TB Exchange databases, and 25 TB SharePoint farms.
- Provide a single agent for all Microsoft workloads, including support for Windows 7, MOSS 2010, Exchange 2010, and SAP running on a SQL server.
- Support Hyper-V on Windows Server 2008 R2, including support for Live Migration scenarios with cluster-shared volumes, recovery of .VHDs to an alternate host, and VM-level backup with either VM-level or file-level recovery.
- Protect connected or disconnected Windows clients with continuous backup (backup is performed locally until a connection/synchronization is possible), allowing data to be recovered locally and enabling end-user self-service restore.
- Enable SharePoint farm-level protection with document-level restore, eliminating the need for a SharePoint recovery farm.
- Replicate a DPM server off site to third-party cloud providers, such as Iron Mountain or i365.
Iron Mountain and Microsoft previously teamed up to deliver a cloud storage option for DPM 2007 customers over a year ago, allowing users to extend their data protection strategies with cloud-based copies for DR. This week, Iron Mountain announced support for DPM 2010 and enhancements to Iron Mountain CloudRecovery—beefing up its scalability, streamlining DPM-CloudRecovery integration, and altering its licensing/pricing model to provide greater cost efficiency and predictability to subscribers.
i365 is partnering with Microsoft in a slightly different way. i365 is delivering an all-in-one hardware-software-cloud solution: Evault for System Center Data Protection Manager (EDPM). The Dell server ships with both Microsoft DPM and Evault backup software accessed via a single user interface and with a unified policy engine. Why both? Since DPM is limited to protecting Microsoft’s operating system, hypervisor, and applications, EDPM allows Microsoft to address a wider audience—including Linux, UNIX, NetWare, IBM i, VMware, and Oracle users. Optionally, the EDPM storage can be replicated to the i365 cloud—creating a more economically-feasible DR copy for mid-market and small enterprise companies.
Missing from Microsoft’s DPM 2010 strategy is any statement that the company will leverage its own cloud service capabilities in Windows Azure. Will DPM be offered as software as a service (SaaS)? Will Windows Azure cloud storage be used for DPM 2010 DR copies? Stay tuned.
Microsoft announces the availability of the RC release of the Linux Integration Services v2.1. This new version includes new functionality, including timesync, integrated shutdown, and SMP support.
When installed on a virtual machine that is running a supported Linux operating system, the Linux Integration Services for Hyper-V provide the following functionality:
- Driver support for synthetic devices: The Linux Integration Services support the synthetic network controller and the synthetic storage controller that were developed specifically for Hyper-V.
- Fastpath Boot Support for Hyper-V: Boot devices now take advantage of the block Virtualization Service Client (VSC) to provide enhanced performance.
- NEW: Timesync: The clock inside the virtual machine will now remain synchronized with the clock on the host.
- NEW: Integrated Shutdown: Virtual machines running Linux can now be shut down from either the Hyper-V Manager or the VMConnect application using the “Shut Down” command.
- NEW: Symmetric Multi-Processing (SMP) Support: Supported Linux distributions can now properly use up to 4 virtual processors (VP) per virtual machine.
- NEW FOR RC: Heartbeat: Allows the host to detect whether the guest is running and responsive.
- NEW FOR RC: Pluggable Time Source: A pluggable clock source module is included to provide a more accurate time source to the guest.
This version of the integration services for Hyper-V can be downloaded from here, and supports Novell SUSE Linux Enterprise Server 10 SP3, SUSE Linux Enterprise Server 11, and Red Hat Enterprise Linux 5.2 / 5.3 / 5.4 / 5.5.
Last July, Microsoft announced that the source code for its Linux drivers for the Hyper-V virtualization environment was available. In practice, version 2.6.32 of the Linux kernel now contains the synthetic Hyper-V drivers, including the VMBus, storage, and network components. Specifically, these are the hv_vmbus, hv_storvsc, hv_blkvsc, and hv_netvsc modules, which are described in this article.
Outside of the officially supported configurations, I tested enabling these modules on the new Ubuntu Server 10.04, recently released with the 2.6.32 kernel. To do this I found this article, which explains how to enable the modules and from which I drew inspiration.
First, ensure that the Hyper-V modules are loaded at startup. To do this, edit the file /etc/initramfs-tools/modules and add the following four lines:
hv_vmbus
hv_storvsc
hv_blkvsc
hv_netvsc
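Equivalently, the four module names can be appended from a shell. In this sketch, MODULES_FILE points at a scratch file so it is safe to try; on a real guest, set it to /etc/initramfs-tools/modules and run with sudo:

```shell
# Append the four Hyper-V module names if not already present.
# MODULES_FILE uses a temp file here; on a real VM point it at
# /etc/initramfs-tools/modules instead.
MODULES_FILE=$(mktemp)
for m in hv_vmbus hv_storvsc hv_blkvsc hv_netvsc; do
    grep -qx "$m" "$MODULES_FILE" || echo "$m" >> "$MODULES_FILE"
done
cat "$MODULES_FILE"
```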
Then, update the initramfs image:
$ sudo update-initramfs -u
Finally, configure the network by editing the /etc/network/interfaces file to set up the network interface named seth0. Indeed, a synthetic NIC is named sethN, instead of ethN for a “legacy” network adapter.
For example, for a DHCP configuration, add the following to /etc/network/interfaces:
iface seth0 inet dhcp
or, for a static IP address:
iface seth0 inet static
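A complete static stanza also needs the address details; the values below are examples only, so substitute your own network settings. In either case, an `auto seth0` line makes the interface come up at boot:

```
auto seth0
iface seth0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
```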
All that remains is to restart and check that the drivers loaded properly using the command:
$ lsmod | grep hv_
For my test I used Windows Server 2008 R2 Hyper-V, and 32-bit Ubuntu Server 10.04 (ubuntu-10.04-server-i386.iso).
Because I set up the VM with a synthetic network adapter, the network is not detected during installation. This is not a problem; it will be detected once the steps outlined above have been carried out after installation.
On this error message, choose <Continue>.
Once the virtual machine is installed and started, the steps outlined above are fairly simple to carry out:
After a reboot (sudo reboot), the synthetic network interface seth0 is up, and the other drivers are loaded:
And that is how to run Linux servers under Hyper-V with decent performance. We still have to wait for the next features (multi-processor support, clock synchronization, and integrated shutdown) to be integrated into the Linux kernel; these features are currently available in the beta of Integration Services 2.1 for SUSE Linux Enterprise Server and Red Hat Enterprise Linux.