Archive

Archive for the ‘Citrix’ Category

Managing XenServer with System Center Virtual Machine Manager (SCVMM) 2012

http://blogs.citrix.com/2011/06/16/managing-xenserver-with-system-center-virtual-machine-manager-scvmm-2012/

Update: Just discovered it was first released in German on the blog of Michel from MS, and that he worked with Thomas from Citrix on the English version, which was then published on the Citrix blog. So we can say this is the result of a partnership between MS and Citrix geeks 🙂

German Source: http://www.server-talk.eu/2011/06/14/citrix-xenserver-fabric-management-in-system-center-virtual-machine-manager-2012/

English Source: http://blogs.citrix.com/2011/06/16/managing-xenserver-with-system-center-virtual-machine-manager-scvmm-2012/

In many of today's data centers you will commonly find hypervisors from multiple vendors used in parallel, for a variety of reasons. Typical candidates are the hypervisors from VMware, Microsoft or Citrix and, in rare cases, Red Hat (KVM).

The challenge with these kinds of heterogeneous environments is the management (i.e. operational procedures, maintenance, support). To allow efficient processes it is necessary to leverage a management platform that is common across the various technologies. With the upcoming 2012 version of Microsoft's System Center Virtual Machine Manager, SCVMM will be able to manage hypervisors from other vendors as well. As Microsoft and Citrix maintain a close and longstanding relationship (see http://www.v-alliance.com for further information), Citrix XenServer is one of the platforms that can be managed by SCVMM.

The intention of this blog, which is a joint effort between Michel Lüscher (Consultant Datacenter – Windows Server and Virtualization) from Microsoft Consulting in Switzerland and myself, is to give you an initial idea about what’s coming in the near future. (The German version of this article can be found here)

Important: Please note that this article refers to the public beta of System Center Virtual Machine Manager 2012 and the XenServer Supplemental Pack only. The RTM (Release to Manufacturing) version might have different features and functionality!

SCVMM management functionalities

SCVMM will be able to perform the following tasks on XenServer hosts.

  • VM Deployment
    • VM Template and Services Deployment
    • Intelligent Placement (Host Rating)
    • Support for PV and HVM Virtual Machines
    • VMM Templates (not XenServer Templates)
  • VM Migration
    • XenMotion (within the attached Resource Pool)
    • LAN Migration between XenServer and the VMM Library
    • No V2V (use P2V instead)
  • Host Management
    • Dynamic Optimization
    • Power Optimization
    • Maintenance Mode
    • Storage
      • Support for all kinds of XenServer Repositories
  • Network

Requirements

SCVMM 2012 will support the following versions of XenServer:

  • Citrix XenServer 5.6 Feature Pack 1
  • Citrix XenServer 5.6 SP2
  • Citrix XenServer “Boston”

Microsoft is committed to continually working on the XenServer support side, to ensure that post-"Boston" releases of XenServer can be managed by SCVMM shortly after their public release. Based on the close partnership, Citrix supports this effort by providing the Microsoft engineering teams with early releases of XenServer as soon as they become available.

Citrix XenServer Supplemental Pack

To allow Microsoft System Center Virtual Machine Manager to manage a XenServer or XenServer Resource Pool, it is necessary to install the “SCVMM Integration Suite Supplemental Pack” within the Dom0 of the respective XenServer(s). Please note that the provided supplemental pack is only compatible with XenServer 5.6 Feature Pack 1 or higher.
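Before installing, it may be worth confirming which XenServer version the host is actually running. A quick way to do this from the Dom0 console (the inventory file and field name below reflect how the 5.6-era releases expose it, so treat this as a sketch rather than an official check) is:

# grep PRODUCT_VERSION /etc/xensource-inventory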

The installation of the supplemental pack can be done in two different ways. The more complex way (which is required for existing XenServers) leverages the XenServer CLI and requires root permissions.

1. Download and mount the installation ISO:

# mkdir /tmp/scvmm
# cd /tmp
# wget http://downloadns.citrix.com.edgesuite.net/akdlm/5622/scvmm-beta-integration.iso
# mount -o loop /tmp/scvmm-beta-integration.iso /tmp/scvmm
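
A quick listing should confirm the ISO is mounted and show the two package directories used in the next step (directory names may differ slightly between builds):

# ls /tmp/scvmm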

2. Install the components:

# cd /tmp/scvmm
# cd xs#xenserver-integration-suite
# ./install.sh
# cd ../xs#xenserver-transfer-vm
# ./install.sh
# cd /
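
(Optional) Before cleaning up, you can verify that the pack registered with the host. Installed supplemental packs are normally reflected in the host's software-version parameter; the exact field names may vary by release, so take this as a sketch:

# xe host-list params=uuid --minimal
# xe host-param-get uuid=<host-uuid> param-name=software-version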


3. Clean up.

# umount /tmp/scvmm
# rmdir /tmp/scvmm
# rm /tmp/scvmm-beta-integration.iso

The easier way to install the supplemental pack is during the initial XenServer setup. During the installation procedure the wizard asks whether further supplemental packs should be installed. All you need to do is insert the Supplemental Pack CD (or ISO) and follow the on-screen instructions.


Integrating Citrix XenServer into SCVMM

After the installation has completed successfully, we need to switch to the SCVMM Admin Console for the final preparatory work. The first step is to create a “Run As Account” within the “Create Run As Account Wizard”, as shown on the screenshot below (Settings Workspace ⇒ Security ⇒ Create Run As Account (Ribbon)):

The next step is to actually integrate the XenServer with SCVMM. This is done using the following wizard: “Fabric Workspace ⇒ Servers ⇒ Add Resources (Ribbon) ⇒ Citrix XenServer Hosts and Clusters”, as shown on the screenshot below:

Now the XenServer should be listed as an available resource within the “Fabric Workspace”, as shown below:

Troubleshooting

In case the integration of XenServer into SCVMM is not successful, check the following items:

  • Is DNS functional and can all relevant servers be resolved? (see the quick check below this list)
  • Is the XenServer Certificate valid and does it correspond with the computer name specified? (Click on “View Certificate” within the wizard)
  • Can SCVMM connect to the XenServer? Run the following command on your SCVMM server to check (replace XenServer name, user name and password accordingly):
    C:\> winrm enum http://schemas.citrix.com/wbem/wscim/1/cim-schema/2/Xen_HostComputerSystem -r:https://MyXenServer:5989 -encoding:utf-8 -a:basic -u:"root" -p:"MyPassword" -skipcacheck -skipcncheck
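
For the DNS point above, a quick sanity check from the SCVMM server (replace the XenServer name with your own) is to make sure the host resolves and responds before re-running the wizard:

C:\> nslookup MyXenServer
C:\> ping MyXenServer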

Further information

Update 2: Michel from Microsoft kindly informed me about two changes to the article:

1) In the “Requirements” section:

Note: Citrix XenServer 5.6 will be supported in the beta version only. Once SCVMM 2012 reaches its final (RTM) version, “Boston” will be the only supported XenServer version. This is due to changes in the core platform of XenServer and its dependencies with SCVMM.

2) In the “Troubleshooting” section:

The CN field and the ElementName in the result of this command have to match exactly. The exact value of these fields must be specified as the computer name when adding the Citrix XenServer host.

Categories: Citrix, Hyper-V, Hyper-V R2, VMM2012

vSphere vs Hyper-V vs XenServer

Virtualizationmatrix.com made a good effort at listing vSphere vs Hyper-V vs XenServer features. You can compare them and even change versions:

http://www.virtualizationmatrix.com/matrix.php

Citrix releases first beta of XenServer codename Cowley, distributed virtual switching included

October 1, 2010 1 comment

Source

Earlier today Citrix announced the first public beta of a new XenServer version, codenamed Cowley. It's unclear whether this version of the hypervisor will be 5.7 or 6.0. Nonetheless it's a significant release, as it finally includes the Open vSwitch technology and the distributed virtual switching capabilities that come with it.

The early bits of Open vSwitch appeared online in August 2009, along with a technology roadmap that made clear the intention to compete against the VMware vNetwork Distributed Switch architecture and the Cisco Nexus 1000V software switch. It took almost an entire year to reach version 1.0. Meanwhile Open vSwitch became a key component of the Xen Cloud Platform (XCP) networking infrastructure, another project supported by Citrix.

In the last few weeks Simon Crosby, CTO of the Data Center and Cloud Computing division at Citrix, spent some time clarifying the need for a more sophisticated virtual switch:

OpenFlow based virtual switches in each server can be logically pooled into a single fabric by an external distributed virtual switch controller to build a dynamic, multi-tenant, programmable datacenter fabric that supports key innovations in cloud computing, as well as allowing us to take advantage of standard x86 CPUs to run a set of rich edge packet-processing functions to secure, direct, filter and otherwise control the delivery of cloud based applications.
With the Open vSwitch in place, the Open Stack open source cloud orchestration layer will be able to exert direct control over the data center fabric to deliver a rich, enterprise ready network layer with powerful controls for security, multi-tenancy, load balancing, monitoring, compliance, charge-back and more.

He also described the potential of the OpenFlow protocol (through 3rd party contribution):

The Host sFlow agent exports hypervisor performance statistics (both the physical server statistics, CPU, memory, I/O) as well as per-VM performance statistics (similar to libxenstat). Since switches, routers and load balancers (Open vSwitch, Vytatta, NetScaler) now exist as software entities, you cannot manage network performance without understanding the performance of the host since an apparent network performance problem could now be due to a lack of computational resources on the server. The sFlow standard provides the comprehensive, scalable monitoring solution needed to manage performance in converged environments.
In addition to providing physical and virtual server statistics, the Host sFlow agent can automatically configure sFlow in the Open vSwitch, greatly simplifying the task of coordinating performance monitoring across the data center.
The Host sFlow agent has been built and tested on XenServer 5.6. The agent is tiny (around 50K) and imposes a negligible load on the hypervisor (it spends most of its time asleep, waking up occasionally to grab some counters, send a UDP datagram and go back to sleep).
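
For reference, wiring up sFlow on an Open vSwitch bridge by hand looks roughly like the following (the Host sFlow agent mentioned above automates this step; the collector address, agent interface, sampling settings and bridge name are placeholders, and on XenServer the default bridge is typically called xenbr0):

# ovs-vsctl -- --id=@sf create sflow agent=eth0 target=\"10.0.0.50:6343\" header=128 sampling=64 polling=10 -- set bridge xenbr0 sflow=@sf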

Recently, Crosby also revealed that future versions of Open vSwitch will support Intel Single Root I/O Virtualization (SR-IOV) technology.

The next version of XenServer includes more than just distributed virtual switching, though. The list of new features includes:

  • VM Protection & Recovery
    The capability to configure scheduled snapshots and (optionally) export entire virtual machines.
  • Web-based Self-Service Provisioning Portal
    Provides browser-based access to selected virtual machines by delegated administrators.
  • Boot from SAN with multi-pathing support
    Boot XenServer hosts with Fibre channel HBAs from a SAN, with multi-pathing support.
  • HA Restart Priority
    Configure HA policies to restart specific VM(s) first, such as StorageLink Gateway VMs or the Distributed vSwitch Controller VM.
  • Support for multi-vCPU in Dom0.
  • Automatic reclamation of storage space after VM snapshots have been deleted.
  • Support for Microsoft Windows 7 SP1, Windows Server 2008 R2 SP1, Red Hat Enterprise Linux (RHEL) 6.0, CentOS 6.0, Oracle Enterprise Linux (OEL) 6.0, Debian Squeeze (32 and 64-bit), and Novell SUSE Linux Enterprise Server (SLES) 11 SP1 guest OSes
  • Support for RHEL 5.x as “generic” guest OS
  • OEM of Brocade HBA drivers and command-line tools
  • Local host caching of VM images to reduce storage TCO for XenDesktop VDI deployments (this feature will require a future version of XenDesktop to work)

The web-based self-service provisioning portal probably comes from the integration with the VMLogix technology that Citrix acquired exactly one month ago.

Categories: Citrix, Cloud

The Open vSwitch – Key Ingredient of Enterprise Ready Clouds

September 15, 2010 Leave a comment
posted by Simon Crosby

I’m often asked what Citrix and the open source community are trying to achieve with the Open vSwitch Project. The Open vSwitch is an open source virtual switch for Xen (and therefore XenServer, and in future perhaps Amazon EC2 and RackSpace), and KVM based virtual infrastructure that replaces the Linux bridge code with a powerful, programmable switch forwarding capability as well as programmable per-virtual interface ACLs. The Open vSwitch supports an emerging industry standard protocol for programming the forwarding plane from an outside controller. This protocol is called OpenFlow. OpenFlow based virtual switches in each server can be logically pooled into a single fabric by an external distributed virtual switch controller to build a dynamic, multi-tenant, programmable datacenter fabric that supports key innovations in cloud computing, as well as allowing us to take advantage of standard x86 CPUs to run a set of rich edge packet-processing functions to secure, direct, filter and otherwise control the delivery of cloud based applications. With the Open vSwitch in place, the Open Stack open source cloud orchestration layer will be able to exert direct control over the data center fabric to deliver a rich, enterprise ready network layer with powerful controls for security, multi-tenancy, load balancing, monitoring, compliance, charge-back and more.
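
To make the idea of an external OpenFlow controller concrete, here is a minimal sketch of pointing an Open vSwitch bridge at such a controller using the standard ovs-vsctl tool (bridge name, physical interface and controller address are placeholders, and on XenServer the bridges are created by the platform rather than by hand):

# ovs-vsctl add-br br0
# ovs-vsctl add-port br0 eth0
# ovs-vsctl set-controller br0 tcp:192.0.2.10:6633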

To understand the need for the Open vSwitch, you have to realize that while CPU virtualization, including hardware support, has evolved rapidly over the last decade, network virtualization has lagged behind pretty badly. The dynamism that virtualization enables is the enemy of today’s locked down enterprise networks. For example, migrating a VM between servers could mean that network based firewall and intrusion detection systems are no longer able to protect it. Moreover, many enterprise networks are administered by a different group than the servers, so VM agility challenges an organizational boundary. What we want to achieve is seamless migration of all network-related state for a workload, along with the workload. The obvious place to effect such network changes is in the last-hop switch – which now, courtesy of Moore’s Law and virtualization, is on the server itself, either in the hypervisor or (increasingly) in smart hardware associated with a 10Gb/s NIC card. The Open vSwitch enables granular control over traffic flows, with per flow admission control, the option for rich per packet processing and control over forwarding rules, granular resource guarantees and isolation between tenants or applications, and enables us to dynamically reconfigure the network state for each VM, or for each multi-VM OVF package, as it is deployed or migrated. Network state for each virtual interface becomes a property of the virtual interface, and as a VM moves about the physical infrastructure, all of the policies associated with the VIF move with it. Suddenly the network team is no longer required in order to move a VM between servers.
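
As an illustration of the kind of per-virtual-interface control described above, flow rules can be pushed into the switch so that a VM's port only forwards traffic from its registered MAC address. In a real deployment the distributed virtual switch controller installs such rules automatically; the bridge name, port number and MAC address below are purely illustrative:

# ovs-ofctl add-flow xenbr0 "priority=100,in_port=5,dl_src=02:16:3e:aa:bb:cc,actions=normal"
# ovs-ofctl add-flow xenbr0 "priority=90,in_port=5,actions=drop"
# ovs-ofctl dump-flows xenbr0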

The Open vSwitch answers many of the shortcomings of our original hypervisor bridge code, which grew up from the Linux bridge code, and adds powerful features traditionally found only in dedicated switching infrastructure, such as packet filtering, flow admission control and programmable forwarding. It permits us to take advantage of the incredible price/performance benefits of packet processing on standard CPUs, and the near term addition of so-called Single Root I/O Virtualization (SR-IOV) to the edge packet processing feature set will enable the most profound changes in data center and cloud networking architecture since the invention of the router. Most importantly, the Open vSwitch is open source, and will serve multiple hypervisors. I fully expect the community to make it available as a drop-in replacement for the VMware vDS, and to deliver versions of it for a future release of Hyper-V. This then raises the exciting prospect of an entirely open and programmable architecture for networking in the cloud that is hypervisor independent. As a result, the richness of both private and public cloud networks (and hence their ability to support a greater proportion of enterprise workloads) will not be hypervisor dependent. Open vSwitch offers the ISV ecosystem an enormous opportunity to innovate in edge networking, free of the constraints of traditional network-appliance centric approaches to application delivery, with new, automated management and control plane functions that simplify, accelerate and ease the management of scalable cloud networks.

From a Citrix-specific perspective, Open vSwitch permits us to dynamically instantiate instances of NetScaler VPX, Branch Repeater VPX, or Access Gateway VPX as value-added networking functions within cloud-based networks, and it will enable us to facilitate the seamless extension of the enterprise network to service provider operated clouds. If, as we expect, the Open vSwitch is more broadly endorsed as a common element of future clouds, with open APIs for dynamic control of the data center fabric, it will catalyze an opportunity for all vendors – including those in the network infrastructure business today – to deliver powerful, secure and differentiated cloud architectures.

Many people wonder if the Open vSwitch is “competitive” with the ambitions of traditional networking vendors or with the Cisco Nexus 1000v virtual switch. The answer is “No – indeed the opposite”: The Nexus 1000v from Cisco provides Cisco customers with a powerful distributed switch architecture that brings the value of the full Cisco edge processing capability to virtualized environments, including Cisco management and toolset support. I would have no hesitation in recommending the Cisco product to Cisco customers. It delivers a value-added proposition on top of the basic concept of a dynamically controllable forwarding plane, very similar to OpenFlow and the Open vSwitch.

It would be easy to implement the Nexus 1000v both in parallel with, or on top of, the Open vSwitch. Indeed the value of OpenFlow has been recognized by one Cisco research group, and HP, Dell and NEC are active participants in the development and use of OpenFlow. Startups such as Netronome and Solarflare are leading the way toward extensive hardware support of the Open vSwitch, permitting native multi-10Gb/s speed switching on server hardware that also hosts virtualized enterprise workloads.

Open vSwitch can be used to replace the VMware vDS, which is a proprietary, rather prosaic implementation of a modestly richer networking stack for vSphere / vCloud. Unfortunately vDS does not separate forwarding and control plane functions clearly, and therefore limits the ability of the ISV ecosystem to innovate on VMware infrastructure. It is tied to the notion of VLANs as network isolation structure, and provides little in the way of differentiated per-application flow treatment. It also has no mapping onto SR-IOV based hardware functions, and therefore has no clear value in a world where increasingly sophisticated second generation SR-IOV NICs are becoming available, with richly programmable forwarding hardware.

The Open vSwitch is a reminder of the incredible power of open source: It catalyzes the contribution of numerous aligned vendors, commoditizes legacy architectures, accelerates the pace of development, and enables a robust ecosystem of value-added providers to exist around a common core feature set. We can look forward to enabling an ecosystem of many value-added networking vendor products around the (commoditized) forwarding function found in all switches and NICs today.

Categories: Citrix, Cloud