10 things you need to know about Citrix XenServer 6.5

Daily life becomes our Zen training. — Shunryu Suzuki

Citrix XenServer isn’t as popular as VMware ESXi or Microsoft Hyper-V. But if you already use Citrix products, XenServer is worth considering, because you already have expertise with this vendor.

1. Brief history

On 13 January 2015 Citrix released XenServer 6.5, offering a 64-bit Dom0 and significant networking and disk performance increases. The XenServer control domain can now directly access far more memory (RAM) and address more PCIe adapters than before, improving the scalability and performance of the overall system.

Xen’s first public release was in 2003; it became part of Novell SUSE Linux 10 in 2005 (and later Red Hat). In October 2007 Citrix acquired XenSource (the main maintainer of the Xen code) and released XenServer under the Citrix brand. Version 5.6 followed in May 2010, 5.6 SP2 in May 2011, XenServer 6.0 in September 2011, XenServer 6.1 in September 2012, and XenServer 6.2 in June 2013.

2. Architecture

XenServer supports both paravirtualization and hardware-assisted virtualization: a guest runs either a modified (paravirtualized) OS, or an unmodified OS on a CPU with hardware virtualization extensions. The latter is now the common case, as it is less restrictive and CPUs with Intel VT-x or AMD-V have become standard. Device drivers are provided through a Linux-based (CentOS) guest running in a control virtual machine (Dom0).
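On a running host you can see which mode each guest uses from the `xe` CLI in Dom0. A minimal sketch (VM names in the output depend on your environment); HVM guests are recognizable by a non-empty `HVM-boot-policy` parameter:

```shell
# Run on the XenServer host (Dom0) console.
# For each non-control-domain VM, report PV vs. HVM mode.
for uuid in $(xe vm-list is-control-domain=false --minimal | tr ',' ' '); do
  name=$(xe vm-param-get uuid="$uuid" param-name=name-label)
  policy=$(xe vm-param-get uuid="$uuid" param-name=HVM-boot-policy)
  if [ -n "$policy" ]; then mode=HVM; else mode=PV; fi
  echo "$name: $mode"
done
```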


Fig. 1 – Citrix XenServer Architecture

 

3. What is the difference between XenServer and the open-source Xen Project Hypervisor?

XenServer is built on the Xen Project hypervisor. In addition to the open-source hypervisor itself, Citrix XenServer includes:

  • XenCenter – A Windows client for VM management
  • VM templates for installing popular operating systems as VMs
  • vGPU support for graphics-accelerated VMs
  • Resource pools for simplified management of hosts, storage, and networking
  • Enterprise-level support

4. Re-introduction and Improvements to Workload Balancing

XenServer 6.5 sees the return of the Workload Balancing (WLB) virtual appliance. WLB automates moving virtual machines between hosts to spread network, CPU, and disk load evenly and maximize throughput. It keeps a history of CPU, disk, and network usage for every VM in the pool, so it can predict where workloads are best placed, and it gives system administrators deep insight into system performance for infrastructure optimization.
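Once the WLB appliance is deployed, a pool is connected to it from the CLI roughly as follows. This is a sketch: the appliance address, port, and all credentials below are placeholders, not values from the article:

```shell
# Point the resource pool at a deployed WLB virtual appliance
# (address and credentials are example placeholders).
xe pool-initialize-wlb \
  wlb_url=wlb.example.local:8012 \
  wlb_username=wlbadmin wlb_password=wlbpass \
  xenserver_username=root xenserver_password=hostpass

# Enable WLB for the pool, then ask it where a given VM fits best.
xe pool-param-set uuid=$(xe pool-list --minimal) wlb-enabled=true
xe vm-retrieve-wlb-recommendations vm=myvm
```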

5. vGPU improvements

XenServer 6.5 includes performance, scalability, usability, and functional improvements to vGPU.

XenServer will scale as your hardware grows, with support for more physical GPUs per host: it now supports up to 96 vGPU-accelerated VMs per host, compared with 64 in XenServer 6.2 SP1, further reducing the TCO of vGPU deployments.
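Assigning a vGPU to a VM can also be done from the CLI. A sketch, with all UUIDs as placeholders you would fill in from the list commands:

```shell
# List physical GPU groups and the vGPU types they can offer.
xe gpu-group-list
xe vgpu-type-list

# Create a vGPU of the chosen type for a (halted) VM.
# All <...> values are placeholders taken from the commands above.
xe vgpu-create vm-uuid=<vm-uuid> \
  gpu-group-uuid=<gpu-group-uuid> \
  vgpu-type-uuid=<vgpu-type-uuid>
```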

6. In-memory Read Caching

In scenarios where VMs are deployed from a shared golden image and thus share much of their data, the few blocks each VM writes are stored in a differencing disk unique to that VM. Read caching improves a VM’s disk performance because, after the initial read from external storage, data is cached within the XenServer host’s memory. All VMs then benefit from in-memory access to the contents of the golden image, reducing the amount of I/O going to and from physical storage.
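The read cache lives in the control domain’s memory, so the more RAM Dom0 has, the more of the golden image can stay cached. A minimal sketch of raising Dom0 memory on the host (the 4096M figure is an example value, and a host reboot is required for it to take effect):

```shell
# Run on the XenServer host; sets the Xen boot parameter that
# controls how much RAM Dom0 receives. Example value: 4 GiB.
/opt/xensource/libexec/xen-cmdline --set-xen dom0_mem=4096M,max:4096M
```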

7. Updated Open vSwitch

XenServer 6.5 includes the latest Open vSwitch release, OVS 2.1.3, which supports megaflows. Megaflows reduce the number of entries required in the flow table in most common situations and improve Dom0’s ability to handle many server VMs connected to a large number of clients.
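You can inspect this from Dom0 with the standard OVS tools. A sketch; the exact flows shown depend entirely on the host’s traffic:

```shell
# Confirm the OVS version shipped with the host.
ovs-vsctl --version

# Dump the kernel datapath flows. With megaflows, entries carry
# wildcard masks, so one entry can match many related microflows.
ovs-dpctl dump-flows
```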

8. Distributed Virtual Switch

XenServer 6.5 contains a new DVSC version from Nicira (DVSC-Controller-37734.1), which includes platform-related security fixes (for example, for OpenSSL and the Bash Shellshock vulnerability).

9. Lower Deployment Costs with Space Reclamation on the Array

This feature allows you to free up unused blocks (for example, those left by deleted VDIs in an SR) on a LUN that has been thinly provisioned by the storage array. Deletions within LVM are communicated directly to the array, and the reclaimed space is then free for the array to reuse.
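XenCenter exposes this as a “Reclaim freed space” action on the SR; from the CLI it can be triggered via the trim host plugin. A sketch, assuming an LVM-based SR on an array that supports unmap/trim, with the UUIDs as placeholders:

```shell
# Trigger space reclamation (TRIM) for one SR on one host.
# <host-uuid> and <sr-uuid> come from xe host-list / xe sr-list.
xe host-call-plugin host-uuid=<host-uuid> \
  plugin=trim fn=do_trim args:sr_uuid=<sr-uuid>
```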

10. Live LUN Expansion

In order to fulfill dynamic capacity requirements, you may wish to add capacity on the storage array and increase the size of the LUN provisioned to the XenServer host. The Live LUN Expansion feature allows you to increase the size of the LUN without any VM downtime.
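In practice this means growing the LUN on the array first, then rescanning the SR so XenServer picks up the new size. A sketch, with the SR name and UUID as placeholders:

```shell
# Find the SR's UUID (the name-label is an example placeholder).
xe sr-list name-label="iSCSI SR" --minimal

# Rescan the SR so the host detects the enlarged LUN, then verify.
xe sr-scan uuid=<sr-uuid>
xe sr-param-list uuid=<sr-uuid> | grep physical-size
```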

 


Gica Livada

Technical Consultant at IRIS Luxembourg SA
Gica is working in Luxembourg and is former member of the VMware Centre of Excellence team from IBM Delivery Center in Brno, Czech Republic. He is passionate about virtualization and cloud technologies, holds multiple industry certifications from IBM, VMware, Citrix, Microsoft and he is also vExpert 2014, 2015 and 2016.
