Monthly Archives: March 2011

Xen 4.1 releases

After 11 months of development and 1906 commits (more than 6 a day!), the Xen.org community is proud to present its new stable Xen 4.1 release. We also want to take this opportunity to thank the 102 individuals and 25 organisations who have contributed to the Xen codebase, and the 60 individuals who made just over 400 commits to the Xen subsystem and drivers in the Linux kernel.

New Xen Features

Xen 4.1 sports the following new features:

  • A re-architected XL toolstack that is functionally nearly equivalent to XM/XEND
  • Prototype credit2 scheduler designed for latency-sensitive workloads and very large systems
  • CPU Pools for advanced partitioning
  • Support for large systems (>255 processors and 1GB/2MB super page support)
  • Support for x86 Advanced Vector eXtension (AVX)
  • New Memory Access API enabling integration of 3rd party security solutions into Xen virtualized environments
  • Even better stability through our new automated regression tests

Further information can be found in the release notes.

XL Toolstack: Xen 4.1 includes a re-architected toolstack based on the new libxenlight library, which provides a simple and robust API for toolstacks. XL is functionally nearly equivalent to XM and almost entirely backwards compatible with existing XM domain configuration files. The XEND toolstack remains supported in Xen 4.1; however, we strongly recommend that users upgrade to XL. For more information, see the Migration Guide. Projects are underway to port XCP's xapi and libvirt to the new libxenlight library.
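Because XL reads the same configuration format as XM, existing guests usually need no changes; the sketch below shows what a minimal PV guest configuration might look like (all names, paths and values are illustrative, not taken from any real deployment):

```
# /etc/xen/guest.cfg -- a minimal PV guest configuration (illustrative values)
name   = "guest"
memory = 512
vcpus  = 1
kernel = "/boot/vmlinuz-2.6.38-xen"
disk   = [ "phy:/dev/vg0/guest,xvda,w" ]
vif    = [ "bridge=xenbr0" ]
```

With such a file in place, `xl create /etc/xen/guest.cfg` starts the guest, `xl list` shows running domains, and `xl shutdown guest` requests a clean shutdown, mirroring the familiar xm subcommands.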

Credit2 Scheduler: The credit1 scheduler has served Xen well for many years, but it has several weaknesses, including poor performance for latency-sensitive workloads such as network traffic and audio. The credit2 scheduler is a complete rewrite, designed with latency-sensitive workloads and very large numbers of CPUs in mind. We still call it a prototype scheduler because the algorithm needs more work before it is ready to become the main scheduler. However, it is stable and will perform better than credit1 for some workloads.
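Those who want to experiment with credit2 can select it on the hypervisor command line at boot. A sketch of a GRUB entry follows; the file paths and kernel version are illustrative, and only the `sched=credit2` option is the point of the example:

```
# GRUB (menu.lst) entry -- illustrative paths; sched=credit2 selects the
# prototype scheduler in place of the default credit scheduler
title Xen 4.1 (credit2 scheduler)
  kernel /boot/xen-4.1.gz sched=credit2
  module /boot/vmlinuz-2.6.38-xen console=hvc0 root=/dev/sda1
  module /boot/initrd-2.6.38-xen.img
```

Since credit2 is still a prototype, this is best tried on test systems rather than production hosts.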

CPU pools: The default credit scheduler provides only limited mechanisms (pinning VMs to CPUs and using weights) for partitioning a machine and allocating CPUs to VMs. CPU pools provide a more powerful and easier-to-use way to partition a machine: the physical CPUs are divided into pools, each pool runs its own scheduler, and each running VM is assigned to one pool. This not only gives a more robust and user-friendly way to partition a machine, but also allows different schedulers to be used for different pools, depending on which scheduler works best for each workload.
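As a sketch of how this looks in practice (the pool name, scheduler choice and CPU numbers below are all illustrative; consult the xl documentation for the exact configuration syntax on your version):

```
# /etc/xen/latency-pool.cfg -- illustrative pool definition:
#   name  = "latency-pool"
#   sched = "credit2"
#   cpus  = ["4", "5", "6", "7"]

xl cpupool-create /etc/xen/latency-pool.cfg   # create the pool
xl cpupool-list                               # show pools and their CPUs
xl cpupool-migrate guest latency-pool         # move a running VM into the pool
```

The remaining CPUs stay in the default pool under the default scheduler, so latency-sensitive guests can be isolated without affecting the rest of the machine.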

Large Systems: Xen 4.1 has been extended and optimized to take advantage of new hardware features, increasing performance and scalability in particular for large systems. Xen now supports the Intel x2APIC architecture and is able to support systems with more than 255 CPUs. Further, support for EPT/VTd 1GB/2MB super pages has been added to Xen, reducing the TLB overhead. EPT/VTd page table sharing simplifies the support for Intel’s IOMMU by allowing the CPU’s Enhanced Page Table to be directly utilized by the VTd IOMMU. Timer drift has been eliminated through TSC-deadline timer support that provides a per-processor timer tick.

Advanced Vector eXtension (AVX): Support for the xsave and xrstor instructions has been added, enabling Xen guests to utilize the AVX instructions available on newer Intel processors.

Memory Access API: The mem_access API has been added to enable suitably privileged domains to intercept and handle memory faults. This extends Xen's security features in a new direction and enables third parties to invoke malware detection software or other security solutions on demand from outside the virtual machine.


During the development cycle of Xen 4.1, the Xen community worked closely with upstream Linux distributions to ensure that Xen dom0 support and Xen guest support is available from unmodified Linux distributions. This means that using and installing Xen has become much easier than it was in the past.

  • Basic dom0 support was added to the Linux kernel, and a vanilla 2.6.38 kernel is now able to boot on Xen as the initial domain. There is still some work to do, as the initial domain is not yet able to start any VMs, but this and other improvements have already been submitted to the kernel community or will be soon.
  • Xen developers rewrote the Xen PV-on-HVM Linux drivers in 2010 and submitted them for inclusion in upstream Linux kernel. Xen PV-on-HVM drivers were merged to upstream Linux 2.6.36, and various optimizations were added in Linux 2.6.37. This means that any Linux 2.6.36 or 2.6.37 kernel binary can now boot natively, on Xen as dom0, on Xen as PV guest and on Xen as PV on HVM guest. For a full list of supported Linux distributions see here.
  • Xen support for upstream Qemu was developed, so that upstream Qemu can be used as the Xen device model. Our work has received good feedback from the Qemu community, but is not yet in the mainline.
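A quick way to confirm that a distribution kernel has in fact booted under Xen (whether as dom0 or as a guest) is to look at the hypervisor interface the pvops code exposes in sysfs. A hedged example (output will vary by host; on a non-Xen machine /sys/hypervisor is absent or empty):

```
# Run inside a running domain on a Xen host:
cat /sys/hypervisor/type            # prints "xen" when running on Xen
cat /sys/hypervisor/version/major   # hypervisor major version, e.g. 4
cat /sys/hypervisor/version/minor   # hypervisor minor version, e.g. 1
dmesg | grep -i paravirtualized     # PV boot messages from the kernel
```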

The Xen development community recognizes that there is still some way to go, thus we will continue to work with upstream open source projects to ensure that Xen works out-of-the-box with all major operating systems, allowing users to get the benefits of Xen such as multi-OS support, performance, reliability, security and feature richness without incurring the burden of having to use custom builds of operating systems.

More Info

Downloads, release notes, data sheet and other information are available from the download page. Links to useful wiki pages and other resources can be found on the Xen support page.

Xen hackathon day 1

After coffee, breakfast and introductions, the Xen hackathon went into full swing. 26 people from different companies, universities and countries attended the event. There was lots of discussion of project ideas, alongside work on code and on specific problems.

Some of my personal highlights were:

  • Demo of the Linpicker display server for Xen based on the secure virtual platform
  • Working on the libxenlight driver for libvirt
  • Developing the SSL protocol in OCaml using the Mirage framework
  • Planning new features in libxl, such as block device scripts
  • Prototyping and planning the port of XAPI to libxenlight

We also had unexpected outcomes, such as the creation of new project ideas for GSoC. There was also discussion about a new community-driven project: more on this later, as it is worth much more coverage than I can give today. And of course people were getting to know each other better.

After a hard day's work, we are all looking forward to some food and beer at The Castle pub. A big thank you to Ian Campbell, who did most of the groundwork in organizing the event. And of course there are two more days to go.

Xen.org accepted for GSoC 2011

A few minutes ago, Google published the list of mentoring organisations for GSoC 2011. I am pleased to let you know that Xen.org has again been accepted this year.

We already had a lot of interest from students and also have a list of interesting projects. Here is some useful information for students:

  • Our project ideas list: if you are a student and are interested in an idea, please get in touch with the mentors and CC
  • The GSoC 2011 timeline and FAQ
  • GSoC Mentoring: the guide which we as a mentoring organisation use for our mentors and to administer the project.

We will publish additional information (accessible from the ideas list), such as a student and mentor checklist. We are also considering a timed weekly IRC session (details to be announced) where prospective students can get in touch with mentors on IRC.

Xen.org spring clean

In the last few weeks, we have been putting a draft plan in place to rejuvenate the website. In preparation for this, I have reviewed the content on the site, sifted through Google Analytics data, run orphaned-page and link checkers, and realized that the site contains a lot of content that is probably not of much use to you any more. Identifying legacy content and performing a spring clean first will make the rejuvenation easier and faster, as we will need to migrate less content to the new site. But I need your help to find out which content you feel is important.

There are a number of areas which could benefit from a spring clean and are not accessed frequently any more:

  • Removing old mailing lists: I posted a proposal to all mailing lists and created a poll for community members to let us know whether any of the affected lists should be retained. The poll will be open until the end of March.
  • Old versions of Xen including documentation: Xen 2.0.x to 3.x are still available as downloads in the archives.
  • Presentations and videos from Xen events held several years ago
  • Solution Search: it is not quite clear to me how widely used and useful this tool is.

I would like to hear from you about how valuable these resources are and how to approach them going forward. I have created a poll so that you can let us know your views. The poll will be open until the beginning of April.

We are also archiving old and unused repositories, specifically a few private development branches: kernels/rhel3x.hg, kernels/rhel4x.hg, kernels/sles9x.hg, linux-2.6-xen.hg (an old Mercurial mirror of the kernel) and xen-api.hg (pre-2006 work on xen-api; up-to-date versions are prefixed with XCP).

Using XVP to manage XCP 1.0 VMs

This is a guest post by Colin Dean, author of XVP, a set of free open source tools for administering VMs running on Xen Cloud Platform and XenServer. Colin has been writing system-level software, especially client-server tools, for a variety of OS platforms since the late 1980s. He first got interested in OS virtualization in 2000, and for the last couple of years has been managing a XenServer installation at Durham University in the UK.

It’s nearly a year since I first blogged on xen.org about XVP. Since then, thousands of copies of the XVP appliance VM have been downloaded, and membership of the XVP mailing list grows almost every day.

In case you hadn’t heard, XVP allows you to boot, shut down, reboot, suspend, resume and migrate VMs, and access their consoles, from any Windows, Linux or Mac desktop that has a web browser and a Java runtime. It has a much simpler interface than XenCenter, and allows you to grant different rights to different users, so they can perform selected operations on all VMs in a pool or on selected individual VMs. It also has the concept of groups of VMs: by assigning tags to VMs, you can easily give users access to sets of VMs.

A number of Internet hosting providers have deployed XVP to give their customers access to the VMs hosted for them. Other organizations, including universities, use XVP internally because it provides a quick and easy way to manage VMs, especially for people whose PCs don’t run Windows.

The XVP appliance bundles together the components of XVP (a VM console proxy server, a web interface for accessing pools, and various utilities) which were originally available separately.  Using the appliance makes the whole suite very easy to use out of the box: after importing the appliance XVA file into XCP or XenServer, you just start it and answer a few questions on its console to get going. After that, you can manage the appliance (e.g. adding pools and users) via a simple menu-based interface. The appliance uses CentOS 5 as its base operating system, and is designed so that XVP and CentOS updates can be applied easily to keep it secure and up to date.  Appliances currently based on CentOS 5.5 will readily upgrade to CentOS 5.6 when the latter is released any day now.
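The import itself uses the standard xe command-line interface on the host; a sketch follows, assuming a hypothetical file name and appliance label (check the appliance's own documentation for the actual names):

```
# On an XCP/XenServer host (file name and VM label are illustrative)
xe vm-import filename=/root/xvp-appliance.xva
xe vm-list name-label=XVP        # find the imported appliance's UUID
xe vm-start uuid=<uuid>          # boot it, then answer the setup
                                 # questions on its console
```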

You can manage a single physical host, a single pool, or multiple Xen Cloud Platform and/or XenServer pools with a single instance of XVP.  The current release of XVP is fully compatible with the latest XCP 1.0 release.  Enhancements to XVP in the last year include tunneling of console connections over HTTP and HTTPS, support for LDAP-based user databases (including Active Directory), and finer-grained control over what users can see and do.

To find out more, visit the XVP website, where you’ll find download and install instructions, screenshots, and links to join the mailing list.

XCP 1.0 released

After 16 months of development, the Xen.org community is proud to present the first full version of the Xen Cloud Platform (XCP). We want to thank the project team who made this happen.

A full feature list as well as the install image and source packages can be found on the download page.

The following new features and improvements have been added since the XCP 0.5 release last summer:

  • Includes Xen hypervisor version 3.4.2
  • Includes Linux 2.6.32 privileged domain
  • VM Protection and Recovery: configure scheduled snapshots and (optional) archive of virtual machines via snapshot or export
  • Local host storage caching of VM images to reduce load on shared storage
  • Boot from SAN with multipathing support: Xen hypervisor hosts with HBAs can boot from a SAN, with multipathing support.
  • Improved Linux guest support: Ubuntu templates, Fedora 13/Red Hat Enterprise Linux (RHEL) 6 templates, RHEL / CentOS / Oracle Enterprise Linux versions 5.0 to 5.5 support with a generic “RHEL 5” template
  • Enhanced guest OS support for Windows 7 SP1, Windows Server 2008 R2 SP1, Windows Server 2003, and SUSE Linux Enterprise Server (SLES) 11 SP1
  • Improved MPP RDAC multipathing including path health reporting and alerting through XAPI
  • Snapshot improvements: improved reclamation of space after VM snapshots are deleted, even if the VM is running
  • Support for blktap2 disk backend driver rather than blktap1
  • Support for Citrix XenCenter 5.6 FP1 Windows-based GUI management tool (see here)
  • Support for the OpenStack Bexar release

XCP is significant for a number of reasons: it allows the community to develop interesting new functionality against a mature, stable and scalable virtualization stack. If you want to get involved, check out the project’s wish list and get in touch with the XCP team via the mailing list.

Although XCP can be used as a stand-alone solution to build private clouds or as an enterprise server virtualization solution, there are significant opportunities to extend, innovate and build on top of XCP. Check out the list of open source projects and commercial solutions which already do this.

XCP integrates seamlessly with the OpenStack Bexar release: this means that the Xen hypervisor and XCP are part of an end-to-end open source software stack covering all components from the bare metal to cloud orchestration software. Over the last year, you have seen the Xen community working more closely with downstream Linux and Qemu; the same is now happening with upstream projects such as OpenStack and OpenNebula.

Unlike the Xen Hypervisor project, XCP delivers an installable binary. This represents a step-change in usability and enables the Xen developer community to more directly engage with its users.

You can find more information on XCP on the XCP home page and on the Wiki. And thank you again, to everybody who made this release happen!