Monthly Archives: May 2012

Video: Intro to Virtualization, Xen, XCP, and the Cloud

This is a guest blog post by Patrick F. Wilbur, a long-time Xen user and active member of the Xen community.

You might know me from Xen Day and Xen training events in the past, or perhaps from the Running Xen book. I recently taught a lesson in an operating systems lab class on both personal virtualization and enterprise-grade virtualization, where the latter portion focused on Xen, Xen Cloud Platform (XCP), and even a little bit of the XenAPI (XAPI). I decided to share the video recording of the lab with the community. While by no means a comprehensive treatment of all relevant topics, it serves as a brief, high-level introduction to Xen and XCP. I hope you enjoy it!

In the full lesson, we begin by introducing virtualization in general and Type 2 personal virtualization solutions (e.g. VirtualBox), and their usefulness for sandboxing, testing, and checkpointing. Where the video (above) picks up, we then contrast those solutions with Xen (a Type 1 hypervisor), and boot XCP out-of-the-box to demonstrate a convenient and fully-featured way to get an enterprise-grade virtualization solution up and running. We conclude with a simple XenAPI scripting example coded in Python, and briefly discuss how such a fully-featured API makes Xen ready for your cloud computing needs.

The virtual machine disk images that were used in this video are available online for download. The example Python script is also available.
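
For readers who have never scripted against the XenAPI, here is a minimal sketch (not the actual lab script linked above) of the sort of thing the video demonstrates: it logs into an XCP host and prints the name and power state of each guest. The host URL and credentials are placeholders, and error handling is omitted for brevity.

    import XenAPI  # Python bindings shipped with XCP/XenServer

    # Placeholder host and credentials -- substitute your own XCP host here.
    session = XenAPI.Session("http://xcp-host.example.com")
    session.xenapi.login_with_password("root", "secret")
    try:
        for vm_ref in session.xenapi.VM.get_all():
            record = session.xenapi.VM.get_record(vm_ref)
            # Skip templates and the control domain (dom0); list only real guests.
            if not record["is_a_template"] and not record["is_control_domain"]:
                print("%s: %s" % (record["name_label"], record["power_state"]))
    finally:
        session.xenapi.session.logout()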

Much of the material is taken from the complete 2011 Xen Day Boston slides, which go into much more detail and are available online.

Xen Document Day: May 28th

We have another Xen Document Day coming up next Monday. Xen Document Days are for people who care about Xen documentation and want to improve it. We introduced Documentation Days because working on documentation in parallel with like-minded people is just more fun than working alone! Everybody who can contribute is welcome to join!

For a list of items that need work, check out the community-maintained TODO and wishlist. Of course, you can work on anything you like: the list just provides suggestions.
Continue reading

libxl event API improvements

Over the past few months we have been working on improving the API for the libxl library. libxl is to become the base layer for all Xen toolstacks. We intend the version of libxl in Xen 4.2 to have a stable interface, with which we will maintain backward compatibility for some time to come.

The Xen 4.1 libxl API had some awkward features. In particular, dealing with long-running operations, and getting information about events such as domain death, was difficult to do correctly in daemons such as libvirt’s libvirtd and XCP/XenServer. For example, the wait-for-domain-death facility did not tell you which domain had died! And many of the functions would block a whole event-loop-based process while a long-running operation completed. The new arrangements are intended to support everything from the simple xl command line utility, to event-callback-based daemons such as libvirtd, and also to be convenient for use in multithreaded programs.

This has required a lot of behind-the-scenes infrastructure, which insulates libxl code implementing specific VM operations from the need to know about the calling toolstack’s concurrency model. As I write this, the changes are already in the xen-unstable.hg tree undergoing testing, and we are putting the finishing touches to the APIs.

Continue reading

Update on the new site

A few months ago we started developing the new website. It’s time for an update! Most of the framework and look & feel are now in place. The main outstanding tasks are to migrate content, update it where needed, put the finishing touches on the new site, and handle migration details (such as redirecting links to existing articles to the new site). This means that we should be able to launch the new site in July.

Major new features

Besides updating the content, the site will feature the following major new functionality:

  • User registration, including social functionality. You don’t have to register to make use of the site though!
  • A Question and Answer system similar to Stack Overflow
  • A self-service directory for solutions (services, consulting, products, open source products and research papers) that are related to Xen hosted projects or make use of them
  • The blog will be integrated into the new site
  • A download system

Continue reading

NUMA and Xen: Part II, Scheduling and Placement

Where were we?

So, here is what we have said up to now. Basically:

  1. NUMA is becoming increasingly common;
  2. properly dealing with NUMA is important for performance;
  3. one can tweak Xen for NUMA, but it would be nice for that to happen automagically!

So, let’s tackle some automatic NUMA handling mechanisms this time!

NUMA Scheduling, whatsit

Suppose we have a VM with all its memory allocated on NODE#0 and NODE#2 of our NUMA host. As already said, the best thing to do would be to pin the VM’s vCPUs to the pCPUs of those two nodes. However, pinning is quite inflexible: what if those pCPUs get very busy while there are completely idle pCPUs on other nodes? It will depend on the workload, but it is not hard to imagine that having some chance to run, even if on a remote node, would be better than not running at all! It is therefore preferable to give the scheduler some hints about where a VM’s vCPUs should be executed. It can then try its best to honor these requests, but not at the cost of subverting its own algorithm. From now on, we’ll call this hinting mechanism node affinity (don’t confuse it with CPU affinity, which refers to static CPU pinning).
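
To make the contrast concrete, here is a small sketch of the static pinning approach described above, driven from Python via the xl vcpu-pin command. The node-to-pCPU mapping below is a made-up example; on a real host you would read the actual topology yourself (for instance from the NUMA information reported by xl info -n).

    import subprocess

    # Made-up example topology: which pCPUs belong to which NUMA node.
    # On a real host, derive this from the host topology (e.g. `xl info -n`).
    NODE_TO_PCPUS = {0: "0-3", 2: "8-11"}

    def pin_vm_to_nodes(domain, nr_vcpus, nodes):
        """Statically pin every vCPU of `domain` to the pCPUs of `nodes`.

        This is the inflexible approach discussed above: the vCPUs may only
        run on these pCPUs, even when they are busy and other nodes are idle.
        Node affinity, by contrast, is just a hint to the scheduler.
        """
        cpus = ",".join(NODE_TO_PCPUS[n] for n in nodes)
        for vcpu in range(nr_vcpus):
            subprocess.check_call(["xl", "vcpu-pin", domain, str(vcpu), cpus])

    # For example, pin a 4-vCPU guest to the pCPUs of NODE#0 and NODE#2:
    # pin_vm_to_nodes("my-guest", 4, [0, 2])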

Continue reading

Benchmarking the new PV ticketlock implementation

This post was written collaboratively by Attilio Rao and George Dunlap.

Operating systems are generally written assuming that they are in direct control of the hardware. So when we run operating systems in virtual machines, where they share the hardware with other operating systems, this can sometimes cause problems. One of the areas addressed by a recently proposed patch series is the problem of spinlocks on a virtualized system. So what exactly is the problem here, and how does the patch solve it? And what is the effect of the patch when the kernel is running natively?

Spinlocks and virtualization

Multiprocessor systems need to be able to coordinate access to important data, to make sure that two processors don’t attempt to modify things at the same time. The most basic way to do this is with a spinlock. Before accessing data, the code will attempt to grab the spinlock. If code running on another processor is holding the spinlock, the code on this processor will “spin” waiting for the lock to be free, at which point it will continue. Because those waiting for the spinlock are doing “busy-waiting”, code should try to hold the spinlock only for relatively short periods of time.
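
Purely as a conceptual illustration (real kernel spinlocks are built from atomic CPU instructions, not from Python objects), the sketch below shows the acquire/release pattern just described; the loop in acquire() is the busy-waiting, or "spinning".

    import threading

    class SpinLock(object):
        """Toy spinlock: waiters busy-wait instead of sleeping."""

        def __init__(self):
            # A non-blocking try-acquire stands in for the atomic
            # test-and-set instruction a real spinlock would use.
            self._flag = threading.Lock()

        def acquire(self):
            # Spin: keep retrying the test-and-set until it succeeds.
            while not self._flag.acquire(False):
                pass  # busy-waiting burns CPU, so hold the lock only briefly

        def release(self):
            self._flag.release()

A waiter here burns CPU time until the holder calls release(), which is exactly why spinlocks should only protect short critical sections, and why, as the rest of the post goes on to discuss, they can behave badly when the "processor" holding the lock is a virtual CPU that the hypervisor has descheduled.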

Continue reading