Some time ago Konrad Rzeszutek Wilk (the Xen Linux maintainer) came up with a list of possible improvements to the Xen PV block protocol, which Xen guests use to avoid the overhead of emulated disks.
This document is quite long, and the list of possible improvements is not small either, so we decided to implement them step by step. One of the first items that seemed like a good candidate for a performance improvement is what are called “indirect descriptors”. The idea is borrowed from the VirtIO protocol and, put simply, consists of fitting more data into a single request. I will expand on how this is done later in the blog post, but first I would like to talk a little about the Xen PV disk protocol, in order to understand it better.
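To get an intuition for why fitting more data into a request matters, here is a back-of-the-envelope sketch. A classic blkif request carries only a handful of direct segment descriptors, while an indirect request points at whole pages full of descriptors. The constants below reflect the protocol as I understand it (11 direct segments, 4KB pages); treat the descriptor size as illustrative.

```c
#include <stdint.h>

/* Back-of-the-envelope model of direct vs. indirect descriptors.
 * Constants are illustrative approximations of the blkif protocol. */

#define PAGE_SIZE              4096u
#define SEGS_PER_REQUEST       11u   /* direct segments in a classic request */
#define SEG_DESC_SIZE          8u    /* size of one segment descriptor (assumed) */
#define SEGS_PER_INDIRECT_PAGE (PAGE_SIZE / SEG_DESC_SIZE)

/* Maximum data (bytes) a classic request can describe: 11 * 4KB = 44KB. */
static uint32_t max_bytes_direct(void)
{
    return SEGS_PER_REQUEST * PAGE_SIZE;
}

/* With indirect descriptors, each indirect page holds many segment
 * descriptors, so a single request can cover megabytes of data. */
static uint32_t max_bytes_indirect(uint32_t indirect_pages)
{
    return indirect_pages * SEGS_PER_INDIRECT_PAGE * PAGE_SIZE;
}
```

Under these assumptions a single indirect page already lets one request describe 2MB of data instead of 44KB, which is where the performance win comes from: fewer requests, and fewer notifications, for the same amount of I/O.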
Xen PV disk protocol
The Xen PV disk protocol is very simple: it only requires a shared memory page between the guest and the driver domain in order to queue requests and read responses. This shared memory page is 4KB in size and is mapped by both the guest and the driver domain. The structure used by the frontend and the backend to exchange data is also very simple, and is usually known as a ring: a circular queue plus counters pointing to the last produced requests and responses.
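The ring described above can be sketched roughly as follows. This is a simplified single-producer model in the spirit of the Xen shared ring, not the real `io/ring.h` macros; the field and type names are mine, and a real implementation also needs memory barriers and grant-table mappings.

```c
#include <stdint.h>

/* Simplified sketch of a PV-style shared ring (illustrative names). */

#define RING_SIZE 32  /* power of two, so free-running indexes wrap cheaply */

struct request  { uint64_t id; uint64_t sector; };
struct response { uint64_t id; int16_t  status; };

struct ring {
    uint32_t req_prod;   /* requests produced by the frontend  */
    uint32_t req_cons;   /* requests consumed by the backend   */
    uint32_t rsp_prod;   /* responses produced by the backend  */
    uint32_t rsp_cons;   /* responses consumed by the frontend */
    struct request  req[RING_SIZE];
    struct response rsp[RING_SIZE];
};

/* Frontend side: queue a request if a slot is free. */
static int push_request(struct ring *r, const struct request *rq)
{
    if (r->req_prod - r->req_cons == RING_SIZE)
        return -1;                        /* ring full */
    r->req[r->req_prod % RING_SIZE] = *rq;
    r->req_prod++;                        /* real code needs a write barrier here */
    return 0;
}

/* Backend side: consume the next pending request, if any. */
static int pop_request(struct ring *r, struct request *rq)
{
    if (r->req_cons == r->req_prod)
        return -1;                        /* ring empty */
    *rq = r->req[r->req_cons % RING_SIZE];
    r->req_cons++;
    return 0;
}
```

In the real protocol this structure lives inside the shared 4KB page (requests and responses actually share the same slots, which I have split here for clarity), and a notification over an event channel tells the other end that the producer index has moved.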
My name is Julien Grall. I joined the Citrix Open Source team a few months ago to work on Xen on ARM with Ian Campbell and Stefano Stabellini. Since Citrix has joined the Linaro Enterprise Group (LEG), I’m also part of the virtualization team which takes care of Xen, KVM and QEMU within Linaro.
A couple of weeks ago, I attended my first Linaro Connect Europe, held in Dublin from the 8th to the 12th of July. All the major players in the ARM world came together to discuss the future of the industry and build a healthy Open Source ecosystem for ARM.
So, how many of you use Debian? I bet a lot. Well, here is what the Debian Xen package maintainers told The Xen Project when asked a few questions. We are talking about Bastian Blank and Guido Trotter. In fact, they share the burden, with Bastian doing “most of the work nowadays” (Guido’s words) and Guido having “started packaging Xen many years ago, while assisting with stable security updates lately” (ditto).
You’ll discover that they particularly like the Xen architecture, and this makes us really, really proud. It also looks like a shorter release cycle for Xen is on the wishlist. Well, the Xen 4.3 cycle has already been way shorter than its predecessors, and the feeling is the future will be even better!
However, the most surprising thing is that coffee is quite unpopular with them too, as was already the case for Maarten from Mageia… I am honestly starting to wonder whether this could be a ‘package maintainers’ thing’!
Anyway, sincere thanks to both Bastian and Guido for finding the time for this interview, and let’s get straight to their answers!
Xenproject.org is pleased to announce the release of Xen 4.3.0. The release is available from the download page:
Xen 4.3 is the work of just over 9 months of development, with 1362 changesets containing changes to over 136128 lines of code, made by 90 individuals from 27 different organizations and 25 unaffiliated individual developers.
Xen 4.3 is also the first release made using the roadmap to track what people were working on and to plan what to aim to get into the release, as well as the first release to have consistent Xen test days. This, combined with the increased number of contributors, should make this one of the best Xen releases so far. Read on for more information.
Probably the biggest single feature of this release is the experimental support for ARM virtualization, in both 32-bit and 64-bit variants. The 32-bit port of Xen boots on ARMv7 Fast Models, the Cortex A15 Versatile Express platform and the Arndale board (equipped with the Exynos5 SoC by Samsung). It can boot dom0, create other virtual machines, and it supports all the basic virtual machine lifecycle operations. Hardware is not yet available for 64-bit ARM processors, but Xen is running well in 64-bit mode on the AEMv8 Real-time System Models by ARM.
As many of you might have (inevitably) noticed, the Xen frontend / backend network drivers in Linux suffered from a regression several months back after the XSA-39 fix (various reports here, here and here). Fortunately that’s now fixed (see the most important patch of that series) and the back-porting process to stable kernels is on-going. Now that we’ve put everything back into a stable-ish state, it’s time to look into the future to prepare Xen network drivers for the next stage. I mainly work on Linux drivers, but some of the backend improvement ideas should benefit all frontends.
The goal is to improve network performance and scalability without giving up the advanced security features Xen offers. Just to name a few items:
Split event channels: in the old network drivers there is only one event channel between the frontend and the backend. That event channel is used by the frontend for TX notifications and RX buffer allocation notifications to the backend, and by the backend for TX completion and RX notifications to the frontend. This is definitely not ideal, as TX and RX interfere with each other. With a small change to the protocol we can split TX and RX notifications into two event channels. This work is now in David Miller’s tree (patch for backend, frontend and document).
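A back-of-the-envelope way to see the interference: with a single shared channel, every notification wakes one combined handler that must then check both the TX and RX rings, so every event pays for two ring checks. With split channels, each handler checks only its own ring. This is my simplified cost model, not code from the actual drivers:

```c
/* Illustrative cost model for single vs. split event channels.
 * "Polls" counts how many ring checks the handlers perform;
 * this is a simplification of what netfront/netback actually do. */

/* One shared channel: every wakeup must inspect both rings. */
static int polls_single_channel(int tx_events, int rx_events)
{
    return 2 * (tx_events + rx_events);
}

/* Split channels: each handler inspects only its own ring, and each
 * interrupt can be bound to the vcpu servicing that direction. */
static int polls_split_channels(int tx_events, int rx_events)
{
    return tx_events + rx_events;
}
```

Under this model the split design halves the number of ring checks, and just as importantly it lets TX and RX interrupts be steered to different vcpus.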
As is widely known, really tough Open Source users –the ones that wear sandals, colored hats of various kinds, and are equipped with long enough UNIX beards– always install software via tarballs and some good old
./configure-make-make-install-fu! Then there are the developers, who couldn’t care less about installing: all that matters is where you can checkout –well, actually, these days it’d better be
git clone– the code. Once you’ve got it, and it compiles with no errors, what else remains, and what on Earth would you want to install it for, right?
(Un?)Fortunately, different kinds of people exist too. They, whiskered or not, are usually very happy every time they can avoid dealing with either tar or git, and can start using some software just by issuing a couple of directives to their favorite distribution’s package manager. That usually means a lot of cool things, like automatic dependency tracking, cleanup upon uninstall, smooth updates to new versions, and all that kind of stuff. However, for this to work, someone has to have stepped up to act as the package maintainer of that particular software for the specific distribution. Package maintainers are in a very peculiar spot. In fact, with respect to the software they package, they are not regular users, nor do they (necessarily, at least) act as core developers for it, and yet they play an important role in determining the degree of success of a project.