I’ve recently returned from Debconf 13, in Vaumarcus in Switzerland. My colleague Ian Campbell joined me there.
Debconf is the annual conference for contributors to Debian, with a few hundred attendees. There’s a fairly standard conference format with a programme of talks and BoF sessions, but the best part of a Debconf is usually the ad-hoc conversations with other developers. Often thorny design problems involving multiple parts of the system can be tackled much more effectively in person, so there’s quite a bit of vigorous handwaving and the odd whiteboard/flipchart session.
We had an excellent time and spent rather too much of it staring at the amazing view of Lake Neuchatel. Debian’s 20th birthday party was not to be missed either.
This year’s Debconf found a substantial offering of cloudy topics on the schedule. One major theme was the ways in which Debian is working on better integration with the big public clouds, for example by providing ready-to-use images and by better packaging of cloud-related software.
Of particular interest for Xen was Thomas Goirand’s talk on the integration between OpenStack’s various components. OpenStack is a complicated piece of software which has been difficult to install and get running. Thomas, who runs a Xen-based public cloud provider, has been working to make the installation process smoother using Debian’s configuration management systems.
For me, an interesting topic was the continuing difficulty of integration between the Debian archive (Debian’s primary software repository) and git, and after a session in the bar with Joey Hess and others I wrote a tool to help with that.
Debconf is always a highlight of my year and I look forward to next year’s in Portland.
If you are testing Xen in your environment, you probably already have images native to other hypervisors which you might like to test. So a common question is, “How can I convert these images so I can use them in Xen?”
Conversion requires two steps: first, convert the foreign hypervisor image to a Xen-native format, and, second, adjust the contents of the image to accept the new virtual machine environment.
The second step depends largely on the guest operating system involved. Windows, for example, can require a number of steps to back up the registry, introduce the new virtual hardware, and install the new device drivers. The first step, on the other hand, is often straightforward.
The following monologue explains how Linux drivers are able to program a device when running in a Xen virtual machine on ARM.
The problem that needs to be solved is that guests on Xen on ARM run with second-stage translation enabled in hardware. That means that what the Linux kernel sees as a physical address doesn’t actually correspond to a machine address. An additional translation layer, set up by the hypervisor, does the conversion in hardware.
Many devices use DMA to read or write buffers in main memory, and they need to be programmed with the addresses of those buffers. In the absence of an IOMMU, DMA requests don’t go through the physical-to-machine translation that the hypervisor sets up for virtual machines, so devices need to be programmed with machine addresses rather than physical addresses. Hence the problem we are trying to solve.
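The mismatch can be illustrated with a small sketch. The toy p2m table and the function name below are hypothetical, purely for illustration — in reality the hypervisor owns this mapping and the guest consults it through dedicated interfaces — but the arithmetic shows why a buffer’s guest-physical address is useless to a DMA engine:

```c
#include <assert.h>
#include <stdint.h>

/* Simplified, hypothetical model: the guest's pseudo-physical frame
 * numbers (PFNs) do not match the machine frame numbers (MFNs) the
 * hypervisor picked. CPU accesses are fixed up by second-stage
 * translation in hardware, but a DMA engine bypasses that table, so
 * the driver must program the device with an MFN-based address. */

#define PAGE_SHIFT 12
#define NR_FRAMES  8

/* Toy p2m table: index = guest PFN, value = machine MFN. */
static const uint64_t p2m[NR_FRAMES] = {
    0x200, 0x201, 0x1f0, 0x3a7, 0x3a8, 0x100, 0x101, 0x102
};

/* Translate a guest-physical buffer address into the machine
 * address that a DMA-capable device must be given. */
static uint64_t phys_to_machine(uint64_t phys)
{
    uint64_t pfn    = phys >> PAGE_SHIFT;
    uint64_t offset = phys & ((1ULL << PAGE_SHIFT) - 1);

    assert(pfn < NR_FRAMES);
    return (p2m[pfn] << PAGE_SHIFT) | offset;
}
```

Note how the page offset survives the translation unchanged: only the frame number is remapped, because the translation is page-granular.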
Definitions of some of the technical terms used in this article are available at the bottom of the page.
Given the complexity of the topic, we decided to ask for help from somebody with hands-on experience in teaching the difference between “virtual” and “real”.
Please. Come. Sit.
Do you realize that everything running on Xen is a virtual machine — that Dom0, the OS from which you control the rest of the system, is just the first virtual machine created by the hypervisor? Usually Xen assigns all the devices on the platform to Dom0, which runs the drivers for them.
I imagine, right now, you must be feeling a bit like Alice, tumbling down the rabbit hole?
Let me tell you why you are here.
You are here because you want to know how to program a device in Linux on Xen on ARM.
Linux stable tree maintainer Greg Kroah-Hartman to give Featured Talk
The Xen Project is pleased to announce the sessions which will be presented at the 2013 Xen Project User Summit. Scheduled for September 18 in New Orleans, the Xen Project User Summit is the first major event focused entirely on users of the Xen Project software. While there have been other Xen Summits in the past, they have always consisted of a mixture of User and Developer sessions. This year, we have the opportunity to present two different events, a User Summit in September and a Developer Summit in October.
The Xen Project User Summit session lineup includes some excellent topics and speakers:
Keynote Address: Xen: This is not your Dad’s hypervisor!
Demetrious Coulis, Senior Principal Product Manager for CA AppLogic at CA Technologies, will deliver the keynote address. He will explain why Xen’s strengths are critical for powering CA AppLogic and platforms like OpenStack.
Featured Talk: Free yourself from the tyranny of your cloud provider!
Greg Kroah-Hartman, maintainer of the stable branch of the Linux kernel (among a mass of other things), will discuss how using kexec in a paravirtualized user domain, with no changes to the control Domain or Xen itself, can allow you to boot your own kernel, no matter what the hosting provider is forcing you to run.
And a whole lot more…
Some time ago Konrad Rzeszutek Wilk (the Xen Linux maintainer) came up with a list of possible improvements to the Xen PV block protocol, which is used by Xen guests to reduce the overhead of emulated disks.
This document is quite long, and the list of possible improvements is not small either, so we decided to implement them step by step. One of the first items that seemed like a good candidate for a performance improvement was what is called “indirect descriptors”. This idea is borrowed from the VirtIO protocol and, put simply, consists of fitting more data into a single request. I am going to expand on how this is done later in the blog post, but first I would like to talk a little bit about the Xen PV disk protocol in order to understand it better.
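The basic shape of the idea can be sketched up front. The struct layouts and constants below are simplified illustrations, not the real blkif ABI: a classic request carries only a handful of segment descriptors inline, while an indirect request instead carries references to shared pages that are themselves full of segment descriptors, so one request can cover far more data.

```c
#include <stdint.h>

/* Illustrative sketch of "indirect descriptors". The names and
 * constants here are simplified for explanation; the real protocol
 * definitions live in the Xen blkif interface headers. */

#define SEGS_PER_REQUEST   11   /* inline segments in a classic request */
#define SEGS_PER_PAGE      512  /* segment descriptors per 4KB page     */
#define MAX_INDIRECT_PAGES 8

/* A classic request: at most SEGS_PER_REQUEST data segments. */
struct direct_request {
    uint64_t id;
    uint8_t  nr_segments;
    uint32_t gref[SEGS_PER_REQUEST];  /* grant refs of the data pages */
};

/* An indirect request: grant refs of pages that *contain* segment
 * descriptors, rather than the descriptors themselves. */
struct indirect_request {
    uint64_t id;
    uint16_t nr_segments;             /* can now be in the thousands */
    uint32_t indirect_grefs[MAX_INDIRECT_PAGES];
};

/* Maximum data covered by one request, in 4KB pages. */
static unsigned max_pages_direct(void)   { return SEGS_PER_REQUEST; }
static unsigned max_pages_indirect(void) { return MAX_INDIRECT_PAGES * SEGS_PER_PAGE; }
```

With these toy numbers a direct request covers 11 pages of data, while an indirect one covers 4096 — the extra level of indirection is what buys the larger requests.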
Xen PV disk protocol
The Xen PV disk protocol is very simple: it only requires a shared memory page between the guest and the driver domain in order to queue requests and read responses. This shared memory page is 4KB in size and is mapped by both the guest and the driver domain. The structure used by the frontend and the backend to exchange data is also very simple, and is usually known as a ring: a circular queue plus some counters pointing to the last produced requests and responses.
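A minimal sketch of such a ring is shown below. It is loosely modeled on Xen’s shared-ring layout, but the names, field sizes, and ring size are illustrative only: in the real protocol the ring occupies a grant-shared 4KB page, requests and responses share the same slots, and the producer/consumer indices are updated with memory barriers so the two domains see a consistent view.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Toy model of a PV-style shared ring. Illustrative only; the real
 * definitions are in Xen's io/ring.h and io/blkif.h headers. */

#define RING_SIZE 4  /* power of two, so indices wrap with a mask */

struct request  { uint64_t id; uint64_t sector; };
struct response { uint64_t id; int16_t  status; };

struct shared_ring {
    uint32_t req_prod;  /* frontend bumps this after queuing a request */
    uint32_t rsp_prod;  /* backend bumps this after writing a response */
    struct request  req[RING_SIZE];
    struct response rsp[RING_SIZE];
};

/* Frontend side: queue one request. The producer index only ever
 * increases; the slot is found by masking it with the ring size. */
static void queue_request(struct shared_ring *r, uint64_t id, uint64_t sector)
{
    struct request *req = &r->req[r->req_prod & (RING_SIZE - 1)];
    req->id = id;
    req->sector = sector;
    r->req_prod++;  /* real code issues a write barrier before this */
}

/* Backend side: write one response for a completed request. */
static void queue_response(struct shared_ring *r, uint64_t id, int16_t status)
{
    struct response *rsp = &r->rsp[r->rsp_prod & (RING_SIZE - 1)];
    rsp->id = id;
    rsp->status = status;
    r->rsp_prod++;
}
```

The `id` field is what lets the frontend match a response back to the request it issued, since the backend may complete requests out of order.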