Author Archives: Stephen Spector

About Stephen Spector

Stephen Spector is the Program Manager responsible for the Xen.org website, events (Xen Summits), mailing lists, and all things non-technical related to Xen.org.

Weekly Newsletter Vol 10 No 32

Welcome to the weekly newsletter with a variety of information to keep you updated on all things Xen. Please feel free to contact me with suggestions for the newsletter.

Xen News

Xen Events

Xen Products

Xen Members in Action

  • Mailing Lists SuperStars – Gianni Tedesco, Tapas Mishra, Stefano Stabellini
  • Welcome to some new members of the mailing lists – Benjamin Knoth, Eric Laflamme, Ted Lin, Ng Dennis

Xen Weekly Stats

  • Mailing Lists Stats: Xen-Devel (89 Patches, 23 Questions, 248 Responses); Xen-Users (49 Questions, 112 Responses); Xen-api (9 Patches)

The complete newsletter with all data, including the summary of all xen-users/xen-devel/xen-api mailing lists, can be found at

[RFC] Removing libxc xc_ptrace interface

From Ian Campbell –

The last in-tree user of the xc_ptrace functionality was removed in
changeset:   21732:eb34666befcc
user:        Ian Jackson <>
date:        Fri Jul 02 17:46:01 2010 +0100
files:       […]
tools/debugger/gdb: Remove gdb

This code is not maintained, does not work properly, and no-one is
using it.  Delete it, following discussion on xen-devel.

and has now been replaced with gdbsx.

We are having trouble tracking down all of the contributors to the ptrace code in libxc (as part of the effort to relicense libxc under the LGPL; see [0] or [1]). If this functionality is no longer required, it would be simplest to remove it. I will follow up shortly with a patch to disable the build by default, and the code will be removed shortly thereafter unless anyone objects. If you have an out-of-tree project which uses this functionality (specifically the functions xc_register_event_handler(), xc_ptrace() and xc_waitdomain()), then please tell us ASAP.
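If you maintain an out-of-tree project and are unsure whether it touches this interface, a quick recursive grep for the three symbols is enough to check. A minimal sketch (the `/tmp/myproject` tree and `debug.c` file are made up for illustration; point the grep at your own source tree):

```shell
# Create a stand-in "out-of-tree project" for the example.
mkdir -p /tmp/myproject
cat > /tmp/myproject/debug.c <<'EOF'
/* example file standing in for out-of-tree debugger code */
/* ... somewhere in here it calls xc_ptrace(...) ... */
EOF

# Scan the tree for any use of the interface slated for removal.
# A non-empty result means your project is affected and you should
# reply on xen-devel before the code is deleted.
grep -rnE 'xc_register_event_handler|xc_ptrace|xc_waitdomain' /tmp/myproject
```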



RFC xen device model support

From Stefano Stabellini –

This is the long-awaited patch series to add xen device model support in qemu; the main author is Anthony Perard.
While developing this series we tried to come up with the cleanest possible solution from the qemu point of view, limiting the amount of changes to common code as much as possible. The end result still requires a couple of hooks in piix_pci, but overall the impact should be very limited.
The current series gives you an upstream qemu device model able to boot a Linux or a Windows HVM guest; some features are still missing compared to the current qemu-xen, among them vga dirty bits, pci passthrough and stubdomain support.

For any of you who want to try it, here is the step-by-step guide:

– clone a fresh copy of xen-unstable.hg, make and install; note that the xen-unstable build system will clone a linux tree and a qemu-xen tree by default: you can avoid the former by executing 'make xen' and 'make tools' instead of 'make world';

– configure qemu using xen-dm-softmmu as the target, with --extra-ldflags and --extra-cflags pointing at the xen-unstable build directory; something like this should work:

./configure --target-list=xen-dm-softmmu --extra-cflags="-I$HOME/xen-unstable/dist/install/usr/include" --extra-ldflags="-L$HOME/xen-unstable/dist/install/usr/lib" --enable-xen

– build qemu and install the newly compiled binary (xen-dm-softmmu/qemu-system-xen);

– edit your VM config file and modify device_model to point at it.
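For reference, the resulting config change might look something like this (the install path below is an assumption for illustration; use wherever you installed the qemu-system-xen binary):

```
# guest config fragment; the path is an example, not from the original post
device_model = "/usr/local/bin/qemu-system-xen"
```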

Currently only xl (not xend) knows how to spawn the new qemu device model with the right command line options.
As you can see the build and test procedures are not straightforward yet, but in the near future we plan to provide a way to select an upstream qemu tree for use as xen device model directly from the xen-unstable build system.

The patch series adds a new target with the whole xen device model machinery; each patch contains a detailed description.

[RFC] Credit1: Make weight per-vcpu

From George Dunlap  –

At the moment, the “weight” parameter for a VM is set on a per-VM basis.  This means that when cpu time is scarce, two VMs with the same weight will be given the same amount of total cpu time, no matter how many vcpus each has.  I.e., if a VM has 1 vcpu, that vcpu will get x% of the cpu time; if a VM has 2 vcpus, each vcpu will get (x/2)% of the cpu time.

I believe this is a counter-intuitive interface.  Users often choose to add vcpus; when they do so, it’s with the expectation that the VM will need and use more cpu time.  In my experience, however, users rarely change the weight parameter.  So the normal course of events is for a user to decide a VM needs more processing power and add more vcpus, but not change the weight.  The VM still gets the same amount of cpu time, just less efficiently allocated (because it’s divided across more vcpus).

The attached patch changes the meaning of the “weight” parameter to be per-vcpu: each vcpu is given the full weight.  So if you add an extra vcpu, your VM will get more cpu time as well.
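A quick back-of-the-envelope illustration of the difference (the VM names, vcpu counts and weight of 256 are made-up example numbers, not from the patch):

```shell
# Two competing VMs, both with weight 256:
# VM A has 1 vcpu, VM B has 4 vcpus.
WEIGHT=256
A_VCPUS=1
B_VCPUS=4

# Old (per-VM) interpretation: both VMs get equal total cpu time,
# so each of B's vcpus individually gets 1/4 of what A's vcpu gets.
echo "per-VM:   A vcpu share = $((WEIGHT / A_VCPUS)), B vcpu share = $((WEIGHT / B_VCPUS))"

# New (per-vcpu) interpretation: every vcpu carries the full weight,
# so VM B gets 4x the total cpu time of VM A.
echo "per-vcpu: A total weight = $((WEIGHT * A_VCPUS)), B total weight = $((WEIGHT * B_VCPUS))"
```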

This patch has been in Citrix XenServer for several releases now (checked in June 2008), and seems to fit more with customer expectations.

[XCP] RFC: cross-host VDI.copy

From Dave Scott –

I’ve written a proposal on the wiki for “Cross-host VDI.copy” i.e. the ability to copy VDIs (disks) and therefore VMs between storage repositories on separate XCP hosts:

Currently XCP only supports copying disks between Storage Repositories where at least one host in the pool can see both bits of storage at once, i.e. from shared -> local, local -> shared, and local -> local on the same host. This adds generic support for local -> local disk copying across hosts by using the existing “Raw VDI import” functionality.
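For context, today's intra-pool copy is driven through the xe CLI. A sketch of the current invocation (the UUIDs are placeholders, and the command is echoed rather than run since it needs a live XCP host):

```shell
# Placeholder UUIDs; substitute real values from `xe vdi-list` / `xe sr-list`.
VDI_UUID="<source-vdi-uuid>"
DEST_SR_UUID="<destination-sr-uuid>"

# xe vdi-copy copies a disk into another SR that some host in the pool
# can see; the proposal extends this to SRs on separate XCP hosts.
echo xe vdi-copy uuid="$VDI_UUID" sr-uuid="$DEST_SR_UUID"
```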

This ought to be especially useful for people who have pools but who don’t have shared storage.

Comments welcome.