
Xen Related Talks @ FOSDEM 2014

Going to FOSDEM’14? Then you will want to check out the schedule of the Virtualization & IaaS devroom, and make sure you do not miss the talks about Xen. There are 4 of them, and they will provide some details about new and interesting use cases for virtualization, like in embedded systems of various kinds (from phones and tablets to network middleboxes), and about new features in the upcoming Xen release, such as PVH, and how to make the most of them.

Here are the talks, in some more detail:
Dual-Android on Nexus 10 using XEN, on Saturday morning
High Performance Network Function Virtualization with ClickOS, on Saturday afternoon
Virtualization in Android based and embedded systems, on Sunday morning
How we ported FreeBSD to PVH, on Sunday afternoon

There is actually more: a talk called Porting FreeBSD on Xen on ARM, in the BSD devroom, and one about MirageOS in the miscellaneous Main track, but the schedule for these has not been announced yet.

Last but certainly not least, there will be a Xen Project booth, where you can meet the members of the Xen community as well as enjoy some other, soon to be revealed, activities. Some of my colleagues from Citrix and I will be in Brussels, and will definitely spend some time at the booth, so come and visit us. The booth will be in building K, on level 1.

Read more here: http://xenproject.org/about/events.html

Edit:

The schedule for the FreeBSD and MirageOS talks has been announced. Here it is:
Porting FreeBSD on Xen on ARM will be given on Saturday early afternoon (15:00), in the BSD devroom
MirageOS: compiling functional library operating systems will happen on Sunday late morning (13:00), in the miscellaneous Main track

Also, there is another Xen-related talk, in the Automotive development devroom: Xen on ARM: Virtualization for the Automotive industry, on Sunday morning (11:45).

Xen on ARM and the Device Tree vs. ACPI debate

ACPI vs. Device Tree on ARM

Some of you may have seen the recent discussions on the linux-arm-kernel mailing list (and others) about the use of ACPI vs. DT on the ARM platform. As always, LWN has a pretty good summary (currently subscribers only; it becomes freely available on 5 December) of the situation with ACPI on ARM.

Device Tree (or DT) and the Advanced Configuration & Power Interface (or ACPI) are both standards used for describing a hardware platform, e.g. to an operating system kernel. At their core, both technologies provide a tree-like data structure containing a hierarchy of devices, specifying what type each device is, along with a set of “bindings” for that device. A binding is essentially a schema for specifying I/O regions, interrupt mappings, GPIOs, clocks and so on.
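
To make the idea of a “binding” concrete, here is a minimal sketch, written in the style of a Linux platform driver, of how a kernel consumes such a description. The node path and the device itself are hypothetical, invented for illustration; the of_*() helpers are the standard Linux Device Tree API.

    #include <linux/of.h>
    #include <linux/of_address.h>
    #include <linux/of_irq.h>
    #include <linux/printk.h>

    /* Hypothetical serial device node; path and properties are
     * made up for illustration only. */
    static int example_parse_dt(void)
    {
            struct device_node *np;
            struct resource res;
            unsigned int irq;
            u32 freq = 0;
            int err;

            np = of_find_node_by_path("/soc/serial@f0001000");
            if (!np)
                    return -ENODEV;

            /* The binding's "reg" property describes the I/O region... */
            err = of_address_to_resource(np, 0, &res);
            if (err) {
                    of_node_put(np);
                    return err;
            }

            /* ...its "interrupts" property the interrupt mapping... */
            irq = irq_of_parse_and_map(np, 0);

            /* ...and further device-specific properties (clocks etc.)
             * are defined by the binding for this kind of device. */
            of_property_read_u32(np, "clock-frequency", &freq);

            pr_info("serial at %pa, irq %u, clock %u Hz\n",
                    &res.start, irq, freq);

            of_node_put(np);
            return 0;
    }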

For the last few years, Linux on ARM has been moving away from hardcoded “board files” (a bunch of C code for each platform) towards using Device Tree instead. In the ARM space, ACPI is the new kid on the block and comes with many unknowns. Given this, the approach to ACPI which the Linux kernel maintainers appear to have settled on, essentially to wait and see how the market pans out, seems sensible.

On the Xen side, we started the port to ARM around the time that Linux’s transition from board files to Device Tree was starting, and we made the decision early on to go directly to Device Tree (ACPI wasn’t even on the table at that point, at least not publicly). Xen uses DT to discover all of the hardware on the system, both that which it intends to use itself and that which it intends to pass to domain 0. As well as consuming DT itself, Xen also creates a filleted version of the host DT which it passes to the domain 0 kernel. DT is simple, yet powerful enough to allow us to do this relatively easily.

DT is also used by some of the BSD variants in their ARM ports.

My Position as Xen on ARM Maintainer

The platform configuration mechanism supported by Xen on ARM today is Device Tree. Device Tree is a good fit for our requirements and we will continue to support it as our primary hardware description mechanism.

Given that a number of operating system vendors and hardware vendors care about ACPI on ARM and are pushing hard for it, especially in the ARM server space, it is possible, perhaps even likely, that we will eventually find ourselves needing to support ACPI as well. On systems which support both ACPI and DT we will continue to prefer Device Tree. Once ARM hardware platforms that only support ACPI are available, we will obviously need to support ACPI.

The Xen Project works closely with the Linux kernel and other open source upstreams, as well as organisations such as Linaro. Before Xen on ARM can support ACPI, I would like to see it gain some actual traction on ARM. In particular, I would like to see it get to the point where it has been accepted by the Linux kernel maintainers. It is clearly not wise for Xen to pioneer the use of ACPI before it becomes clear whether or not it is going to gain any traction in the wider ecosystem.

So if you are an ARM silicon or platform vendor and you care about virtualization and Xen in particular, I encourage you to provide a complete device tree for your platform.

Note that this only applies to Xen on ARM. I cannot speak for Xen on x86 but I think it is pretty clear that it will continue to support ACPI so long as it remains the dominant hardware description on that platform.

It should also be noted that ACPI on ARM is primarily a server space thing at this stage. Of course, Xen and Linux are not just about servers: both projects have sizable communities of embedded vendors (on the Xen side we had several interesting presentations on embedded uses of Xen on ARM at the recent Xen Developer Summit). Essentially no one is suggesting that the embedded use cases should move from DT to ACPI, so irrespective of what happens with ACPI, DT has a strong future on ARM.

ACPI and Type I Hypervisors

Our experience on x86 has shown that the ACPI model is not a good fit for Type I hypervisors such as Xen, and the same is true on ARM. ACPI essentially enforces a model where the hypervisor, the kernel, the OSPM (the ACPI term for the part of an OS which speaks ACPI) and the device drivers must all reside in the same privileged entity. In other words, it effectively mandates a single monolithic entity which controls everything about the system. This obviously precludes things such as dividing the hardware into that which is owned and controlled by the hypervisor and that which is owned and controlled by a virtual machine such as dom0. This impedance mismatch is probably not insurmountable, but experience with ACPI on x86 Xen suggests that the resulting architecture would not be very agreeable.

UEFI

Due to their shared history on x86, ACPI and UEFI are often lumped together as a single thing, when in reality they are mostly independent. There is no reason why UEFI cannot also be used with Device Tree. We would expect Xen to support UEFI sooner rather than later.

Fedora 20 Virtualization Test Day is today!

Fedora Logo

Yes, today (Tuesday, October 8th) is one of the Fedora 20 Test Days; more specifically, it is the Virtualization Test Day.

Specific information about testing Xen on the new Fedora can be found on this Wiki page. To attend and participate, join us now on IRC at #fedora-test-day (on Freenode)!

Fedora 20 will be one of the first mainstream distros shipping Xen 4.3, so come and help us make sure it works great for you and all Fedora users!

Indirect descriptors for Xen PV disks

Some time ago, Konrad Rzeszutek Wilk (the Xen Linux maintainer) came up with a list of possible improvements to the Xen PV block protocol, which is used by Xen guests to avoid the overhead of emulated disks.

This document is quite long, and the list of possible improvements is also not small, so we decided to implement them step by step. One of the first items that seemed like a good candidate for a performance improvement is what is called “indirect descriptors”. This idea is borrowed from the VirtIO protocol and, to put it simply, consists of fitting more data into a single request. I am going to expand on how this is done later in the blog post, but first I would like to talk a little bit about the Xen PV disk protocol in order to understand it better.

Xen PV disk protocol

The Xen PV disk protocol is very simple: it only requires a shared memory page between the guest and the driver domain in order to queue requests and read responses. This shared memory page has a size of 4KB and is mapped by both the guest and the driver domain. The structure used by the frontend and the backend to exchange data is also very simple, and is usually known as a ring: a circular queue plus some counters pointing to the last produced requests and responses.
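
For illustration, this is roughly what that shared page looks like in C. It is a simplified sketch: the real layout is generated by macros in Xen’s public header xen/interface/io/ring.h, and the real request and response structures carry the operation, sector number and data segments.

    #include <stdint.h>

    typedef uint32_t RING_IDX;

    /* Placeholder request/response structs; the real ones carry the
     * operation, request id, sector number and segment descriptors. */
    struct blkif_request  { uint64_t id; /* ... */ };
    struct blkif_response { uint64_t id; /* ... */ };

    union blkif_sring_entry {
            struct blkif_request  req;  /* written by the frontend */
            struct blkif_response rsp;  /* written by the backend  */
    };

    /* The single 4KB shared page: producer counters and notification
     * thresholds up front, then a circular array of fixed-size
     * entries which the indexes wrap around. */
    struct blkif_sring {
            RING_IDX req_prod, req_event;  /* requests produced / notify */
            RING_IDX rsp_prod, rsp_event;  /* responses produced / notify */
            uint8_t  pad[48];
            union blkif_sring_entry ring[1]; /* fills the rest of the page */
    };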

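And here is the gist of the indirect descriptors idea mentioned above, again as a rough sketch rather than the exact protocol definition (the real structure lives in the blkif headers and differs in its details): instead of embedding a handful of segment descriptors directly in the request, an indirect request carries grant references to extra shared pages that are themselves filled with segment descriptors, so a single ring slot can describe a much larger I/O than the usual inline limit of 11 segments per request.

    #include <stdint.h>

    typedef uint32_t grant_ref_t;
    #define MAX_INDIRECT_PAGES 8  /* illustrative limit */

    /* Sketch of an indirect request; field names and sizes are
     * approximate, for illustration. */
    struct blkif_request_indirect {
            uint8_t     operation;      /* "this is an indirect request" */
            uint8_t     indirect_op;    /* the real op: read or write    */
            uint16_t    nr_segments;    /* total segments described      */
            uint64_t    id;
            uint64_t    sector_number;
            /* Grant references to pages that are themselves arrays of
             * segment descriptors; each page holds many segments, so
             * one ring slot covers far more data than the 11 inline
             * segments of a normal request. */
            grant_ref_t indirect_grefs[MAX_INDIRECT_PAGES];
    };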

Schrödinger’s Cat in a (Xen) Virtualized ‘Box’

Fedora Logo

Yes, apparently Schrödinger’s cat is alive: the latest release of Fedora (Fedora 19, codename “Schrödinger’s Cat”) has been released on July 2nd, and that even happened quite on time.

So, apparently, putting the cat “in a box” and all that stuff was way too easy, and that’s why we are taking the challenge to the next level: do you dare to put Schrödinger’s cat “in a virtual box”?

In other words, do you dare to install Fedora 19 within a Xen virtual machine? And if so, how about doing that using Fedora 19 itself as Dom0?

Xen network: the future plan

As many of you might have (inevitably) noticed, the Xen frontend / backend network drivers in Linux suffered a regression several months back, after the XSA-39 fix (various reports here, here and here). Fortunately that’s now fixed (see the most important patch of that series) and the back-porting process to stable kernels is ongoing. Now that we’ve put everything back into a stable-ish state, it’s time to look into the future and prepare the Xen network drivers for the next stage. I mainly work on the Linux drivers, but some of the backend improvement ideas should benefit all frontends.

The goal is to improve network performance and scalability without giving up the advanced security features Xen offers. Just to name a few items:

Split event channels: in the old network drivers there’s only one event channel between frontend and backend. That event channel is used by the frontend to send TX notifications and RX buffer allocation notifications to the backend, and by the backend to send TX completion and RX notifications to the frontend. This is definitely not ideal, as TX and RX interfere with each other. With a small change to the protocol, we can split the TX and RX notifications onto two separate event channels. This work is now in David Miller’s tree (patch for backend, frontend and document); a rough sketch of the frontend side is shown below.
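
As an illustration of the frontend side of this change, here is a hedged sketch loosely following those patches; the structure and function names are simplified stand-ins for the real ones in drivers/net/xen-netfront.c. The frontend binds two event channels, each with its own handler, instead of sharing a single interrupt for both directions (the two channels are advertised to the backend via new xenstore keys, alongside the old single-channel key for compatibility).

    #include <linux/interrupt.h>
    #include <xen/events.h>

    /* Minimal stand-in for the driver's private state. */
    struct netfront_info { int tx_irq, rx_irq; };

    /* Handles TX completions only. */
    static irqreturn_t xennet_tx_interrupt(int irq, void *dev_id)
    {
            /* reap completed TX slots, wake the transmit queue */
            return IRQ_HANDLED;
    }

    /* Handles RX notifications only. */
    static irqreturn_t xennet_rx_interrupt(int irq, void *dev_id)
    {
            /* schedule NAPI polling for received packets */
            return IRQ_HANDLED;
    }

    static int setup_split_evtchns(struct netfront_info *info,
                                   unsigned int tx_evtchn,
                                   unsigned int rx_evtchn)
    {
            int err;

            /* One event channel and handler per direction, so TX
             * completion work no longer delays RX processing. */
            err = bind_evtchn_to_irqhandler(tx_evtchn, xennet_tx_interrupt,
                                            0, "netfront-tx", info);
            if (err < 0)
                    return err;
            info->tx_irq = err;

            err = bind_evtchn_to_irqhandler(rx_evtchn, xennet_rx_interrupt,
                                            0, "netfront-rx", info);
            if (err < 0)
                    return err;
            info->rx_irq = err;

            return 0;
    }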
