Xen.org is happy to announce that XCP 1.6 has been released! The release is available from the download page:
New features and Improvements
XCP 1.6 has many new features and improvements, most notably Storage XenMotion, Live VDI Migration, improved integration with XenCenter, and many networking enhancements, including massive VLAN scalability improvements. Please read the release notes for details.
The following XenSummit video provides a good overview of XCP and also explains how Storage XenMotion works.
Also see: Presentation
Information for XCP Beta Users
Note that the final XCP 1.6 build contains fixes for recent security vulnerabilities. This means that if you have been testing an XCP 1.6 beta, you will need to upgrade your beta install to the final build. Automatic upgrades from XCP 1.1 (as well as from XCP 1.6 betas) are possible. Please consult the XCP 1.6 Release Notes for more information on upgrades and on the fixed security vulnerabilities.
The blkback/blkfront drivers developed by the original Xen team implement a lightweight, fast zero-copy protocol that has served well for many years. However, as the number of physical cores and the number of guests on a system have increased, we have begun to identify some scalability bottlenecks. This prompted a recent improvement to the protocol, called “persistent grants”. This blog post will describe the original protocol and where the bottlenecks come from, and then describe persistent grants, along with experiments demonstrating the scalability improvements.
How PV Driver protocol works
Xen PV drivers for block devices use a special protocol to transfer data between the guest and the devices that act as its storage. This protocol is based on a shared ring that allows the exchange of requests and responses between the guest and the driver domain, which is usually Dom0. The driver domain has access to the physical hardware (a disk) or to the virtual image of the guest, and the guest issues requests to perform operations on this storage.
The capacity of the shared ring is limited, as is the maximum number of requests in flight at any point. To avoid allocating a very large amount of shared memory up front, the Xen shared ring initially allocates only the minimum memory needed to queue requests and responses (a single memory page), but not the data itself. Data buffers are allocated using grants, which are memory areas shared on request; references to those grants are passed along with each request, so the driver domain can map those areas of memory and perform the requested operations.
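To make the structure concrete, here is a minimal C sketch of how a request carrying grant references might look. The structures are simplified stand-ins loosely modelled on the real blkif interface (the actual definitions live in xen/include/public/io/blkif.h and differ in detail); the ring logic is reduced to plain array indexing, and constants like the ring size are illustrative assumptions, not the real values.

```c
/* Simplified sketch of the blkif shared-ring protocol. Structures and
 * constants are illustrative, not the real Xen definitions. */
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define RING_SIZE     32   /* assumed: entries that fit in one shared page */
#define MAX_SEGMENTS  11   /* assumed: data segments (grants) per request */

typedef uint32_t grant_ref_t;        /* reference to a granted memory page */

struct blkif_segment {
    grant_ref_t gref;                /* grant the backend will map */
    uint8_t first_sect, last_sect;   /* sector range used within the page */
};

struct blkif_request {
    uint8_t operation;               /* e.g. 0 = read, 1 = write */
    uint8_t nr_segments;             /* how many grants this request carries */
    uint64_t sector_number;          /* start sector on the virtual disk */
    struct blkif_segment seg[MAX_SEGMENTS];
};

/* The single shared page holds only the request descriptors and the
 * producer/consumer indices -- never the data itself. */
struct blkif_ring {
    uint32_t req_prod;               /* advanced by the frontend (guest) */
    uint32_t req_cons;               /* advanced by the backend (driver domain) */
    struct blkif_request ring[RING_SIZE];
};

/* Frontend side: queue a read of one page, passing a grant reference
 * instead of the data, so the backend can map the page and fill it in. */
static void queue_read(struct blkif_ring *r, uint64_t sector, grant_ref_t gref)
{
    struct blkif_request *req = &r->ring[r->req_prod % RING_SIZE];
    req->operation     = 0;          /* read */
    req->nr_segments   = 1;
    req->sector_number = sector;
    req->seg[0].gref       = gref;   /* backend maps this grant on demand */
    req->seg[0].first_sect = 0;
    req->seg[0].last_sect  = 7;      /* 8 x 512-byte sectors = one 4K page */
    r->req_prod++;                   /* publish; real code needs a memory barrier */
}
```

The key point the sketch illustrates is that the ring entry contains only a grant *reference*: on every request the backend must map the granted page, perform the I/O, and unmap it again, and it is exactly this per-request map/unmap cost that persistent grants later remove.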
A couple of weeks ago I went to Copenhagen to attend Linaro Connect and Ubuntu Developer Summit for the first time. I was really impressed by the size of the conference, I wasn’t expecting so many people, it certainly rivals LinuxCon in terms of attendance.
All the best minds in the ARM world together in the same hotel for a week: the list of attendees included Arnd Bergmann (Linaro), Olof Johansson (Google), Grant Likely (Linaro), David Rusling (Linaro), Jon Masters (Red Hat) and many others. You can imagine the level of technical discussion that was going on.