Author Archives: Liu Wei

Xen Project 4.7 Planning Opens

With Xen 4.6 released in October, we are already one month into the new cycle, which means it is time to start planning for the next release. You may remember that one of the goals of the 4.6 release planning was to create a smoother developer experience and to release Xen 4.6 on time. Both goals were achieved, so it was time to think about where to go from here. The Xen community therefore held a thorough discussion on how to manage future releases from xen-unstable and their impact on stable releases. The takeaway message of those lengthy threads is that we should continue to work on making the release cycle shorter and more predictable.

As such, the timeline for 4.7 is:

  • Development starts: October 13, 2015
  • Last posting date: March 18, 2016
  • Hard code freeze: April 1, 2016
  • Release date: June 3, 2016

After the 4.7 release, we will start to release Xen every six months: at the beginning of June and December. A regular six-month release schedule has worked well for Ubuntu, OpenStack and many other projects. The idea behind it is a simple one: set a hard date and adjust your goals to match that timeline. This is also why we dropped feature freeze exceptions, which create overhead and introduce unnecessary risk and debate. In addition, the new fixed release schedule will help open source projects and commercial vendors who consume Xen to plan their own releases better. It also allows us to set a schedule that ensures that every release cycle is affected by only a single holiday period and that we have a Xen Project developer event (be it a Hackathon or Xen Project Developer Summit) during each release cycle. The stable release scheme is unchanged: 18 months of full support, plus 18 months of security fixes afterwards.

For more information, check out the slides that explain our release process and how it is changing for Xen 4.7 and beyond. To follow the roadmap in the coming months, be sure to check the Xen 4.7 Roadmap page on our wiki. Get involved on xen-devel@ and happy hacking!

For more updates, follow us on Twitter.

Best Quality and Quantity of Contributions in the New Xen Project 4.6 Release

I’m pleased to announce the release of Xen Project Hypervisor 4.6. This release focused on improving code quality, security hardening, enablement of security appliances, and release cycle predictability; this is the most punctual release we have ever had. We had a significant number of contributions from cloud providers, software vendors, hardware vendors, academic researchers and individuals to help with this release. We continue to strive to make the Xen Project Hypervisor the most secure open source hypervisor, to match the security challenges of cloud computing and of embedded and IoT use cases. We are also continuing to improve performance and scalability for our users, and aim to bring many new features to our users in a timely manner.

Despite an increase in new features compared to previous releases, the Xen Project Hypervisor codebase has grown by only 6KLOC compared to Xen 4.5. At the same time, we increased the number of changesets integrated into Xen from 178/month (1812 in total) for Xen 4.5 to 259/month (2247 in total). In addition, the quality of Xen 4.6 was higher than in the past, enabling the CentOS 7 Virtualization SIG and XenServer to include Xen in their upcoming releases.

To make it easier to understand the major changes during this release cycle, I have grouped the major updates into several categories:

  • Hypervisor
  • Toolstack
  • Xen Project Test Lab
  • Linux, FreeBSD and other OSes that utilise the new features
  • Greater Ecosystem

General Hypervisor Updates

  • The memory event subsystem has been reworked and extended into a new VM event subsystem. The new VM event subsystem supports both the ARM and x86 architectures. It can be used to intercept all sorts of VM events, such as memory access, register access and more. This enables security applications such as zero-footprint guest introspection, host-wide monitoring and many others. Have a look at Tamas’s and Steve’s presentations on this topic to get more insight.
  • The Xen Security Modules (XSM) now have a default policy that is regularly tested in the Xen Project Test Lab to make sure it is not broken by mistake. This will enable us to switch on XSM by default in the future.
  • vTPM 2.0 support has been contributed by Intel and BitDefender [ 1 ]. To learn more about how to use vTPM and how it can make your host more secure, go to our wiki.
  • Grant table scalability has been improved significantly by using finer-grained locks in grant tables. In some scenarios, aggregate intrahost network throughput has been shown to improve by 100%. Other I/O drivers in Xen may show significant performance improvements as well.
  • We introduced ticket locks to improve fairness, providing better support for massive workloads of up to hundreds or thousands of VMs on a single host.
  • The unused SEDF scheduler has been removed from the hypervisor and toolstack. The Xen Project is committed to actively removing unused code to keep the code base small and to minimize security risks.
  • We moved Mini-OS out of the Xen code base into its own tree. Mini-OS started as a demonstration OS, but has received significant contributions in recent years (e.g. it is used by many unikernels). We decided to treat it as a separately maintained, independent project with its own mailing list and code tree to make it easier to consume. We hope this will help unikernel communities to more easily consume and contribute to Mini-OS, while reducing the Xen Project Hypervisor footprint.

x86-specific Hypervisor Updates

  • The Intel alternate P2M framework is a new capability for VM introspection, security and privacy in Xen that gives Xen the ability to host up to 10 alternate guest-to-physical memory mappings for a specific guest domain. It is one of the key technologies enabling zero-footprint VM introspection. It can also help Xen implement faster NFV applications.
  • Intel Page Modification Logging Technology offloads page dirty logging to hardware. Microbenchmarks show about a 7% improvement in SPECjbb, and the feature should be particularly beneficial for Live Migration.
  • Intel Cache Allocation Technology allows system administrators to assign more L3 cache capacity to individual VMs, resulting in lower latency and higher performance for high-priority workloads such as NFV, real-time and video-on-demand applications.
  • Intel Memory Bandwidth Monitoring allows system administrators to identify memory bandwidth saturation on a Xen host that may be caused by several memory-intensive VMs running on the same host. Taking corrective actions, such as migrating VMs to a different Xen host, increases scalability and performance in the data center.
  • Intel Reserved Memory Region reporting provides a mechanism to report and reserve memory regions for legacy devices, allowing safe device passthrough.
  • Virtual Performance Monitoring Unit support makes it possible to profile the Xen Project Hypervisor with the Linux perf tool. Note that some work still needs to be completed within Linux to make perf fully functional.
  • Virtual NUMA for HVM guests is a continuation of the NUMA work done in Xen 4.5 and earlier releases. In this release, we exposed the functionality through the XL toolstack and added firmware changes to make the feature fully functional.

ARM-specific Hypervisor Updates

  • The supported number of VCPUs has been increased from 8 to 128 VCPUs on ARM64 platforms.
  • Passthrough for non-PCI devices allows users to pass through devices via partial device trees. Full support for PCI device passthrough is currently being worked on.
  • ARM GICv2 on GICv3 support.
  • 32 bit userspace in 64 bit guest support.
  • OVMF for ARM contributed by Linaro.
  • 64K page ARM guest support.
  • Support for the following new Hardware Platforms has been added: Renesas R-Car Gen2, Thunder X, Huawei hip04-d04 and Xilinx ZynqMP SoC.

Toolstack Updates

  • Live Migration support in libxc / libxl has been replaced with a completely new implementation (Migration v2). The new version respects the different layers in the Xen software stack and has been designed to be more robust and extensible, to better support next-generation infrastructures and work planned in subsequent hypervisor releases.
  • Remus – our High Availability solution – has been reworked and is now based on Migration v2.
  • Libxl asynchronous operations can now be cancelled. This allows libxl users to cancel long-running asynchronous operations, which benefits toolstacks such as libvirt and helps integration with cloud orchestration stacks.
  • Improved SPICE/QXL support.
  • AHCI disk controller support.
  • A new host I/O topology query interface gives upper layers in the software stack the ability to identify the I/O topology of the underlying hardware platform.
  • Xenalyze, a tool for analyzing hypervisor trace buffers that can be used for debugging and optimization, has been added to the Xen Project codebase as a maintained feature.

Xen Project Test Lab Updates

During the Xen 4.6 release cycle, the Xen Project created an Advisory Board-funded Continuous Integration Test Lab. It currently has 24 hosts and is going to be expanded in the future. This has led to significant improvements in Xen code quality and has allowed the project to expand automated test coverage. The number of test cases doubled during the 4.6 cycle. Some interesting new test cases that have been added are:

  • XSM
  • Stub domains
  • VM migration using libvirt between two hosts
  • Live migration between hosts running different Xen versions, which will help identify any breakage in our migration code or specification
  • Tests with different disk formats, such as QCOW2, VHD and raw

More test cases are in the pipeline, including test cases for OpenStack’s devstack, performance and scalability tests, FreeBSD Dom0, etc.

Linux, FreeBSD and other OSes

During the Xen 4.6 release cycle, we made significant improvements to the major operating systems we rely on in order to improve interoperability. Some highlights of Linux kernel development, spanning Linux 3.18 to 4.3, were:

  • Xen blkfront multiqueue and multipage ring support.
  • Xen SCSI frontend and backend support.
  • VPMU kernel support.
  • Performance improvement in mmap call.
  • P2M in PV guests can now address 512GB or more.

For FreeBSD there were these improvements:

  • Experimental PVH Dom0/DomU support.
  • Removal of classic i386 PV port by FreeBSD developer John Baldwin.
  • Blkfront indirect descriptor support by FreeBSD developer Colin Percival.
  • Removal of broken FreeBSD specific blkfront/back extensions.
  • ARM32 and ARM64 guest support is underway.

Greater Ecosystem


With dozens of major improvements, many more bug fixes and small improvements, and efforts in other projects and the greater ecosystem, Xen 4.6 reflects a thriving community around the Xen Project Hypervisor. We are extremely proud of achieving our highest release quality yet while increasing development velocity. In particular, our latest security-related features enable Xen to compete in the security appliance market and help answer some of the difficult questions regarding security in the cloud era.

We set out at the beginning of this release cycle to foster greater collaboration among vendors, individual developers, upstream maintainers, other projects and distributions. During this release cycle we continued to see an increasing influx of patches and newcomers. As the release manager, I would like to thank everyone for their contributions (either in the form of patches, bug reports or packaging efforts) to Xen. This release wouldn’t have happened without contributions from so many people around the world. Please check out our 4.6 contributor acknowledgement page.

The source can be found in the xen.git tree (tag RELEASE-4.6.0) or downloaded as a tarball from our website. More information can be found at

[ 1 ] Note that when this article was published, the contribution was mistakenly attributed to the US National Security Agency, instead of BitDefender.

Xen Project 4.6 RC2 Test Day is September 1, 2015

Join 4.6 Release Candidate Testing on September 1, 2015

Although the Xen Project performs automated testing through the project’s Test Lab, we also depend on manual testing of release candidates by our users. Our Test Days help ensure that upcoming releases are ready for production. It is particularly important that our users test the upcoming release in their own environments. In addition, functional testing of features (in particular those which can’t be automated), stress testing, edge-case testing and performance testing are important for a new release.

Xen 4.6 Release Candidate Testing

A few weeks ago, Xen 4.6 went into code freeze, and Xen 4.6 RC2 is now ready for testing. With this in mind, the Test Day for Xen 4.6 RC2 has been set for next Tuesday, September 1, 2015.

Subsequent Test Days are expected to be scheduled roughly every other week until Xen 4.6 is ready for release.

Test Day Information

General Information about Test Days can be found here:

Join us on Tuesday in #xentest on Freenode IRC!
Test a Release Candidate! Help others, get help! And have fun!
If you can’t make Tuesday, remember that Test and Issue Reports are welcome any time.

Xen Project 4.6 Planning Opens

With Xen Project 4.5 released in January, we are now one month into the 4.6 development window!

My name is Wei Liu and I have been working on various areas in the Xen Project community, including the Linux kernel, the hypervisor, QEMU and the toolstack. I am now a co-maintainer of the Xen hypervisor’s toolstack and of the netback driver in Linux. I was elected release manager for the 4.6 release. Thank you, everybody, for your trust.

I sent an email to xen-devel to kick off a discussion about tweaking the release process for 4.6. The goal is to create a smoother developer experience.

My proposed time frame for the Xen 4.6 release is:

  • Development start: 6 Jan 2015
  • Feature freeze: 10 Jul 2015
  • Release date: 9 Oct 2015 (could release earlier)

Below are some slides that explain our release process and how it is changing for Xen 4.6. Get involved and happy hacking!

OSSTest Standalone Mode Step by Step

Xen has a long history and many features. Sometimes even experienced developers cannot be sure whether their new code is regression-free. To make sure new code doesn’t cause regressions, Ian Jackson developed a test framework called OSSTest. To make this framework usable for individual ad-hoc testing, standalone mode was introduced. Recently I played with it and thought it would be useful to share my experience with the community.

Basic Requirements

To run OSSTest in standalone mode, you need at least two machines: one is the controller, which runs OSSTest itself and various other services; the other is the test host, which is used to run test jobs.

For the controller, you need to have several services set up:

  • DNS / DHCP, used to translate between IP addresses and hostnames.
  • Web server, providing web space accessible to the test box; OSSTest exposes configurations / overlays via HTTP and detects test host status via the web server’s access log.
  • PXE boot / TFTP, used to provide Debian installers to the test host.
  • NTP (optional).

For the test host you need to:

  • enable PXE boot
  • connect it to a PDU if necessary

Step by step setup

Clone the OSSTest tree (master branch); we assume you run everything inside the osstest.git directory.
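As a quick sketch of that first step (the repository URL here is an assumption on my part; use whatever mirror of osstest.git you normally pull from):

# clone into a directory literally named osstest.git, as assumed in the rest of this post
git clone git://xenbits.xen.org/osstest.git osstest.git
cd osstest.git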

Have a look at README / TODO, which contain useful information. You can create a global config file in ~/.xen-osstest/config. In standalone mode you can also create standalone.config in the osstest.git directory to override global configurations.

In the configuration you can specify the DNS name, name server and so on. For example:

HostProp_DhcpWatchMethod leases dhcp3
TftpPath /usr/groups/netboot/

DebianNonfreeFirmware firmware-bnx2
DebianSuite squeeze
DebianPreseed= <<'END'
d-i clock-setup/ntp-server string

Debian is the de-facto OS in OSSTest; you can find more info under Osstest/. Here we use Squeeze to build binaries and run test jobs, because there’s a bug in Wheezy’s xen-tools which breaks xen-image-create, which in turn breaks ts-guest-start. Patches to fix that problem have been posted but not yet merged. Hopefully some day in the near future we can use Wheezy to run build jobs and test jobs.

Configure the test host in the OSSTest config file. There can be multiple test hosts in the config file, but I only have one. Here is what I have:

TestHost kodo4
HostProp_kodo4_DIFrontend newt
HostProp_kodo4_PowerMethod xenuse
HostProp_kodo4_Build_Make_Flags -j16
HostFlags_kodo4 need-firmware-deb-firmware-bnx2

There is a detailed explanation of what those parameters mean in README. An interesting one is PowerMethod, which in fact points to a module in OSSTest that controls power cycling of the test box. Have a look at Osstest/PDU for all supported modules, and pick a suitable one if your test host is not capable of power cycling automatically.

Before actually running OSSTest, you need to make some directories yourself (see the sketch after this list):

  • logs: used to store tarballs from build-* jobs
  • public_html: expose this directory via HTTP server to the test host
  • $TFTPROOT/osstest: OSSTest will write PXE boot information for test hosts here
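A minimal sketch of creating these, assuming the TftpPath from the config example above is the TFTP root on the controller (adjust the paths to your own layout):

# run inside osstest.git on the controller
mkdir -p logs public_html
# assumption: TftpPath (/usr/groups/netboot/) is the TFTP root on this machine
mkdir -p /usr/groups/netboot/osstest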

Make sure the test host is able to access the contents of public_html; then you can put “WebspaceUrl http://YOUR.CONTROLLER/public_html” in your OSSTest config. The test host will try to fetch all sorts of things from there.

The next step is to set “WebspaceLog /PATH/TO/WEBSERVER/ACCESS.LOG”. OSSTest watches the web server access log; when the test host or a guest fetches things via HTTP, OSSTest gets to know their status. I use Apache2, so I’ve set WebspaceLog to /var/log/apache2/access.log, which just works.

Have the Debian PXE installers ready. Remember the “DebianSuite” option in your config file? To make OSSTest fully functional you will also need to place the Debian PXE installers in the right place. You can grab Debian’s PXE installers from any of the Debian archives around the world and put them under the TFTP root you just set up. I would recommend having at least amd64 and i386 in place. Make sure the installers are accessible from the test host before proceeding.
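As a rough sketch of fetching the amd64 installer for squeeze (which now lives on archive.debian.org) into the TFTP root used above; the mirror URL and target directory are assumptions, so check what layout your OSSTest config actually expects:

cd /usr/groups/netboot/
# assumption: standard Debian netboot layout and a per-suite/arch target directory
wget http://archive.debian.org/debian/dists/squeeze/main/installer-amd64/current/images/netboot/netboot.tar.gz
mkdir -p squeeze-amd64 && tar -C squeeze-amd64 -xzf netboot.tar.gz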

By now we’re all set! Next step:

./standalone-reset

This will reset everything in standalone mode and create standalone.db, which includes test jobs and runtime variables. You can peek at what’s inside that database with sqlite3.
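For example, a quick peek (the runvars table name is an assumption based on what OSSTest stores; start with .tables to see what is really there):

sqlite3 standalone.db '.tables'
# assumption: runtime variables live in a table called runvars
sqlite3 standalone.db 'SELECT * FROM runvars LIMIT 10;'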

The first job to run should be a build-* job. That will 1) verify that your setup is correct and 2) generate a bunch of runtime variables for subsequent test-* jobs.

./sg-run-job build-amd64 # WARNING: this will wipe out your test box

If the job passes, you’re all set and can play around with other jobs. The default setting is to wipe the test box on every run. If you don’t want that, you need to specify OSSTEST_HOST_REUSE=1 as stated in README.

An example of what I run:

./sg-run-job build-amd64-pvops
OSSTEST_HOST_REUSE=1 ./sg-run-job test-amd64-amd64-xl

If you only want to run a specific testcase, you can try OSSTEST_JOB=$JOBNAME ./ts-XXX host=$TESTHOSTNAME.
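For instance, using the job and host names from earlier in this post (substitute your own; this is just the generic form above filled in):

# re-run only the guest-start step of the test-amd64-amd64-xl job on host kodo4
OSSTEST_JOB=test-amd64-amd64-xl ./ts-guest-start host=kodo4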

Customized tree / revisions

By default OSSTest always fetches trees and revisions from xenbits. You can easily override them in standalone.config.

Say I want to test a specific revision of Xen; then I have:

export REVISION_XEN=c5e9596cd095e3b96a090002d9e6629a980904eb

in my standalone.config.

You can look at make-flight to find all the interesting environment variables. (Sorry, no documentation yet.)
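As an illustration, here is a standalone.config fragment that pins both the tree and the revision; REVISION_XEN is taken from the example above, while TREE_XEN is my assumption about the variable name, so verify it in make-flight before relying on it:

# assumption: make-flight reads TREE_XEN for the Xen tree location
export TREE_XEN=git://YOUR.MIRROR/xen.git
# pin the exact revision to test (from the example above)
export REVISION_XEN=c5e9596cd095e3b96a090002d9e6629a980904eb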

Writing new testcases

If you’re interested in writing a new testcase, you can do that in two simple steps:

  1. write a ts-my-test-case script; you can use any existing testcase as a template (they are prefixed with “ts-”)
  2. modify sg-run-job, which holds the information about which testcases to run for a specific job

Do have a look at Osstest/, in which you can find lots of helpers to accomplish your task.

Writing new test job

The script responsible for creating jobs is cs-job-create. When you run OSSTest in standalone mode, it is probably more useful to modify make-flight. You also need to modify sg-run-job to link your new job with testcases.

I hope the above information helps you get started with OSSTest. If you have any problems, do come to xen-devel and ask.

Some readers may recall the recent announcement of the open-sourcing of XenRT and may be wondering about the relationship between OSSTest and XenRT. Long term, we expect that XenRT will mature as an open development project and eventually displace OSSTest, but that is not likely to happen for another six months to a year. Developing OSSTest benefits current development, and we hope that getting people involved in writing test cases for OSSTest will also help them write test cases for XenRT when it becomes available. So we have decided to continue developing OSSTest until XenRT is ready to replace it.

Have fun! :-)

Xen network: the future plan

As many of you will (inevitably) have noticed, the Xen frontend / backend network drivers in Linux suffered a regression several months back after the XSA-39 fix (various reports here, here and here). Fortunately that is now fixed (see the most important patch of that series), and the back-porting process to stable kernels is ongoing. Now that we’ve put everything back into a stable-ish state, it’s time to look to the future and prepare the Xen network drivers for the next stage. I mainly work on the Linux drivers, but some of the backend improvement ideas should benefit all frontends.

The goal is to improve network performance and scalability without giving up the advanced security features Xen offers. Just to name a few items:

Split event channels: in the old network drivers there is only one event channel between frontend and backend. That event channel is used by the frontend for TX notifications and RX buffer allocation notifications to the backend, and by the backend for TX completion and RX notifications to the frontend. This is definitely not ideal, as TX and RX interfere with each other. With a small change to the protocol we can split TX and RX notifications into two event channels. This work is now in David Miller’s tree (patches for the backend, the frontend and the documentation).
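To see whether a running guest has actually negotiated split event channels, you can inspect the vif nodes in xenstore from dom0. A rough sketch follows; the key names reflect my reading of the netif protocol headers, so treat them as assumptions and check netif.h:

# assumption: guest domain 1, vif 0; with split channels you should see
# event-channel-tx and event-channel-rx instead of a single event-channel key
xenstore-ls /local/domain/1/device/vif/0 | grep event-channel
# the backend advertises the capability as feature-split-event-channels
xenstore-ls /local/domain/0/backend/vif/1/0 | grep feature-split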
