Tag Archives: ARM

Xen on ARM interrupt latency

Xen on ARM is becoming more and more widespread in embedded environments. In these contexts, Xen is employed as a single solution to partition the system into multiple domains, fully isolated from each other, and with different levels of trust.

Every embedded scenario is different, but many require real-time guarantees. It comes down to interrupt latency: the hypervisor has to be able to deliver interrupts to virtual machines within a very small timeframe. The maximum tolerable lag changes on a case by case basis, but it should be in the realm of nanoseconds and microseconds, not milliseconds.

Xen on ARM meets these requirements in a few different ways. Firstly, Xen comes with a flexible scheduler architecture: it includes a set of virtual machine schedulers, including RTDS, a soft real-time scheduler, and ARINC653, a hard real-time scheduler. Users can pick the one that performs best for their use case. However, if they really care about latency, the best option is to have no scheduling at all and use a static assignment of virtual cpus to physical cpus instead. There is no automatic way to do that today, but it is quite easy to achieve with the vcpu-pin command:

Usage: xl vcpu-pin [domain-id] [vcpu-id] [pcpu-id]

For example, in a system with four physical cpus and two domains with two vcpus each, a user can get a static configuration with the following commands:

xl vcpu-pin 0 0 0
xl vcpu-pin 0 1 1
xl vcpu-pin 1 0 2
xl vcpu-pin 1 1 3

As a result, all vcpus are pinned to different physical cpus. In such a static configuration, the latency overhead introduced by Xen is minimal. Xen always configures interrupts to target the cpu that is running the virtual cpu that should receive the interrupt. Thus, the overhead is down to just the time that it takes to execute the code in Xen to handle the physical interrupt and inject the corresponding virtual interrupt to the vcpu.
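For systems with more domains or cpus, the same 1:1 assignment can be scripted rather than typed by hand. The sketch below only prints the xl commands for review before running them; the domain IDs and vcpu counts are assumptions mirroring the four-cpu example above.

```shell
#!/bin/sh
# Print (not execute) xl vcpu-pin commands for a 1:1 static assignment:
# two domains with two vcpus each, mapped onto pcpus 0..3 in order.
pcpu=0
for dom in 0 1; do
  for vcpu in 0 1; do
    echo "xl vcpu-pin $dom $vcpu $pcpu"
    pcpu=$((pcpu + 1))
  done
done
```

Piping the output through `sh` would apply the pinning once the printed commands look right.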

For my measurements, I used a Xilinx Zynq Ultrascale+ MPSoC, an excellent board with four Cortex A53 cores and a GICv2 interrupt controller. I installed Xen 4.9 unstable (changeset 55a04feaa1f8ab6ef7d723fbb1d39c6b96ad184a) and Linux 4.6.0 as Dom0. As a guest, I ran tbm, a tiny baremetal application that programs timer events in the future and, after receiving them, checks the current time again to measure the latency. tbm uses the virtual timer for measurements; however, the virtual timer interrupt is handled differently from all other interrupts in Xen. Thus, to make the results more generally applicable, I modified tbm to use the physical timer interrupt instead, and modified Xen to forward physical timer interrupts to guests.

Keeping in mind that the native interrupt latency is about 300ns on this board, these are the results on Xen in nanoseconds:

AVG   MIN   MAX   WARM_MAX
4850  4810  7030  4980

AVG is the average latency, MIN is the minimum, MAX is the maximum, and WARM_MAX is the maximum latency observed after discarding the first few interrupts, which warm the caches. For real-time considerations, the number to keep in mind is WARM_MAX: roughly 5000ns when using static vcpu assignments.
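For readers who want to reproduce the methodology, the same four statistics can be recomputed from a stream of raw samples. This is an illustrative sketch, not tbm's actual reporting code: the twelve sample values are made up, and the 10-sample warm-up cutoff is an assumption.

```shell
# Recompute AVG/MIN/MAX/WARM_MAX from latency samples (ns), one per line.
# The demo values piped in below are fabricated; in practice, feed in real
# measurements. WARM_MAX ignores the first 10 (warm-up) samples.
printf '100\n100\n100\n100\n100\n100\n100\n100\n100\n100\n50\n200\n' |
awk 'NR == 1 { min = $1; max = $1 }
     { sum += $1
       if ($1 < min) min = $1
       if ($1 > max) max = $1
       if (NR > 10 && $1 > wmax) wmax = $1 }
     END { printf "AVG=%d MIN=%d MAX=%d WARM_MAX=%d\n", sum/NR, min, max, wmax }'
```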

This excellent result is small enough for most use cases, including piloting a flying drone. However, it can be further improved with the new vwfi Xen command line option. Specifically, when vcpus are statically assigned to physical cpus using vcpu-pin, it makes sense to pass vwfi=native to Xen: it tells the hypervisor not to trap the wfi and wfe instructions, which ARM cpus use to enter a low-power wait state. If no other vcpu can ever be scheduled on a given physical cpu, then we might as well let the guest put the cpu to sleep directly. Passing vwfi=native, the results are:
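How the option reaches Xen depends on your boot setup. On a device-tree boot (as on the Zynq board above), Xen's command line typically lives in the chosen node's xen,xen-bootargs property. In the fragment below, the console arguments are illustrative assumptions; only vwfi=native is the point:

```
/* Device tree fragment (sketch): Xen command line including vwfi=native */
chosen {
    xen,xen-bootargs = "console=dtuart dtuart=serial0 vwfi=native";
};
```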

AVG   MIN   MAX   WARM_MAX
1850  1680  2650  1950

With this configuration, the worst-case latency is just under 2 microseconds, which is extremely close to the hardware limits and should be small enough for the vast majority of use cases. vwfi was introduced recently, but it has been backported to all the Xen stable trees.

In addition to vcpu pinning and vwfi, the third key parameter to reduce interrupt latency is unexpectedly simple: the DEBUG kconfig option in Xen. DEBUG is enabled by default in all cases except for releases. It adds many useful debug messages and checks, at the cost of increased latency. Make sure to disable it in production and when doing measurements.
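When building Xen from source, the option can be toggled through Xen's Kconfig (for example via make -C xen menuconfig). For a low-latency production build, the relevant line in xen/.config should end up looking like the excerpt below; treat this as a sketch, since the exact menu location can differ between Xen versions:

```
# Excerpt of xen/.config for a release-style, low-latency build:
# CONFIG_DEBUG is not set
```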

Xen Project Hackathon 16 : Event Report

We just wrapped up another successful Xen Project Hackathon, an annual event hosted by Xen Project member companies, typically at their corporate offices. This year's event was hosted by ARM at its Cambridge HQ. 42 delegates from Aporeto, ARM, Assured Information Security, Automotive Electrical Systems, BAE Systems, Bromium, Citrix, GlobalLogic, OnApp, Onets, Oracle, StarLab, SUSE and Vates descended on Cambridge to attend. A big thank you (!) to ARM, and in particular to Thomas Molgaard, for organising the event and the social activities afterwards.

Here are a few images that helped capture the event:

Taking a breather and photo opp outside of ARM headquarters in Cambridge

Working on solving the mysteries of the world.

Continuing to work hard on solving the mysteries of the world

Xen Project Hackathons have evolved in format into a series of structured problem solving sessions that scale up to 50 people. We combine this with a more traditional hackathon approach where programmers (and others involved in software development) collaborate intensively on software projects.

This year's event was particularly productive because all our core developers and the project's leadership were present. We covered many topics, but two of the main themes this year revolved around security and community development. We'll cover those topics, and how they fit into our next release, 4.7, and development going forward, in more detail later; but below is a little taste of some of the other themes of this year's Hackathon sessions:

  • Security improvements: A trimmed-down QEMU to reduce the attack surface; de-privileging QEMU and the x86 emulator to reduce the impact of security vulnerabilities in those components; XSplice; KConfig support, which allows parts of Xen to be removed at compile time; run-time disablement of Xen features to further reduce the attack surface; disaggregation; and enabling XSM (Xen's equivalent of SELinux, built on a framework similar to Linux Security Modules) by default.
  • Security features: We had two sessions on the future of XSplice (first version to be released in Xen 4.7), which allows users of Xen to apply security fixes on a running Xen instance (aka no need to reboot).
  • Robustness: A session on restartable Dom0 and driver domains, which again will significantly reduce the overhead of applying security patches.
  • Community and code review: A couple of sessions on optimising our working practices: most notably some clarifications to the maintainer role and how we can make code reviews more efficient.
  • Virtualization Modes: The next stage of PVH, which combines the best of HVM and PV. We also had discussions around some functionality that is currently developed in Linux on which PVH has dependencies.
  • Making Development more Scalable: A number of sessions to improve the toolstack and libxl. We covered topics such as making storage support pluggable via a plug-in architecture, making it easier to develop new PV drivers to support automotive and embedded vendors, and improvements to our build system, testing, stub domains and xenstored.
  • ARM support: There were a number of planning sessions for Xen ARM support. We covered the future roadmap, how to implement PCI passthrough, and how we can improve testing for the increasing range of ARM HW with support for virtualization, also applicable outside the server space.

There were many more sessions covering performance, scalability and other topics. The session hosts post meeting notes on xen-devel@ (search for Hackathon in the subject line) if you want to explore any topic in more detail. To make it easier for people who do not follow our development lists, we also posted links to Hackathon-related xen-devel@ discussions on our wiki.

Besides providing an opportunity to meet face-to-face, build bridges and solve problems, we always make sure that we have social events. After all, Hackathons should be fun and bring people together. This year we had a dinner in Cambridge and, of course, the obligatory punting trip, which is part of every Cambridge visit.

Embarking on the punting journey

Continuing to explore the mysteries of the universe, while on a boat

Again, a big thanks to ARM for hosting the event! Also, a reminder that we’ll be hosting our Xen Project Developer Summit next August in Toronto, Canada. This event will happen directly after LinuxCon North America and is a great opportunity to learn more about Xen Project development and what’s happening within the Linux Foundation ecosystem at large. CFPs are still open until May 6th!

ARM hosts Xen Project Hackathon, April 18-19 in Cambridge, UK

I am pleased to announce the next Xen Project Hackathon. The Hackathon will be hosted by ARM at its Cambridge headquarters on April 18 and 19. I want to thank Philippe Robin and Thomas Molgaard from ARM for hosting the Hackathon.

ARM designs technology that is at the heart of advanced digital products and has built a broad partner community that increasingly embraces an open source and collaborative development model to keep pace with transitions in the industry. Enabling developer collaboration on open source projects, like Xen, is key to help optimize support for system virtualization. ARM is pleased to host and support this event.

What to expect at a Xen Project Hackathon?

The aim of the Hackathon is to give developers the opportunity to meet face to face, to discuss development, coordinate, write code, and collaborate with other developers. And, of course, the event will allow everyone to meet in person and build relationships. To facilitate this, we will have a social event on the evening of the 18th. We will cover many hot topics such as the latest Xen Project Hypervisor 4.7 features, planning for the next Xen Project Hypervisor release, Cloud Integration, Cloud Operating Systems, and Mirage OS, as well as Xen Project in emerging segments such as embedded, mobile, automotive and NFV. But, at the end of the day, the community will choose the topics that are covered; more on our process below.

To ensure that the event runs efficiently, we adhere to the following process: Each day is divided into several segments. We will have a number of work areas that are labelled with numbers (or other unique identifiers). Each morning starts with a plenary and scheduling session. Every attendee who cares about a topic can announce a topic, which we will map against a work area and time-slot. This makes it easy for other attendees to participate in projects and discussions they care about. We also encourage attendees to highlight projects they plan to share before the event by adding them to our wiki.

We will wrap up each day with another short plenary session: the aim of this session is to summarize what was done, show brief demos and make improvements to the process.

To give you a sense of the venue, here are a few pictures:

ARM Cambridge offices: building exterior, panorama, and atrium

How to Register?

As spaces at the Xen Project Hackathon are limited, we are asking attendees to request an invitation. You will need to cover your own travel, accommodation and other costs such as evening meals, etc. We do have a very limited number of travel stipends available for individuals who cannot afford to travel. Please contact community dot manager at xenproject dot org if you need to make use of one.


Getting Started With FreeRTOS for Xen on ARM

One of the challenges of using Xen in embedded environments is the need for core components to meet critical timing requirements. In traditional implementations engineers use real-time operating systems (RTOS) to ensure, for example, that an automobile’s brakes engage within a reasonable amount of time after the driver presses the brake pedal. It would clearly be bad if such a command were to be delayed unduly due to the car’s navigation or entertainment systems.

Over the last year, Galois has been trying to simplify one aspect of this challenge by porting an open-source RTOS, FreeRTOS, to Xen. This allows engineers to implement Xen domains that meet their own independent timing requirements. To be fully effective, this must be combined with a real-time scheduler for Xen itself, allowing for a top-to-bottom real time system. For now, we have had some success getting real-time behavior by simply pinning real-time domains to one CPU on a multi-CPU system.

In this article I’ll show you how you can get the code and how you can build your own FreeRTOS-based kernels for use on Xen/ARM systems. I’ll also provide links to technical documentation for those interested in learning more about the implementation.

As part of the community’s commitment to improving Xen for non-datacenter uses, Galois is also a member of the Xen project’s Embedded and Automotive team. Because of its key isolation and security features, flexible virtualization mode and architecture, driver disaggregation and ARM support (only 90K lines of code), Xen is a perfect fit for embedded applications.

Getting the code

The source is on GitHub and can be obtained with git:

git clone https://github.com/GaloisInc/FreeRTOS-Xen.git

Setting up

For the purposes of this article I’ll assume an Ubuntu Linux development system. I’ll cover how to cross-compile FreeRTOS but I’ll assume you already have Xen deployed on an ARM system. To build FreeRTOS, you’ll need:

  • An installed collection of Xen development headers corresponding to the version of Xen running on your ARM system
  • An ARM cross compiler toolchain (installable on Ubuntu via the gcc-arm-none-eabi package, which provides the arm-none-eabi-gcc compiler)

Building FreeRTOS

The repository includes an example application in the Example/ directory. This application starts up some simple tasks and then shuts down; most of what happens in the application takes place before main() runs, while Xen services such as the console are configured.

Before we can build the application itself, we’ll need to build the FreeRTOS library. To do that, from the FreeRTOS-Xen repository, run

$ make -C Demo/CORTEX_A15_Xen_GCC

This builds FreeRTOS.a, which contains everything but your application; we'll build and link the application in the next step.

NOTE: if your cross compiler name prefix isn’t arm-none-eabi-, adjust the CROSS_COMPILE variable in the Makefile accordingly. The same goes for the location of your Xen headers, which is configured by XEN_PREFIX.

Building the example application

Now we can build the application and link it against the FreeRTOS library. To do so, from the FreeRTOS-Xen repository, run

$ make -C Example/

This will build two binaries:

  • Example/Example.elf: the ELF version of your FreeRTOS application, suitable for debugging and disassembly with GDB and other tools
  • Example/Example.bin: the binary version of your FreeRTOS application, suitable for deploying as the kernel image of a Xen VM

Starting a FreeRTOS Xen guest

Once we’ve built Example.bin, we’ll need to place it somewhere on the filesystem for our ARM system and create a Xen guest configuration file, say Example.cfg, as follows (with the path to the kernel adjusted accordingly):

kernel = ".../path/to/Example.bin"
memory = 16
name = "Example"
vcpus = 1
disk = []

This port of FreeRTOS does not use multiple CPUs, so we assign only one. The application we’ve built could probably run with even less memory; the default configurable heap size is 1 MB, so even smaller memory sizes are realistic.
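If you want to trim the guest further, FreeRTOS's heap size is controlled by the standard configTOTAL_HEAP_SIZE macro in FreeRTOSConfig.h. The fragment below is a sketch: the 1 MB default matches the text above, but the exact location and default in this port's configuration header should be verified.

```c
/* FreeRTOSConfig.h (sketch): shrink the kernel heap from 1 MB to 256 KB.
 * configTOTAL_HEAP_SIZE is the standard FreeRTOS knob; check this port's
 * config header for where it is actually defined. */
#define configTOTAL_HEAP_SIZE    ( ( size_t ) ( 256 * 1024 ) )
```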

We can launch the VM with xl as usual:

xl create -f Example.cfg

Once the domain is up and has established a Xen console connection, we should see the application’s example tasks running and printing messages to the Xen emergency console:

Output from the example program included in the distribution of FreeRTOS for Xen on ARM

(See the README for information about using the standard Xen console.)

Future Work & Contributions

This port is intended for research use, and we would love help with some of its missing or incomplete features. You can read a list of those in the readme in the repository and submit GitHub pull requests to contribute!

Learning More

Many more details of the port, including source code layout, Xen services, configuration parameters, memory layout, etc., can be found in the readme file in the repository. For information on FreeRTOS itself, the FreeRTOS web site provides excellent documentation on both FreeRTOS concepts and APIs.

Need help?

Contact me (Jonathan Daugherty) at jtd AT galois DOT com if you have questions and feel free to open tickets or submit pull requests on GitHub!


The Xen of Static Checking, Part 1: bug-free code without the effort

OK, maybe the title of this post is a slight exaggeration but it’s good to have goals for the future!

It's a goal which many would argue is unreachable without the genesis of Strong AI. It's also a goal where we can achieve very useful results just by trying to get there. I'm going to write a series of articles about my current work on static checking of the Xen codebase. The goal here is to find errors before they occur, spot bugs that aren't caught by human reviewers, and improve the overall quality of the codebase. Unfortunately, global harmony and toast which doesn't fall butter-side-down are probably still outside the scope of this work; sorry.

This first article gives an overview of the historical background of static code checking. Future articles in this series will describe what I’m doing to apply static checking to the Xen codebase and the possibilities for Xen in the future.

Continue reading