Xen Project Celebrates Unikraft Unikernel Project’s One Year Anniversary

It has been one year since the Xen Project introduced Unikraft as an incubator project. In that time, the team has made great strides in simplifying the process of building unikernels through a unified and customizable code base.

Unikraft is an incubation project under the Xen Project, hosted by the Linux Foundation, focused on easing the creation of unikernels, which compile source code into a lean operating system that includes only the functionality required by the application logic. As containers increasingly become the way cloud applications are built, there is a need to drive even more efficiency into the way these workloads run. Their ultra-lightweight nature and small trusted computing base make unikernels ideal not only for cloud applications, but also for fields where resources are constrained or safety is critical.

Unikraft tackles one of the fundamental downsides of unikernels: despite their clear potential, building them is often manual, time-consuming work carried out by experts. Worse, the work, or at least chunks of it, often needs to be redone for each target application. Unikraft’s goal is to provide an automated build system with which non-experts can easily and quickly generate extremely efficient and secure unikernels without having to touch a single line of code. Further, Unikraft explicitly supports multiple target platforms: not only virtual machines for Xen and KVM, but also OCI-compliant containers and bare metal images for various CPU architectures.

Over the last year, the lead team at NEC Laboratories Europe, along with external contributors from companies like ARM and universities such as University Politehnica of Bucharest, has made great strides in developing and testing Unikraft’s base functionality, including support for a number of CPU architectures, platforms, and operating system primitives. Notable updates include support for ARM64.

The Unikraft community continues to grow. Over the last year, we’ve seen impressive momentum in terms of community support and involvement:

  • Contributions from outside the project founders (NEC) now make up 25% of all contributions.
  • Active contributors rose 91%, from 12 contributors to 23.

The initial NEC code contribution was around 86 KLOC; since then, around 34 KLOC have been added and/or modified.
An upcoming milestone for the project is the Unikraft v0.3 release, which will ship in February.

This release includes:

  • Xenstore and Xen bus support
  • ARM32 support for Xen
  • ARM64 support for QEMU/KVM
  • x86_64 bare metal support
  • Networking support, including an API that allows for high-speed I/O frameworks (e.g., DPDK, netmap)
  • A lightweight network stack (lwip)
  • Initial VFS support along with a simple but performant in-RAM filesystem
We are very excited about this coming year, where the focus will be on automating the build process and supporting higher-layer functionality and applications:

  • External standard libraries: musl, libuv, zlib, openssl, libunwind, libaxtls (TLS), etc.
  • Language environments: Javascript (v8), Python, Ruby, C++
  • Frameworks: Node.js, PyTorch, Intel DPDK
  • Applications: lighttpd, nginx, SQLite, Redis, etc.
Looking forward, in the first half of 2019 Unikraft will concentrate its efforts on supporting an increasing number of programming languages and applications, and on actively creating links to other unikernel projects in order to ensure that the project delivers on its promise. Stay tuned for what’s in store. If you want to take Unikraft out for a spin, to contribute, or simply to find out more about Unikraft, please head over to the project’s website.

Also, if you are attending FOSDEM, February 2nd and 3rd, please stop by room AW1.121 for the talk “Unikraft: Unikernels Made Easy,” given by Simon Kuenzer. Simon, a senior systems researcher at NEC Labs and the lead maintainer of Unikraft, will be speaking all about Unikraft and giving a comprehensive overview of the project, where it’s been and what’s in store.

Want to learn more about Unikraft and connect with the Xen community at large? Registration for the annual Xen Project Developer and Design Summit is open now! Check out information on sponsorships, speaking opportunities and more here.

Xen Project Developer and Design Summit: Registration Open Now

Starting today, registration officially opens for The Xen Project Developer & Design Summit. This year’s Summit, taking place from July 9 through 11 in Chicago, will bring together the Xen Project community of developers and power users to share ideas, latest developments, and experiences, as well as offer opportunities to plan and collaborate on all things Xen Project. 

If you’d like to present at the Summit and have a topic that you’d like to submit, the Call For Proposals is open now and will close April 12, 2019.

Last but not least, we have many opportunities to support the Summit via sponsorships. For information regarding registration, speaking opportunities and sponsorships, head over to the event website and learn more!

Xen Project 4.11.1 and 4.8.5 are available

I am pleased to announce the release of Xen 4.11.1 and 4.8.5. Xen Project Maintenance releases are released in line with our Maintenance Release Policy. We recommend that all users of the 4.11 and 4.8 stable series update to the latest point release.

These releases are available from their git repositories:

xenbits.xen.org/gitweb/?p=xen.git;a=shortlog;h=refs/heads/stable-4.11 (tag RELEASE-4.11.1)

xenbits.xen.org/gitweb/?p=xen.git;a=shortlog;h=refs/heads/stable-4.8 (tag RELEASE-4.8.5)

or from the Xen Project download pages



These releases contain many bug fixes and improvements. For a complete list of changes, please check the lists of changes on the download pages.

Celebrating 15 Years of the Xen Project and Our Future

Xen began in the 1990s as part of a research project, led by Ian Pratt and Keir Fraser at The University of Cambridge Computer Laboratory, to build a public computing infrastructure on the Internet. The Xen Project is now one of the most popular open source hypervisors, with more than 10 million users, and this October marks our 15th anniversary.

From its beginnings, Xen technology focused on a modular and flexible architecture, a high degree of customizability, and security. This security mindset from the outset led to the inclusion of non-core security technologies, which eventually allowed the Xen Project to excel outside of the data center and become a trusted foundation for security and embedded vendors (e.g., Qubes, Bromium, Bitdefender, Star Labs, Zentific, DornerWorks, Bosch, BAE Systems), as well as a leading hypervisor contender for the automotive space.

As the Xen Project looks to a future of virtualization everywhere, we reflect on some of our major achievements over the last 15 years. To celebrate, we’ve created an infographic that captures some of our key milestones; please share it on social media.

A few community members also weighed in on some of their favorite Xen Project moments and what’s to come:

“Xen offers best-in-class isolation and separation while preserving nearly bare-metal performance on x86 and ARM platforms. The growing market for a secure hypervisor ensures Xen will continue to grow in multiple markets to meet users’ demands.”
– Doug Goldstein, Software Developer V, Hypervisors at Rackspace

“Xen started life at the University of Cambridge Computer Laboratory, as part of the XenoServers research project to build a public computing infrastructure on the Internet. It’s been fantastic to see the impact of Xen, and the role it’s played at the heart of what we now call Infrastructure as a Service Cloud Computing. It’s been an incredible journey from Xen’s early beginnings in the University, to making our first open source release in 2003, to building a strong community of contributors around the project, and then Xen’s growth beyond server virtualization into end-user systems and now embedded devices. Xen is a great example of the power of open source to enable cooperation and drive technological progress.”
– Ian Pratt, Founder and President at Bromium, and Xen Project Founder

“From its beginnings as a research project, able to run just a handful of Linux VMs, through being the foundation of many of the world’s largest clouds, to being the open-source hypervisor of choice for many next-generation industrial, automotive and aeronautical applications, Xen Project has shown its adaptability, flexibility and pioneering spirit for 15 years. Today, at Citrix, Xen remains the core of our Citrix Hypervisor platform, powering the secure delivery of applications and data to organizations across the globe. Xen Project Hypervisor allows our customers to run thousands of virtual desktops per server, many of them using Xen’s ground-breaking GPU virtualization capabilities. Happy birthday, Xen!”
– James Bulpin, Senior Director of Technology at Citrix

“The Xen open source community is a vibrant and diverse platform for collaboration, something which is important to Arm and vital to the ongoing success of our ecosystem. We’ve contributed to the Xen open source hypervisor across a range of markets starting with mobile, moving into the strategic enablement that allowed the deployment of Arm-based cloud servers, and more recently focusing on the embedded space, exploring computing in safety-sensitive environments such as connected vehicles.”
– Mark Hambleton, Vice President of Open Source Software, Arm

“I – like many others – associate cloud computing with Xen. All my cloud-related projects are tied to companies running large deployments of Xen. These days even my weekend binge-watching needs are satisfied by a Xen instance somewhere. With Xen making its way into cars, rocket launch operations and satellites, it’s safe to say the industry at large recognizes it as a solid foundation for building the future, and I’m excited to be a part of it.”
– Mihai Dontu, Chief Linux Officer at Bitdefender

“Xen was the first open source hypervisor for the data center, the very foundation of the cloud as we know it. Later, it pioneered virtualization for embedded and IoT, making its way into set-top boxes and smaller ARM devices. Now, we are discussing automotive, medical and industrial devices. It is incredibly exciting to be part of a ground-breaking project that has been at the forefront of open source innovation since its inception.”
– Stefano Stabellini, Principal Engineer, Tech Lead at Xilinx and Xen on ARM Committer and Maintainer

“Congratulations to the Xen Project on this milestone anniversary. As the first open source data center hypervisor, Xen played a key role in defining what virtualization technology could deliver and has been the foundation for many advancements in the modern data center and cloud computing. Intel has been involved with Xen development since the early days and enjoys strong collaboration with the Xen community, which helped make Xen the first hypervisor to include Intel® Virtualization Technology (VT-x) support, providing a more secure, efficient platform for server workload consolidation and the growth of cloud computing.”
– Susie Li, Director of Open Source Virtualization Engineering, Intel Corp.

“It is amazing how a project that started 15 years ago has not lost any of its original appeal, despite the constant evolution of hardware architectures and new applications that were unimaginable when the Xen Project started. In certain segments, e.g. power management, the pace of innovation in Xen is just accelerating and serves as the ultimate reference for all other virtualization efforts. Happy quinceañera (sweet 15) Xen!”
– Vojin Zivojnovic, CEO and Co-Founder of Aggios

Building the Journey Towards the Next 15 Years; Sneak Peek into Xen Project 4.12
The next Xen Project release is set for March 2019. It continues the Xen Project’s efforts around security for cloud environments and brings rich features and architectural changes for automotive and embedded use cases. Expect:

  • Deprivileged Device Model: Under tech preview in QEMU 3.0, this feature adds extra restrictions to a device model running in domain 0 in order to prevent a compromised device model from attacking the rest of the system.
  • The capability to compile a PV-only version of Xen, giving cloud providers simplified management, a reduced attack surface, and the ability to build a Xen Project hypervisor configuration with no “classic” PV support at all.
  • The ability for Xen to boot multiple domains in parallel on Arm, in addition to dom0, enabling domains to boot in less than 1 second. This is the first step towards a dom0-less Xen, which benefits statically configured embedded systems that require very fast boot times.
  • Reduction of code size to 46 KSLOC for safety certification, and the first phase of making the codebase MISRA C compliant.
    • MISRA C is a set of software development guidelines for the C programming language developed by the Motor Industry Software Reliability Association with the aim to facilitate code safety, security, portability, and reliability in the context of embedded systems.

Thank you for the last 15 years and for the next 15+ to come!
Lars Kurth, Chairperson of the Xen Project

P.S. If you want more insight on why Xen has been so successful, check out this recent talk from Open Source Summit Europe!


Xen Project 4.10.2 and 4.9.3 are available

I am pleased to announce the release of Xen 4.10.2 and 4.9.3. Xen Project Maintenance releases are released in line with our Maintenance Release Policy. We recommend that all users of the 4.10 and 4.9 stable series update to the latest point release.

These releases are available from their git repositories:

xenbits.xen.org/gitweb/?p=xen.git;a=shortlog;h=refs/heads/stable-4.10 (tag RELEASE-4.10.2)

xenbits.xen.org/gitweb/?p=xen.git;a=shortlog;h=refs/heads/stable-4.9 (tag RELEASE-4.9.3)

or from the Xen Project download pages



These releases contain many bug fixes and improvements. For a complete list of changes, please check the lists of changes on the download pages.

Google Summer of Code Project, TinyVMI: Porting LibVMI to Mini-OS

This blog post comes from Lele Ma, a Ph.D. student at William and Mary. He was recently a Google Summer of Code Intern working on the Honeynet Project. 


This post introduces the project I worked on with the Honeynet Project at Google Summer of Code this year. TinyVMI ports a library (LibVMI) to a tiny operating system (Mini-OS). Once ported, LibVMI retains all of its functionality while running inside a tiny virtual machine, which is much smaller and performs better than the same library running on a Linux OS.

Mini-OS & Unikernels

Mini-OS is a tiny operating system demo distributed with the source of the Xen Project Hypervisor (abbreviated as Xen below). It has been the basis for the development of several unikernels, such as ClickOS and Rump kernels. A unikernel can be viewed as a minimized operating system with the following features:

  • No ring0/ring3, or kernel/user mode, separation. Traditional operating systems, like Linux, separate execution into kernel mode and user mode to prevent malicious users (applications) from accessing kernel memory. However, in unikernels like Mini-OS, there is only one mode: ring0, or kernel mode. This eliminates the burden of context switching between the two modes, reducing both the kernel’s code size and its runtime overhead.
  • A minimal set of libraries. Instead of shipping with many system and application libraries to provide a general purpose computing environment, a unikernel is configured with only the minimal set of libraries necessary for the application that runs in it, which is why it is also called a library operating system. For example, Mini-OS can be configured with libc so that users can write applications in C.

Fig.1 General Purpose OS vs. Mini-OS Unikernel

As shown in Fig.1, a unikernel is much smaller in size and eliminates all unnecessary tools and libraries, and even file systems from the OS, keeping only the application code and a tiny OS kernel. Unikernels can be more efficient than traditional operating systems, especially for cloud platforms where each specialized application is usually managed in a standalone VM. Unikernels are supposed to be the next generation of cloud platforms because they can achieve efficiency in several aspects. These include but are not limited to:

  1. Less memory footprint. A unikernel requires significantly less memory than a traditional operating system. For example, a Mini-OS VM with the LibVMI application requires only 64MB of main memory, whereas a 64-bit Linux VM would typically occupy 4GB to achieve average performance. The reduced memory footprint allows a single physical machine to host more VMs and reduces the average cost per service.
  2. Faster booting. Since the memory footprint is small and there are no redundant libraries or kernel modules, a tiny OS requires significantly less time to boot than a traditional OS. Booting a tiny OS is just like starting the application itself.
  3. No kernel mode switching. The OS kernel and the application live in the same memory region, so the CPU context switches caused by system calls are eliminated. As a result, the runtime performance of a unikernel can be much better than that of a traditional OS.
  4. More secure. Each unikernel VM runs only one application. Isolation between applications is enforced by the hypervisor instead of a shared OS kernel. Compared to process or container isolation in Linux, a unikernel benefits from stronger, lower-level isolation.
  5. Easy deployment; easy to use. A unikernel application is built into a single binary that runs directly as a VM image, which simplifies deployment. All functionality is customized at build time; once deployed, the binary requires no human modification other than replacing the whole image.

In brief, Mini-OS is a tiny OS that originated from the Xen Project hypervisor. Like other unikernels, Mini-OS provides higher performance and a more secure computing environment than a traditional operating system in the cloud.

Why port LibVMI to Mini-OS

LibVMI is a security-critical library that can be used to view a target VM’s raw memory from another guest VM, thus gaining a view of almost all activity on the target VM.

Traditionally, LibVMI runs in Dom0 on the hypervisor. However, Dom0 is already very large even without LibVMI in it. I got the idea of separating LibVMI from Dom0 from the following observations:

  1. Dom0 is a general purpose OS hosting many daily use applications, such as administrator tools. However, LibVMI is a special purpose library and usually not for daily use. Furthermore, there are almost no direct communications between LibVMI and other applications. Thus it is not necessary to install LibVMI in Dom0.
  2. Security risk. Dom0 is a critical domain for the hypervisor platform. Introducing a new code base to the kernel would also introduce new security risks. Other applications on Dom0 could leverage kernel vulnerabilities to compromise LibVMI, and vice versa, a bug in LibVMI could crash other applications or even the entire Dom0 kernel.
  3. Performance overhead. As introduced above, a general purpose OS is large and inefficient for running a special purpose application. CPU mode switching, a large memory footprint, and process scheduling all introduce overhead in Dom0.

Therefore, we propose to port LibVMI to the tiny Mini-OS, named TinyVMI, to explore whether we can achieve the above benefits.


Challenges

First, the hypervisor isolates guest VMs from reading each other’s memory pages, so a guest VM must be granted sufficient permission before it can introspect another VM’s memory. Second, LibVMI depends on several libraries that are not supported in the original Mini-OS. This project therefore needs solutions to overcome these two challenges.

Permissions for accessing another VM’s memory

To introspect a VM’s memory from another guest VM, the first step is to get permission from the hypervisor. By default, the memory pages of each VM are strictly isolated: one VM is not allowed to access the memory pages of another. Although the hypervisor lets programmers share memory pages between two VMs via grant tables, this requires the target VM to explicitly offer the page for sharing. Since the entire target VM is untrusted and no changes should be made to it, LibVMI instead uses foreign memory mapping hypercalls to map memory pages of the target VM into its own address space. Whether a guest VM (or Dom0) is permitted to map a foreign page is controlled by the Xen Security Module (XSM), which will be introduced below.
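The foreign mapping approach can be sketched with libxc, the Xen control library. This is a simplified illustration, not LibVMI’s actual code; the target domain ID and frame number below are placeholder values, and the program must run in a domain that XSM permits to map the target:

```c
/* Sketch: map one page of a target domain read-only via libxc.
 * Requires a Xen host with the Xen development headers installed;
 * target_domid and pfn are illustrative values only. */
#include <stdio.h>
#include <sys/mman.h>
#include <xenctrl.h>

int main(void)
{
    uint32_t target_domid = 2;   /* illustrative target domain  */
    unsigned long pfn = 0x1000;  /* illustrative page frame     */

    xc_interface *xch = xc_interface_open(NULL, NULL, 0);
    if (!xch)
        return 1;

    /* Map one read-only page of the target VM into our space. */
    void *page = xc_map_foreign_range(xch, target_domid, XC_PAGE_SIZE,
                                      PROT_READ, pfn);
    if (page) {
        printf("first byte: %02x\n", ((unsigned char *)page)[0]);
        munmap(page, XC_PAGE_SIZE);
    }

    xc_interface_close(xch);
    return 0;
}
```

If XSM denies the mapping, the call simply returns NULL; the policy rules discussed below are what make it succeed.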

Furthermore, Xen event channels allow a guest VM to monitor the target VM’s memory in real time with the help of hardware interrupts. A ring buffer is shared between the hypervisor and the guest kernel to transfer event information. Accessing the ring buffer also requires an XSM permission grant.

The Xen Security Module (XSM) uses FLASK policies, as in SELinux, to enforce Mandatory Access Control (MAC) between domains. Each permission is denied by default unless explicitly allowed in the policy. Permissions are granted according to the categories a guest domain belongs to, such as its type, role, user, and attributes (more).

The category of a VM is labeled in the configuration file we use to create it via xl create <config_file>. For example:
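For illustration, a minimal xl config fragment carrying such a label might look like this (the VM name and memory size are placeholders; only the seclabel line matters, and it follows the FLASK user:role:type format):

```
# Hypothetical xl guest config fragment
name     = "tinyvmi"
memory   = 64
seclabel = "system_u:system_r:domU_t1"
```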

will label the VM as type domU_t1, role system_r, and user system_u. Type is the lowest level of the category hierarchy: multiple types can be grouped under one role, and multiple roles under one user.

Permissions are granted based on the types of a VM. For example, the permission of map_read allows a domain to map other domain’s memory with read-only permission. The policy:
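For illustration, such a rule in Xen’s FLASK policy language might read roughly as follows (using the mmu object class, where Xen’s XSM policy defines map_read; treat the exact syntax as a sketch):

```
# Hypothetical FLASK rule granting read-only foreign mapping
allow domU_t1 domU_t2:mmu { map_read };
```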

will allow a VM with type domU_t1 to read the memory of another VM with type domU_t2.

In addition to the permissions granted by XSM, we also need permission to read information from Xenstore, which is used to get metadata about the target VM, such as resolving the Domain ID from the domain’s string name. Xenstore permissions can be read via the command xenstore-ls -p:

The meaning of each permission flag can be found in the manual. The command xenstore-chmod can be used to grant read permission to specific VMs. For example, to allow the VM with ID 8 to read the Xenstore directory /local/domain, you can run:
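A command of roughly this shape (check the xenstore-chmod manual for the exact permission syntax; the flag layout here is my assumption):

```shell
# "n0" keeps "no access" as the default for other domains;
# "r8" adds read permission for domain ID 8.
xenstore-chmod /local/domain n0 r8
```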

Build New Libraries into Mini-OS

The next challenge is building new libraries into Mini-OS. Mini-OS is an exemplary minimal operating system. To keep the kernel small, only a few libraries can be built into it: newlib for the C library, Xen-related libraries such as libxc to communicate with the hypervisor, and lwip for basic networking.

To port LibVMI to Mini-OS, two more libraries are needed: a JSON library, libjson-c, to parse Rekall profiles, and GLib for utility data structures.

In theory, most libraries written in C can be built into Mini-OS with the help of newlib; libjson-c is one example, and this post introduces how to build new libraries. However, some libraries, such as GLib, need to be manually customized for Mini-OS by removing the unsupported portions.

Furthermore, security applications written in C++ can also be ported to Mini-OS. For example, DRAKVUF is a binary analysis system built on top of LibVMI and Xen, and a portion of its code is written in C++. To build such code in Mini-OS, we need to cross-compile the C++ standard libraries into the tiny kernel.

Project Status & Results

Functions added to Mini-OS

  • Support for LibVMI functions to introspect Linux and Windows guests on the x86 architecture. Both memory access and event support are implemented. The ARM architecture and other OS kernels (such as FreeBSD) have not been explored yet.
  • A customized GLib and a statically compiled libjson-c are cross-compiled into Mini-OS.
  • C++ language support. The C++ standard library from GCC was cross-compiled into static libraries such as libgcc and libstdc++. Now we can program Mini-OS in C++, not only C! Detailed steps can be found in this post.
  • A GitHub documentation site and a blog are maintained, documenting how to build and run TinyVMI and tracking the progress of each step during the summer.

Performance Analysis

In order to evaluate the TinyVMI system, we conducted a simple analysis and experiment to show its efficiency. We built two VM domains with LibVMI on the same hypervisor for comparison: one guest VM running Mini-OS with LibVMI, and another VM, Dom0, running Linux (Ubuntu 16.04) with LibVMI. The target VM being introspected is a 64-bit Linux (Ubuntu 16.04). Results are shown in Fig.2 and Fig.3.

Fig.2 Code Size of LibVMI and Different Kernels

Fig.3 Time in Walking Through Page Table

Fig.2 shows the overall code size of each OS with LibVMI in it. LibVMI with Mini-OS totaled 83K lines of code (LoC), while LibVMI with the Linux kernel had 177K LoC, a reduction of more than 50%. Note that the LoC figure for the Linux kernel does not include any driver code and thus reflects the minimal possible size of a Linux kernel; with drivers included, a Linux system can exceed 15M LoC.

Fig.3 shows the time elapsed reading one page by walking the four levels of the page table while introspecting a 64-bit Linux guest VM, averaged over 500 consecutive page reads. LibVMI in Mini-OS took 3.7 microseconds, while LibVMI in Linux took 5.7 microseconds, saving more than 30% of the time.


Conclusion

To briefly conclude the project, we have successfully ported the core functionality of LibVMI to Mini-OS, the tiny OS on Xen. By customizing the XSM policy specifications and Xenstore permissions, a guest VM is granted permission to introspect another guest VM via VMI techniques. By customizing and cross-compiling static libraries into Mini-OS, we have built LibVMI into a tiny OS, enabling a tiny VM to introspect both Linux and Windows guest VMs. Evaluations show the code size is reduced by more than 50% and performance is improved by more than 30% compared to VMI operations in Dom0 on the hypervisor.

Future Directions

  • DRAKVUF integration. After the last week of GSoC, C++ language support was added to TinyVMI with the help of this post from Notes to self. The next step would be cross-compiling the DRAKVUF system into TinyVMI. This will enable more applications to take full advantage of the LibVMI interfaces already provided in Mini-OS.
  • Dom0 Introspection. We all know Dom0 is huge. Although much work has been done to disaggregate it, it is still huge. TinyVMI itself has a small trusted computing base (TCB); however, we still need to trust Dom0 to enforce the XSM policies, which enlarges the TCB of the system significantly. And since we have to trust Dom0, monitoring Dom0’s main memory from TinyVMI would be pointless. A further step in disaggregating Dom0 would be to separate the XSM policy management interface into another sub-domain, or into the same domain as TinyVMI. Taking this apart would make it possible to eliminate Dom0 from the trusted computing base and allow TinyVMI to monitor Dom0 via VMI techniques.


Thanks to my mentors, Steven Maresca and Tamas K Lengyel, for accepting me as a student in GSoC this year. This was my first time at GSoC, and this exciting project could not have been achieved without your prompt, helpful guidance and gracious patience. Thanks to Zibby Keaton for checking the grammar of this post. Thanks to the Google Summer of Code committee for providing such a great opportunity for us to explore the world of open source!