
On rump kernels and the Rumprun unikernel

The Rumprun unikernel, based on the driver components offered by rump kernels, provides a means to run existing POSIX applications as unikernels on Xen. This post explains how we got here (it matters!), what sort of problems can be solved today, and a bit of what is in store for the future. It assumes you are already familiar with unikernels and their benefits, or have at least followed the unikernel link above, so we will skip a basic introduction to unikernels.

Pre-Xen history for rump kernels

The first line of code for rump kernels was written more than 8 years ago, in 2007, and the roots of the project can be traced to some years before that. The initial goal was to run unmodified NetBSD kernel drivers as userspace programs for testing and development purposes. Note that in our terminology a driver is any software component that acts as a protocol translator, e.g. a TCP/IP driver, file system driver or PCI NIC driver. The goal, in other words, was to run the drivers in a minimal harness without the rest of the OS, so that the OS would not get in the way of development. That quality of most of the OS not being present also explains why the container the drivers run in is called a rump kernel. It did not take long to realize the additional potential of isolated, unmodified, production-quality drivers. “Pssst, want a portable, kernel-quality TCP/IP stack?” So the goal of rump kernels was adjusted to providing portable, componentized drivers; developing and testing NetBSD drivers as userspace programs became one side effect enabled by that goal. Already in 2007 the first unikernel-like software stack built on rump kernels was sketched, using file system drivers as an mtools-workalike (though truthfully it was not a unikernel, for reasons we can split hairs about). Later, in 2008, a proper implementation of that tool was done under the name fs-utils [Arnaud Ysmal].

The hard problem with running drivers in rump kernels was not figuring out how to make it work once; the hard problem was figuring out how to make it sustainable, so that you could simply pick any vintage of the OS source tree and use the drivers in rump kernels out of the box. It took about two weeks to make the first set of unmodified drivers run as rump kernels. It took four years, ca. 2007-2011, to figure out how to make things sustainable. During that process, the external dependencies on top of which rump kernels run were discovered to consist of a thread implementation, a memory allocator, and access to whatever I/O backends the drivers need. These requirements were codified into the rump kernel hypercall interface. Unnecessary dependencies on complications, such as interrupts and virtual memory, were explicitly avoided as part of the design process. It is not that supporting virtual memory, for example, was seen to be impossible, but rather that the simplest form meant things would work the best and break the least. This post will not descend into the details or rationales of the internal architecture, so if you are interested in knowing more, have a look at book.rumpkernel.org.
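
To make that dependency surface concrete, here is a minimal sketch of the kind of hypercalls a host platform has to provide. The names and prototypes below are simplified for illustration and are not the verbatim rumpuser interface; book.rumpkernel.org has the authoritative definitions.

    /*
     * Illustrative sketch of the rump kernel hypercall surface, i.e. what a
     * host platform must implement.  Names and prototypes are simplified and
     * NOT the verbatim rumpuser(3) interface; see book.rumpkernel.org.
     */
    #include <stddef.h>

    /* memory allocation: plain allocations, no virtual memory or paging */
    int  hyp_malloc(size_t len, int alignment, void **memp);
    void hyp_free(void *mem, size_t len);

    /* threads and synchronization: cooperative scheduling is sufficient,
     * and no interrupts are required */
    int  hyp_thread_create(void (*fn)(void *), void *arg, const char *name);
    void hyp_thread_exit(void);
    void hyp_mutex_enter(void *mtx);
    void hyp_mutex_exit(void *mtx);

    /* access to whatever I/O backends the drivers need, e.g. host block
     * devices or network interfaces */
    int  hyp_open(const char *name, int mode, int *fdp);
    int  hyp_io(int fd, void *buf, size_t len, long long offset, int op);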

In 2011, with rump kernels mostly figured out, I made the following prediction about them: “[…] can be seen as a gateway from current all-purpose operating systems to more specialized operating systems running on ASICs”. Since this is the Xen blog, we should unconventionally understand ASIC to stand for Application Specific Integrated Cloud.  The only remaining task was to make the prediction come true. In 2012-2013, I did some for-fun-and-hack-value work by making rump kernels run e.g. in a web browser and in the Linux kernel. Those experiments taught me a few more things about fitting rump kernels into other environments and confirmed that rump kernels could really run anywhere as long as one figured out build details and implemented the rump kernel hypercall interface.

Birth of the rump kernel-based unikernel

Now we get to the part where Xen enters the rump kernel story, and one might say it does so in a big way. A number of people suggested running rump kernels on top of Xen over the years. The intent was to build e.g. lightweight routers or firewalls as Xen guests, or anything else where most of the functionality was located in the kernel in traditional operating systems. At that time, there was no concept of offering userspace APIs on top of a rump kernel, just a syscall interface (yes, syscalls are drivers). The Xen hypervisor was a much lower-level entity than anything else rump kernels ran on back then. In summer 2013 I discovered Mini-OS, which provided essentially everything that rump kernels needed, and not too much extra stuff, so Xen support turned out to be more or less trivial. After announcing the result on the Xen lists, a number of people made the observation that a libc bolted on top of the rump kernel stack should just work; after all, rump kernels already offered the set of system calls expected by libc. Indeed, inspired by those remarks and after a few days of adventures with Makefiles and scripts, the ability to run unmodified POSIX-y software on top of the Xen hypervisor via the precursor of the Rumprun unikernel was born. Years of architectural effort on rump kernels had paid rich dividends.
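
To give a feel for what “just a syscall interface” means, the sketch below bootstraps a rump kernel inside an ordinary process and opens a TCP socket served by its unmodified in-kernel network stack. This is a minimal illustration using rump_init() and the rump_sys_*() calls; the exact set of component libraries to link against, and whether you need the RUMP_-prefixed constants on non-NetBSD hosts, depends on your setup.

    /*
     * Minimal sketch: drive a rump kernel's unmodified TCP/IP stack through
     * its syscall interface from a normal userspace program.  Link roughly
     * against -lrumpnet_netinet -lrumpnet_net -lrumpnet -lrump -lrumpuser;
     * the precise library set depends on which drivers you want included.
     */
    #include <sys/types.h>
    #include <sys/socket.h>

    #include <stdio.h>
    #include <stdlib.h>

    #include <rump/rump.h>          /* rump_init() */
    #include <rump/rump_syscalls.h> /* rump_sys_*() */

    int
    main(void)
    {
            int s;

            /* bootstrap the rump kernel inside this process */
            if (rump_init() != 0) {
                    fprintf(stderr, "rump_init failed\n");
                    exit(1);
            }

            /* this socket lives in the rump kernel's TCP/IP stack, not in
             * the host kernel (on non-NetBSD hosts the RUMP_-prefixed
             * constants from <rump/rumpdefs.h> may be needed instead) */
            s = rump_sys_socket(PF_INET, SOCK_STREAM, 0);
            if (s == -1) {
                    fprintf(stderr, "rump_sys_socket failed\n");
                    exit(1);
            }

            rump_sys_close(s);
            return 0;
    }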

So it was possible to run software. However, before you can run software, you have to build it for the target environment — obviously. Back in 2013, a convoluted process was required for building. The program that I used for testing during the libc-bolting development phase was netcat. That decision was mostly driven by the fact that netcat is typically built with cc netcat.c, so it was easy to adapt netcat’s build procedure. Hand-adapting more complex build systems was trickier. That limitation meant that the Rumprun unikernel was accessible only to people who had the know-how to adapt build systems and the time to do so — that set of people can be approximated as the empty set. What we wanted was for people to be able to deploy existing software as unikernels using the existing “make + config + run” skill set.

The first step in the above direction was creating toolchain wrappers for building applications on top of the Rumprun unikernel [Ian Jackson]. The second step was going over a set of pertinent real-world application programs, both to verify that things really work and to create a set of master examples for common cases [Martin Lucina]. The third step was putting the existing examples into a nascent packaging system. The combined result is that anybody with a Xen-capable host is no more than a few documented commands away from deploying e.g. Nginx or PHP as unikernels. We are still in the process of making the details maximally flexible and user-friendly, but the end result already works. One noteworthy thing is that applications for the Rumprun unikernel are always cross-compiled, as sketched below. If you are an application author and wish to see your work run on top of the Rumprun unikernel, make sure your build system supports cross-compilation; software using standard GNU autotools, for example, will just work.
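
As a rough illustration of the intended “make + config + run” experience, the commands below cross-compile an autotools-based program with the toolchain wrappers and bake it into a bootable Xen image. The wrapper prefix, the rumprun-bake configuration name and the output naming shown here are illustrative and may differ between versions; the wiki tutorial has the authoritative invocations.

    # Sketch of the cross-compile-and-bake workflow (names and flags are
    # illustrative; see the rumpkernel.org wiki tutorial for current syntax).
    ./configure --host=x86_64-rumprun-netbsd    # build with the toolchain wrappers
    make
    rumprun-bake xen_pv my-app.bin ./my-app     # bake into a bootable Xen image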

Comparison with other unikernels

The goal of rump kernels was not to build a unikernel. It still is not. The mission of the rump kernel project is to provide reusable kernel-quality components which others may build upon. For example, the MirageOS project has already started work towards using rump kernels in this capacity [Martin Lucina]. We encourage any other project wishing to do the same to communicate with us, especially if changes are needed. Everyone not having to reinvent the wheel is one thing; we are aiming for everyone not having to maintain the wheel.

So if the goal of the rump kernel project was not to build a unikernel, why are we doing one? At some point we simply noticed that we had the right components, and a basic unikernel built on top of rump kernels fell out in a matter of days. That said, and as indicated above, there has been and still is a lot of work to be done to provide the peripheral support infrastructure for unikernels. Since our components come unmodified from NetBSD, one might say that the Rumprun unikernel targets legacy applications. Of course, here “legacy” means “current reality,” even though I do strongly believe that “legacy” will some day actually be legacy. But things change slowly. Again, due to unmodified NetBSD component reuse, we offer a POSIX-y API. Since there is no porting work which could introduce errors into the application runtime, libc or drivers, programs will not just superficially seem to work; they will actually work and be stable. In the programming language department, most languages with a POSIX-based runtime will also simply just work. In the name of the history aspect of this post, the first non-C language to run on top of rump kernels on Xen was LuaJIT [Justin Cormack].

The following figure illustrates the relationships between the concepts further. We have not discussed the anykernel, but for understanding the figure it is enough to know that the anykernel enables the use of unmodified kernel components from an existing OS kernel; it is not possible to construct rump kernels from just any existing OS kernel (details at book.rumpkernel.org). Currently, NetBSD is the only anykernel in existence. The third stack of boxes, on the right, is just one example; the Mirage + rump kernel amalgamation is another example of what could be depicted there.

Conclusions and future work

You can use rump kernels to deploy current-day software as unikernels on Xen. Those unikernels have a tendency to simply work, since we are using unmodified, non-ported drivers from an upstream OS. Experiments with running a reasonable number of varied programs as Rumprun unikernels confirm the previous statement. Once we figure out the final, stable usage of the full build-config-deploy chain, we will write a howto-oriented post here. Future posts will also be linked from the publications and talks page on the rump kernel wiki. Meanwhile, have a look at repo.rumpkernel.org/rumprun and the wiki tutorial section. If you want to help with figuring out e.g. the packaging system or launch tool usage, check the community page on the wiki for information on how to contribute.

There will be a number of talks around the Rumprun unikernel this month. At the Xen 2015 Developer Summit in Seattle, Wei Liu will be talking about Rump Kernel Based Upstream QEMU Stubdom and Martin Lucina will be talking about Deploying Real-World Software Today as Unikernels on Xen with Rumprun. Furthermore, at the co-located CloudOpen, Martin Lucina will be one of the panelists in the Unikernel vs. Container Panel Debate. At the Unikernel User Summit at Texas Linux Fest, Justin Cormack will present Get Started Using Rump Kernels.

Baremetal vs. Xen vs. KVM — Redux

The Xen community was very interested in (and a little worried by!) the recent performance comparison of “Baremetal, Virtual Box, KVM and Xen”, published by Phoronix, so I took it upon myself to find out what was going on.

Upon investigation I found that the 3.0 Linux kernel used in Ubuntu 11.04 was lacking a rather key set of patches in domain 0 which inform the Xen hypervisor about the power management (specifically cpufreq scaling) properties of the processors in the system. Without these patches Xen will not make use of the highest-performing CPU frequencies. These patches are in the process of being upstreamed to Linux but are already readily available and reasonably easy to apply to a 3.0 or later kernel. You can find them at:

I reran the benchmarks presented by Phoronix in the following scenarios:

  • Baremetal Baseline
  • KVM Baseline
  • Xen PVHVM Baseline
  • Xen PVHVM Rebuilt
  • Xen PVHVM CPUFreq
  • Xen PVHVM 3.1+CPUFreq

The “Baseline” results are stock Ubuntu 11.04. “Xen PVHVM Rebuilt” is a straight rebuild of the stock Ubuntu 11.04 kernel (to rule out a simple rebuild impacting the results too much), “Xen PVHVM CPUFreq” is that stock kernel plus the cpufreq patches, and “Xen PVHVM 3.1+CPUFreq” is a mainline 3.1 plus those patches (only really included because that's where those patches were originally developed; comparing 3.0 and 3.1 is a bit apples and oranges). In all cases only the dom0 kernel was modified and the guest was always using the stock 11.04 kernel.

All test cases were run on the same hardware. The baremetal results used 32GB of RAM, a 250GB disk and 16 cores, while in all cases the virtual machines were given 24GB of RAM, 24GB of disk and 16 cores.

The Xen guest was using a “PVHVM” configuration, that is an HVM (fully-virtualised) guest making full use of paravirtualised drivers and PV extensions (PV timers, PV interrupt injection, all of which are enabled by default). The KVM guest was configured to use the virtio drivers for IO as well as any other paravirtualisation which is enabled by default.

Here are the raw results as reported by the Phoronix Test Suite:

The following table compares the baseline KVM figures (nb: the patches are to Xen-specific code and will not impact KVM) to the “Xen PVHVM CPUFreq” case and tells a very different story to the numbers shown by Phoronix.

As you can see, in many cases the results were very close (9 of the 17 cases were within +/- 1% in their respective comparisons to native), and of the remaining 8 cases 4 favoured Xen and 4 favoured KVM. Overall, 7 cases favoured Xen and 8 favoured KVM, with 2 having identical results. This is not surprising, since many of the test cases are heavily CPU-bound and you would therefore naturally expect two virtualisation solutions making full use of hardware virtualisation facilities to be approximately equivalent.

I sent the above results and analysis to Michael Larabel, the author of the Phoronix article, on 17 November but have yet to hear any response. In the meantime he has posted another article containing results of a set of tests clearly chosen to highlight the power management impact of not applying these patches. It’s disappointing that Phoronix chose not to engage with the Xen community before publishing these results, despite being contacted several times by a variety of people. Of course, we are not the only community which has recently been affected by unbalanced reporting (see “About the Kernel 3.0 “Power Regression” Myth”), and one would do well to think carefully about the reliability of performance measurements from folks who do not take minimal steps to understand or explain the results which they are seeing.

The full test results are available upon request. I won’t delve any deeper here since I don’t feel the kind of vacuous “analysis” performed by Phoronix really adds much to the raw data and there really isn’t much else to say about them.

In Summary

  • The results published by Phoronix in “Baremetal, Virtual Box, KVM and Xen” which favour KVM over Xen are caused by missing patches in the Linux kernel.
  • Patches which fix this issue are available. With those patches applied to the dom0 kernel, the performance measured using the Phoronix benchmarks is very similar on KVM and Xen.
  • The behaviour observed by Phoronix mainly applies to hardware which aggressively uses the power management capabilities of processors (i.e. laptops are more affected than servers).

Xen.org Bugzilla Tracking

Henning Sprang, Mark Williamson, and I discussed the issue of people reporting bugs in the Bugzilla system with no guarantee that anyone was watching the system or working on those bugs. Several companies working on the Xen hypervisor are leveraging Bugzilla to track and monitor issues, but there is no existing process to ensure that bugs entered by users or developers are being worked on. We are proposing the following process as a possible solution and will start this process next week:

  • I will monitor the xen-bugs@lists.xensource.com mailing list daily to see what bugs are entered.

  • I will filter out all “corporate” bugs that are being entered by companies working on the hypervisor, and will follow up with the creators of non-“corporate” bugs to ensure that the data entered is complete enough for a developer to understand and reproduce the problem.

  • Mark will review each reported bug to ensure that it is in fact a system issue which needs developer attention.

  • I will send an email once per week to the xen-devel@lists.xensource.com mailing list with all the bugs that are open for developer attention; the complete information on the bugs per week will be in the Wiki at http://wiki.xensource.com/xenwiki/XenBugs

  • Any developer interested in working on a bug from the list can either update Bugzilla themselves or work with me to ensure that the fix is documented and the bug is closed when work is complete.

If you have any additional comments, please feel free to add them to the discussion on the xen-community@lists.xensource.com mailing list, where I have also sent this information, or in the comments below.