Monthly Archives: August 2015

Xen Project 4.6 RC2 Test Day is September 1, 2015

Join 4.6 Release Candidate Testing on September 1, 2015

Although the Xen Project performs automated testing through the project’s Test Lab, we also depend on manual testing of release candidates by our users. Our Test Days help ensure that upcoming releases are ready for production. It is particularly important that our users test the upcoming release in their own environments. In addition, functional testing of features (in particular those which can’t be automated), stress testing, edge-case testing and performance testing are important for a new release.

Xen 4.6 Release Candidate Testing

A few weeks ago, Xen 4.6 went into code freeze, and Xen 4.6 RC2 is now ready for testing. With this in mind, the Test Day for Xen 4.6 RC2 has been set for next Tuesday, September 1, 2015.

Subsequent Test Days are expected to be scheduled roughly every other week until Xen 4.6 is ready for release.

Test Day Information

General Information about Test Days can be found here:

Join us on Tuesday in #xentest on Freenode IRC!
Test a Release Candidate! Help others, get help! And have fun!
If you can’t make Tuesday, remember that Test and Issue Reports are welcome any time.

Event Report: Xen Project Developer Summit 2015

This year’s Xen Project Developer Summit is over! We had two days packed with highly technical sessions that were attended by 120 delegates. Our sessions have – as always – been very interactive, with lots of discussions during and after the talks. Of course we also had plenty of time for in-corridor conversations during breaks, which most of us look forward to every year.



Andrew Cooper from Citrix gives an introduction to Migration v2 in Xen 4.6. Check out the PDF and video.

Session Recordings and Slides

Most of the slides are already available as PDFs on the event website. We will re-post the slides later on our SlideShare channel and on the Xen Project website.

Video recordings of the conference sessions are already posted on our YouTube channel and will also be posted on the Xen Project website. Check out some of my personal highlights:

Security: xSplice – Live Patching the Xen Hypervisor

by Konrad Rzeszutek Wilk, Oracle

Other security, robustness and QoS-related talks that are worth checking out are:

User Stories: Virtualizing the Locomotive: Ready, Set, Go!

by Mark Kraeling, GE Transportation
A great user story showing how Xen and virtualization are used in extreme circumstances.

Other user stories that are worth checking out are:

Hardware Support: ARM Virtualization Extensions

by Marc Zyngier & Thomas Molgaard, ARM Ltd

You may also want to check out the following talks covering new hardware features on Xen:

Xen and OpenStack

by Stefano Stabellini, Citrix

You may also want to check out the following feature update talks:

For more recordings, check out our YouTube channel!

Joint KVM and Xen Hackathon and Social Event

The joint activities between the Xen and KVM communities have also been a great success, bringing developers from both projects closer together. The joint social event in particular stood out: I overheard many constructive conversations among members of both communities. In some cases, members of the two communities were competing with each other at the bowling alley and the pool tables: who said that a little bit of friendly competition can’t be fun (-: We will work with the organisers of KVM Forum so that we can build on this cooperation next year.

Xen Project 4.4.3 Maintenance Release is Available

I am pleased to announce the release of Xen 4.4.3. We recommend that all users of the 4.4 stable series update to this latest maintenance release.

Xen 4.4.3 is available immediately from its git repository:

    xenbits.xenproject.org/gitweb/?p=xen.git;a=shortlog;h=refs/heads/stable-4.4
    (tag RELEASE-4.4.3)

or from the Xen Project download page at www.xenproject.org/downloads/xen-archives/xen-44-series/xen-443.html.
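If you track the stable branch from git, checking out this release might look roughly like the following sketch (the clone URL is an assumption based on the gitweb address above; only the tag name is taken from this announcement):

    # clone URL assumed from the gitweb address; adjust to the mirror you normally use
    git clone git://xenbits.xenproject.org/xen.git
    cd xen
    git checkout RELEASE-4.4.3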

This release contains many bug fixes and improvements. For example:

  • A number of bug fixes that affected libvirt, in particular when used with OpenStack (this release contains all changes that we use in the Xen Project OpenStack CI loop; also see related OpenStack news, Latest News on Xen Support in libvirt & Xen and OpenStack);
  • Stability improvements to CPUPOOL handling, in particular when used with different schedulers;
  • Stability improvements to EFI support on some x86 platforms;
  • Security fixes since the release of Xen 4.4.2.

For a complete list of changes in this release, please check the list of changes on the download page.

Alibaba Joins Xen Project Advisory Board As it Expands Aliyun Cloud Services Business

Today we officially welcome Alibaba as our newest Xen Project Advisory Board member. On the heels of the company announcing a $1 billion investment in its cloud computing unit Aliyun, we’re excited Aliyun is also committing to Xen Project virtualization.

As the cloud computing unit adds new data centers and upgrades its cloud capabilities, Xen will deliver superior IT efficiency, workload balancing, hyper-scalability and tight security for the VMs running on its cloud services.

Aliyun is in good company, joining several other global cloud leaders, including AWS, Rackspace and Verizon, which are already Xen Project members.

Aliyun has been contributing vulnerability fixes to Xen for some time, and we are already benefiting from the queries, issues and patches its engineers regularly submit. It’s evident that Aliyun is extremely vigilant about security, and we believe they have a lot to contribute to Xen on this front.

“Aliyun is looking forward to deeper interaction and collaboration with the Xen Project board and community. We have been working with Linux for a long time, and Xen virtualization is increasingly important to enhancing our cloud and marketplace technology offerings in China and abroad,” says Wensong ZHANG, Chief Technology Officer, Aliyun.

Aliyun’s community involvement is also opening doors for Xen with other companies, partners and contributors in Asia. We recently announced a new partner in China. Hyper offers a new open source project that allows developers to run Docker images with Xen Project virtualization.  And last April Intel hosted our Xen Project Hackathon in Shanghai.

To learn more about live migration at Aliyun, including 20+ enhancements and hardware fixes involving ~70 Shuguang x86 servers, be sure to check out Liu Jinsong’s presentation at the Xen Project Developer Summit, Monday, August 17 at 3:20 p.m. Jinsong, a Xen power management (PM) and RAS maintainer and Aliyun engineer, is presenting “Live Migration at Aliyun – Benefits, Challenges, Developments and Future Works.”

Additional Resources

Will Docker Replace Virtual Machines?

Docker is certainly the most influential open source project of the moment. Why is Docker so successful? Is it going to replace Virtual Machines? Will there be a big switch? If so, when?

Let’s look at the past to understand the present and predict the future. Before virtual machines, system administrators used to provision physical boxes for their users. The process was cumbersome, not completely automated, and it took hours if not days. When something went wrong, they had to run to the server room to replace the physical box.

With the advent of virtual machines, DevOps teams could install any hypervisor on all their boxes and then simply provision new virtual machines on request from their users. Provisioning a VM took minutes instead of hours and could be automated. The underlying hardware made less of a difference and was mostly commoditized. If users needed more resources, they would just create a new VM. If a physical machine broke, the admin just migrated or resumed her VMs on a different host.

Finer-grained deployment models became viable and convenient. Users were no longer forced to run all their applications on the same box just to exploit the underlying hardware to the fullest. One could run a VM with the database, another with the middleware and a third with the webserver, without worrying about hardware utilization. The people buying the hardware and the people architecting the software stack could work independently in the same company, without interference. The new interface between the two teams had become the virtual machine. Solution architects could cheaply deploy each application on a different VM, reducing their maintenance costs significantly. Software engineers loved it. This might have been the biggest innovation introduced by hypervisors.

A few years passed and everybody in the business got accustomed to working with virtual machines. Startups don’t even buy server hardware anymore; they just shop on Amazon AWS. One virtual machine per application is the standard way to deploy software stacks.

Application deployment hadn’t changed much since the ’90s, though. It still involved installing a Linux distro, mostly built for physical hardware, installing the required deb or rpm packages, and finally installing and configuring the application that one actually wanted to run.

In 2013, Docker came out with a simple yet effective tool to create, distribute and deploy applications wrapped in a nice format to run in independent Linux containers. It comes with a registry that is like an app store for these applications, which I’ll call “cloud apps” for clarity. Deploying the Nginx webserver became just one “docker pull nginx” away. This is much quicker and simpler than installing the latest Ubuntu LTS. Docker cloud apps come preconfigured and without any of the unnecessary packages that are unavoidably installed by Linux distros. In fact, the Nginx Docker cloud app is produced and distributed directly by the Nginx community, rather than by Canonical or Red Hat.
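As a concrete illustration, deploying that Nginx cloud app boils down to two commands (the port mapping and detached mode shown here are just one common way to run it):

    # fetch the Nginx cloud app from the Docker registry
    docker pull nginx
    # run it in a container, exposing the webserver on port 80 of the host
    docker run -d -p 80:80 nginx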

Docker’s outstanding innovations are the introduction of a standard format for cloud applications and the registry that goes with it. Instead of virtual machines, Linux containers are used to run cloud apps. Containers had been available for years, but they weren’t particularly popular outside Google and a few other circles. Although they offer very good performance, they have fewer features and weaker isolation compared to virtual machines. As a rising star, Docker suddenly made Linux containers popular, but containers were not the reason behind Docker’s success; they were incidental.

What is the problem with containers? Their live-migration support is still immature, and they cannot run non-native workloads (Windows on Linux or Linux on Windows). Furthermore, the primary challenge with containers is security: the attack surface is far larger compared to virtual machines. In fact, multi-tenant container deployments are strongly discouraged by Docker, CoreOS, and everybody else in the industry. With virtual machines you don’t have to worry about who is going to use them or how they will be used. With containers, on the other hand, only containers that belong to the same user should be run on the same host. Amazon and Google offer container hosting, but they both run each container on top of a separate virtual machine for isolation and security. It may be inefficient, but it is certainly simple and effective.

People are starting to notice this. At the beginning of the year a few high-profile projects launched to bring the benefits of virtual machines to Docker, in particular Clear Linux by Intel and Hyper. Both of them use conventional virtual machines to run Docker cloud applications directly (no Linux containers are involved). We did a few tests with Xen: tuning the hypervisor for this use case allowed us to reach the same startup times offered by Linux containers, while retaining all the other features. A similar effort by Intel for Xen is being presented at the Xen Developer Summit, and Hyper is also presenting their work.
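From the toolstack’s point of view, such a guest is still just an ordinary Xen domain. A minimal sketch of what an xl configuration for a small, fast-booting guest might look like is shown below; the name and kernel image path are hypothetical, and the exact tuning used in our tests is not shown:

    # minimal guest: direct kernel boot, small memory footprint, one vCPU
    name   = "cloudapp-demo"
    kernel = "/var/lib/cloudapps/nginx.bin"   # hypothetical path to the guest image
    memory = 128
    vcpus  = 1
    vif    = [ 'bridge=xenbr0' ]

The guest is then started with “xl create cloudapp-demo.cfg”.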

This new direction has the potential to deliver the best of both worlds to our users: the convenience of Docker with the security of virtual machines. Soon Docker might not be fighting virtual machines at all; Docker could be the one deploying them.

A Chinese translation of the article is available here: http://dockone.io/article/598

On rump kernels and the Rumprun unikernel

The Rumprun unikernel, based on the driver components offered by rump kernels, provides a means to run existing POSIX applications as unikernels on Xen. This post explains how we got here (it matters!), what sort of things can be solved today, and a bit of what is in store for the future. The assumption for this post is that you are already familiar with unikernels and their benefits, or have at least checked out the above unikernel link, so we will skip a basic introduction to unikernels.

Pre-Xen history for rump kernels

The first line of code for rump kernels was written more than 8 years ago in 2007, and the roots of the project can be traced to some years before that. The initial goal was to run unmodified NetBSD kernel drivers as userspace programs for testing and development purposes. Notably, in our terminology we use driver for any software component that acts as a protocol translator, e.g. a TCP/IP driver, file system driver or PCI NIC driver. Namely, the goal was to run the drivers in a minimal harness without the rest of the OS, so that the OS would not get in the way of development. That minimal quality of most of the OS not being present also explains why the container the drivers run in is called a rump kernel. It did not take long to realize the additional potential of isolated, unmodified, production-quality drivers. “Pssst, want a portable, kernel-quality TCP/IP stack?” So, the goal of rump kernels was adjusted to provide portable, componentized drivers. Developing and testing NetBSD drivers as userspace programs was now one side effect enabled by that goal. Already in 2007 the first unikernel-like software stack built on rump kernels was sketched, using file system drivers as an mtools workalike (though truthfully it was not a unikernel, for reasons we can split hairs about). Later, in 2008, a proper implementation of that tool was done under the name fs-utils [Arnaud Ysmal].

The hard problem with running drivers in rump kernels was not figuring out how to make it work once; the hard problem was figuring out how to make it sustainable, so that you could simply pick any vintage of the OS source tree and use the drivers in rump kernels out of the box. It took about two weeks to make the first set of unmodified drivers run as rump kernels. It took four years, ca. 2007-2011, to figure out how to make things sustainable. During the process, the external dependencies on top of which rump kernels run were discovered to consist of a thread implementation, a memory allocator, and access to whatever I/O backends the drivers need. These requirements were codified into the rump kernel hypercall interface. Unnecessary dependencies on complications, such as interrupts and virtual memory, were explicitly avoided as part of the design process. It is not that supporting virtual memory, for example, was seen to be impossible, but rather that the simplest form meant things would work the best and break the least. This post will not descend into the details or rationales of the internal architecture, so if you are interested in knowing more, have a look at book.rumpkernel.org.

In 2011, with rump kernels mostly figured out, I made the following prediction about them: “[…] can be seen as a gateway from current all-purpose operating systems to more specialized operating systems running on ASICs”. Since this is the Xen blog, we should unconventionally understand ASIC to stand for Application Specific Integrated Cloud.  The only remaining task was to make the prediction come true. In 2012-2013, I did some for-fun-and-hack-value work by making rump kernels run e.g. in a web browser and in the Linux kernel. Those experiments taught me a few more things about fitting rump kernels into other environments and confirmed that rump kernels could really run anywhere as long as one figured out build details and implemented the rump kernel hypercall interface.

Birth of the rump kernel-based unikernel

Now we get to the part where Xen enters the rump kernel story, and one might say it does so in a big way. A number of people suggested running rump kernels on top of Xen over the years. The intent was to build e.g. lightweight routers or firewalls as Xen guests, or anything else where most of the functionality was located in the kernel in traditional operating systems. At that time, there was no concept of offering userspace APIs on top of a rump kernel, just a syscall interface (yes, syscalls are drivers). The Xen hypervisor was a much lower-level entity than anything else rump kernels ran on back then. In summer 2013 I discovered Mini-OS, which provided essentially everything that rump kernels needed, and not too much extra stuff, so Xen support turned out to be more or less trivial. After announcing the result on the Xen lists, a number of people made the observation that a libc bolted on top of the rump kernel stack should just work; after all, rump kernels already offered the set of system calls expected by libc. Indeed, inspired by those remarks and after a few days of adventures with Makefiles and scripts, the ability to run unmodified POSIX-y software on top of the Xen hypervisor via the precursor of the Rumprun unikernel was born. Years of architectural effort on rump kernels had paid rich dividends.

So it was possible to run software. However, before you can run software, you have to build it for the target environment — obviously. Back in 2013, a convoluted process was required for building. The program that I used for testing during the libc-bolting development phase was netcat. That decision was mostly driven by the fact that netcat is typically built with cc netcat.c, so it was easy to adapt netcat’s build procedure. Hand-adapting more complex build systems was trickier. That limitation meant that the Rumprun unikernel was accessible only to people who had the know-how to adapt build systems and the time to do so — that set of people can be approximated as the empty set. What we wanted was for people to be able to deploy existing software as unikernels using the existing “make + config + run” skill set.

The first step in the above direction was creating toolchain wrappers for building applications on top of the Rumprun unikernel [Ian Jackson]. The second step was going over a set of pertinent real-world application programs, both to verify that things really work and to create a set of master examples for common cases [Martin Lucina]. The third step was putting the existing examples into a nascent packaging system. The combined result is that anybody with a Xen-capable host is no more than a few documented commands away from deploying e.g. Nginx or PHP as unikernels. We are still in the process of making the details maximally flexible and user-friendly, but the end result works already. One noteworthy thing is that applications for the Rumprun unikernel are always cross-compiled. If you are an application author and wish to see your work run on top of the Rumprun unikernel, make sure your build system supports cross-compilation. For example, but not limited to, standard GNU autotools will just work.
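To make the cross-compilation point concrete, here is a rough sketch of the flow for an autotools-based application, assuming the Rumprun toolchain wrappers are already built and on the PATH. The bake configuration name and launch options are illustrative only; see the wiki tutorial for the exact, current invocations:

    # cross-compile the application with the Rumprun toolchain wrappers
    ./configure --host=x86_64-rumprun-netbsd
    make

    # "bake" the program into a bootable unikernel image and launch it on Xen
    # (configuration name and options are illustrative; consult the wiki tutorial)
    rumprun-bake xen_pv app.bin app
    rumprun xen -i app.bin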

Comparison with other unikernels

The goal of rump kernels was not to build a unikernel. It still is not. The mission of the rump kernel project is to provide reusable, kernel-quality components which others may build upon. For example, the MirageOS project has already started work towards using rump kernels in this capacity [Martin Lucina]. We encourage any other project wishing to do the same to get in touch with us, especially if changes are needed. Everyone not having to reinvent the wheel is one thing; we are aiming for everyone not having to maintain the wheel.

So if the goal of the rump kernel project was not to build a unikernel, why are we doing one? At some point we simply noticed that we had the right components, and a basic unikernel built on top of rump kernels fell out in a matter of days. That said, and as indicated above, there has been and still is a lot of work to be done to provide the peripheral support infrastructure for unikernels. Since our components come unmodified from NetBSD, one might say that the Rumprun unikernel targets legacy applications. Of course, here “legacy” means “current reality,” even though I strongly believe that “legacy” will some day actually be legacy. But things change slowly. Again, due to unmodified NetBSD component reuse, we offer a POSIX-y API. Since there is no porting work which could introduce errors into the application runtime, libc or drivers, programs will not just superficially seem to work; they will actually work and be stable. In the programming language department, most languages with a POSIX-based runtime will also simply just work. In the name of the history aspect of this post, the first non-C language to run on top of rump kernels on Xen was LuaJIT [Justin Cormack].

The following figure illustrates the relationships of the concepts further. We have not discussed the anykernel, but for understanding the figure it is enough to know that the anykernel enables the use of unmodified kernel components from an existing OS kernel; it is not possible to use just any existing OS kernel to construct rump kernels (details at book.rumpkernel.org). Currently, NetBSD is the only anykernel in existence. The third set of boxes on the right is an example, and the Mirage + rump kernel amalgamation is another example of what could be depicted there.

Conclusions and future work

You can use rump kernels to deploy current-day software as unikernels on Xen. Those unikernels have a tendency to simply work, since we are using unmodified, non-ported drivers from an upstream OS. Experiments with running a reasonable number of varied programs as Rumprun unikernels confirm the previous statement. Once we figure out the final, stable usage of the full build-config-deploy chain, we will write a howto-oriented post here. Future posts will also be linked from the publications and talks page on the rump kernel wiki. Meanwhile, have a look at repo.rumpkernel.org/rumprun and the wiki tutorial section. If you want to help with figuring out e.g. the packaging system or launch tool usage, check the community page on the wiki for information on how to contribute.

There will be a number of talks around the Rumprun unikernel this month. At the Xen Project Developer Summit 2015 in Seattle, Wei Liu will be talking about Rump Kernel Based Upstream QEMU Stubdom and Martin Lucina will be talking about Deploying Real-World Software Today as Unikernels on Xen with Rumprun. Furthermore, at the co-located CloudOpen, Martin Lucina will be one of the panelists on the Unikernel vs. Container Panel Debate. At the Unikernel User Summit at Texas Linux Fest, Justin Cormack will present “Get Started Using Rump Kernels.”