
Xen Project Contributor Spotlight: Irby Thompson

The Xen Project is made up of a diverse set of member companies and contributors that are committed to the growth and success of the Xen Project Hypervisor. The Xen Project Hypervisor is a staple technology for server and cloud vendors, and is gaining traction in the embedded, security and automotive space. This blog series highlights the companies contributing to the changes and growth being made to the Xen Project and how the Xen Project technology bolsters their business.


Name: Irby Thompson
Title: Founder & CEO
Company: Star Lab Corp.

When did you start contributing to the Xen Project?
The Star Lab team started contributing to the Xen Project in 2015. At that time, our team had completed an extensive trade study of existing open-source and proprietary hypervisors, and determined that the Xen Project codebase and community offered the best security, stability, features, and performance available in the virtualization marketplace.

How does contributing to the Xen Project benefit your company?
Our contributions to the Xen Project help make the ecosystem stronger, while also enabling the entire community to adopt and benefit from our patches. For example, our team upstreamed Kconfig support into Xen in 2016 to make the core hypervisor codebase more modular, and thus more adaptable across a wide range of industries. Likewise, Star Lab directly benefits from the many Xen Project developers who add new features, review source code, perform security and performance testing, and share lessons learned.

How does the Xen Project’s technology help your business?
The Xen Project hypervisor provides a robust foundation upon which industry-specific solutions can be built. Star Lab is primarily in the business of developing and deploying Crucible®, a Xen-based secure embedded virtualization platform for security-critical operational environments, including aerospace & defense, industrial, transportation, and telecommunications. By leveraging Xen as the foundation for Crucible, our team has been able to focus attention on addressing customer-specific needs.

What are some of the major changes you see with virtualization and the transition to cloud native computing?
Virtualization is quickly displacing both hardware (below) and operating systems (above) as the framework upon which modern systems are built. The smart abstractions made possible by virtualization reduce dependencies and make software applications easier to deploy, secure, and maintain. The future will see a merger of traditional virtualization with DevOps-style containerization to get the best qualities of both worlds and enable run-anywhere computing.

What advice would you give someone considering contributing to the Xen Project?
The ecosystem around Xen Project is full of interesting subprojects like MirageOS / unikernels, disaggregation / subdomains, tooling, and Arm support – all places where more development help is needed. Many volunteers make light work – so jump in and get involved!

What excites you most about the future of Xen?
The Xen Project continues to evolve from traditional server virtualization into other markets such as the embedded / IoT space, where the benefits of virtualization are just beginning to be realized. For example, Xen Project has the potential to be viable in safety-critical environments where a type-1 hypervisor can provide strong isolation and independence guarantees. Xen-based virtualization drives innovation in these industries and leads to significant cost savings over legacy architectures. At Star Lab, we are excited to be involved in driving the future of Xen Project!

What’s New in the Xen Project Hypervisor 4.10

I am pleased to announce the release of the Xen Project Hypervisor 4.10. As always, we focused on improving code quality, hardening security, and enabling new features.

The Xen Project Hypervisor 4.10 continues to take a security-first approach, with an improved architecture and more centralized documentation. The release also brings support for the latest Arm hardware and a more intuitive user interface.

We are also pleased to announce that Jürgen Groß will be the release manager for Xen Project Hypervisor 4.11. Jürgen has been an active developer for the past few years, making significant code contributions to advance Xen support in Linux. He is a virtualization kernel developer at SUSE and the maintainer of the Xen subsystem in Linux, as well as of paravirtualization support.

We grouped updates to the Xen Project Hypervisor using the following categories:

  • Hypervisor general
  • Hypervisor Arm
  • Hypervisor x86
  • Toolstack
  • Misc

Hypervisor General

Credit 2 scheduler improvements: Soft-affinity support was added to the Credit 2 scheduler, allowing those using the Xen Project in the cloud and server space to express a preference for running a VM on specific physical CPUs. This enables NUMA-aware scheduling for the Credit 2 scheduler. In addition, we added cap support, allowing users to set the maximum amount of CPU time a VM can consume, even if the host system has idle CPU cycles.
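
As a rough illustration (a sketch only: "mydomain" is a hypothetical guest, and we assume the cap is set via sched-credit2's -c flag; consult the xl documentation for your release), soft affinity goes in the guest configuration and the cap is applied at runtime:

    # In the guest's xl configuration: prefer (but do not require) pCPUs 0-3
    vcpus = 4
    cpus_soft = "0-3"

    # At runtime: cap mydomain at 50% of one physical CPU
    xl sched-credit2 -d mydomain -c 50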

Null scheduler improvements: The recent updates to the “null” scheduler guarantee near-zero scheduling overhead, significantly lower latency, and more predictable performance. Tracing support was added, enabling users to optimise workloads, and soft affinity was introduced. Soft affinity adds a flexible way to express placement preferences of vCPUs on processors, which improves cache and memory performance when configured appropriately.
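
Selecting the null scheduler is a boot-time choice (a sketch; sched= is Xen's command-line option for picking the default scheduler):

    # On the Xen hypervisor command line in your bootloader entry:
    sched=null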

Virtual Machine Introspection improvements: Performance improvements have been made to VMI. A software page-table walker was added to VMI on Arm, which lays the groundwork for altp2m on Arm CPUs. More information on altp2m is available here.

PV Calls Drivers in Linux: In Xen Project 4.9, the Xen Project introduced the PV Calls ABI, which allows forwarding POSIX requests across guests. This enables a new networking model that is a natural fit for cloud-native apps. The PV Calls backend driver was added to Linux 4.14.
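
If you want to experiment with the backend (a sketch; the option name is from the Linux 4.14 Kconfig as we understand it), the driver is enabled through the kernel configuration of the Dom0/backend kernel:

    # Linux 4.14+ kernel configuration: enable the PV Calls backend driver
    CONFIG_XEN_PVCALLS_BACKEND=y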

Better User Experience through the Xen Project User Interface

The Xen Project community also made significant changes to the hypervisor’s user interface. It is now possible to modify certain boot parameters without the need to reboot Xen. Guest types are now selected using the type option in the configuration file, where users can select a PV, PVH or HVM guest. The builder option is being deprecated in favor of the type option, the standalone pvh option has been removed, and a set of PVH-specific options has been added.
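
For instance, a minimal PVH guest configuration now looks roughly like this (illustrative values only; the name and paths are hypothetical, so check the xl.cfg documentation for your release):

    # guest.cfg -- the guest type is chosen with "type" instead of "builder"
    type   = "pvh"                 # or "pv" / "hvm"
    name   = "mydomain"            # hypothetical guest name
    memory = 1024
    vcpus  = 2
    kernel = "/path/to/vmlinuz"    # this PVH guest boots a kernel directly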

These changes allow the Xen Project to retain backward compatibility on new hardware without old PV code, providing the same functionality with a much smaller codebase. Additional user interface improvements are detailed in our blog post.

Hypervisor Arm

Support for Latest System-on-chip (SoC) Technology: The Xen Project now supports SoCs based on the 64-bit Armv8-A architecture, such as the Qualcomm Centriq 2400 and the Cavium ThunderX.

SBSA UART Emulation for Arm® CPUs: SBSA UART emulation support was implemented in the Xen Project Hypervisor and made accessible through the command-line tools. This enables the guest OS to access the console when no PV console driver is present. In addition, SBSA UART emulation is required for compliance with the VM System Specification.
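
A sketch of how this might be used (assuming the vuart guest-configuration option and the matching xl console type introduced for Arm in this release; "mydomain" is a hypothetical guest):

    # In the Arm guest's xl configuration: expose the emulated SBSA UART
    vuart = "sbsa_uart"

    # From Dom0, attach to the emulated UART:
    xl console -t vuart mydomain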

ITS support for Arm CPUs: Xen Project 4.10 adds support for Arm’s Interrupt Translation Service (ITS), which accompanies GICv3 interrupt controllers such as the Arm CoreLink GIC-500. ITS support allows the Xen Project Hypervisor to harness all of the benefits of the GICv3 architecture, improving interrupt efficiency and allowing for greater on-chip virtualization for those using the Xen Project in both the server and embedded spaces. ITS support is essential for virtualizing systems with large numbers of interrupts. In addition, ITS increases isolation of virtual machines by providing interrupt remapping, enabling safe PCI passthrough on Arm.

GRUB2 on the 64-bit Armv8-A architecture: The GRUB community merged support for booting Xen on 64-bit Arm-based platforms. GRUB2 support for Armv8-A improves the user experience when installing Xen via a distribution package on UEFI platforms.

Hypervisor x86

Rearchitecture Creates Smaller Attack Surface and Cleaner Code

Since the introduction of Xen Project Hypervisor 4.8, the project has been overhauling the x86 core of its technology. The intention is to create a cleaner architecture, less code, and a smaller trusted computing base for better security and performance. As part of this re-architecture, Xen Project 4.10 supports PVHv2 DomU guests. PVHv2 guests have a smaller TCB and attack surface compared to PV and HVM guests.

In Xen Project Hypervisor 4.9, the interface between Xen Project software and QEMU was completely reworked and consolidated via DMOP. For the Xen Project Hypervisor 4.10, the Xen Project community built on DMOP and added a Technology Preview of dm_restrict, which constrains what device models, such as QEMU, can do after startup. This feature limits the impact of security vulnerabilities in QEMU: vulnerabilities that could previously be used to escalate privileges to the host can no longer escape the sandbox.
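
Enabling the preview is a one-line change in the guest configuration (a sketch; as a Technology Preview, the exact option name, behaviour and requirements may change between releases):

    # In the guest's xl configuration: run the QEMU device model with
    # restricted privileges
    dm_restrict = 1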

This work significantly reduces potential security vulnerabilities in the Xen Project software stack.

L2 CAT for Intel CPUs: In Xen 4.10 we added support for Intel’s L2 Cache Allocation Technology (CAT), available on certain models of (micro) server platforms. Xen L2 CAT support provides Xen users a mechanism to partition or share the L2 cache among virtual machines, if the technology is present on the hardware Xen runs on. This allows users to make better use of the shared L2 cache depending on VM characteristics (e.g., priority).
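
For example (a sketch, assuming xl's psr-cat-cbm-set/psr-cat-show commands and that -l selects the cache level; "mydomain" and the bitmask are hypothetical):

    # Assign a capacity bitmask for the L2 cache to mydomain
    xl psr-cat-cbm-set -l 2 mydomain 0xf

    # Show the current L2 cache allocations
    xl psr-cat-show -l 2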

Local Machine-Check Exception (LMCE) for Intel CPUs: Xen 4.10 provides LMCE support for HVM guests. If the affected vCPU is known, the LMCE is injected only into that vCPU; otherwise, the MCE is broadcast to all vCPUs running on the host. This allows for more efficient passing of MCEs from the hypervisor to virtual machines for further handling.
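
Guests opt in through their configuration (a sketch, assuming the mca_caps guest-configuration option added for HVM guests in this release):

    # In the HVM guest's xl configuration: let the guest handle local MCEs
    mca_caps = [ "lmce" ]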

User-Mode Instruction Prevention (UMIP) for Intel CPUs: UMIP is a security feature present in newer Intel processors. If enabled, it prevents the execution of certain instructions when the Current Privilege Level (CPL) is greater than 0. Xen 4.10 exposes UMIP to virtual machines so that guests can take advantage of this feature.

Misc.

Improved Support Documentation

In Xen Project 4.10, a machine-readable file (support.md) was added to describe support-related information in a single document. It defines each feature’s support status, including whether and to what degree it is security supported. For example, a feature may be security supported on x86, but not on Arm.
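
An entry might look roughly like the following (an illustrative mock-up, not a verbatim excerpt from support.md; the feature name and statuses are invented):

    ### Example Feature
        Status, x86: Supported
        Status, ARM: Tech Preview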

This file will be back-ported to older Xen releases, used to generate support information for Xen Project releases, and published on xenbits.xen.org/docs/. This effort will allow users to better understand how they are impacted by security issues; in addition, centralizing security-support-related information is a precondition for becoming a CVE Numbering Authority.

Summary

Despite the shorter release cycle, the community developed several major features, and found and fixed many more bugs. It is also rather impressive to see multiple vendors collaborate on the Xen Project Hypervisor to drive multiple projects forward. Contributions for this release of the Xen Project came from Amazon Web Services, AMD, Aporeto, Arm, BAE Systems, BitDefender, Cavium, Citrix, EPAM, GlobalLogic, Greenhost, Huawei Technologies, Intel, Invisible Things Lab, Linaro, Nokia, Oracle, Red Hat, SUSE, US National Security Agency, and a number of universities and individuals.

As in Xen 4.9, we took a security-first approach for Xen 4.10 and spent a lot of energy improving code quality and hardening security. This inevitably slowed the acceptance of new features somewhat and also delayed the release. However, we believe that we reached a meaningful balance between mature security practices and innovation.

On behalf of the Xen Project Hypervisor team, I would like to thank everyone for their contributions (either in the form of patches, code reviews, bug reports or packaging efforts) to the Xen Project. Please check our acknowledgement page, which recognises all those who helped make this release happen.

The source can be found in the https://xenbits.xenproject.org/gitweb/?p=xen.git;a=shortlog;h=refs/tags/RELEASE-4.10.0 tree (tag RELEASE-4.10.0) or downloaded as a tarball from our website. For detailed download and build instructions, check out the guide on building Xen 4.10.

More information can be found at

Xen Project Contributor Spotlight: Mike Latimer

The Xen Project is made up of a diverse set of member companies and contributors that are committed to the growth and success of the Xen Project Hypervisor. The Xen Project Hypervisor is a staple technology for server and cloud vendors, and is gaining traction in the embedded, security and automotive space. This blog series highlights the companies contributing to the changes and growth being made to the Xen Project and how the Xen Project technology bolsters their business.


Name: Mike Latimer
Title: Senior Engineering Manager, Virtualization Team
Company: SUSE

When did you start contributing to the Xen Project?
I first started working with the Xen Project in 2006 as a backline support engineer for SUSE. That role required working closely with SUSE’s virtualization development team to identify, debug and resolve Xen related issues our customers encountered. At that time, I was a silent contributor to the project as I leveraged the various Xen Project community mailing lists to increase my understanding of the project and contributed back through my engagements with our internal Xen developers. Some years later, I moved to engineering and worked directly with the Xen Project and related tooling. I now manage SUSE’s Virtualization Team and contribute through my own coding and QA related efforts, and also by ensuring our engineers have the resources they need to be active in the Xen Project.

How does contributing to the Xen Project benefit your company?
The Xen Project is an example of a very complex project which is successful due to a thriving and diverse community. Our membership in this community provides engineers an incredible opportunity to increase their own skills through peer review of their code, and directly observing how other engineers approach and resolve problems. This interaction between highly skilled engineers results in better engineers and better engineered products. In other words, it’s a win all around. SUSE benefits both by having a quality product we can offer to our customers and by the continual improvement our engineers experience.

How does the Xen Project’s technology help your business?
Internally, SUSE (and our parent company Micro Focus) relies on all forms of virtualization to provide many critical infrastructure components. Key services such as DNS/DHCP servers, web servers, and various application servers are commonly run in Xen VMs. Additionally, Xen is an important part of the tooling used to build our distributions. For example, the well-known Open Build Service infrastructure (which performs automated building of RPMs) uses Xen VMs for a portion of the builds.

SUSE prides itself on providing quality products that our customers need to resolve real-world challenges. Xen was doing this when we first included it in SUSE Linux Enterprise 10 (in 2006), and continues to do this today as Xen will be included in SUSE Linux Enterprise 15 (to be released in 2018). Xen has been an important differentiating factor with our distribution, and customer feedback has verified the value that they see in this offering.

What are some of the major changes you see with virtualization and the transition to cloud native computing?
In my opinion, the death of the hypervisor has been greatly exaggerated. While it is true that cloud computing has taken users one step away from the hypervisor, the role of the hypervisor has never been more important. As more and more applications move to cloud-based services, the underlying hypervisor will be expected to “just work” with everything required by those applications. This means that advanced functionality like device-passthrough, NUMA optimizations, and support for the latest CPU instructions will be expected to be available in cloud environments.

Of course, security is of paramount importance, and performance can’t be sacrificed either. Meeting these expectations, while continuing to provide core functionality (such as live migration, save/restore, snapshots, etc.) will be challenging, but the architecture of the Xen Project provides the stable foundation for today’s requirements, and the flexibility to adapt to new requirements as the cloud world continues to evolve.

What advice would you give someone considering contributing to the Xen Project?
I would encourage anyone working with the Xen Project to become an _active_ member of the community. Start by following the mailing lists and joining in the conversation. It may seem intimidating to begin working with such a technically complex project, but the community is accepting and interested in what anyone has to say. Even if your contributions are simply ACKing patches or providing test reports, all input is appreciated.

If you are considering submitting code to the project, my advice is to submit early and submit often! Engage with the community early in the development process to allow time for the community to feel joint ownership for the success of your code. Don’t be afraid of criticism, and don’t be afraid of standing up for your point of view. The Xen Project thrives with these discussions, and the outcome should never be viewed as a win/lose proposition. Everyone benefits when the most correct solution wins.

What excites you most about the future of Xen?
I’m most interested in seeing Xen continue to differentiate itself from other hypervisor offerings. The Xen architecture is ideal for environments which require high security and performance, so I’m particularly interested in advances in this area. The convergence of PV and HVM guest models (into PVH and PVHVM) has been an exciting recent change, and there should be further advances which ensure both guest models are as performant as possible. I’m also looking forward to increases in fault tolerance through such things as a restartable dom0, and better support for driver stub domains. By continuing to improve in these areas, Xen will remain a strong choice in the ever changing field of virtualization.


Unikraft: Unleashing the Power of Unikernels

This blog post was written by Dr. Felipe Huici, Chief Researcher, Systems and Machine Learning Group, at NEC Laboratories Europe

The team at NEC Laboratories Europe spent quite a bit of time over the last few years developing unikernels – specialized virtual machine images targeting specific applications. This technology is fascinating to us because of its fantastic performance benefits: tiny memory footprints (hundreds of KBs or a few MBs), boot times comparable to those of processes, and throughput in the range of 10-40 Gb/s, among many other attributes. Specific metrics can be found in these articles: “My VM is Lighter (and Safer) than your Container,” “Unikernels Everywhere: The Case for Elastic CDNs,” and “ClickOS and the Art of Network Function Virtualization.”

The potential of unikernels is great (as you can see from the work above), but there hasn’t been a massive adoption of unikernels. Why? Development time.  For example, developing Minipython, a MicroPython unikernel, took the better part of three months to put together and test. ClickOS, a unikernel for NFV, was the result of a couple of years of work.

What’s particularly bad about this development model, besides the considerable time spent, is that each unikernel is basically a “throwaway.” Every time we want to create a new unikernel targeting a different application, developers have to start from scratch. Essentially, there is a lack of shared research and development when it comes to building unikernels.

We (at NEC) wanted to change this, so we started to re-use the work and created a separate repo consisting of a “toolstack” that would contain functionality useful to multiple unikernels — mostly platform-independent versions of newlib and lwip (a C library and network stack intended for embedded systems).

This got us thinking that we should take our work to a much bigger level. We asked the question: Wouldn’t it be great to be able to very quickly choose, perhaps from a menu, the bits of functionality that we want for a unikernel, and to have a system automatically build all of these pieces together into a working image? It would also be great if we could choose multiple platforms (e.g., Xen, KVM, bare metal) without having to do additional work for each of them.

The result of that thought process is Unikraft. Unikraft decomposes operating systems into elementary pieces called libraries (e.g., schedulers, memory allocators, drivers, filesystems, network stacks, etc.) that users can then pick and choose from, using a menu to quickly build images tailored to the needs of specific applications. In greater detail, Unikraft consists of two basic components (see Figure 1):

  • Library pools contain libraries that the user of Unikraft can select from to create the unikernel. From the bottom up, library pools are organized into (1) the architecture library pool, containing libraries specific to a computer architecture (e.g., x86_64, ARM32 or MIPS); (2) the platform library pool, where target platforms can be Xen, KVM, bare metal (i.e. no virtualization), user-space Linux and potentially even containers; and (3) the main library pool, containing a rich set of functionality to build the unikernel. This last pool includes drivers (both virtual, such as netback/netfront, and physical, such as ixgbe), filesystems, memory allocators, schedulers, network stacks, standard libs (e.g., libc, openssl), and runtimes (e.g., a Python interpreter and debugging and profiling tools). These pools of libraries constitute a codebase for creating unikernels. As shown, a library can be relatively large (e.g., libc) or quite small (a scheduler), which allows for fine-grained customization of the unikernel.
  • The Unikraft build tool is in charge of compiling the application and the selected libraries together to create a binary for a specific platform and architecture (e.g., Xen on x86_64). The tool is inspired by Linux’s Kconfig system and consists of a set of Makefiles. It allows users to select libraries, to configure them, and warns them when library dependencies are not met. In addition, the tool can simultaneously generate binaries for multiple platforms (see the build sketch after this list).
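
To make the build tool concrete, a typical session under this model would look something like the following (a sketch; the exact targets are assumptions based on the Kconfig-style tooling described above):

    # Pick libraries, platform(s) and architecture from the menu, then build
    make menuconfig
    make                # produces one image per selected platform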


Figure 1. Unikraft architecture.

Getting Involved
We are very excited about the recent open source release of Unikraft as a Xen Project incubator project. The Xen Project is part of the Linux Foundation umbrella. We welcome developers willing to help improve Unikraft, whether you’re interested in particular applications, programming languages, platforms, architectures or OS primitives. We are more than happy to build on and receive contributions from the community. To get you started, here are a number of available resources:

Please don’t be shy about getting in touch with us; we would be more than happy to answer any questions you may have. You can reach the core Unikraft development team at sysml@listserv.neclab.eu.