
Linux 2.6.37: first upstream Linux kernel to work as Dom0

by Stefano Stabellini

Linux 2.6.37, released just a few days ago, is the first upstream Linux kernel that can boot on Xen as Dom0: Linus pulled my “xen initial domain” patch series on the 28th of October, and on the 5th of January the first Linux kernel with early Dom0 support was released!

Dom0 is the first domain started by the Xen hypervisor at boot, and until now adding domain 0 support to the Linux kernel has required out-of-tree patches (note that NetBSD and Solaris have had Dom0 support for a very long time). This means that every Linux distro supporting Xen as a virtualization platform has had to maintain an additional kernel patch series.

Distro maintainers, worry no more: Dom0 support is upstream! It is now very easy to enable and support Xen in standard distro kernel images, and I hope this will lead to an upsurge in distribution support for Xen. Just enabling CONFIG_XEN in the kernel config of a 2.6.37 Linux kernel allows the very same kernel image to boot on native hardware, on Xen as Dom0, on Xen as a normal PV guest, and on Xen as a PV-on-HVM guest!
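
For the distro maintainers and testers among you, here is a minimal sketch of the relevant .config options for a 2.6.37 build. CONFIG_XEN is the option mentioned above; the companion options are what I would expect a Dom0-capable config to need, but verify the exact names against your own tree:

    # Minimal .config sketch for Xen Dom0 support in 2.6.37.
    # Option names besides CONFIG_XEN should be checked against
    # your own kernel tree before relying on them.
    CONFIG_PARAVIRT=y
    CONFIG_PARAVIRT_GUEST=y
    CONFIG_XEN=y
    CONFIG_XEN_DOM0=y
    CONFIG_XEN_BLKDEV_FRONTEND=y
    CONFIG_XEN_NETDEV_FRONTEND=y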

That said, the kernel backends, in particular netback and blkback, are not yet available in the upstream kernel. Therefore a 2.6.37 vanilla kernel can only be used to start VMs on the very latest xen-unstable. In fact xen-unstable contains additional functionality that allows qemu-xen to offer a userspace fallback for the missing backends. This support will become part of the Xen 4.1 release, which is due in the next couple of months.

In the short term the out-of-tree patch set has been massively reduced. It is expected that the xen.git kernel tree will soon contain the proposed upstreamable versions of the backend drivers. I strongly encourage everyone to pull these and start testing upstream Dom0 support!
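
To boot the resulting kernel as Dom0, you load Xen first and pass the Linux kernel as a multiboot module. As an illustration only (the paths, file names and memory size are placeholders, not tested values), a GRUB legacy entry would look something like this:

    title Xen with 2.6.37 Dom0
        root (hd0,0)
        kernel /boot/xen.gz dom0_mem=1024M
        module /boot/vmlinuz-2.6.37 root=/dev/sda1 ro console=hvc0
        module /boot/initrd-2.6.37.img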

I want to thank Jeremy Fitzhardinge, Konrad Rzeszutek Wilk, Ian Campbell and everyone else who was involved for the major contributions and general help that made this possible.

Xen – KVM – Linux – and the Community

At Xen Summit last week, several community members and I discussed the issues around the recent launch of RHEL without Xen and its implications for Xen and the Xen.org community. I thought that I would share my opinions with a wider audience via this blog and hopefully get feedback from the Xen community on this important topic. So, feel free to comment on this post or send me mail privately if you wish to express your opinion to just me.

Firstly, I would like to offer my congratulations to the KVM community for the successful launch of their solution in Red Hat Enterprise Linux 6, shipping later this year. We in the Xen.org community are very supportive of all open source projects and believe that innovations made in the Linux kernel for virtualization can equally be shared by KVM and Xen developers to further improve open source virtualization hypervisors. I look forward to KVM and Xen working together to ensure interoperability, common formats, and management interfaces that provide customers with maximum flexibility in moving virtual machines between hypervisors as well as simplifying overall virtualization management infrastructure. Xen.org is currently promoting the DMTF management standards for virtualization and cloud computing and welcomes the KVM community to join with us by leveraging our OVF and DMTF SVPC implementations.

Many Linux community members and technology press have been busy the past few weeks writing off Xen as no longer relevant based on the launch of KVM. I have enjoyed reading the many articles written about this and thought I would add some insight to help customers, companies, and journalists better understand the differences between KVM and Xen. KVM is a type-2 hypervisor built into the Linux kernel as a module; it will ship with every Linux distribution going forward, since no extra work is required of the distributions to add it. Having a virtualization platform built into the Linux kernel will be valuable to many customers looking for virtualization within a Linux-based infrastructure; however, these customers lose the flexibility to run a bare-metal hypervisor, to configure the hypervisor independently of the host operating system, and to get machine-level security, since on KVM a guest can bring down the host operating system. Xen, on the other hand, is a type-1 hypervisor built independently of any operating system: it is a completely separate layer between the operating system and the hardware, and it is seen by the community and customers as an infrastructure virtualization platform to build solutions upon. The Xen.org community is not in the business of building a complete solution, but rather a platform for companies and users to leverage for their virtualization and cloud solutions. Indeed, the Xen hypervisor is found in many different solutions today, from standard server virtualization to cloud providers to grid computing platforms to networking devices.

To get a better understanding of how Xen.org operates, you must understand the mission and goals of the Xen.org community:

  • Build the industry standard open source hypervisor
    • Core “engine” in multiple vendors’ products
  • Maintain Xen’s industry leading performance
    • First to exploit new hardware virtualization features
  • Help OS vendors paravirtualize their OSes
  • Maintain Xen’s reputation for stability and quality
  • Support multiple CPU types for large and small systems
  • Foster innovation
  • Drive interoperability

This mission statement has been in place for many years at Xen.org and is an accurate reflection of our community. Our most important mission is to create an industry standard open source hypervisor that serves as a core engine in other vendors’ products. Clearly, Xen.org has succeeded in this mission, as many companies including Amazon, GoGrid, RackSpace, Novell, Oracle, Citrix, Avaya, Fujitsu, VA Linux, and others are leveraging our technology as a core feature in their solutions. It is not the intention of Xen.org to build a competitive packaged solution for the marketplace, but rather to create a best-of-breed open source technology that is available for anyone to leverage. This distinction is critical to understand, as many people are confused as to why Xen.org does not compete or market against other technologies such as VMware, Hyper-V, and KVM. Our goal is to create the best hypervisor possible without any focus on creating a complete packaged solution for customers. We embrace the open model of allowing customers to choose from various solutions to create their optimal solution.

Xen.org also spends a great deal of developer effort on performance testing, as well as ensuring that we leverage efforts from hardware companies such as AMD and Intel to support the latest available hardware technologies. For example, Xen 4.0 supports the latest SR-IOV cards, which are just now being shipped to customers.

The third bullet of the mission statement can now be checked off, as Xen.org has been instrumental in the effort to upstream DomU paravirtualization support into the Linux kernel, so all Linux distributions can now run as paravirtualized guests with no user changes required. Xen.org is also working to upstream the changes for our Dom0 kernel to Linux; this effort is led by Jeremy Fitzhardinge and Konrad Wilk, who recently updated the community on their work at Xen Summit; slides here. As Xen is not written as a Linux module or specifically for Linux-only deployments, it takes additional effort to properly include Xen Dom0 support in the Linux kernel. The community is always open to new contributors to assist Jeremy and Konrad on their development project; anyone interested can contact me for next steps. Finally, it is worth remembering that the Dom0 for Xen can be NetBSD, FreeBSD, Solaris, or another operating system; Xen is not a Linux-only solution. Xen continues to embrace the customer-choice model in Dom0 operating system selection, which is part of our core mission.

The remaining bullets also reflect what you see in Xen.org, as we look to support customer choice in all computing elements and to ensure that Xen.org leads the industry in pushing the envelope on new hypervisor features.

As you can see, Xen.org’s mission is not to create a stand-alone, Linux-only competitive product that is a single packaged offering for end-users. Instead, we focus exclusively on building the best open source hypervisor technology in the marketplace and allow others to leverage our technology in any manner they wish, with a maximum amount of flexibility in processor choice, Dom0 operating system, DomU virtualization, management tools, storage tools, etc. This flexibility, along with the technology’s capability, is a competitive advantage for customers and companies that choose Xen. Going forward, the Xen.org community will continue to focus on these goals as we add our new Xen Cloud Platform project and Xen Client Initiative to the technology deliverables from our open source community.

Simon Crosby on Xen, KVM, Novell, etc

Can a Chameleon Change its Spots?

I had lunch today with veteran virtualization blogger Alessandro Perilli, who was in the Seattle area for the Microsoft MVP Summit. Alessandro has repeatedly been the first to spot key industry trends. He is truly plugged-in, and brings to his analysis a level of technical insight and honesty that I find refreshing, and he doesn’t sensationalize just to get clicks.

We discussed the recent flurry of reporting on the fact that Novell is also developing for KVM, and it was good to see that Alessandro found this as unsurprising as I do. Novell SUSE Linux is, after all, an enterprise Linux distribution, and KVM is just a kernel.org driver that comes with mainline Linux. So it is logical to expect Novell’s customers to be aware of KVM, and to expect Novell to ship and support it like any other mainline feature. Indeed, Novell’s activity on KVM has never been a secret: they announced a preview of KVM support in SLE 11 and have a roadmap for offering full support in due course.

Full post at http://community.citrix.com/pages/viewpage.action?pageId=116034454.

Xen Development Tree (Linux Version) Update

The Xen Community has completed a discussion around the selection of the proper tree for future development activities.

From Keir Fraser on June 4th (full thread)…

With 3.4 out the door it is time to revisit the state of our Linux repositories. Currently we have a number of trees in various states of maintenance:

  • linux-2.6.18-xen.hg: the ‘original’ tree. Still maintained, used and tested but increasingly long in the tooth.
  • ext/linux-2.6.27-xen.hg: a snapshot of openSUSE’s kernel port. This cloned tree is not maintained or tested.
  • XCI/linux-2.6.27.git: a forward port of the Xen patches to 2.6.27. Maintained as part of the XCI project.
  • Jeremy’s pv_ops patches against kernel.org: maintained, (somewhat) tested, but incomplete.

It is probably time to kill the 2.6.18 tree, or at least stop active development within it. It is increasingly a kludged collection of backports of more recent kernel patches, and is also missing a lot of drivers for more modern hardware.

Our proposal is to move XCI’s linux-2.6.27 tree out of the XCI subproject and make it the main user tree. Development and automated testing would occur on that tree and of course on Jeremy’s pv_ops patchset (which we want to completely move onto at some point in the future).

The community has decided to move all development to Jeremy’s pv_ops tree as the new platform. Jeremy’s tree contains the 2.6.29 Linux kernel with the associated Xen patches, and this move will end Xen’s use of 2.6.18 in all future releases. The tree switch is anticipated to be complete by the middle of next week, and I will post the information on this blog.
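
For anyone who wants to follow along, the tree can be fetched with git. The URL and branch below follow the usual kernel.org layout for Jeremy's tree, but treat them as assumptions and check the xen-devel announcement for the authoritative ones:

    # Fetch Jeremy's pv_ops tree (URL and branch are assumptions --
    # confirm them against the xen-devel announcement):
    git clone git://git.kernel.org/pub/scm/linux/kernel/git/jeremy/xen.git linux-pvops
    cd linux-pvops
    git checkout -b pvops origin/xen/master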

As I posted late last week, Xen is still engaged with the kernel.org team to move pv_ops Dom0 support upstream.


Xen.org and Linux Kernel Update

There has been a great deal of developer discussion lately around the proposed patches to extend the Linux kernel’s existing Xen support to include control domain capabilities (loosely known as “the dom0 patches”). These discussions are generating a great deal of interest in Xen and Linux so I thought I would add some perspective for people in the community looking to get a better understanding of the situation.

As a reminder, paravirtualized Xen guest support has been available in Linux kernels since 2.6.24 for 32-bit guests and 2.6.27 for 64-bit guests. This joint effort between the Xen.org and kernel.org communities now enables all Linux distributions to support paravirtualization “out of the box”, with no locally-maintained patches, additional coding, or porting efforts required by customers or Linux distribution vendors.
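
A quick way to confirm that a given distro kernel already carries this guest support is to look at its build configuration. A small sketch (the config file path is the common distro convention, not a guarantee):

    # Check the running kernel's build config for Xen guest support.
    # Most distros install the config at /boot/config-<version>;
    # adjust the path if yours differs.
    grep -E 'CONFIG_XEN|CONFIG_PARAVIRT' /boot/config-$(uname -r)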

The second focus of our Linux integration efforts centers on the control Guest (Domain0) present in every Xen system. As far as the hypervisor is concerned, it is a normal paravirtualized domain which happens to have some additional privileges (such as direct hardware access, the ability to start other domains, and so on).  Its main claim to fame is that the hypervisor starts it automatically at boot time, akin to the “init” process in a Linux system. There’s no fundamental reason why there should be a unique, absolutely privileged domain.  The Xen architecture allows for multiple privileged domains, and splitting the initial domain’s responsibilities into a number of more special-purpose domains can make things more reliable and secure.

Patches exist today that allow users to provide these essential services for a variety of operating systems, including Linux, NetBSD, and Solaris. For Linux users specifically, the patches are currently not available in upstream Linux, so proper Xen support requires effort from each Linux distribution vendor, just as it does for the many other features they ship that do not enjoy upstream Linux support.

The Xen.org community recognizes the additional work required of Linux distribution vendors to enable Xen in their solutions and is working directly with the kernel.org community to include these patches in the Linux kernel itself, thereby removing this effort for Linux distribution vendors. The current discussions on the Xen.org mailing lists center on the best way to achieve this integration, and the feedback from the kernel.org community, including Linus Torvalds, is helping us understand the best method for integration.

The complete technical discussion on the merge is available in xen-users and xen-devel, but I thought I would highlight some comments from Jeremy Fitzhardinge, the lead Xen.org community developer on the merge:

The issue is about Dom0 support, a subset of Xen, which primarily relates to allowing Xen domains to have direct access to hardware. It is technically challenging because it covers quite different sets of functionality in different parts of the kernel – pci, dma, interrupts, etc.

In some cases, the dom0 changes are fairly uncontroversial because they’re just another user of existing interfaces (dma_ops) or slightly controversial because they need tweaks to an existing interface (swiotlb).

However, where the existing kernel code doesn’t have a suitable abstraction layer, or even particularly clean internal interfaces (like the apic code), working out how to make the appropriate Xen changes poses a tricky tradeoff: do I attempt to restructure a large complex subsystem with lots of subtle interactions with the rest of the kernel – not to mention subtle interactions with many types of quirky hardware – just to add my changes?  Or do I make some relatively small, low risk (but low beauty) changes to get the job done?

I went for the latter; the cost-benefit tradeoff just didn’t seem to justify a massive refactor. But others have pretty pointedly had the opposite view, so I’m now investigating what it’s actually going to involve.

Running HXEN with Linux Guests

Continuing my testing of the new HXEN project…

The current version of HXEN is focused on bringing up Windows guests (Vista & Windows 7); however, I wanted to see what happens when I bring up Linux guests. If anyone else is running Linux guests, I would like your feedback on what I am seeing.

Host Machine – Windows XP Tablet, Dual-Processor with VT-d

Linux – Ubuntu 9.04

I am able to enter all the data required for installation; however, the system freezes at the same place after multiple installation attempts: the “installing system” window stops at 15%, with “detecting file systems” shown at the bottom of the window.

Linux – CentOS 4.3

I am able to completely install CentOS 4.3 and launch it to the user login; however, when the system is ready to display my login screen, the window goes black and nothing happens. NOTE – In order to launch the OS after installation, change the .BAT file boot argument from dc to cd.
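
For context, that argument behaves like a QEMU-style boot-order flag, where each letter is a boot device tried in order (c = first hard disk, d = first CD-ROM), so “dc” tries the install CD first while “cd” boots the installed disk first. A hypothetical excerpt of the launch .BAT file (the binary name and the flag spelling are my assumptions, not taken from HXEN):

    rem Hypothetical HXEN launch line; the binary name and flag spelling
    rem are placeholders -- only the boot-order argument is the point.
    rem During installation (CD-ROM first):  hxen.exe -boot dc ...
    rem After installation (hard disk first):
    hxen.exe -boot cd ...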