Monthly Archives: April 2015

Introducing the Xen Project – OpenStack CI Loop

We recently introduced the new Xen Project Test Lab, a key piece of infrastructure to improve the quality of our code and code coverage. As stated earlier, “we are serious and proactive when it comes to minimising and, whenever possible, eliminating any adverse effects from defects, security vulnerabilities or performance problems”. This also applies to Xen Project integration with OpenStack, which is why the Xen Project Advisory Board made available funds to create the Xen Project – OpenStack CI Loop in January 2015. We started work on setting up our OpenStack CI Loop immediately after the funds were made available, fixed a number of issues in the Xen Project Hypervisor, Libvirt and OpenStack Nova and are excited to announce that our Xen Project – OpenStack CI Loop is now live and in production.

I want to thank the members of our Test Infrastructure Working Group, comprised of employees from AMD, Amazon Web Services, Citrix, Intel and Oracle, as well as community members from Rackspace and SUSE, who have been supporting the creation of the Xen Project – OpenStack CI Loop.

What does an OpenStack CI Loop Do?

An OpenStack external testing platform (or CI Loop) enables a third party to run tests against an OpenStack environment configured with that third party’s drivers or hardware, and to report the results of those tests on the code review of a proposed OpenStack patch. The benefit of this real-time feedback is easy to see by looking at a code review to which such platforms have reported their results.

In this screenshot, you can see a number of Verified +1 labels and one Verified -1 label added by CI loops to an OpenStack Nova code review.
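To make the mechanics concrete, the sketch below shows the basic shape of such an external testing platform: watch Gerrit’s event stream for new Nova patch sets, test each one, and report the result back on the review. This is an illustrative sketch only; our production loop is built on Jenkins, and the account name and test script here are placeholders.

    import json
    import subprocess

    # Gerrit exposes an SSH interface on port 29418; the account name
    # below is a placeholder for a registered third-party CI account.
    GERRIT = ["ssh", "-p", "29418", "xen-ci@review.openstack.org", "gerrit"]

    def test_patch(ref):
        # Placeholder script: fetch the patch set, deploy an OpenStack
        # environment on Xen, run the tests; True means everything passed.
        return subprocess.call(["./run-xen-tests.sh", ref]) == 0

    events = subprocess.Popen(GERRIT + ["stream-events"],
                              stdout=subprocess.PIPE)
    for line in events.stdout:
        event = json.loads(line.decode())
        if (event.get("type") == "patchset-created"
                and event["change"]["project"] == "openstack/nova"):
            verdict = "+1" if test_patch(event["patchSet"]["ref"]) else "-1"
            # A loop that is not yet trusted posts a comment without a
            # vote; a voting loop sets the Verified label shown above.
            # The inner quotes keep the message intact on the remote
            # side, since ssh re-joins arguments with spaces.
            subprocess.call(GERRIT + ["review",
                                      "--verified", verdict,
                                      "--message", "'Xen Project CI'",
                                      event["patchSet"]["revision"]])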

The figure below shows the OpenStack Nova drivers for hypervisors, which allow you to choose which hypervisor(s) to use for your Nova deployment. Note that the drivers are classified into Groups A, B and C.

This diagram shows the different Nova compute drivers and their quality status.

In a nutshell, the groups have the following meaning:

  • Group C: These drivers have minimal testing.
  • Group B: These drivers have unit tests that gate commits, and functional testing provided by an external system that does not gate commits but advises patch authors and reviewers of results in the OpenStack code review system.
  • Group A: These drivers have unit tests that gate commits and functional testing that also gates commits.

With the introduction of the Xen Project – OpenStack CI Loop, we are close to achieving our first goal of moving the Xen Project Hypervisor from Group C to Group B. This is a goal we first stated publicly at FOSDEM’15, where some of you may have had the opportunity to hear Stefano Stabellini talk about using the Xen Project Hypervisor in OpenStack.

What Do We Test Against?

Currently, our CI Loop tests all OpenStack Nova commits against Ubuntu Xen 4.4.1 packages with a number of patches applied, together with Libvirt 1.2.14, also with a number of patches applied (for more details, see this specification). All the patches are already integrated upstream. When off-the-shelf packages containing these changes are available in Linux distros, we will start testing against them.
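For context, a DevStack-based test environment selects this hypervisor and driver combination with a single knob, which ends up in Nova’s configuration. A minimal sketch (the exact packages and patches our loop uses are described in the specification linked above):

    # DevStack local.conf fragment (sketch): configure Nova's libvirt
    # driver for Xen instead of the default KVM/QEMU.
    [[local|localrc]]
    LIBVIRT_TYPE=xen

    # The corresponding setting in the generated nova.conf:
    # [libvirt]
    # virt_type = xen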

What is next?

Monitor the CI loop test results against the official OpenStack CI: we will run the CI loop for a few weeks and monitor whether there are any discrepancies between the results of the official Nova CI loop and ours. If there are, we will investigate where the issues lie and fix them. This is important, as we had a number of intermittent issues in the Xen Libvirt driver and we want to be sure we have fixed them all.

Fix two known issues, and enable two disabled tests: We have two test cases related to iSCSI support (test_volume_boot_pattern) which are currently disabled. In one case, we believe that the Tempest integration suite makes some KVM-specific assumptions: Xen Project community members are working with the OpenStack community to fix the issue. The second issue is related, but requires further investigation.
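For illustration, disabling a test while a fix is pending typically looks like the sketch below, using Python’s standard skip decorator. This is not the exact change our loop carries (the real test lives in Tempest’s scenario suite, and exclusions can also be applied at run time via test-filter regexes):

    import unittest

    class TestVolumeBootPattern(unittest.TestCase):

        @unittest.skip("Assumes KVM-specific iSCSI behaviour; "
                       "fix in progress with the OpenStack community")
        def test_volume_boot_pattern(self):
            # The real scenario boots an instance from a volume over
            # iSCSI; the test is re-enabled once the fix lands.
            pass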

Make the Xen Project – OpenStack CI Loop voting: The next logical step is to work with the OpenStack community to make our CI loop voting. This is the first step in moving from Group C to Group B.

Beyond That: Build trust and become an active member of the OpenStack community. Our ultimate goal is to ensure that the Xen Project Hypervisor moves into Group A, alongside KVM. We are also investigating whether it makes sense to run Xen Project + Libvirt against OpenStack Neutron, and we are looking at integrating OpenStack Tempest into our new Xen Project Test Lab, so that we can ensure that upstream Xen and Libvirt work with OpenStack Nova.

Can I Help?

If you use Xen Project with OpenStack and Libvirt, feel free to get in touch. Some members of the Xen Project community will be at the Vancouver OpenStack Summit, and we are keen to learn what issues you have and whether we can help fix them. As community manager, I will help arrange meetings and build bridges: feel free to contact me at community dot manager at xenproject dot org. You can also ask questions on xen-devel@ and xen-users@ and connect with Xen Project developers and users.


Why Unikernels Can Improve Internet Security

This is a reprint of a 3-part unikernel series published on Linux.com. In this post, Xen Project Advisory Board Chairman Lars Kurth explains how unikernels address security and allow for the careful management of particularly critical portions of an organization’s data and processing needs. (See part one, 7 Unikernel Projects to Take On Docker in 2015.)

Many industries are rapidly moving toward networked, scale-out designs with new and varying workloads and data types. Yet, pick any industry — retail, banking, health care, social networking or entertainment — and you’ll find security risks and vulnerabilities are highly problematic, costly and dangerous.

Adam Wick, creator of the Haskell Lightweight Virtual Machine (HaLVM) and a research lead at Galois Inc., which counts the U.S. Department of Defense and DARPA as clients, says 2015 is already turning out to be a break-out year for security.

“Cloud computing has been a hot topic for several years now, and we’ve seen a wealth of projects and technologies that take advantage of the flexibility the cloud offers,” said Wick. “At the same time though, we’ve seen record-breaking security breach after record-breaking security breach.”

The names are more evocative and well-known thanks to online news and social media, but low-level bugs have always plagued network services, Wick said. So, why is security more important today than ever before?

Improving Security

The creator of MirageOS, Anil Madhavapeddy, says it’s “simply irresponsible to continue to knowingly provision code that is potentially unsafe, and especially so as we head into a year full of promise about smart cities and ubiquitous Internet of Things. We wouldn’t build a bridge on top of quicksand, and should treat our online infrastructure with the same level of respect and attention as we give our physical structures.”

In the hopes of improving security, performance and scalability, there’s a flurry of interesting work taking place around breaking functionality out into containers and lighter-weight unikernel alternatives. Galois, which specializes in R&D for new technologies, says enterprises are increasingly interested in the ability to cleanly separate functionality to limit the effect of a breach to just the component affected, rather than letting it infect the whole system.

For next-generation clouds and in-house clouds, unikernels make it possible to run thousands of small VMs per host. Galois, for example, uses this capability in its CyberChaff project, which uses minimal VMs to improve intrusion detection on sensitive networks, while others have used similar mechanisms to save considerable cost in hardware, electricity and cooling, all while reducing the attack surface exposed to malicious hackers. These are welcome developments for anyone concerned with system and network security, and they help to explain why traditional hypervisors will remain relevant for a wide range of customers well into the future.

Madhavapeddy goes as far as to say that certain unikernel architectures would have directly tackled last year’s Heartbleed and Shellshock bugs.

“For example, end-to-end memory safety prevents Heartbleed-style attacks in MirageOS and the HaLVM. And an emphasis on compile-time specialization eliminates complex runtime code such as Unix shells from the images that are deployed onto the cloud,” he said.

The MirageOS team has also put their stack to the test by releasing a “Bitcoin Piñata,” a unikernel that guards a collection of Bitcoins. The Bitcoins can only be claimed by breaking through the unikernel’s security (for example, by compromising the SSL/TLS stack) and then moving the coins. If the Bitcoins are indeed transferred away, the public transaction record will reflect that there is a security hole to be fixed. The contest has been running since February 2015, and the Bitcoins have not yet been taken.


Linux container vs. unikernel security

Linux, as well as Linux containers and Docker images, relies on a fairly heavyweight core OS to provide critical services. Because of this, a vulnerability in the Linux kernel affects every Linux container, Wick said. Unikernels instead take an à la carte approach, including only the minimal functionality and systems needed to run an application or service, all of which makes writing an exploit to attack them much more difficult.

Cloudius Systems, which is running a private beta of OSv, its “operating system for the cloud,” recognizes that progress is being made on this front.

“Rocket is indeed an improvement over Docker, but containers aren’t a multi-tenant solution by design,” said CEO Dor Laor. “No matter how many SELinux policies you throw on containers, the attack surface will still span all aspects of the kernel.”

Martin Lucina, who is working on the Rump Kernel software stack, which enables running existing unmodified POSIX software without an operating system on various platforms (including bare-metal embedded systems and unikernels on Xen), explains that unikernels running on the Xen Project Hypervisor benefit from the strong isolation guarantees of hardware virtualization and from a trusted computing base that is orders of magnitude smaller than that of container technologies.

“There is no shell, you cannot exec() a new process, and in some cases you don’t even need to include a full TCP stack. So there is very little exploit code can do to gain a permanent foothold in the system,” Lucina said.

The key takeaway for organizations worried about security is that they should treat their infrastructure in a less monolithic way. Unikernels allow for the careful management of particularly critical portions of an organization’s data and processing needs. While it does take some extra work, it’s getting easier every day as more developers work on solving challenges with orchestration, logging and monitoring. This means unikernels are coming of age just as many developers are getting serious about security as they begin to build scale-out, distributed systems.

For those interested in learning more about unikernels, the entire series is available as a white paper titled “The Next Generation Cloud: The Rise of the Unikernel.”

Read part 1: 7 Unikernel Projects to Take On Docker in 2015

7 Unikernel Projects to Take On Docker in 2015

This is a reprint of a 3-part unikernel series published on Linux.com. In part one, Xen Project Advisory Board Chairman Lars Kurth takes a closer look at the rise of unikernels and several up-and-coming projects to keep close tabs on in the coming months.

Docker and Linux container technologies dominate headlines today as a powerful, easy way to package applications, especially as cloud computing becomes more mainstream. While still a work-in-progress, they offer a simple, clean and lean way to distribute application workloads.

With enthusiasm continuing to grow for container innovations, a related technology called unikernels is also beginning to attract attention. Known also for their ability to cleanly separate functionality at the component level, unikernels are developing a variety of new approaches to deploy cloud services.

Traditional operating systems run multiple applications on a single machine, managing resources and isolating applications from one another.  A unikernel runs a single application on a single virtual machine, relying instead on the hypervisor to isolate those virtual machines. Unikernels are constructed by using “library operating systems,” from which the developer selects only the minimal set of services required for an application to run. These sealed, fixed-purpose images run directly on a hypervisor without an intervening guest OS such as Linux.

Unikernel illustration. Image credit: Xen Project.

As well as improving upon container technologies, unikernels are also able to deliver impressive flexibility, speed and versatility for cross-platform environments, big data analytics and scale-out cloud computing. Like container-based solutions, this technology fulfills the promise of easy deployment, but unikernels also offer an extremely tiny, specialized runtime footprint that is much less vulnerable to attack.

There are several up-and-coming open source projects to watch this year, including ClickOS, Clive, HaLVM, LING, MirageOS, Rump Kernels and OSv, among others, with each of them placing emphasis on a different aspect of the unikernel approach. For example, MirageOS and HaLVM take a clean-slate approach and focus on safety and security, ClickOS emphasizes speed, while OSv and Rump Kernels aim for compatibility with legacy software. Such flexible approaches are not possible with existing monolithic operating systems, which have decades of assumptions and trade-offs baked into them.

How are unikernels able to deliver better security? How do the various unikernel implementations differ in their approach? Who is using the technology today? What are the key benefits to cloud and data center operators? Will unikernels on hypervisors replace containers, or will enterprises use a mix of all three? If so, how and why?  Answers to these questions and insights from the key developers behind these exciting new projects will be covered in parts two and three of this series.

For those interested in learning more about unikernels, the entire series is available as a white paper titled “The Next Generation Cloud: The Rise of the Unikernel.”

Introducing Xen Project’s New Test Lab

One of Xen Project’s highest priorities is to continually work to improve the quality of our code and code coverage. We’re serious and proactive when it comes to minimising and, whenever possible, eliminating any adverse effects from defects, security vulnerabilities or performance problems. Our users run some of the largest cloud and datacenter operations in the world, so we know that reboots and service interruptions are more than mere glitches in operations.

That’s why the Xen Project Advisory Board has made a significant investment to set up a Xen Project-owned Test Lab based on OSSTEST. We’re excited to announce that the lab is now live and in production.

Our new test lab allows us to improve the upstream quality of the Xen Project Hypervisor, while developers from different organizations now have access to a system that can be used to investigate test failures, schedule test jobs, and so on. The test lab is also being used for regression testing of the Xen Project Hypervisor against different upstream and downstream environments. The Xen Project also tests against upstream and downstream open source projects such as the Linux kernel and OpenStack to ensure that the Xen Project Hypervisor works seamlessly with key technologies.

To help kick-start the lab, we created the Test Infrastructure Working Group, which helped source a colocation facility (COLO), spec out the hardware requirements for the test system and procure the machines needed to run OSSTEST. A special thank you goes to Ian Jackson, who has been instrumental in getting this effort off the ground over the past six months. We also appreciate contributions from our committee, comprised of employees from AMD, Amazon Web Services, Citrix, Intel and Oracle, who have worked closely with our maintainers, project leads and wider community to create the new testing facility. As a result, we’ve decommissioned the old system, which Citrix ran as a service to the Xen Project community.

In this blog post, we’ll describe our platform in more detail, explain what we’re doing today with OSSTEST, and outline new goals from our Test Infrastructure Working Group.

Xen Project’s New Test Lab

This picture shows a section of our test lab, which is co-located at the Earthlink Boston Data Centre.

Currently the Xen Project owns a rack with a mixture of 24 x86 Intel and AMD hosts and 8 ARM hosts (a custom-made set-up that includes 4 Cubietruck and 4 Arndale boards). Although we are only at 50 percent of planned capacity, the new test system has a much more diverse set of test machines and already offers more capacity than the old one did. It is also significantly more reliable than the old system.

Setting up a test lab covering several different architectures and suppliers, with a globally dispersed community, is a challenge in itself. For example, we discovered a number of bugs in the Linux kernel, FreeBSD and the Hypervisor, which are currently being addressed. We also tripped over a number of unexpected problems, such as hardware faults and BIOS bugs in some of the machines in the COLO.

How Do We Use OSSTEST Today?

As mentioned earlier, the test lab is being used for regression testing. We’re testing different versions of the Xen Project Hypervisor against different versions of the Linux kernel, Linux distros, FreeBSD, Windows, QEMU, Rump Kernels and other open source projects we interface with. We also test specific features such as Xen Security Modules, the Credit 2 scheduler and other Xen Project features; one example of how such a feature is selected for a test run is sketched below. More recently, we added functionality for performance testing of the Xen Project Hypervisor.
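To give a flavour of how a feature run is selected: the Credit 2 scheduler, for instance, is chosen with a hypervisor boot parameter. A minimal GRUB fragment as a sketch (sched=credit2 is the upstream Xen boot option; the paths and memory size here are illustrative, and OSSTEST assembles its own boot configuration):

    menuentry 'Xen (Credit 2 test run)' {
        # Boot the hypervisor with the Credit 2 scheduler selected
        multiboot /boot/xen.gz sched=credit2 dom0_mem=2048M
        # Dom0 kernel and initrd follow as multiboot modules
        module /boot/vmlinuz root=/dev/sda1 console=hvc0
        module /boot/initrd.img
    }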

The set-up essentially follows a continuous integration paradigm: changes to Xen Project trees are pushed onto a staging branch. When tests succeed, the changes are automatically pushed to the master branch. If there are failures, a bisection algorithm is run which, in many cases, will identify the changeset causing the issue; developers are then notified that a specific changeset has created a problem.

This figure shows the contribution workflow and how the test lab, the staging branch and the master branch interact.
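The bisection step works on the same principle as git bisect: given a known-good and a known-bad revision, testing midpoints narrows the range logarithmically. A simplified sketch (OSSTEST’s actual bisector is considerably more elaborate, for example re-testing to guard against intermittent failures):

    # Given commits ordered oldest to newest, where commits[0] is known
    # to pass and commits[-1] is known to fail, return the first failing
    # changeset. test_passes(commit) builds and tests one revision.
    def bisect(commits, test_passes):
        good, bad = 0, len(commits) - 1
        while bad - good > 1:
            mid = (good + bad) // 2
            if test_passes(commits[mid]):
                good = mid
            else:
                bad = mid
        return commits[bad]

    # Usage sketch: culprit = bisect(revs, lambda r: run_test_job(r))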

What’s Next for Xen Project’s Test Lab

Bring the test lab to 100% capacity: New machines will be taken into production as the remaining issues are resolved. The increased capacity will enable us to add an on-demand testing capability, which will let community members test on hardware that they may not have themselves.

Broaden Access: The new system is in a community-operated COLO, rather than one hosted inside the Citrix private network. This, together with the increased capacity, means that we can grant access to the facility to Xen Project community members regardless of their affiliation, and provide an on-demand testing capability. To do this, we will need to agree on processes for community members to access the test lab, similar to the way we provide access to Coverity Scan data.

More Test Coverage: This increased capacity will allow us to further increase the breadth and depth of our testing. We have a number of enhancements in the pipeline, including for example testing of FreeBSD hosts, nested virtualisation, and compatibility testing with a variety of OS distributions.

Expand the infrastructure and add another rack: The Xen Project Advisory Board has agreed to fund another rack and to procure further test machines in 2015. We expect some 64-bit ARM machines to be part of the next tranche of machines that the Xen Project will buy.

Xen Project OpenStack CI loop: In addition to the Xen Project Test Lab, the Advisory Board is also funding a third-party continuous integration loop to improve Xen Project virtualization and OpenStack integration. A prototype of the CI loop is already running at jenkins.openstack.xenproject.org. Xen Project community members are currently fixing a number of issues in OpenStack Nova, Libvirt and Xen Project so that the CI loop can become voting on OpenStack Nova commits.

We also continue to run the Coverity Scan static analysis service, which helps us quickly find, review and resolve Xen Project Hypervisor code defects and vulnerabilities. We’ve done this since 2013 and continue to regularly review and fix reported bugs.

If you’re interested in testing Xen, we encourage you to join the xen-devel mailing list and ask how you can help. We’re always looking for help extending the set of test cases and tuning performance for as wide a set of configurations and features as possible.

A Tale of Two Amazing Open Source Hypervisors

Born in the logic of ones and zeroes and forged in the heat of battle, two hypervisors, sworn foes in the realm of virtualization, are about to unite in a way many never thought possible. Over beer and code.

Join the teams behind Xen Project Developer Summit and KVM Forum in Seattle as they co-host a social event that will rock the virtualization world. On August 18, 2015, at the close of the Xen Project Developer Summit and on the eve of KVM Forum, attendees of both events can come together and collaborate in the best way possible: with crudités and hors d’oeuvres (and beer).

Virtualization is one of the most important technologies in IT today, so it makes perfect sense for the two best hypervisor projects to collaborate and socialize at an event that celebrates their similarities and bridges the gap between all things KVM and Xen.


The party will get started Tuesday, August 18, at a time and location to be announced shortly! Attendees of both conferences are welcome to come and join the fun and be reminded of what open source is all about.

And before raising a pint to toast to friends both old and new, there’ll be an opportunity for some serious coding. So, if you’re a KVM contributor, a Xen zealot, or a power user of XenServer or oVirt, the joint KVM Forum and Xen Project Developer Summit Hackathon is the place to be during daylight hours.

The hackathon will be held on Tuesday, August 18, 2015, in the Virginia Room, 4th Floor, Union St. Tower of the Sheraton Seattle from 1:00pm to 5:00pm. Aiming to foster technical collaboration between the two best hypervisors in IT today, the event will enable participants to learn more about what makes each project work, as well as to delve into work on libvirt code that could bridge the gaps between Xen and KVM. Bring your laptops, your ideas, and your code and help improve open source virtualization for the good of both projects. Collaboration is what makes open source truly great, so come be a part of greatness.

Finally, we all know greatness is nothing to be shy about, so we encourage Xen ecosystem developers, contributors and users to submit a speaking proposal for the Xen Project Developer Summit. The CFP is open through May 1. The topics of discussion are nearly endless — from scaling and optimizations, nested virtualization, performance enhancements, and hardening and security to high availability, continuous backup, desktop virtualization, new devices, boards and architectures, and more. Presenting at #xendevsummit is an excellent way to share your knowledge of all things Xen and to help define and plan for the future of Xen. If you’re still looking for inspiration, check out last year’s slides and topics. Register soon to benefit from early bird pricing. See you in Seattle!