
Future of Xen Project: Video Spotlight Interview with Citrix’s George Dunlap

In this video, George Dunlap, Senior Engineer at Citrix, explains how and why Citrix works with the Xen Project, why companies use the Xen Project Hypervisor, and new opportunities for the future of this technology.

Citrix Systems designs, develops and markets technology solutions that enable information technology (IT) services. Citrix has always been committed to the community and consistent in its principles of transparency and neutrality, helping the Xen Project maintain its position as one of the leading open source hypervisors.

One of the major benefits of being a part of the Xen Project is the multiplier benefit that comes with contributing code to open source communities. For example, if Citrix contributes 25% of the code, the equivalent of hiring 25 engineers, it receives 100 engineers’ worth of development as part of the Xen Project. This helps Citrix build the most efficient enterprise products possible and also allows the company to take an active role in leading the Xen Project Hypervisor into the future.

How has this collaboration been put into action? Most recently, the Xen Project announced its 4.6 release, which is being built into Citrix’s XenServer Dundee (which recently entered its first beta).*

As virtualization moves beyond servers and expands into networks, mobile, automotive, and other embedded systems, features like strong isolation for security, a lightweight footprint for mobile, and high performance will continue to help the Xen Project Hypervisor grow and support the next stage in cloud computing and virtualization. And Citrix will be there to help further this growth.

*XenServer Dundee has not been released and its feature set has not been finalized.

 

On rump kernels and the Rumprun unikernel

The Rumprun unikernel, based on the driver components offered by rump kernels, provides a means to run existing POSIX applications as unikernels on Xen. This post explains how we got here (it matters!), what sort of things can be solved today, and also a bit of what is in store for the future. The assumption for this post is that you are already familiar with unikernels and their benefits, or at least checked out the above unikernel link, so we will skip a basic introduction to unikernels.

Pre-Xen history for rump kernels

The first line of code for rump kernels was written more than 8 years ago, in 2007, and the roots of the project can be traced to some years before that. The initial goal was to run unmodified NetBSD kernel drivers as userspace programs for testing and development purposes. Notably, in our terminology we use “driver” for any software component that acts as a protocol translator, e.g. a TCP/IP driver, file system driver or PCI NIC driver. Namely, the goal was to run the drivers in a minimal harness without the rest of the OS, so that the OS would not get in the way of development. That quality of most of the OS not being present also explains why the container the drivers run in is called a rump kernel. It did not take long to realize the additional potential of isolated, unmodified, production-quality drivers. “Pssst, want a portable, kernel-quality TCP/IP stack?” So, the goal of rump kernels was adjusted to provide portable, componentized drivers. Developing and testing NetBSD drivers as userspace programs became a side effect enabled by that goal. Already in 2007, the first unikernel-like software stack built on rump kernels was sketched by using file system drivers as an mtools workalike (though truthfully it was not a unikernel, for reasons we can split hairs about). Later, in 2008, a proper implementation of that tool was done under the name fs-utils [Arnaud Ysmal].

The hard problem with running drivers in rump kernels was not figuring out how to make it work once, the hard problem was figuring out how to make it sustainable so that you could simply pick any vintage of the OS source tree and use the drivers in rump kernels out-of-the-box. It took about two weeks to make the first set of unmodified drivers run as rump kernels. It took four years, ca. 2007-2011, to figure out how to make things sustainable. During the process, the external dependencies on top of which rump kernels run were discovered to consist of a thread implementation, a memory allocator, and access to whatever I/O backends the drivers need to access. These requirements were codified into the rump kernel hypercall interface. Unnecessary dependencies on complications, such as interrupts and virtual memory, were explicitly avoided as part of the design process. It is not that supporting virtual memory, for example, was seen to be impossible, but rather that the simplest form meant things would work the best and break the least. This post will not descend into the details or rationales of the internal architecture, so if you are interested in knowing more, have a look at book.rumpkernel.org.

In 2011, with rump kernels mostly figured out, I made the following prediction about them: “[…] can be seen as a gateway from current all-purpose operating systems to more specialized operating systems running on ASICs”. Since this is the Xen blog, we should unconventionally understand ASIC to stand for Application Specific Integrated Cloud.  The only remaining task was to make the prediction come true. In 2012-2013, I did some for-fun-and-hack-value work by making rump kernels run e.g. in a web browser and in the Linux kernel. Those experiments taught me a few more things about fitting rump kernels into other environments and confirmed that rump kernels could really run anywhere as long as one figured out build details and implemented the rump kernel hypercall interface.

Birth of the rump kernel-based unikernel

Now we get to the part where Xen enters the rump kernel story, and one might say it does so in a big way. A number of people suggested running rump kernels on top of Xen over the years. The intent was to build e.g. lightweight routers or firewalls as Xen guests, or anything else where most of the functionality was located in the kernel in traditional operating systems. At that time, there was no concept of offering userspace APIs on top of a rump kernel, just a syscall interface (yes, syscalls are drivers). The Xen hypervisor was a much lower-level entity than anything else rump kernels ran on back then. In summer 2013 I discovered Mini-OS, which provided essentially everything that rump kernels needed, and not too much extra stuff, so Xen support turned out to be more or less trivial. After announcing the result on the Xen lists, a number of people made the observation that a libc bolted on top of the rump kernel stack should just work; after all, rump kernels already offered the set of system calls expected by libc. Indeed, inspired by those remarks and after a few days of adventures with Makefiles and scripts, the ability to run unmodified POSIX-y software on top of the Xen hypervisor via the precursor of the Rumprun unikernel was born. Years of architectural effort on rump kernels had paid rich dividends.

So it was possible to run software. However, before you can run software, you have to build it for the target environment — obviously. Back in 2013, a convoluted process was required for building. The program that I used for testing during the libc-bolting development phase was netcat. That decision was mostly driven by the fact that netcat is typically built with cc netcat.c, so it was easy to adapt netcat’s build procedure. Hand-adapting more complex build systems was trickier. That limitation meant that the Rumprun unikernel was accessible only to people who had the know-how to adapt build systems and the time to do so — that set of people can be approximated as the empty set. What we wanted was for people to be able to deploy existing software as unikernels using the existing “make + config + run” skill set.

The first step in the above direction was creating toolchain wrappers for building applications on top of the Rumprun unikernel [Ian Jackson]. The second step was going over a set of pertinent real-world application programs, both to verify that things really work and to create a set of master examples for common cases [Martin Lucina]. The third step was putting the existing examples into a nascent packaging system. The combined result is that anybody with a Xen-capable host is no more than a few documented commands away from deploying e.g. Nginx or PHP as unikernels. We are still in the process of making the details maximally flexible and user-friendly, but the end result already works. One noteworthy thing is that applications for the Rumprun unikernel are always cross-compiled. If you are an application author and wish to see your work run on top of the Rumprun unikernel, make sure your build system supports cross-compilation. Projects using standard GNU autotools, for example, will just work.
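As a sketch of that workflow, the commands below show the general shape of building and deploying an autotools-based application; the exact command, cross-compiler tuple and configuration names are taken from the rumprun repository at the time of writing and may change, so treat this as illustrative and consult the wiki tutorial for current usage.

```shell
# Cross-compile the application with the rumprun toolchain wrappers.
./configure --host=x86_64-rumprun-netbsd
make

# "Bake" the resulting binary together with the rump kernel
# components into a bootable unikernel image for Xen.
rumprun-bake xen_pv app.bin app

# Launch the image as a Xen guest with an interactive console.
rumprun xen -i app.bin
```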

Comparison with other unikernels

The goal of rump kernels was not to build a unikernel. It still is not. The mission of the rump kernel project is to provide reusable kernel-quality components which others may build upon. For example, the MirageOS project has already started work towards using rump kernels in this capacity [Martin Lucina]. We encourage any other project wishing to do the same to communicate with us especially if changes are needed. Everyone not having to reinvent the wheel is one thing; we are aiming for everyone not having to maintain the wheel.

So if the goal of the rump kernel project was not to build a unikernel, why are we doing one? At some point we simply noticed that we have the right components and a basic unikernel built on top of rump kernels fell out in a matter of days. That said, and as indicated above, there has been and still is a lot of work to be done to provide the peripheral support infrastructure for unikernels. Since our components come unmodified from NetBSD, one might say that the Rumprun unikernel targets legacy applications. Of course, here “legacy” means “current reality,” even when I do strongly believe that “legacy” will some day actually be legacy. But things change slowly. Again, due to unmodified NetBSD component reuse, we offer a POSIX-y API. Since there is no porting work which could introduce errors into the application runtime, libc or drivers, programs will not just superficially seem to work, they will actually work and be stable. In the programming language department, most languages with a POSIX-based runtime will also simply just work. In the name of the history aspect of this post, the first non-C language to run on top of rump kernels on Xen was LuaJIT [Justin Cormack].

The following figure illustrates the relationships of the concepts further. We have not discussed the anykernel, but for understanding the figure it is enough to know that the anykernel enables the use of unmodified kernel components from an existing OS kernel; it is not possible to use just any existing OS kernel to construct rump kernels (details at book.rumpkernel.org). Currently, NetBSD is the only anykernel in existence. The third set of boxes on the right is an example, and the Mirage + rump kernel amalgamation is another example of what could be depicted there.

Conclusions and future work

You can use rump kernels to deploy current-day software as unikernels on Xen. Those unikernels have a tendency to simply work, since we are using unmodified, non-ported drivers from an upstream OS. Experiments with running a reasonable number of varied programs as Rumprun unikernels confirm the previous statement. Once we figure out the final, stable usage of the full build-config-deploy chain, we will write a howto-oriented post here. Future posts will also be linked from the publications and talks page on the rump kernel wiki. Meanwhile, have a look at repo.rumpkernel.org/rumprun and the wiki tutorial section. If you want to help with figuring out e.g. the packaging system or launch tool usage, check the community page on the wiki for information on how to contribute.

There will be a number of talks around the Rumprun unikernel this month. At the Xen 2015 Developer Summit in Seattle, Wei Liu will be talking about Rump Kernel Based Upstream QEMU Stubdom and Martin Lucina will be talking about Deploying Real-World Software Today as Unikernels on Xen with Rumprun. Furthermore, at the co-located CloudOpen, Martin Lucina will be one of the panelists on the Unikernel vs. Container Panel Debate. At the Unikernel User Summit at Texas Linux Fest, Justin Cormack will present Get Started Using Rump Kernels.

Future of Xen Project – Video Spotlight with ARM’s Thomas Molgaard

ARM joined Xen Project two years ago as part of its drive into servers, networking and the emerging “Internet of Things” markets. In our latest “Future of Xen” video, Thomas Molgaard, Manager of Software Marketing – Systems & Software at ARM, talks about changes unfolding in enterprise and cloud computing that are creating new opportunities for his company and virtualization.

ARM designs the technology that is at the heart of advanced digital products, from wireless, networking and consumer entertainment solutions to imaging, automotive, security and storage devices. It offers electronics companies a comprehensive semiconductor IP portfolio, enhanced by the company’s broad partner community that increasingly embraces open source.

The company believes open source and its collaborative development model are keeping pace with transitions in the industry, helping to give companies more deployment options when it comes to cloud hosting, caching, scale-out storage, and NoSQL and Hadoop analytics. ARM is hoping to offer even more variety to these application users. Early on, the semiconductor design company recognized that Linux and Xen would play an important role in opening data centers up to enterprise-class 64-bit ARMv8 servers. This recent eWeek article showcases a proof point from Cavium, one of the earliest vendors to launch ARM-based chips for servers, on display last week at OpenStack Summit in Vancouver.

Built from the ground up with open source best practices, Xen virtualization is increasingly deployed in applications targeted by ARM customers, including servers, networking infrastructure and embedded systems. The Xen Project was first to market with ARM support, which originally focused on newer CPUs designed for servers. Taking direction from the community, ARM and Xen have since expanded their scope to mobile, tablet, automotive, Internet of Things, middlebox processing and other embedded applications.

In the video, Molgaard describes how Xen’s lean architecture is perfectly suited to ARM architecture-based solutions. Collaboration with open source partners like the Xen Project, Linaro and The Linux Foundation is extremely valuable as ARM makes further inroads into the data center and cloud infrastructure, he says.

 

Xen Project Hackathon 15 Event Report

After spending almost a week in Shanghai for the Xen Project Hackathon it is time to write up some notes.

More than 48 delegates from Alibaba, Citrix, Desay SV Automotive, GlobalLogic, Fujitsu, Huawei, Intel, Oracle, Suse and Visteon Electronics attended the event, which covered a wide range of topics.

I want to thank Susie Li, Hongbo Wang and Mei Yu from Intel for funding and organizing the event.


Format

Xen Project Hackathons started originally as pure hackathons, but have over time evolved to follow the Open Space Unconference format, which we tested in 2012 and fully embraced in 2013. It appears to be one of the best formats to foster discussion and problem solving for groups of up to 50 people.

Besides providing an opportunity to meet face-to-face and build bridges, our hackathons have been very successful in tackling difficult issues, which require plenty of interaction. These issues range from modifying our development process and solving architecture problems to conducting difficult design discussions, coordinating inter-dependencies and sharing experiences. Of course we also write code and sometimes conduct live code reviews in smaller groups alongside the discussion sessions.


Discussed Topics

At the event, we covered topics such as:

  • Cadence of maintenance releases
  • Numbering of Xen Project Releases
  • Xen 4.6 Release Planning
  • Testing and Testing Frameworks
  • Hot-patching in the Xen Project Hypervisor
  • Changes to the COLO architecture and interdependencies with Migration v2
  • Possible Future Improvements to Live Migration
  • Upstreaming of Intel GVT-g
  • Automotive, including lessons learned on implementing graphics virtualization using OpenGL 2.0 and a walk through of a mediated graphics virtualization solution for the Imagination PowerVR SGX544 GPU on Xen and ARM
  • Xen and OpenStack
  • Evolution of Virtual Machine Introspection (including HW assistance) in the Xen Hypervisor
  • Vendor Strategies For Upgrading Xen in their products (e.g. from Xen 4.1.5 to 4.5)
  • Effectiveness of New Xen Project Security Policy

As usual, we will post summaries (or patches/RFCs) from these discussions on xen-devel@ – I will also post links to follow-up discussions on our wiki.

Future Xen Project Developer Events in Asia

We’ve learned that the term hackathon is misleading for this event and confuses some of our attendees. Our hackathons are really more of an Architecture Workshop and Design Summit. For this reason, we will probably rename the Hackathon: for a current proposal on the new name check out this and this e-mail thread.

As the event was very successful and we have a growing, active developer community in China, we are considering holding another similar event in 2017 or a Xen Project Developer Summit at LinuxCon Japan in 2017. Stay tuned for more details.


 


Why Unikernels Can Improve Internet Security

This is a reprint of a 3-part unikernel series published on Linux.com. In this post, Xen Project Advisory Board Chairman Lars Kurth explains how unikernels address security and allow for the careful management of particularly critical portions of an organization’s data and processing needs. (See part one, 7 Unikernel Projects to Take On Docker in 2015.)

Many industries are rapidly moving toward networked, scale-out designs with new and varying workloads and data types. Yet, pick any industry — retail, banking, health care, social networking or entertainment — and you’ll find security risks and vulnerabilities are highly problematic, costly and dangerous.

Adam Wick, creator of The Haskell Lightweight Virtual Machine (HaLVM) and a research lead at Galois Inc., which counts the U.S. Department of Defense and DARPA as clients, says 2015 is already turning out to be a break-out year for security.

“Cloud computing has been a hot topic for several years now, and we’ve seen a wealth of projects and technologies that take advantage of the flexibility the cloud offers,” said Wick. “At the same time though, we’ve seen record-breaking security breach after record-breaking security breach.”

The names are more evocative and well-known thanks to online news and social media, but low-level bugs have always plagued network services, Wick said. So, why is security more important today than ever before?

Improving Security

The creator of MirageOS, Anil Madhavapeddy, says it’s “simply irresponsible to continue to knowingly provision code that is potentially unsafe, and especially so as we head into a year full of promise about smart cities and ubiquitous Internet of Things. We wouldn’t build a bridge on top of quicksand, and should treat our online infrastructure with the same level of respect and attention as we give our physical structures.”

In the hopes of improving security, performance and scalability, there’s a flurry of interesting work taking place around blocking out functionality into containers and lighter-weight unikernel alternatives. Galois, which specializes in R&D for new technologies, says enterprises are increasingly interested in the ability to cleanly separate functionality to limit the effect of a breach to just the component affected, rather than infecting the whole system.

For next-generation clouds and in-house clouds, unikernels make it possible to run thousands of small VMs per host. Galois, for example, uses this capability in their CyberChaff project, which uses minimal VMs to improve intrusion detection on sensitive networks, while others have used similar mechanisms to save considerable cost in hardware, electricity, and cooling; all while reducing the attack surface exposed to malicious hackers. These are welcome developments for anyone concerned with system and network security and help to explain why traditional hypervisors will remain relevant for a wide range of customers well into the future.

Madhavapeddy goes so far as to say that certain unikernel architectures would have directly tackled last year’s Heartbleed and Shellshock bugs.

“For example, end-to-end memory safety prevents Heartbleed-style attacks in MirageOS and the HaLVM. And an emphasis on compile-time specialization eliminates complex runtime code such as Unix shells from the images that are deployed onto the cloud,” he said.

The MirageOS team has also put their stack to the test by releasing a “Bitcoin Piñata,” a unikernel that guards a collection of Bitcoins. The Bitcoins can only be claimed by breaking through the unikernel’s security (for example, by compromising the SSL/TLS stack) and then moving the coins. If the Bitcoins are indeed transferred away, the public transaction record will reflect that there is a security hole to be fixed. The contest has been running since February 2015 and the Bitcoins have not yet been taken.


Linux container vs. unikernel security

Linux, as well as Linux containers and Docker images, rely on a fairly heavyweight core OS to provide critical services. Because of this, a vulnerability in the Linux kernel affects every Linux container, Wick said. Instead, using an approach similar to a la carte menus, unikernels only include the minimal functionality and systems needed to run an application or service, all of which makes writing an exploit to attack them much more difficult.

Cloudius Systems, which is running a private beta of OSv, billed as the operating system for the cloud, recognizes that progress is being made on this front.

“Rocket is indeed an improvement over Docker, but containers aren’t a multi-tenant solution by design,” said CEO Dor Laor. “No matter how many SELinux policies you throw on containers, the attack surface will still span all aspects of the kernel.”

Martin Lucina, who is working on the Rump Kernel software stack, which enables running existing unmodified POSIX software without an operating system on various platforms (including bare-metal embedded systems and unikernels on Xen), explains that unikernels running on the Xen Project hypervisor benefit from the strong isolation guarantees of hardware virtualization and a trusted computing base that is orders of magnitude smaller than that of container technologies.

“There is no shell, you cannot exec() a new process, and in some cases you don’t even need to include a full TCP stack. So there is very little exploit code can do to gain a permanent foothold in the system,” Lucina said.

The key takeaway for organizations worried about security is that they should treat their infrastructure in a less monolithic way. Unikernels allow for the careful management of particularly critical portions of an organization’s data and processing needs. While it does take some extra work, it’s getting easier every day as more developers work on solving challenges with orchestration, logging and monitoring. This means unikernels are coming of age just as many developers are getting serious about security as they begin to build scale-out, distributed systems.

For those interested in learning more about unikernels, the entire series is available as a white paper titled “The Next Generation Cloud: The Rise of the Unikernel.”

Read part 1: 7 Unikernel Projects to Take On Docker in 2015

Future of Xen Project: Video Spotlight Interview with Cavium’s Larry Wikelius

With several companies introducing ARM servers recently, cloud providers and enterprise datacenters are excited to see new alternatives for reducing costs and power use come to market. Cavium, a semiconductor leader with a long heritage in security and wireless/networking, entered the race with the introduction of ThunderX™, the industry’s first 48-core and 96-core family of ARMv8 workload-optimized processors. To get to this point, numerous companies, developers and organizations, including Cavium, put great effort into the development of server software, standards and products to make ARM-based SoCs a viable option in these environments. For Cavium, joining the Xen Project was a critical part of its work to advance the evolving ARM ecosystem. According to Larry Wikelius, Xen Project Advisory Board member and Cavium’s Director of Ecosystems and Partner Enablement, it has also been crucial to competing in this evolving market.

In our latest “Future of Xen” video, Larry says working with the Xen Project hypervisor is an important requirement for certain customers. With many Cavium customers and partners already using the open source hypervisor, the company needs to not only support Xen, but also commit to optimizing the hypervisor for private and public clouds as well as corporate datacenters. Cavium joined the Xen Project community last year and is pleased to see the Project dedicate significant resources and development cycles to ensuring full support, peak performance and efficiency for ARM-based servers and SoCs. As a board member, Cavium is also able to shape the Project’s roadmap, ensuring it protects Xen deployments and supports a scale-out strategy for cloud, telecommunications, Internet of Things devices, big data analytics and more. While the Project’s early commitment to ARM support is important, what’s equally important is the hypervisor’s small footprint and the growing number of silicon vendors, software companies and end users investing in the Project.

So beyond scale-out data center and cloud deployments, what else is ahead for ARM-based servers and SoCs? Larry already sees the networking and carrier space mobilizing behind network function virtualization (NFV). Versions of its ThunderX chip aimed at NFV workloads as well as telecommunication, media, and gaming systems offer additional I/O and security accelerators. Larry spoke about this topic at The Linux Foundation’s Collaboration Summit 2015 last month. Be sure to watch his video and check out slides from his talk to learn more.