Tag Archives: MirageOS

The Bare-Metal Hypervisor as a Platform for Innovation

In this industry, everyone seems to talk about innovation, but very few platforms exist that actually foster it. More often than not, “innovation” is simply a buzzword used by some marketing campaign to hawk something about as novel as twenty-year-old accounting software.

Innovation does occur, of course. But real innovation often leverages what already exists to create something that doesn’t yet exist. It may borrow from the known, but it produces something previously unknown. For example, the industry has been going wild over cloud computing in the past few years, yet many of the core cloud computing concepts are actually old mainframe concepts reimagined in the world of commodity servers.

Making a Place for Innovation to Thrive

A bare-metal hypervisor — like the one produced by the Xen Project — can be an excellent platform for innovation. We think of hypervisors as old technology, plumbing for newer technologies like cloud computing — and, indeed, they are. But the very nature of the bare-metal hypervisor makes it an excellent place for innovation to happen.



Why Unikernels Can Improve Internet Security

This is a reprint of a 3-part unikernel series published on Linux.com. In this post, Xen Project Advisory Board Chairman Lars Kurth explains how unikernels address security and allow for the careful management of particularly critical portions of an organization’s data and processing needs. (See part one, 7 Unikernel Projects to Take On Docker in 2015.)

Many industries are rapidly moving toward networked, scale-out designs with new and varying workloads and data types. Yet, pick any industry —  retail, banking, health care, social networking or entertainment —  and you’ll find security risks and vulnerabilities are highly problematic, costly and dangerous.

Adam Wick, creator of the Haskell Lightweight Virtual Machine (HaLVM) and a research lead at Galois Inc., which counts the U.S. Department of Defense and DARPA as clients, says 2015 is already turning out to be a break-out year for security.

“Cloud computing has been a hot topic for several years now, and we’ve seen a wealth of projects and technologies that take advantage of the flexibility the cloud offers,” said Wick. “At the same time though, we’ve seen record-breaking security breach after record-breaking security breach.”

The names are more evocative and well-known thanks to online news and social media, but low-level bugs have always plagued network services, Wick said. So, why is security more important today than ever before?

Improving Security

The creator of MirageOS, Anil Madhavapeddy, says it’s “simply irresponsible to continue to knowingly provision code that is potentially unsafe, and especially so as we head into a year full of promise about smart cities and ubiquitous Internet of Things. We wouldn’t build a bridge on top of quicksand, and should treat our online infrastructure with the same level of respect and attention as we give our physical structures.”

In the hopes of improving security, performance and scalability, there’s a flurry of interesting work taking place around breaking functionality out into containers and lighter-weight unikernel alternatives. Galois, which specializes in R&D for new technologies, says enterprises are increasingly interested in the ability to cleanly separate functionality to limit the effect of a breach to just the component affected, rather than letting it spread to the whole system.

For next-generation clouds and in-house clouds, unikernels make it possible to run thousands of small VMs per host. Galois, for example, uses this capability in its CyberChaff project, which uses minimal VMs to improve intrusion detection on sensitive networks, while others have used similar mechanisms to save considerable cost in hardware, electricity and cooling, all while reducing the attack surface exposed to malicious hackers. These are welcome developments for anyone concerned with system and network security, and they help to explain why traditional hypervisors will remain relevant for a wide range of customers well into the future.

Madhavapeddy goes so far as to say that certain unikernel architectures would have directly tackled last year’s Heartbleed and Shellshock bugs.

“For example, end-to-end memory safety prevents Heartbleed-style attacks in MirageOS and the HaLVM. And an emphasis on compile-time specialization eliminates complex runtime code such as Unix shells from the images that are deployed onto the cloud,” he said.
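To make the memory-safety point concrete, here is a small, purely illustrative sketch in OCaml (the language MirageOS is written in; the heartbeat function below is hypothetical, not MirageOS code). Heartbleed happened because C code trusted an attacker-supplied length and read past the end of a buffer; in a bounds-checked language the same mistake raises an exception instead of leaking adjacent memory.

(* Illustrative only: a Heartbleed-style over-read in a memory-safe language.
   Bytes.sub checks its bounds, so an attacker-supplied length that exceeds
   the real payload raises Invalid_argument instead of returning stale memory. *)
let reply_to_heartbeat (payload : bytes) (claimed_len : int) : bytes =
  Bytes.sub payload 0 claimed_len

let () =
  let payload = Bytes.of_string "ping" in
  ignore (reply_to_heartbeat payload 4);        (* honest request: works *)
  match reply_to_heartbeat payload 65535 with   (* malicious 64 KB request *)
  | _ -> print_endline "unreachable"
  | exception Invalid_argument _ -> print_endline "over-read rejected, nothing leaked"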

The MirageOS team has also put their stack to the test by releasing a “Bitcoin Piñata,” a unikernel that guards a collection of Bitcoins. The Bitcoins can only be claimed by breaking through the unikernel’s security (for example, by compromising the SSL/TLS stack) and then moving the coins. If the Bitcoins are indeed transferred away, the public transaction record will reflect that there is a security hole to be fixed. The contest has been running since February 2015 and the Bitcoins have not yet been taken.


Linux Container vs. Unikernel Security

Linux containers and Docker images rely on a fairly heavyweight core OS, the shared Linux kernel, to provide critical services. Because of this, a vulnerability in the Linux kernel affects every Linux container on the host, Wick said. Unikernels, by contrast, take an à la carte approach and include only the minimal functionality and subsystems needed to run an application or service, all of which makes writing an exploit to attack them much more difficult.

Cloudius Systems, which is running a private beta of OSv, billed as “the operating system for the cloud,” recognizes that progress is being made on this front.

“Rocket is indeed an improvement over Docker, but containers aren’t a multi-tenant solution by design,” said CEO Dor Laor. “No matter how many SELinux policies you throw on containers, the attack surface will still span all aspects of the kernel.”

Martin Lucina works on the Rump Kernel software stack, which enables running existing, unmodified POSIX software without an operating system on various platforms, including bare-metal embedded systems and unikernels on Xen. He explains that unikernels running on the Xen Project hypervisor benefit from the strong isolation guarantees of hardware virtualization and from a trusted computing base that is orders of magnitude smaller than that of container technologies.

“There is no shell, you cannot exec() a new process, and in some cases you don’t even need to include a full TCP stack. So there is very little exploit code can do to gain a permanent foothold in the system,” Lucina said.

The key takeaway for organizations worried about security is that they should treat their infrastructure in a less monolithic way. Unikernels allow for the careful management of particularly critical portions of an organization’s data and processing needs. While it does take some extra work, it’s getting easier every day as more developers work on solving challenges with orchestration, logging and monitoring. This means unikernels are coming of age just as many developers are getting serious about security as they begin to build scale-out, distributed systems.

For those interested in learning more about unikernels, the entire series is available as a white paper titled “The Next Generation Cloud: The Rise of the Unikernel.”

Read part 1: 7 Unikernel Projects to Take On Docker in 2015

7 Unikernel Projects to Take On Docker in 2015

This is a reprint of a 3-part unikernel series published on Linux.com. In part one, Xen Project Advisory Board Chairman Lars Kurth takes a closer look at the rise of unikernels and several up-and-coming projects to keep close tabs on in the coming months.

Docker and Linux container technologies dominate headlines today as a powerful, easy way to package applications, especially as cloud computing becomes more mainstream. While still a work-in-progress, they offer a simple, clean and lean way to distribute application workloads.

With enthusiasm continuing to grow for container innovations, a related technology called unikernels is also beginning to attract attention. Also known for their ability to cleanly separate functionality at the component level, unikernels enable a variety of new approaches to deploying cloud services.

Traditional operating systems run multiple applications on a single machine, managing resources and isolating applications from one another.  A unikernel runs a single application on a single virtual machine, relying instead on the hypervisor to isolate those virtual machines. Unikernels are constructed by using “library operating systems,” from which the developer selects only the minimal set of services required for an application to run. These sealed, fixed-purpose images run directly on a hypervisor without an intervening guest OS such as Linux.
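As a concrete (and heavily simplified) example, a MirageOS unikernel is described by a small config.ml written against the Mirage configuration DSL. The sketch below follows the MirageOS 1.x/2.x style; the “hello” job name and the Unikernel.Main module are placeholders, and newer Mirage releases have changed some of these names. The mirage tool reads this description, pulls in only the libraries needed by the declared devices, and links application and libraries into a single bootable image.

(* config.ml: a minimal MirageOS unikernel description (MirageOS 1.x/2.x style). *)
open Mirage

(* The application entry point lives in Unikernel.Main and needs only a console. *)
let main = foreign "Unikernel.Main" (console @-> job)

(* Register the job; running "mirage configure --xen" and then "make" builds a
   standalone Xen guest image containing just this job and its libraries. *)
let () = register "hello" [ main $ default_console ]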

Unikernel architecture illustration (image credit: Xen Project).


Beyond improving upon container technologies, unikernels also deliver impressive flexibility, speed and versatility for cross-platform environments, big data analytics and scale-out cloud computing. Like container-based solutions, this technology fulfills the promise of easy deployment, but unikernels also offer an extremely tiny, specialized runtime footprint that is much less vulnerable to attack.

There are several up-and-coming open source projects to watch this year, including ClickOS, Clive, HaLVM, LING, MirageOS, Rump Kernels and OSv, among others, with each of them placing emphasis on a different aspect of the unikernel approach. For example, MirageOS and HaLVM take a clean-slate approach and focus on safety and security, ClickOS emphasizes speed, while OSv and Rump Kernels aim for compatibility with legacy software. Such flexible approaches are not possible with existing monolithic operating systems, which have decades of assumptions and trade-offs baked into them.

How are unikernels able to deliver better security? How do the various unikernel implementations differ in their approach? Who is using the technology today? What are the key benefits to cloud and data center operators? Will unikernels on hypervisors replace containers, or will enterprises use a mix of all three? If so, how and why?  Answers to these questions and insights from the key developers behind these exciting new projects will be covered in parts two and three of this series.

For those interested in learning more about unikernels, the entire series is available as a white paper titled “The Next Generation Cloud: The Rise of the Unikernel.”

Schrödinger’s Cat in a (Xen) Virtualized ‘Box’


Yes, apparently Schrödinger’s cat is alive: the latest release of Fedora — Fedora 19, codenamed “Schrödinger’s Cat” — was released on July 2nd, and that even happened pretty much on time.

So, apparently, putting the cat “in a box” and all that was way too easy, and that’s why we are bringing the challenge to the next level: do you dare to put Schrödinger’s cat “in a virtual box”?

In other words, do you dare install Fedora 19 within a Xen virtual machine? And if so, how about doing that using Fedora 19 itself as Dom0?

Xen Support Updates

For those of you looking for more assistance with a particular Domain0 or DomainU operating system, I have created a new document that lists each OS and links to its support sites.

I have also added a new link on the Xen.org Support pages to the Pre-Built DomU sites that currently create images for you to leverage.

As the document and Wiki page are “living” entities, please feel free to edit the Wiki page with more information or send me new information for the document.

Launch DomU from Pre-Built Image

In this second post on my explorations with Xen 3.3.1, I am going to highlight the steps I followed and the problems I ran into when launching a simple pre-built DomU image.

Like my post on compiling Xen, this is not intended for regular Xen users but for people thinking of trying out Xen on their own for the first time.

1) Pre-built DomU Image – I started with a very simple version of Linux to bring up, ttylinux. The file I used is at http://www.xen.org/files/ttylinux-i486-8.0.img.

2) Configuration File – I built the file ttylinux in the /etc/xen directory. The only issue I had was that my first version did not have the extra = 'xencons=tty' line, so my DomU would start and then sit there with nothing happening except the cursor blinking. The additional line gave the console control of the DomU process so I could log in, etc.

kernel = "/boot/…"
name = "ttylinux"
vif = [ '' ]
ip = "local ip"
disk = [ 'file:/home/location…/ttylinux-i486-8.0.img,sda1,w' ]
root = "/dev/sda1 ro"
extra = 'xencons=tty'

3) Running DomU – In the /etc/xen directory I run sudo xm create -c ttylinux, which launches the ttylinux DomU in the same console window. In another console window, I run sudo xm list and am able to see both Dom0 and the DomU running.

4) Stopping DomU – To stop the DomU, I run sudo xm destroy ttylinux, which immediately stops the DomU process. I then run sudo xm list to confirm. The whole session is summarized below.
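Putting the steps together, the whole session looks roughly like this (the image URL and the config file name match the ones used above; your image location will differ):

# 1) Fetch the pre-built ttylinux image
wget http://www.xen.org/files/ttylinux-i486-8.0.img

# 2) Place the configuration file shown above at /etc/xen/ttylinux, then, from /etc/xen:
sudo xm create -c ttylinux      # create the DomU and attach its console

# 3) From another console window, confirm Dom0 and the new DomU are running
sudo xm list

# 4) Shut the DomU down again
sudo xm destroy ttylinux
sudo xm list                    # confirm it is gone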

For more details on the configuration parameters and their meaning, I recommend the Running Xen book, searching on http://xen.markmail.org or the Clarkson University tutorials from USENIX training.

In my next post, I will take a standard Linux distribution, create a DomU version of it and launch it on Xen. Stay tuned…