
Why Unikernels Can Improve Internet Security

This is a reprint of a 3-part unikernel series published on Linux.com. In this post, Xen Project Advisory Board Chairman Lars Kurth explains how unikernels address security and allow for the careful management of particularly critical portions of an organization’s data and processing needs. (See part one, 7 Unikernel Projects to Take On Docker in 2015.)

Many industries are rapidly moving toward networked, scale-out designs with new and varying workloads and data types. Yet, pick any industry —  retail, banking, health care, social networking or entertainment —  and you’ll find security risks and vulnerabilities are highly problematic, costly and dangerous.

Adam Wick, creator of the Haskell Lightweight Virtual Machine (HaLVM) and a research lead at Galois Inc., which counts the U.S. Department of Defense and DARPA as clients, says 2015 is already turning out to be a break-out year for security.

“Cloud computing has been a hot topic for several years now, and we’ve seen a wealth of projects and technologies that take advantage of the flexibility the cloud offers,” said Wick. “At the same time though, we’ve seen record-breaking security breach after record-breaking security breach.”

The names are more evocative and well-known thanks to online news and social media, but low-level bugs have always plagued network services, Wick said. So, why is security more important today than ever before?

Improving Security

The creator of MirageOS, Anil Madhavapeddy, says it’s “simply irresponsible to continue to knowingly provision code that is potentially unsafe, and especially so as we head into a year full of promise about smart cities and ubiquitous Internet of Things. We wouldn’t build a bridge on top of quicksand, and should treat our online infrastructure with the same level of respect and attention as we give our physical structures.”

In the hopes of improving security, performance and scalability, there’s a flurry of interesting work taking place around blocking out functionality into containers and lighter-weight unikernel alternatives. Galois, which specializes in R&D for new technologies, says enterprises are increasingly interested in the ability to cleanly separate functionality to limit the effect of a breach to just the component affected, rather than infecting the whole system.

For next-generation clouds and in-house clouds, unikernels make it possible to run thousands of small VMs per host. Galois, for example, uses this capability in their CyberChaff project, which uses minimal VMs to improve intrusion detection on sensitive networks, while others have used similar mechanisms to save considerable cost in hardware, electricity, and cooling; all while reducing the attack surface exposed to malicious hackers. These are welcome developments for anyone concerned with system and network security and help to explain why traditional hypervisors will remain relevant for a wide range of customers well into the future.

Madhavapeddy goes so far as to say that certain unikernel architectures would have directly tackled last year’s Heartbleed and Shellshock bugs.

“For example, end-to-end memory safety prevents Heartbleed-style attacks in MirageOS and the HaLVM. And an emphasis on compile-time specialization eliminates complex runtime code such as Unix shells from the images that are deployed onto the cloud,” he said.

The MirageOS team has also put their stack to the test by releasing a “Bitcoin pinata,” which is a unikernel that guards a collection of Bitcoins.  The Bitcoins can only be claimed by breaking through the unikernel security (for example, by compromising the SSL/TLS stack) and then moving the coins.  If the Bitcoins are indeed transferred away, then the public transaction record will reflect that there is a security hole to be fixed.  The contest has been running since February 2015 and the Bitcoins have not yet been taken.


Linux container vs. unikernel security

Linux, as well as Linux containers and Docker images, relies on a fairly heavyweight core OS to provide critical services. Because of this, a vulnerability in the Linux kernel affects every Linux container, Wick said. Unikernels instead take an à la carte approach: they include only the minimal functionality and systems needed to run an application or service, which makes writing an exploit to attack them much more difficult.

Cloudius Systems, which is running a private beta of OSv, billed as the operating system for the cloud, recognizes that progress is being made on this front.

“Rocket is indeed an improvement over Docker, but containers aren’t a multi-tenant solution by design,” said CEO Dor Laor. “No matter how many SELinux policies you throw on containers, the attack surface will still span all aspects of the kernel.”

Martin Lucina, who is working on the Rump Kernel software stack, which enables running existing unmodified POSIX software without an operating system on various platforms, including bare metal embedded systems and unikernels on Xen, explains that unikernels running on the Xen Project hypervisor benefit from the strong isolation guarantees of hardware virtualization and a trusted computing base that is orders of magnitude smaller than that of container technologies.

“There is no shell, you cannot exec() a new process, and in some cases you don’t even need to include a full TCP stack. So there is very little exploit code can do to gain a permanent foothold in the system,” Lucina said.

The key takeaway for organizations worried about security is that they should treat their infrastructure in a less monolithic way. Unikernels allow for the careful management of particularly critical portions of an organization’s data and processing needs. While it does take some extra work, it’s getting easier every day as more developers work on solving challenges with orchestration, logging and monitoring. This means unikernels are coming of age just as many developers are getting serious about security as they begin to build scale-out, distributed systems.

For those interested in learning more about unikernels, the entire series is available as a white paper titled “The Next Generation Cloud: The Rise of the Unikernel.”

Read part 1: 7 Unikernel Projects to Take On Docker in 2015

Updates to Xen Project Security Process

Before Christmas, the Xen Project ran a community consultation to refine its Security Problem Response Process. We recently approved changes that, in essence, are tweaks to our existing process, which is based on a Responsible Disclosure philosophy.

Responsible Disclosure and our Security Problem Response Process are important components of keeping users of Xen Project-based products and services safe from security exploits. Both ensure that products and services can be patched by members of the pre-disclosure list before details of a vulnerability are published and before said vulnerabilities can be exploited by black hats.

The changes to our response process fall into a number of categories:

  • Clarify whether security updates can be deployed on publicly hosted systems (e.g. cloud or hosting providers) during embargo
  • Sharing of information among pre-disclosure list members
  • Applications procedure for pre-disclosure list membership

The complete discussion leading to the changes, the concrete changes to the process, and the voting records supporting the changes are tracked in Bug #44 (Security policy ambiguities). On February 11, 2015, the proposed changes were approved in accordance with Xen Project governance. Note that some process changes are already implemented, whereas others are waiting on implementation tasks (e.g. new secure mailing lists) before they can be fully put in place. We have, however, updated our Security Problem Response Process, as the most important elements of the process are already in place.

Process Changes Already in Operation

The updated policy makes explicit whether or not patches related to a Xen Security Issue can be deployed by pre-disclosure list members. The concrete policy changes can be found here and here. In practice, every Xen Security Advisory will contain a section such as:

DEPLOYMENT DURING EMBARGO
=========================

Deployment of the patches and/or mitigations described above (or
others which are substantially similar) is permitted during the
embargo, even on public-facing systems with untrusted guest users and
administrators.

Predisclosure list members who wish to deploy significantly different
patches and/or mitigations, please contact the Xen Project Security
Team.

This section will clarify whether deploying fixed versions of Xen during the embargo is allowed. Any restrictions will also be stated in the embargoed advisory. The Security Team will impose deployment restrictions only when deployment itself risks exposing technical details of the vulnerability (for example, through visible differences in behaviour) that present a significant risk of rediscovery. Such situations have been, and are expected to remain, rare.

Changes to Application Procedure for Pre-disclosure List Membership

We also made additional changes related to streamlining and simplifying the process of applying for pre-disclosure list membership. Detailed policy changes can be found here and here. Moving forward, future applications to become members of the Xen Project pre-disclosure list have to be made publicly on the predisclosure-applications mailing list. This enables Xen Project community members to provide additional information and also is in line with one of our community’s core principles: transparency. In addition, we’ve clarified our eligibility criteria to make it easier for the Xen Project Security Team, as well as observers of the mailing list, to verify whether applicants are eligible to become members of the list.

Process Changes Not Fully Implemented:  Sharing of Information Among Pre-disclosure List Members

Finally, members of the pre-disclosure list will be explicitly allowed to share fixes to embargoed issues, analysis, and other relevant information with the security teams of other pre-disclosure members. Information sharing will happen on a private and secure mailing list hosted by the Xen Project.  Note the list is not yet in place; more details here.

Xen Project Security Policy Improvements: Get Involved

The recent XSA-108 vulnerability resulted in a lot of media coverage, which ended up stress-testing some of our policy and security related processes. During the embargo period of XSA-108, the Xen Project Security Team was faced with some difficult questions of policy interpretation, as well as practical issues related to pre-disclosure list membership applications.

To ensure more clarity moving forward, the Xen Project Security Team started a community consultation to improve and better define the project’s Security Vulnerability Response Process. In particular we are seeking to clarify the following elements of the policy, which surfaced during the embargo period of XSA-108:

  • Sharing of information amongst pre-disclosure list members during an embargo period
  • Deployment of patches on public systems of fixed versions of the Xen Project Hypervisor during the embargo period
  • Service announcements to non-list-member users during an embargo period
  • Clarifying criteria related to pre-disclosure list membership and making it easier to verify them
  • Processing applications of pre-disclosure list membership during an embargo period

For more background and information read the e-mail thread on xen-devel@ called Security policy ambiguities – XSA-108 process post-mortem (also see here to see the entire conversation thread in one place).

If you use Xen Project Software in any way, we encourage you to voice your thoughts to help formulate and update our security policy to ensure it meets the needs of our entire community. To take part in the discussion please send mail to xen-devel@lists.xenproject.org. If you are a member of the list just reply to the relevant thread. If you are not a member of the mailing list and plan to respond to an e-mail that has already been sent you have two easy options:

  • You can reply to the message via our issue tracker using the Reply to this message link at the top of the message; or
  • Retrieve the mbox from the issue tracker, load the thread into your mail client and just reply.

Even if you choose not to subscribe to xen-devel@ – which you don’t have to do to participate – you may want to occasionally check the activity on this discussion thread to ensure you are not missing anything.

Going forward, we will collate community input and propose a revised version of the policy, which will be formally approved in line with Xen Project Governance. We have not set a specific deadline for the discussion, but aim to issue a revised policy within 4 weeks.

XSA-108: Not the vulnerability you’re looking for

There has been an unusual amount of media attention on XSA-108 during the embargo period (which ended Wednesday) — far more than for any of the previous security issues the Xen Project has reported. It began when a blogger complained that Amazon was telling customers it would be rebooting VMs in certain regions before a specific date. Other media outlets picked the story up, noticed that the date happened to coincide with the release of XSA-108, and conjectured that the reboots had something to do with that. Soon others were conjecturing that, because of the major impact to customers of rebooting, it must be something very big and important, similar to the recent Heartbleed and Shellshock vulnerabilities. Amazon confirmed that the reboots had to do with XSA-108, but could say nothing else because of the security embargo.

Unfortunately, because of the nature of embargoes, nobody with any actual knowledge of the vulnerability was allowed to say anything about it, and so the media was entirely free to speculate without any additional information to ground the discussion in reality.

Now that the embargo has lifted, we can talk in detail about the vulnerability; and I’m afraid that people looking for another Shellshock or Heartbleed are going to be disappointed. No catchy name for this one.


XSA-108: Additional Information from the Xen Project

The Xen Project Security Team today disclosed details of the Xen Security Advisory 108 / CVE-2014-7188 (Improper MSR range used for x2APIC emulation). The Xen Project does not normally comment on specific vulnerabilities other than issuing security advisories. However, given wide interest in this case, we believe it is helpful to provide more context. The recent Shellshock bug in Bash and the Heartbleed bug in OpenSSL last spring have put a spotlight on software security issues. Due to the proximity of the Shellshock bug and announcements of maintenance reboots from some cloud service providers, there was substantial speculation about XSA-108 among bloggers, tweeters, and reporters. For the Xen Project Security Team, XSA-108 started as a security issue like any other, but this speculation quickly turned an ordinary bug fix into an extraordinary event.

A Technical Overview of XSA-108

XSA-108 was caused by a bug in the emulation code used when running HVM guests on x86 processors. The bug allows an attacker with elevated guest OS privileges to crash the host or to read up to 3 KiB of random memory that might not be assigned to the guest. The memory could contain confidential information if it is assigned to a different guest or the hypervisor. The vulnerability does not apply to PV guests.

Why Security Matters

Managing vulnerabilities and bug fixes is par for the course in any software code base. All software has bugs, and some bugs have security implications. Hypervisors play a critical role in the security of many systems; therefore, the Xen Project community has collaboratively developed a mature and robust process for handling security problems. The Xen Project Security Team works with organizations that meet criteria set by the community to protect users, while limiting the risk that a security vulnerability can be used by an attacker.

A Unique Open Source Security Process

The Xen Project developed its Security Policy to:

  • Encourage people who find security issues to report them in private.
  • Enable software vendors who distribute Xen Project software, public cloud and hosting providers and large scale users of Xen Project Software to address an issue in private such that risk of exposure to their users is minimized.

The current version of our security policy was established through an open community collaboration, which focused on issues of fairness between large and small vendors while controlling the distribution of sensitive information.

We believe that no other open source community has established a security process and policy as open and transparent as ours. As a result, the policy meets the demands of multiple stakeholders all with very different needs.

We believe that the process has been working well, as it did for XSA-108. Several cloud providers updated their servers, something they decided was necessary in this case to best ensure their users were not put at risk; smaller vendors have most likely done the same. Product vendors and Linux distributions will make updates available to their users following the embargo date.

But as we have learned from open source software development, there is always room for improvement through proposing changes and discussing their merits.

Lessons Learned

The speculation around XSA-108 highlighted a number of areas where we can improve. For example, we may need to adjust how we handle a sudden influx of applications to join the Xen Project Security pre-disclosure list. Also, the security policy could be clarified to ensure all members on the pre-disclosure list better understand what’s expected of them during the embargo period.

As pointed out earlier, our security process has worked extremely well for the last three years and has protected users of Xen Project software. This also holds true in this case. Software and service providers have been able to prepare updates in advance of disclosure and, consequently, users are more secure.

What’s Next?

We also need to recognize that public interest in software security and vulnerabilities will likely continue, if not increase. Next week, we will start an open discussion on our mailing lists, to make any necessary adjustments to our security process in light of pressure exerted on vendors as well as community members during the embargo period for XSA-108.


Ballooning, rebooting, and the feature you’ve never heard of

Today I’d like to talk about a functionality of Xen you may not have heard of, but might have actually used without even knowing it. If you use memory ballooning to resize your guests, you’ve likely used “populate-on-demand” at some point. 

As you may know, ballooning is a technique used to dynamically adjust the physical memory in use by a guest. It involves having a driver in the guest OS, called a balloon driver, allocate pages from the guest OS and then hand those pages back to Xen. From the guest OS perspective, it still has all the memory that it started with; it just has a device driver that’s a real memory hog. But from Xen’s perspective, the memory which the device driver asked for is no longer real memory — it’s just empty space (hence “balloon”). When the administrator wants to give memory back to the VM, the balloon driver will ask Xen to fill the empty space with memory again (shrinking or “deflating” the balloon), and then “free” the resulting pages back to the guest OS (making the memory available for use again).

While this can be used to shrink guest memory and then expand it again, this technique has an important limitation: It can never grow the memory above the starting size of the VM. This is because the only way to grow guest memory is to “deflate” the balloon. Once it gets back to the starting size of the VM, the balloon is entirely deflated and no additional memory can be added by the balloon driver.
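The limitation above can be captured in a toy model (a sketch only; the class and names are invented for illustration, not Xen code):

```python
# Toy model of the ballooning limitation described above: the balloon
# driver can only return pages it previously took from the guest OS, so
# guest memory can never grow past the size the VM booted with.

class BalloonedGuest:
    def __init__(self, boot_mem_mib):
        self.boot_mem = boot_mem_mib   # memory the guest OS saw at boot
        self.balloon = 0               # MiB currently held by the driver

    def usable_mem(self):
        return self.boot_mem - self.balloon

    def set_target(self, target_mib):
        # Inflate or deflate the balloon toward the target, but never
        # past a fully deflated balloon (i.e. never above boot size).
        self.balloon = max(0, self.boot_mem - target_mib)
        return self.usable_mem()

g = BalloonedGuest(boot_mem_mib=2048)
assert g.set_target(1024) == 1024   # balloon down: fine
assert g.set_target(2048) == 2048   # back up to boot size: fine
assert g.set_target(4096) == 2048   # cannot exceed boot size
```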

To see why this is important, consider the following scenario.

Host A and B both have 4GiB of RAM, and 2 VMs with 2GiB of RAM each. Suppose you want to reboot host B to do some hardware maintenance. You could do the following:

  • Balloon all 4 VMs down to 1GiB
  • Migrate the 2 VMs from host B onto host A
  • Shut down host B to do your maintenance
  • Bring up host B
  • Migrate the 2 VMs originally on host B back
  • Balloon all 4 VMs back up to 2GiB

All well and good. But suppose that while the VMs were ballooned down to 1GiB, you needed to reboot one of them. Now you have a problem: most operating systems only check how much memory is available at boot time, and you only have 1GiB of free memory. If you boot with 1GiB of memory, you will be able to balloon *smaller* than 1GiB, but you will not be able to balloon back up to 2GiB when the maintenance of host B is done.

This is where populate-on-demand comes in. It allows a VM to boot with a maximum memory larger than its current target memory. It enables a guest that thinks it has 2GiB of RAM to boot while only actually using 1GiB of RAM. It can do this because it only needs to allow the guest to run until the balloon driver can start. Once the balloon driver starts, it will “inflate” the balloon to the proper size. At that point, there is nothing special to do; the VM looks like it did when we shut it down (guest thinks it has 2GiB of RAM, but 1GiB is allocated to the balloon driver and not accessed). When host B comes back up and more memory is free, the balloon driver can deflate the balloon, bringing the total memory back up to 2GiB.

Populate-on-demand comes into play in Xen whenever you start an HVM guest with maxmem and memory set to different values. In that case, the guest will be told it has maxmem RAM, but will only have memory allocated to it; the populate-on-demand code allows the guest to run in this mode until the balloon driver comes up and “frees” the difference (maxmem minus memory) back to Xen.
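Concretely, populate-on-demand is triggered simply by giving an HVM guest different memory and maxmem values. A minimal, illustrative xl guest configuration fragment (the sizes are examples, not a recommendation) might look like:

```
# Illustrative HVM guest config: maxmem above memory puts the guest
# in populate-on-demand mode until its balloon driver starts.
builder = "hvm"
memory  = 1024    # MiB actually backed by RAM at boot (the PoD pool)
maxmem  = 2048    # MiB the guest believes it has
```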

Virtualizing memory: A primer

In order to describe how populate-on-demand works, I’ll need to explain a bit more about how Xen virtualizes memory. On real hardware, the actual hardware memory is referred to as physical memory, and it is typically divided into 4k chunks called physical frames. These frames are addressed by their physical frame number, or pfn. In the x86 world, pfns typically start at 0 and are mostly contiguous (with the occasional “hole” for IO devices). Historically, on x86 platforms, a description of which pfns are available for use as memory is in something called the E820 map, provided by the BIOS to operating systems at boot.

When we virtualize, we need to provide the guest with a virtual “physical address space,” described in the virtual E820 map provided to the guest. These frames are called guest physical frame numbers, or gpfns. But of course there is still real hardware backing this memory; in the virtualization world, it is common to refer to these real frames as machine frames, or mfns. Every usable gpfn must have an mfn behind it.

But the gpfns have to start at 0 and be contiguous, while the mfns which back them may come from anywhere in Xen’s memory. So every VM has a physical-to-machine translation table, or p2m table, which maps the gpfn space onto the mfn space. Each gpfn has an entry in the table, and every usable bit of RAM has an mfn behind it. Normally this is done by the domain builder in domain 0, which asks Xen to fill the p2m table appropriately (including any holes for IO devices if necessary).
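A minimal sketch of that mapping, with the p2m modeled as a plain dictionary (all names and frame numbers are illustrative, not Xen internals):

```python
# Sketch of the p2m idea described above: a per-VM table mapping
# contiguous guest frame numbers (gpfns) onto whatever machine frames
# (mfns) happen to be free, scattered anywhere in host memory.

INVALID = None

def build_p2m(num_gpfns, free_mfns):
    """Back every gpfn 0..num_gpfns-1 with some free mfn."""
    p2m = {}
    for gpfn in range(num_gpfns):
        p2m[gpfn] = free_mfns.pop()   # mfns can come from anywhere
    return p2m

free = [9, 42, 7, 100]                   # scattered machine frames
p2m = build_p2m(3, free)
assert sorted(p2m.keys()) == [0, 1, 2]   # gpfns start at 0, contiguous
assert all(mfn is not INVALID for mfn in p2m.values())
```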

Ballooning then works like this. To inflate the balloon, the balloon driver will ask the guest OS for a free page. After allocating the page, it puts it on its list of pages and finds the gpfn for that page. It then tells Xen it can take the memory behind the gpfn back. Xen will replace the mfn in that gpfn space with “invalid entry,” and put the mfn on its own free list (available to be given to another VM). If the guest were to attempt to read or write this memory now, it would crash; but it won’t, because the guest OS thinks the page is in use by the balloon driver. The balloon driver won’t touch it, and the OS won’t use it for anything else.

To deflate the balloon, the balloon driver chooses one of the pages on its list that it has allocated, and asks Xen to put some memory behind that gpfn. If Xen determines that the guest is allowed to increase its memory, and there is free memory available, it will allocate an mfn and put it in the p2m table behind that gpfn. Now the gpfn is usable again; the balloon driver then frees the page back to the guest OS, which puts it on its own free list to use for whatever needs memory.

Populate on Demand: The Basics

The idea behind populate-on-demand is that the guest doesn’t actually need all of its memory to boot up to the point where the balloon driver is active — it only needs a small portion of it. But there is no way for the domain builder to know ahead of time which gpfns the guest OS will actually need to touch in order to get that far, nor which memory the guest OS will hand to the balloon driver once it starts up.

So when building a domain in populate-on-demand mode, the domain builder tells Xen to allocate the mfns into a special pool, which I will call here the PoD pool, sized according to the memory parameter. (In the Xen code it’s actually called the PoD cache, but that’s not a good name, because in computer science “cache” has a very specific meaning that doesn’t match what the PoD pool does. It will probably be renamed at some point for clarity.)

It then creates the guest’s p2m table as before, but instead of filling it with mfns, it fills it with a special PoD entry. The PoD entry is an invalid entry; so as the guest boots, whenever it touches a gpfn backed by a PoD entry, it traps up into Xen. When Xen sees the PoD entry, it takes an mfn from the PoD pool and puts it in the p2m for that gpfn. It then returns to the guest, at which point the memory access succeeds and the guest can continue.

Thus, rather than populating the p2m table when building the domain, the p2m table is populated on demand; hence the name.

The key reason for having the PoD pool is that the memory is already allocated to the domain: if you list domains, it shows up as owned by the domain, and it cannot be allocated to a different domain. If this were instead allocate-on-demand, where the memory was actually allocated from Xen on hitting an invalid entry, there would be a risk that the memory needed to boot until the balloon driver could run would already have been allocated to a different domain.

However, the guest can’t run like this for long. There are far more PoD entries in the p2m table than there are mfns in the PoD pool — that was the point. But the guest OS doesn’t know that; as far as it’s concerned, it has maxmem to work with. If the balloon driver doesn’t start, nothing will keep it from trying to use all of its memory. If it uses up all the memory in the PoD pool, the next time Xen hits a PoD entry, there won’t be any mfns in the PoD pool to populate the entry with. At that point, Xen would have no choice but to kill the guest.
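Putting the pieces above together, a simplified model of the PoD fault path might look like this (invented names; real Xen does this in hypervisor code, not Python):

```python
# Sketch of the populate-on-demand path described above: the p2m starts
# full of PoD entries, and each "fault" moves an mfn from the PoD pool
# into the p2m. When the pool runs dry, the only option left is to kill
# the guest. Everything here is a simplified illustration.

POD = "pod-entry"

class PoDDomain:
    def __init__(self, maxmem_pages, memory_pages, free_mfns):
        # The pool holds mfns already allocated to this domain.
        self.pool = [free_mfns.pop() for _ in range(memory_pages)]
        self.p2m = {g: POD for g in range(maxmem_pages)}

    def access(self, gpfn):
        """Guest touched gpfn; populate it if it is still a PoD entry."""
        if self.p2m[gpfn] is POD:
            if not self.pool:
                raise RuntimeError("PoD pool empty: domain crashed")
            self.p2m[gpfn] = self.pool.pop()

d = PoDDomain(maxmem_pages=4, memory_pages=2, free_mfns=[10, 11, 12])
d.access(0); d.access(1)          # both populated from the pool
try:
    d.access(2)                   # pool exhausted: guest is killed
    crashed = False
except RuntimeError:
    crashed = True
assert crashed
```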

Getting back to normal: the balloon driver

The balloon driver, like the guest operating system, knows nothing about populate-on-demand. It just knows that it has maxmem gpfn space, and that it needs to hand maxmem minus memory back to Xen. So it begins allocating pages from the guest operating system and freeing the gpfns back to Xen.

What Xen does next depends on a few things. Xen keeps track of both the number of PoD entries in the p2m table, and the number of mfns in the PoD pool.

  • If the gpfn is a PoD entry, Xen will simply replace the PoD entry with a normal invalid entry and return. This reduces the number of outstanding PoD entries in the p2m table.
  • If the gpfn has a real mfn behind it, and the number of PoD entries left in the p2m table is more than the number of mfns in the PoD pool, Xen will replace the entry with an invalid entry, and put the mfn back into the PoD pool. This increases the size of the pool.
  • If the gpfn has a real mfn behind it, but the number of PoD entries left in the p2m table is equal to the number of mfns in the pool, it will put the mfn back on the free list, ready to be used by another domain.

Eventually, the number of outstanding PoD entries is equal to the number of mfns in the PoD pool, and the system is now in a stable state. There is no more risk that the guest will touch a PoD entry and not find memory in the pool; and for an active OS, eventually all pages will be touched, and the VM will be the same as one that was not booted in PoD mode.
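The three cases above can be sketched as follows (a simplified model with invented names, not the actual Xen implementation):

```python
# Sketch of the three cases above, applied when the balloon driver frees
# a gpfn back to Xen. `p2m` maps gpfns to an mfn, a PoD entry, or an
# invalid entry; `pod_pool` holds the domain's pre-allocated mfns;
# `host_free` stands in for Xen's global free list.

POD, INVALID = "pod", None

def balloon_out(gpfn, p2m, pod_pool, host_free):
    outstanding = sum(1 for e in p2m.values() if e is POD)
    entry = p2m[gpfn]
    if entry is POD:
        p2m[gpfn] = INVALID          # case 1: one less PoD entry
    elif outstanding > len(pod_pool):
        p2m[gpfn] = INVALID          # case 2: pool too small, grow it
        pod_pool.append(entry)
    else:
        p2m[gpfn] = INVALID          # case 3: pool already big enough;
        host_free.append(entry)      #   mfn goes back to Xen's free list

# 3 PoD entries outstanding but only 1 mfn pooled: freeing a populated
# gpfn should grow the pool (case 2).
p2m = {0: 50, 1: POD, 2: POD, 3: POD}
pool, free = [60], []
balloon_out(0, p2m, pool, free)
assert pool == [60, 50] and free == []
```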

It’s never that simple: Page scrubbing

At a high level, that’s the idea behind populate-on-demand. Unfortunately, the real world is often a bit more messy than we would like.

On real hardware, if you do a soft reboot (or if you use some special trick, like spraying the RAM with liquid nitrogen), the memory when the operating system starts may still contain information from a previous boot. The freshly booting operating system has no idea what may be in there: it may be security-sensitive information like someone’s taxes or private cryptographic keys.

To avoid any risk that information from the previous boot might leak into untrusted programs which might run this time, most operating systems will scrub the memory at boot — that is, fill all the memory with zeros. This also means that drivers can assume that freshly allocated memory will already be zeroed, and not bother doing it themselves. Doing this all at once, at the beginning, allows the operating system to use more efficient algorithms, and also localizes the processor cache pollution.

For an operating system running under Xen this is unnecessary, because Xen will scrub any memory before giving it to the guest (for pretty much the same potential security issue). However, many operating systems which run on Xen — in particular, proprietary operating systems like Windows — don’t know this, and will do their own scrub of memory anyway. Typically this happens very early in boot, long before it is possible to load the balloon driver. This pretty much guarantees that every gpfn will be written to before the balloon driver loads. How does populate on demand deal with that?

The key is that the state of a gpfn after it has been scrubbed by the operating system is the same as the default initial state of a gpfn just populated by the PoD code. This means that after a gpfn has been scrubbed by the operating system, Xen can reclaim the page: it can replace the mfn in the p2m table with a PoD entry, and put the mfn in the PoD pool. The next time the VM touches the page, it will be replaced with a different zero page from the PoD pool; but to the VM it will look the same.

So the populate-on-demand system has a number of zero-page reclaim techniques. The primary one is that when populating a new PoD entry, we look at recently populated entries and check whether they are zero; if they are, we reclaim them. The effect of this is that each scrubbing thread only has one outstanding PoD page at a time.

If that fails, there is another technique we call the “emergency sweep.” When Xen hits a PoD entry, but the PoD pool is empty, before crashing the guest, it will search through all of guest memory, looking for zeroed pages to reclaim. Because this method is very slow, it is only used as a last resort.
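Zero-page reclaim and the emergency sweep can be sketched as follows (an illustrative model only, with invented names):

```python
# Sketch of zero-page reclaim as described above: a populated page whose
# contents are all zeros is indistinguishable (to the guest) from a
# freshly populated PoD page, so Xen can swap it back for a PoD entry
# and return the mfn to the pool.

POD = "pod"

def emergency_sweep(p2m, page_contents, pod_pool):
    """Scan all populated gpfns, reclaiming any whose page is all zeros."""
    for gpfn, mfn in list(p2m.items()):
        if mfn is not POD and all(b == 0 for b in page_contents[mfn]):
            p2m[gpfn] = POD          # guest will see a zero page again
            pod_pool.append(mfn)     # mfn is available to repopulate

pages = {5: bytes(16), 6: b"\x00taxes\x00" + bytes(9)}  # mfn -> contents
p2m = {0: 5, 1: 6, 2: POD}
pool = []
emergency_sweep(p2m, pages, pool)
assert p2m[0] is POD and pool == [5]    # zeroed page reclaimed
assert p2m[1] == 6                      # non-zero page left alone
```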

Conclusion

So that’s populate-on-demand in a nutshell. There are more complexities under the hood (like trying to keep superpages together), but I’ll leave those for another day.