Tag Archives: security

Xen Project Spectre/Meltdown FAQ

Updated to v3 on Jan 12th!

Google’s Project Zero announced several information leak vulnerabilities affecting all modern superscalar processors. Details can be found on their blog, and in the Xen Project Advisory 254. To help our users understand the impact and our next steps forward, we put together the following FAQ.

Note that we will update the FAQ as new information surfaces.

Changes since the initial publication:

  • v3: Added information related to Comet mitigation for Variant 3 (Meltdown) – for now Xen 4.10 only
  • v2: Added information related to Vixen mitigation for Variant 3 (Meltdown) – see Are there any patches for the vulnerability?
  • v2: Replaced SPx with Variant x to be in line with the terminology used elsewhere

Is Xen impacted by Meltdown and Spectre?

There are two angles to consider for this question:

  • Can an untrusted guest attack the hypervisor using Meltdown or Spectre?
  • Can a guest user-space program attack a guest kernel using Meltdown or Spectre?

Systems running Xen, like all operating systems and hypervisors, are potentially affected by Spectre (referred to as Variant 1 and 2 in Advisory 254). For Arm processors, you can find information on which processors are impacted here. In general, both the hypervisor and a guest kernel are vulnerable to attack via Variant 1 and 2.

Only Intel processors are impacted by Meltdown (referred to as Variant 3 in Advisory 254). On Intel processors, only 64-bit PV mode guests can attack Xen using Variant 3. Guests running in 32-bit PV mode, HVM mode, and PVH mode (both v1 and v2) cannot attack the hypervisor using Variant 3. However, in 32-bit PV mode, HVM mode, and PVH mode (both v1 and v2), guest userspaces can attack guest kernels using Variant 3; so updating guest kernels is advisable.

Interestingly, guest kernels running in 64-bit PV mode are not vulnerable to attack using Variant 3, because 64-bit PV guests already run in a KPTI-like mode.

However, keep in mind that a successful attack on the hypervisor can still be used to recover information about the same guest from physical memory.

Is there any risk of privilege escalation?

Meltdown and Spectre are, by themselves, only information leaks. There is no suggestion that speculative execution can be used to modify memory or cause a system to do anything it might not have done already.

Where can I find more information?

We will update this blog post and Advisory 254 as new information becomes available. Updates will also be published on xen-announce@.

We will also maintain a technical FAQ on our wiki for answers to more detailed technical questions that emerge on xen-devel@ (and in particular this e-mail thread) and other communication channels.

Are there any patches for the vulnerability?

We have published a mitigation for Meltdown on Intel CPUs: please refer to Advisory 254. The published solutions are labelled Vixen and Comet (also see README.which-shim). Alternative solutions are being worked on.

A mitigation for Variant 2/CVE-2017-5715 is available on the xen-devel@ mailing list, but has not yet undergone rigorous enough review to be published as an official patch.

As information related to Meltdown and Spectre is now public, development will continue in public on xen-devel@ and patches will be posted and attached to Advisory 254 as they become available in the next few days.

Can Variant 1/Variant 2 be fixed at all? What plans are there to mitigate them?

Variant 2 can be mitigated in two ways, both of which essentially prevent speculative execution of indirect branches. The first is to flush the branch prediction logic on entry into the hypervisor. This requires microcode updates, which Intel and AMD are in the process of preparing, as well as patches to the hypervisor, which are also in progress and should be available soon.
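
As a rough illustration of the first approach (this is not Xen's actual entry-path code), the microcode updates expose an "indirect branch prediction barrier" (IBPB) that software can issue by writing a model-specific register; a barrier like this would be issued on entry into the hypervisor so that predictions trained by a guest cannot steer indirect branches in Xen. The MSR index and bit below come from Intel's public documentation on these side channels.

/* Hedged sketch, not Xen's actual code: issuing an IBPB (Indirect Branch
 * Prediction Barrier) via the IA32_PRED_CMD MSR exposed by the updated
 * microcode. Must run at ring 0 (i.e. inside the hypervisor or a kernel). */
#define MSR_IA32_PRED_CMD  0x00000049
#define PRED_CMD_IBPB      (1ULL << 0)

static inline void wrmsr64(unsigned int msr, unsigned long long val)
{
    asm volatile ("wrmsr"
                  : /* no outputs */
                  : "c" (msr), "a" ((unsigned int)val),
                    "d" ((unsigned int)(val >> 32))
                  : "memory");
}

static inline void indirect_branch_prediction_barrier(void)
{
    /* Discard branch predictions trained before this point, e.g. by the
     * guest that was running before we entered the hypervisor. */
    wrmsr64(MSR_IA32_PRED_CMD, PRED_CMD_IBPB);
}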

The second is to do indirect jumps in a way which is not subject to speculative execution. This requires the hypervisor to be recompiled with a compiler that contains special new features. These new compiler features are also in the process of being prepared for both gcc and clang, and should be available soon.
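
For the curious, the pattern these compiler changes implement is commonly called a "retpoline". The sketch below only illustrates the idea, written as GNU toplevel asm inside a C file (x86-64, AT&T syntax); the thunk names and exact sequences produced by gcc and clang differ, and this is not Xen's implementation.

/* Illustrative retpoline-style thunk: an indirect jump to the address held
 * in %r11 that the CPU cannot follow speculatively. */
asm (
    ".text\n"
    ".globl jump_thunk_r11\n"
    "jump_thunk_r11:\n"
    "    call 1f\n"          /* push the address of the trap below (label 0)
                                and jump forward to the set-up code at 1:   */
    "0:  pause\n"            /* speculation of the final 'ret' lands here
                                and spins harmlessly instead of being
                                steered by a poisoned predictor             */
    "    lfence\n"
    "    jmp 0b\n"
    "1:  mov %r11, (%rsp)\n" /* overwrite the saved return address with the
                                real branch target                          */
    "    ret\n"              /* architecturally transfers control to *%r11  */
);

A jump through such a thunk behaves exactly like jmp *%r11, but the only thing the branch predictor can speculate into is the harmless pause/lfence loop.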

Variant 1 is much more difficult to mitigate. We have some ideas we’re exploring, but they’re still at the design stage at this point.

Does Xen have any equivalent to Linux’s KPTI series?

Linux’s KPTI series is designed to address Variant 3 only. For Xen guests, only 64-bit PV guests are able to execute a Variant 3 attack against the hypervisor. A KPTI-like approach was explored initially, but required significant ABI changes. Instead we’ve decided to go with an alternative approach, which is less disruptive and less complex to implement. The chosen approach runs PV guests inside a PVH container, which ensures that PV guests continue to behave as before, while providing the isolation that protects the hypervisor from Variant 3. This works well for Xen 4.8 to Xen 4.10, which is currently our priority.

For Xen 4.6 and 4.7, we are evaluating several options, but we have not yet finalized the best solution.

Device model stub domains run in PV mode, so is it still safer to run device models in a stub domain than in domain 0?

The short answer is, yes, it is still safer to run stub domains than otherwise.

If an attacker can gain control of the device model running in a stub domain, they can indeed attempt to use these processor vulnerabilities to read information from Xen.

However, if an attacker can gain control of a device model running in domain 0 without deprivileging, the attacker can gain control of the entire system.  Even with qemu deprivileging, the qemu process may be able to execute speculative execution attacks against the hypervisor.

So although XSA-254 does affect device model stub domains, they are still safer than not running with a stub domain.

What is the Xen Project’s plan going forward?

The Xen Project is working on finalizing solutions for Variant 3 and Variant 2 and evaluating options for Variant 1. If you would like to stay abreast of our progress, please sign up to xen-announce@. We will update this FAQ as soon as we have more news and updated information. Answers to more detailed technical questions will be maintained in a technical FAQ on our wiki. Thank you for your patience.

How can I ask further questions?

Please respond to this e-mail thread on xen-devel@ or xen-users@.

Automotive, Security and the Future of the Xen Project at The Xen Project Developer and Design Summit

The Xen Developer and Design Summit schedule is now live! This conference combines the formats of the Xen Project Developer Summits with the Xen Project Hackathons. If you are part of the Xen Project’s community of developers and power users, come join us in Budapest, Hungary, July 11 – 13 for this must-attend event!


The conference will cover many different topic areas including community, embedded/automotive, performance, tooling, hardware, security and more. The format will include traditional panels and presentations, as well as design and problem solving sessions.

Design and problem solving session proposals will be accepted until July 7. This is a great way to meet other developers face-to-face to:

  • Discuss and advance the design and architecture of future functionality
  • Coordinate and plan upcoming features
  • Discuss and share best practices and ideas on how to improve community collaboration
  • Hear interactive sessions covering lessons learned from contributors, users and vendors

Submit your design and problem solving ideas here.

Keynotes this year are coming from Lars Kurth, Xen Project Chairperson and Director of Open Source Solutions at Citrix; Oleksandr Andrushchenko, Lead Software Engineer at EPAM Systems; Stefano Stabellini, Virtualization Architect at Aporeto; and Wei Liu, Senior Software Engineer at Citrix.

Here’s a small sampling of other speaking sessions during the conference:

Automotive

  • Dedicated Secure Domain as an Approach for Certification of Automotive Sector Solutions from Iurii Mykhalskyi of GlobalLogic
  • Harmony of CPU Scheduling Between RT Guest OS and Rich Guest OS in Automotive Virtualization from Sangyun Lee of LG Electronics

Security

  • Hypervisor-Based Security: Bringing Virtualized Exceptions Into the Game from Mihai Dontu of Bitdefender
  • Uniprof: Transparent Unikernel Performance Profiling and Debugging from Florian Schmidt of NEC

Future of Xen

  • Intel GVT-g: From Production to Upstream from Zhi Wang of Intel
  • Recent and Ongoing Xen Related Work in the Linux Kernel from Jürgen Groß of SUSE

General Hypervisor

  • Bring up PCI Passthrough on ARM from Julien Grall of ARM
  • EFI Secure Boot, Shim and Xen: Current Status of Developments from Daniel Kiper of Oracle

You can view the entire schedule here. Early bird tickets ($250) are available until May 31st.

A special thank you to our Diamond Sponsor Citrix and Gold sponsors ARM, Intel and Superfluidity. We look forward to seeing you at the event in July, and please stay informed on Xen Project updates by following us on social (Twitter and Facebook) and registering to our xen-announce mailing list.


Request for Comment: Scope of Vulnerabilities for which XSAs are issued

Issuing advisories has a cost: It costs the security team significant amounts of time to craft and send the advisories; it costs many of our downstreams time to apply, build, and test patches; and it costs many of our users time to decide whether to do an update, and if so, to test and deploy it.

Given this, the Xen Project Security Team wants to clarify when it should issue an advisory: the Xen Security Response Process only mentions “vulnerabilities”, without specifying what constitutes a vulnerability.

We would like guidelines from the community about what sorts of issues should be considered security issues (and thus have advisories issued). I have posted the second version of a draft section that I am proposing to add to the Xen Security Policy to xen-devel; a copy is included below for your convenience. There are only minor modifications from the first draft, so barring major feedback from the wider community it will likely achieve consensus. If you want to provide input, now is the time to speak up.

Most of it simply encodes long-established practice. But there are two key changes and/or clarifications that deserve attention and discussion:

  • Criteria 2c: Leaking of mundane information from Xen or dom0 will not be considered a security issue unless it may contain sensitive guest or user data
  • Criteria 4: If no operating systems are vulnerable to a bug, no advisory will be issued

If you want to weigh in on the question, please join the discussion on xen-devel before 28 February. The title of the thread is “RFC v2: Scope of Vulnerabilities for which XSAs are issued”.


What You Need to Know about Recent Xen Project Security Advisories

Today the Xen Project announced eight security advisories: XSA-191 to XSA-198. The bulk of these vulnerabilities were discovered and fixed during the hardening phase of the Xen Project Hypervisor 4.8 release (expected to come out in early December). The Xen Project has implemented a security-first approach to publishing new releases.

In order to increase the security of future releases, members of the Xen Project Security Team and key contributors to the Xen Project actively search for and fix security bugs in code areas where vulnerabilities were found in past releases. The contributors use techniques such as code inspection, static code analysis, and additional testing with fuzzers such as American Fuzzy Lop. These fixes are then backported to older Xen Project releases with security support and published in bulk to make it easier for downstream consumers to apply security fixes.

Before we declare a new Xen Project feature as supported, we perform a security assessment (see, for example, declaring the Credit2 scheduler as supported). In addition, the contributors focused on security have started crafting tests for each vulnerability and integrating them into our automated regression testing system, which is run regularly on all maintained versions of Xen. This ensures that each patch is applied to every version which is vulnerable, and also that no bug is accidentally reintroduced as development continues.

The Xen Project’s mature and robust security response process is optimized for cloud environments and downstream Xen Project consumers to maximize fairness, effectiveness and transparency. This includes not publicly discussing any details with security implications during our embargo period. This encourages anyone to report bugs they find to the Xen Project Security Team, and allows the Security Team to assess, respond, and prepare patches before public disclosure and broad compromise occurs.

During the embargo period, the Xen Project does not publicly discuss any details with security implications except:

  • when co-opting technical assistance from other parties;
  • when issuing a Xen Project Security Advisory (XSA). This is pre-disclosed to only members on the Xen Project Pre-Disclosure List (see www.xenproject.org/security-policy.html); and
  • when necessary to coordinate with other affected projects.

The Xen Project security team will assign and publicly release numbers for vulnerabilities. This is the only information that is shared publicly during the embargo period. See this url for “XSA Advisories, Publicly Released or Pre-Released”: xenbits.xen.org/xsa.

Xen’s latest advisories, XSA-191, XSA-192, XSA-193, XSA-194, XSA-195, XSA-196, XSA-197 and XSA-198, can all be found here:
xenbits.xen.org/xsa

Any Xen-based public cloud is eligible to be on our “pre-disclosure” list. Cloud providers on the list were notified of the vulnerability and provided a patch two weeks before the public announcement in order to make sure they all had time to apply the patch to their servers.

Distributions and other major software vendors of Xen Project software were also given the patch in advance to make sure they had updated packages ready to download as soon as the vulnerability was announced. Private clouds and individuals are urged to apply the patch or update their packages as soon as possible.

Fixes for all of the above XSAs that affect the hypervisor can be deployed using the Xen Project LivePatching functionality, which enables reboot-free deployment of security patches to minimize disruption and downtime during security upgrades for system administrators and DevOps practitioners. The Xen Project encourages its users to apply these patches.

More information about the Xen Project’s Security Vulnerability Process, including the embargo and disclosure schedule, policies around embargoed information, information sharing among pre-disclosed list members, a list of pre-disclosure list members, and the application process to join the list, can be found at: www.xenproject.org/security-policy.html

PV Calls: a new paravirtualized protocol for POSIX syscalls

Let’s take a step back and look at the current state of virtualization in the software industry. X86 hypervisors were built to run a few different operating systems on the same machine. Nowadays they are mostly used to execute several instances of the same OS (Linux), each running a single server application in isolation. Containers are a better fit for this use case, but they expose a very large attack surface. It is possible to reduce the attack surface; however, it is a very difficult task, one that requires minute knowledge of the app running inside. At any scale it becomes a formidable challenge. The 15-year-old hypervisor technologies, principally designed for RHEL 5 and Windows XP, are more a workaround than a solution for this use case. We need to bring them to the present and take them into the future by modernizing their design.

The typical workload we need to support is a Linux server application which is packaged to be self-contained, complying with the OCI Image Format or Docker Image Specification. The app comes with all required userspace dependencies, including its own libc. It makes syscalls to the Linux kernel to access resources and functionalities. This is the only interface we must support.

Many of these syscalls closely correspond to function calls which are part of the POSIX family of standards. They have well known parameters and return values. POSIX stands for “Portable Operating System Interface”: it defines an API available on all major Unixes today, including Linux. POSIX is large to begin with and Linux adds its own set of non-standard calls on top of it. As a result a Linux system has a very high number of exposed calls and, inescapably, also a high number of vulnerabilities. It is wise to restrict syscalls by default. Linux containers struggle with it, but hypervisors are very accomplished in this respect. After all hypervisors don’t need to have full POSIX compatibility. By paravirtualizing hardware interfaces, Xen provides powerful functionalities with a small attack surface. But PV devices are the wrong abstraction layer for Docker apps. They cause duplication of functionalities between the guest and the host. For example, the network stack is traversed twice, first in DomU then in Dom0. This is unnecessary. It is better to raise hypervisor abstractions by paravirtualizing a small set of syscalls directly.

PV Calls

It is far easier and more efficient to write paravirtualized drivers for syscalls than to emulate hardware, because syscalls are at a higher level and made for software. I wrote a protocol specification called PV Calls to forward POSIX calls from DomU to Dom0. I also wrote a couple of prototype Linux drivers for it that work at the syscall level. The initial set of calls covers socket, connect, accept, listen, recvmsg, sendmsg and poll. The frontend driver forwards syscall requests over a ring. The backend implements the syscalls, then returns success or failure to the caller. The protocol creates a new ring for each active socket, and the ring size is configurable on a per-socket basis. Data being received is copied to the ring by the backend, while data being sent is copied to the ring by the frontend. An event channel per ring is used to notify the other end of any activity. This tiny set of PV Calls is enough to provide networking capabilities to guests.
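
To make the flow a little more concrete, here is a simplified, purely illustrative sketch of what a connect request and its response on the command ring might look like. The real wire format, command numbers and field layout are defined by the PV Calls specification on xen-devel and differ from this sketch.

#include <stdint.h>

/* Illustrative stand-ins for PV Calls ring entries, not the real ABI. */
#define PVCALLS_CMD_SOCKET   0   /* placeholder command numbers */
#define PVCALLS_CMD_CONNECT  1

struct pvcalls_connect_request {
    uint32_t req_id;      /* chosen by the frontend, echoed in the response
                             so replies can be matched to requests          */
    uint32_t cmd;         /* which POSIX call to perform, e.g. connect()    */
    uint64_t socket_id;   /* identifies a socket created by an earlier
                             PVCALLS_CMD_SOCKET request                     */
    uint32_t addr_len;
    uint8_t  addr[128];   /* the struct sockaddr bytes, copied verbatim     */
};

struct pvcalls_response {
    uint32_t req_id;      /* matches the originating request                */
    uint32_t cmd;
    int32_t  ret;         /* 0 on success, -errno on failure, as produced by
                             the backend when it issued the real syscall    */
};

Once a socket is connected, the data path no longer goes through this command ring: a dedicated data ring, with its own event channel, is created for the socket and sized as requested by the frontend.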

We are still running virtual machines, but mainly to restrict the vast majority of application syscalls to a safe and isolated environment. The guest operating system kernel, which is provided by the infrastructure (it doesn’t come with the app), implements syscalls for the benefit of the server application. Xen gives us the means to exploit hardware virtualization extensions to create strong security boundaries around the application. Xen PV VMs enable this approach to work even when virtualization extensions are not available, such as on top of Amazon EC2 or Google Compute Engine instances.

This solution is as secure as Xen VMs but efficiently tailored for container workloads. Early measurements show excellent performance. It also provides a couple of less obvious advantages. In Docker’s default networking model, containers’ communications appear to be made from the host IP address and containers’ listening ports are explicitly bound to the host. PV Calls are a perfect match for it: outgoing communications are made from the host IP address directly and listening ports are automatically bound to it. No additional configuration is required.

Another benefit is ease of monitoring. One of the key aspects of hardening Linux containers is keeping applications under constant observation with logging and monitoring. We should not ignore it even though Xen provides a safer environment by default. PV Calls forward networking calls made by the application to Dom0. In Dom0 we can trivially log them and detect misbehavior. More powerful (and expensive) monitoring techniques like memory introspection offer further opportunities for malware detection.

PV Calls are unobtrusive. No changes to Xen are required as the existing interfaces are enough. Changes to Linux are very limited as the drivers are self-contained. Moreover, PV Calls perform extremely well! Let’s take a look at a couple of iperf graphs (higher is better):

iperf client

iperf server

The first graph shows network bandwidth measured by running an iperf server in Dom0 and an iperf client inside the VM (or container in the case of Docker). PV Calls reach 75 gbit/sec with 4 threads, far better than netfront/netback.

The second graph shows network bandwidth measured by running an iperf server in the guest (or container in the case of Docker) and an iperf client in Dom0. In this scenario PV Calls reach 55 gbit/sec and outperform not just netfront/netback but even Docker.

The benchmarks have been run on an Intel Xeon D-1540 machine, with 8 cores (16 threads) and 32 GB of RAM. Xen is 4.7.0-rc3 and Linux is 4.6-rc2. Dom0 and DomU have 4 vcpus each, pinned. DomU has 4 GB of RAM.

For more information on PV Calls, read the full protocol specification on xen-devel. You are welcome to join us and participate in the review discussions. Contributions to the project are very appreciated!

Virtual Machine Introspection: A Security Innovation With New Commercial Applications

The article from Lars Kurth, the Xen Project chairperson, was first published on Linux.com.

A few weeks ago, Citrix and Bitdefender launched XenServer 7 and Bitdefender Hypervisor Introspection, which together compose the first commercial application of the Xen Project Hypervisor’s Virtual Machine Introspection (VMI) infrastructure. In this article, we will cover why this technology is revolutionary and how members of the Xen Project Community and open source projects that were early adopters of VMI (most notably LibVMI and DRAKVUF) collaborated to enable this technology.

Evolving Security Challenges in Virtual Environments

Today, malware executes in the same context and with the same privileges as anti-malware software. This is an increasing problem, too. The Walking Dead analogy I introduced in this Linux.com article is again helpful. Let’s see how traditional anti-malware software fits into the picture and whether our analogy applies to anti-malware software.

In the Walking Dead universe, Walkers have taken over the earth, feasting on the remaining humans. Walkers are active all the time, and attracted by sound, eventually forming a herd that may overrun your defences. They are strong, but are essentially dumb. As we explored in that Linux.com article, people make mistakes, so we can’t always keep Walkers out of our habitat.

For this analogy, let’s equate Walkers with malware. Let’s assume our virtualized host is a village consisting of individual houses (VMs), while the Hypervisor and network provide the infrastructure (streets, fences, electricity, …) that binds the village together.

Enter the world of anti-malware software: assume the remaining humans have survived for a while and re-developed technology to identify Walkers fast, destroy them quickly and fix any damage caused. This is the equivalent of patrols, CCTV, alarmed doors/windows and other security equipment, troops to fight Walkers once discovered and a clean-up crew to fix any damage. Unfortunately, the reality is that traditional malware security technology can only be deployed within individual houses (aka VMs), not on the streets of our village.

To make matters worse, until recently malware was relatively dumb. However, this has changed dramatically in the last few years. Our Walkers have evolved into Wayward Pines’ Abbies, which are faster, stronger and more intelligent than Walkers. In other words, malware is now capable of evading or disabling our security mechanisms.

What we need is the equivalent of satellite surveillance to observe the entire village, and laser beams to remotely destroy attackers when they try to enter our houses. We can of course also use this newfound capability to quickly deploy ground troops and clean-up personnel as needed. In essence that is the promise that Virtual Machine Introspection gives us. It allows us to address security issues from outside the guest OS without relying on functionality that can be rendered unreliable from the ground. More on that topic later.

From VMI in Xen to the First Commercial Application: A Tale of Collaboration

The development of Virtual Machine Introspection and its applications show how the Xen Project community is bringing revolutionary technologies to market.

The idea of Virtual Machine Introspection for the Xen Project Hypervisor hatched at Georgia Tech in 2007, building on research by Tal Garfinkel and Mendel Rosenblum in 2003. The technology was first incorporated into the Xen Project Hypervisor via the XenAccess and mem-events APIs in 2009. To some degree, this was a response to VMware’s VMsafe technology, which was introduced in 2008 and deprecated in 2012, as the technology had significant limitations at scale. VMsafe was replaced by vShield, which is an agent-based, hypervisor-facilitated, file-system anti-virus solution that is effectively a subset of VMsafe.

Within the Xen Project software however, Virtual Machine Introspection technology lived on due to strong research interests and specialist security applications where trading off performance against security was acceptable. This eventually led to the creation of LibVMI (2010), which made these APIs more accessible. This provided an abstraction that eventually allowed exposure of a subset of Xen’s VMI functionality to other open source virtualization technologies such as KVM and QEMU.

In May 2013, Intel launched its Haswell generation of CPUs, which is capable of maintaining up to 512 EPT pointers from the VMCS via the #VE and VMFUNC extensions. This proved to be a potential game-changer for VMI, enabling hypervisor-controlled and hardware-enforced strong isolation between VMs at lower overheads than before, which led to a collaboration of security researchers and developers from Bitdefender, Cisco, Intel, Novetta, TU Munich and Zentific. From 2014 to 2015, the XenAccess and mem-events APIs were re-architected into the Xen Project Hypervisor’s new VMI subsystem, altp2m and other hardware capabilities were added, support for ARM CPUs was introduced, and a production-ready baseline was released in Xen 4.6.

Citrix and Bitdefender collaborated to bring VMI technology to market: XenServer 7.0 introduced its Direct Inspect APIs, built on the Xen Project’s VMI interface. They securely expose the introspection capabilities to security appliances, as implemented by Bitdefender HVI.

What Can Actually Be Done Today?

Coming back to our analogy: what we need is the equivalent of satellite surveillance to observe the entire village. Does VMI deliver? In theory, yes: VMI makes it possible to observe the state of any virtual machine (a house and its surroundings in the village), including memory and CPU state, and to receive events when the state of the virtual machine changes (aka if there is any movement). In practice, the performance overhead of doing this is far too high, despite using hardware capabilities.

In our imagined world that is overrun by Walkers and Abbies, this is equivalent to not having the manpower to monitor everything, which means we have to use our resources to focus on high value areas. In other words, we need to focus on the suspicious activity on system perimeters (the immediate area surrounding each of our houses).

This focus is executed by monitoring sensitive memory areas for suspicious activity. When malicious activity is detected, a solution can take corrective actions on the process state (block, kill) or VM state (pause, shutdown) while collecting and reporting forensic details directly from a running VM.
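
As a purely conceptual sketch of that loop, the structure of such a monitor looks roughly like the code below. The helper functions (monitor_attach, monitor_watch_writes, monitor_next_event, monitor_pause_vm) are hypothetical placeholders, not the Xen VMI, LibVMI or XenServer Direct Inspect APIs; they only illustrate the control flow: register interest in sensitive guest pages, receive an event when the guest writes to them, and react from outside the guest.

#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical introspection primitives, placeholders rather than a real API. */
struct mem_event {
    uint64_t gfn;    /* guest page frame that was written    */
    uint32_t vcpu;   /* virtual CPU that triggered the event */
};
extern void *monitor_attach(const char *domain_name);
extern int   monitor_watch_writes(void *mon, uint64_t gfn);
extern int   monitor_next_event(void *mon, struct mem_event *ev);
extern int   monitor_pause_vm(void *mon);

static bool looks_malicious(const struct mem_event *ev)
{
    /* A real product applies heuristics here (e.g. writes to kernel code
     * pages or syscall tables); for the sketch we flag everything.        */
    (void)ev;
    return true;
}

/* Watch one sensitive guest page from outside the guest and pause the VM
 * as a corrective action when a suspicious write is observed.             */
int watch_sensitive_page(const char *domain, uint64_t sensitive_gfn)
{
    void *mon = monitor_attach(domain);
    if (!mon || monitor_watch_writes(mon, sensitive_gfn) != 0)
        return -1;

    struct mem_event ev;
    while (monitor_next_event(mon, &ev) == 0) {
        if (looks_malicious(&ev)) {
            fprintf(stderr, "suspicious write to gfn 0x%" PRIx64 " by vcpu %u\n",
                    ev.gfn, ev.vcpu);
            return monitor_pause_vm(mon);   /* act from outside the guest */
        }
    }
    return 0;
}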

Think of a laser beam on our satellite that is activated whenever an Abbie or Walker approaches our house. In technical terms, the satellite and laser infrastructure maps to XenServer’s Direct Inspect API, while the software which controls and monitors our data maps onto Bitdefender’s Hypervisor Introspection.

It is important to stress that monitoring and remedial action take place from the outside, using the hypervisor to provide hardware-enforced isolation. This means that our attackers can disable neither the surveillance nor the laser beams.

Of course, no security solution is perfect. This monitoring software may not always detect all suspicious activity, if that activity does not impact VM memory. This does not diminish the role of file-system-based security; we must still be vigilant, and there is no perfect defense. In our village analogy, we could also be attacked through underground infrastructure such as tunnels and sewers. In essence this means we have to use VMI together with traditional anti-malware software.

How does VMI compare to traditional hypervisor-facilitated anti-virus solutions such as vShield? In our analogy, these solutions require central management of all surveillance equipment that is installed in our houses (CCTV, alarmed doors/windows, …), while the monitoring of events is centralized very much like a security control centre in our village hall. Although such an approach significantly simplifies monitoring and managing what goes on within virtual machines, it does not deliver the extra protection that introspection provides.

You can find more information (including some demos) about VMI, XenServer Direct Inspect API and BitDefender Hypervisor Introspection here:

Xen Project Virtual Machine Introspection

Conclusion

The development of VMI and its first open source and commercial applications show how the Xen Project community is innovating in novel ways, and is capable of bringing revolutionary technologies to market. The freedom to see the code, to learn from it, to ask questions and offer improvements has enabled security researchers and vendors such as Citrix and Bitdefender to bring new solutions to market.

It is also worth pointing out that hardware-enabled security technology is moving very fast: only a subset of Intel’s #VE and VMFUNC extensions are currently being deployed in VMI. Making use of more hardware extensions carries the promise of combining the protection of out-of-guest tools with the performance of in-guest tools.

What is even more encouraging is that other vendors such as A1Logic, Star Lab and Zentific are working on new Xen Project-based security solutions. In addition, the security focused, Xen-based OpenXT project has started to work more closely with the Xen Project community, which promises further security innovation.

A few of these topics will be discussed in more detail during the Xen Project Developer Summit happening in Toronto, Canada from August 25 – 26, 2016. You can learn more about the event here.