This is a guest blog post by Wei Liu, one of our Google Summer of Code students. Please welcome Wei into the community.
Hi, all. I'm Wei Liu, a graduate student from Wuhan University in Wuhan, Hubei, China. Our university is said to be one of the most beautiful universities in China. I have been doing Xen development for the last two years. My research interests include virtual machines, operating systems, and their security. When I'm not working, I read science fiction and watch movies. My favorite science fiction novel is "The Three-Body Problem". I also play football and the harmonica.
It's my honor to be accepted to GSoC 2011 and to work with the Xen community. My project is VirtIO on Xen. Let me talk a little bit about it.
As you all know, VirtIO is a generic paravirtualization I/O framework, used mainly by KVM. It should not be too hard to port VirtIO to Xen. Once that is done, Xen will have access to the Linux kernel's VirtIO interfaces, and developers will have an alternative way to deliver PV drivers besides the original ring-buffer flavor.
The project involves:
- Modifying upstream QEMU
- Replacing the KVM-specific interface with a generic QEMU interface
- Modifying Xen / Xen tools to support VirtIO
- Modifying the Linux kernel VirtIO interfaces
The project will take two usage scenarios into consideration: PV-on-HVM and normal PV. These two scenarios require working on different sets of functionality:
- XenBus vs. virtual PCI: how the channel between frontend and backend is created;
- PV vs. HVM: how events are delivered and handled.
In the PV-on-HVM case, the virtual PCI bus will be used to establish a channel between Dom0 and DomU. In some sense, it makes no difference on the Linux kernel side.
In the normal PV case, QEMU needs to use event channels to send and receive notifications, and the foreign mapping functions in libxc / libxl to map memory pages. XenBus / Xenstore will be used to establish a channel between Dom0 and DomU. The Linux VirtIO driver should use Xen's event channel as its kick / notify mechanism.
When the porting is finished, I will carry out performance tests with standard tools such as ioperf, netperf, and kernbench, and write a short report based on the results.
This is a brief introduction to the project. Any comments are welcome.