Tag Archives: Linux containers

Will Docker Replace Virtual Machines?

Docker is certainly the most influential open source project of the moment. Why is Docker so successful? Is it going to replace Virtual Machines? Will there be a big switch? If so, when?

Let’s look at the past to understand the present and predict the future. Before virtual machines, system administrators used to provision physical boxes to their users. The process was cumbersome, not completely automated, and it took hours if not days. When something went wrong, they had to run to the server room to replace the physical box.

With the advent of virtual machines, DevOps teams could install a hypervisor on all their boxes, then simply provision new virtual machines upon request from their users. Provisioning a VM took minutes instead of hours and could be automated. The underlying hardware made less of a difference and was mostly commoditized. If users needed more resources, they would just create a new VM. If a physical machine broke, the admin just migrated or resumed her VMs onto a different host.

Finer-grained deployment models became viable and convenient. Users were no longer forced to run all their applications on the same box just to exploit the underlying hardware to the fullest. One could run a VM with the database, another with the middleware and a third with the webserver, without worrying about hardware utilization. The people buying the hardware and the people architecting the software stack could work independently in the same company, without interference. The virtual machine had become the new interface between the two teams. Solution architects could cheaply deploy each application on a different VM, reducing their maintenance costs significantly. Software engineers loved it. This might have been the biggest innovation introduced by hypervisors.

A few years passed and everybody in the business got accustomed to working with virtual machines. Startups don’t even buy server hardware anymore, they just shop on Amazon AWS. One virtual machine per application is the standard way to deploy software stacks.

Application deployment, however, hasn’t changed much since the ’90s. It still involves installing a Linux distro, mostly built for physical hardware, then installing the required deb or rpm packages, and finally installing and configuring the application that one actually wants to run.

In 2013 Docker came out with a simple yet effective tool to create, distribute and deploy applications, wrapped in a nice format to run in independent Linux containers. It came with a registry that is like an app store for these applications, which I’ll call “cloud apps” for clarity. Deploying the Nginx webserver had just become one “docker pull nginx” away, which is much quicker and simpler than installing the latest Ubuntu LTS. Docker cloud apps come preconfigured and without any of the unnecessary packages that Linux distros unavoidably install. In fact, the Nginx Docker cloud app is produced and distributed by the Nginx community directly, rather than by Canonical or Red Hat.
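To make the workflow concrete, here is what deploying Nginx with Docker looks like. These are standard Docker CLI commands; the container name and the host port mapping are arbitrary choices for this example:

```shell
# Fetch the official Nginx cloud app from the Docker registry
docker pull nginx

# Run it in a container, mapping host port 8080 to Nginx's port 80
# (the name "web" and port 8080 are arbitrary example values)
docker run -d --name web -p 8080:80 nginx

# The preconfigured server is immediately reachable
curl http://localhost:8080/
```

Compare this one-minute workflow with installing a full distro, adding packages and editing configuration files by hand.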

Docker’s outstanding innovation is the introduction of a standard format for cloud applications, together with the registry to distribute them. Instead of VMs, Docker runs cloud apps in Linux containers. Containers had been available for years, but they weren’t popular outside Google and a few other circles. Although they offer very good performance, they have fewer features and weaker isolation than virtual machines. As a rising star, Docker suddenly made Linux containers popular, but containers were not the reason behind Docker’s success; they were incidental.

What is the problem with containers? Their live-migration support is still immature, and they cannot run non-native workloads (Windows on Linux or Linux on Windows). The primary challenge, however, is security: the attack surface of containers is far larger than that of virtual machines. In fact, multi-tenant container deployments are strongly discouraged by Docker, CoreOS and everybody else in the industry. With virtual machines you don’t have to worry about who is going to use them or how they will be used; with containers, only workloads that belong to the same user should run on the same host. Amazon and Google offer container hosting, but they both run each container on top of a separate virtual machine for isolation and security. Perhaps inefficient, but certainly simple and effective.

People are starting to notice this. At the beginning of the year a few high profile projects launched to bring the benefits of virtual machines to Docker, in particular Clear Linux by Intel and Hyper. Both of them use conventional virtual machines to run Docker cloud applications directly (no Linux containers are involved). We did a few tests with Xen: tuning the hypervisor for this use case allowed us to reach the same startup times offered by Linux containers, retaining all the other features. A similar effort by Intel for Xen is being presented at the Xen Developer Summit and Hyper is also presenting their work.

This new direction has the potential to deliver the best of both worlds to our users: the convenience of Docker with the security of virtual machines. Soon Docker might not be fighting virtual machines at all, Docker could be the one deploying them.

A Chinese translation of the article is available here: http://dockone.io/article/598

New Hyper Open Source Project Allows Developers To Leverage Docker and Xen Virtualization Infrastructure

Docker’s popularity and usefulness in cloud systems architectures is evident, having won over countless developers. Yet, it’s not a replacement for mature, proven and security-hardened virtualization technologies that support many of the world’s largest clouds in production.

So, while developers clearly want to take advantage of container technology to easily package applications, they also need a seamless migration path to their existing virtual infrastructure. That’s where our new partner Hyper announced today comes into play.

Hyper, a China-based company with a new open source project of the same name, allows developers to run Docker images with Xen Project virtualization, version 4.5 or later. Download available here.

“Hyper offers the best of both worlds — VMs and containers,” said Xu Wang, Co-Founder at Hyper. “Our technology allows enterprises to leverage any mature, implemented virtualization infrastructure and eliminate unwanted complexity and also take advantage of container technology to easily package applications. We are partnering with Xen more closely to help developers get more out of their hypervisor, while also enjoying the benefits of container technology.”

To learn more, be sure to check out the presentation “Hyper: Make VM Run Like Containers” at Xen Project Developer Summit, Aug. 17-18. You’ll also find them at The Linux Foundation’s new ContainerCon event, as a bronze sponsor.

Hyper Enables the Next-Generation Container-as-a-Service

CaaS (Container-as-a-Service) is gaining traction in cloud computing by leveraging the portability of Docker to avoid various technical limitations of Platform-as-a-Service offerings. However, the shared-kernel approach introduces unnecessary complexity, overcapacity and security concerns.

To eliminate these problems, Hyper uses virtualization to achieve hardware-enforced isolation. Unlike a VM-plus-container approach, Hyper does not employ a GuestOS in the VM instance. Instead a HyperKernel, a customized Linux kernel which includes Docker functionality, is loaded to host the Docker images. Hyper guests also do not require any Linux container technology: they do not need LXC, cgroups, most namespaces or a Docker daemon to run; only the mount namespace is required to support pods of Docker images.
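In practice the workflow stays Docker-like. The sketch below is illustrative only: it assumes the hyperd daemon is installed on a Xen (4.5 or later) or KVM host, and the hyperctl command names follow the project’s documentation of the time and may differ between versions:

```shell
# Pull the image from the Docker registry, just as with Docker itself
hyperctl pull nginx

# Boot the image inside its own minimal VM
# (HyperKernel + image, no GuestOS, no in-guest Docker daemon)
hyperctl run -d --name web nginx

# Inspect the running container-VMs
hyperctl list
```

The point of the design is that each pod gets its own kernel and hardware-enforced boundary, while the packaging and distribution story remains plain Docker images.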

To learn more about this minimalist approach, which also offers sub-second boot, rapid ROI, enhanced security, a minimal resource footprint and low overhead, and more, check out these additional resources:

Continuous innovation is the lifeblood of any project, and Xen Project is fortunate to have an extremely active and growing community. Partners like Hyper allow Xen Project to stay one step ahead of the industry with security, performance and scalability as cloud and computing infrastructures evolve.

New Blogroll Links for Xen Content

I have added a few new Blogroll links to other people who have active blogs on the Xen hypervisor. Please take some time to visit their blogs for interesting information: