Canonical is betting that LXD, which it calls the “pure-container hypervisor,” can beat VMware, KVM and other traditional hypervisors. To see for myself, I recently gave it a whirl. Here’s what I found.
By “pure-container hypervisor,” Canonical means a hypervisor that works by creating containers running on top of the host system, just like Docker. There is no hardware emulation involved. Because LXD containers have much less overhead than traditional virtual machines, they theoretically can support many more guest operating systems than traditional hypervisors, while also delivering better performance.
Experimenting with LXD: The Good and the Bad
I don’t have a full data center on hand for comparing an LXD stack to a VMware or KVM alternative. What I do have, however, is a laptop running Ubuntu 16.04, which I used to run LXD. Here is what I liked about it:
- It’s easy to install. A simple apt-get install lxd is all it takes. That beats VMware, which is more annoying to set up on Ubuntu (although this matters less in a data center production environment). KVM is just as easy to install, though, so there is no difference in that regard.
- You can pull prebuilt container images from public repositories, and LXD comes preconfigured to pull container images for various versions of Ubuntu. That’s handy. Virtual machine images would be more difficult to import.
- My LXD containers had no noticeable impact on system performance on the host, even when I had multiple containers running at once. They did not consume extra memory or CPU (as far as top revealed, at least). That is a nice advantage over traditional virtual machines, which use up more resources on the host even when they are sitting idle.
- It was easy to start a container using the prebuilt image, log into it with lxc exec my-container -- /bin/bash, install an application inside the container and then access that application from the host. As proof, here’s an NGINX instance that is running inside an LXD container (which has IP 10.143.95.184) and accessed from Firefox on my laptop:
All in all, starting this container and installing NGINX inside it was a very Docker-like experience. With minimal setup and just a few commands, I had an application running inside an isolated environment.
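For anyone who wants to retrace those steps, here is roughly the sequence of commands I ran. The container name my-container matches the example above; the rest is a sketch, and the exact prompts from lxd init will vary with your LXD version:

```shell
# Install LXD from the Ubuntu repositories and run the setup wizard
sudo apt-get install -y lxd
sudo lxd init

# Launch a container from the prebuilt Ubuntu 16.04 image
# (LXD comes preconfigured with the "ubuntu:" image remote)
lxc launch ubuntu:16.04 my-container

# Open a shell inside the container and install NGINX there
lxc exec my-container -- /bin/bash
#   (inside the container:)
#   apt-get update && apt-get install -y nginx
#   exit

# Find the container's IP address, then point a browser at it from the host
lxc list my-container
```

The lxc list output includes the container’s IPv4 address, which is what I plugged into Firefox to reach the NGINX welcome page.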
On the other hand, LXD is not perfect. This is what I would like to see improved:
- To configure it, you have to run lxd init from the command line interface (CLI). This starts a wizard of sorts, which asks questions related to networking and storage and configures your system according to what you tell it you want. While the tool worked well enough, it seems a little buggy. For example, sometimes it implies that you can just press enter in response to a prompt to select the default option, but other times you have to enter an option explicitly, even if a default is displayed. This was not a major issue, but it made LXD feel more like a beta technology than something that is production-ready.
- It took 13.092 seconds to start an Ubuntu 16.04 LXD container, which, as far as I could tell, is basically a containerized version of Ubuntu Server. That’s not terribly fast; I can start an Ubuntu Server virtual machine using KVM on the same laptop in about the same amount of time. VirtualBox takes longer, but I wouldn’t use VirtualBox for data center application hosting.
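For what it’s worth, the startup measurement above is easy to reproduce; a rough sketch (the container name timing-test is illustrative, and the first run will be slower if the image has not been downloaded yet):

```shell
# Time how long LXD takes to launch a fresh Ubuntu 16.04 container
time lxc launch ubuntu:16.04 timing-test

# Remove the test container afterward
lxc delete --force timing-test
```

Note that this times the launch command itself; the container’s own boot process may continue for a moment after the command returns.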
My overall impression is this: It’s still rough around the edges, and when it comes to startup time for virtual appliances, I did not see a significant difference between LXD and traditional hypervisors. (To be fair, I was not doing an apples-to-apples comparison of virtual machine images.)
Yet, once you get past those limitations, LXD provides a very streamlined method for starting virtual machines. And while I did not attempt to max out my laptop by seeing just how many containers I could run, the evidence I saw suggested that it can support a very large number of virtual machine instances without cutting into host performance. That’s a big advantage.