Why Docker Won’t Kill Virtual Machines

Conventional wisdom posits that Docker containers are killing virtual machines. That’s an overstatement. VMware, KVM and other virtual machine platforms still have bright futures. Here’s why.

There’s no doubt that Docker’s debut in 2013 changed the game in significant ways for traditional virtualization. Organizations that had once depended on VMware or KVM for all of their software deployment needs found a new option in Docker. Docker made application containers a practical alternative to virtual machines.

There’s also no doubt that the Docker revolution is forcing VMware and other companies to change their strategy in major ways. They can’t count on keeping their customers simply by providing a compelling alternative to bare-metal servers.

Why Virtual Machines Won’t Die

Still, it’s an overstatement to say that Docker containers will replace traditional virtualization.

VMware, KVM and other hypervisor frameworks are not going anywhere anytime soon, for the following reasons:

  • Some applications don’t run well in containers. For example, applications that require graphical interfaces don’t work well in a containerized environment. (True, you can do GUIs in containers, but it’s not what Docker was designed for.) Virtual machines will remain the deployment platform of choice for applications like these.
  • Containers are not cross-platform. Yes, Docker now runs natively on Windows, as well as on Linux. But a container shares its host's kernel, so you can’t take a Docker container image created for a Windows application and run it on Linux, or vice versa — at least not without a virtual machine in between, which is exactly how Docker Desktop runs Linux containers on Windows. You can, however, take a virtual machine image based on any type of operating system and run it on almost any type of host server. In this way, virtual machines provide cross-platform agility that Docker just can’t match.
  • Containers require a new skillset. Among the current generation of developers and admins, almost no one had heard of containers before Docker came along. In contrast, anyone who has built software or administered a data center since the early 2000s knows about (and has probably used) VMware. IT personnel are learning Docker, sure. But container expertise is much harder to find than virtualization expertise, and it will stay that way for a long while to come. The resulting shortage of staff with container experience is another factor that limits Docker adoption rates.
  • Docker is less user-friendly. Docker was designed by developers, for developers. Although the platform is not as rough around the edges today as it was a few years ago, it’s still hardly something for beginners. Using containers almost always requires you to work from the command line and to understand complex networking and storage concepts, among much else. With virtual machines, this is not the case. Sure, VMware is complicated, but its management tools are far more user-friendly. And if you find VMware hard, you can use a hypervisor like VirtualBox, where you can point and click your way to a running virtual machine in a few minutes.
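The cross-platform point above can be made concrete with a minimal, purely illustrative Dockerfile (the application name here is hypothetical): the `FROM` line binds the image to a specific operating system kernel, so an image built on a Linux base simply cannot run on a Windows container host, and vice versa.

```dockerfile
# Illustrative sketch — "myapp" is a placeholder binary, not a real project.
# The FROM line ties this image to the Linux kernel; a Windows container
# host cannot run it directly, because containers share the host's kernel.
FROM ubuntu:22.04
COPY myapp /usr/local/bin/myapp
CMD ["/usr/local/bin/myapp"]
```

Notably, when Docker Desktop does run Linux images on a Windows machine, it does so by starting a lightweight Linux virtual machine underneath — which rather underlines the article’s point that virtual machines aren’t going anywhere.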

So, if I were you, I wouldn’t go betting against virtual machines just yet. They’ll remain an important part of enterprise infrastructure for many years to come, even as they increasingly coexist with containers.

Christopher Tozzi

Christopher Tozzi has covered technology and business news for nearly a decade, specializing in open source, containers, big data, networking and security. He is currently Senior Editor and DevOps Analyst with Fixate.io and Sweetcode.io.
