The debate over whether containers and hypervisors complement each other or whether containers eliminate the need for hypervisors altogether remains contentious. Most containers today are deployed on top of hypervisors, largely because container tooling is still immature and because of concerns about isolating application workloads running inside containers.
Attempting to establish a middle ground between containers and hypervisors, the OpenStack Foundation created Kata Containers. The project combines the Clear Containers virtualization software developed by Intel with runV technology from Hyper.sh to let containers that comply with the Open Container Initiative (OCI) specification and the container runtime interface (CRI) for Kubernetes run on a lightweight hypervisor. Supporters of the project include Arm, Canonical, Dell/EMC, Intel, Red Hat, 99cloud, AWcloud, China Mobile, City Network, CoreOS, EasyStack, Fiberhome, Google, Huawei, JD.com, Mirantis, NetApp, SUSE, Tencent, Ucloud and UnitedStack.
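Because Kata Containers presents itself as an OCI-compliant runtime, a Kubernetes cluster can opt individual workloads into it while other pods keep using the default runtime. A minimal sketch of what that looks like, assuming a Kata handler named `kata` has already been registered with the node's container runtime (the handler name, pod name and image below are illustrative, not part of the Kata 1.0 announcement):

```yaml
# Hypothetical RuntimeClass mapping the name "kata" to a
# Kata Containers handler configured in the node's CRI runtime.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata
handler: kata
---
# A pod opting into the Kata runtime; it schedules like any other
# pod but its containers run inside a lightweight VM on the node.
apiVersion: v1
kind: Pod
metadata:
  name: isolated-nginx
spec:
  runtimeClassName: kata
  containers:
    - name: nginx
      image: nginx:stable
```

The point of this indirection is that the application manifest stays unchanged except for one field; the decision to add hypervisor-based isolation is made per pod rather than per cluster.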
Anne Bertucio, marketing coordinator for the OpenStack Foundation, says the community working on the Kata Containers 1.0 project will define application programming interfaces (APIs) that make it possible to support plugins as well. Those plugins will allow Kata Containers to be integrated with a variety of legacy hypervisors, giving organizations the ability to leverage Kata Containers on legacy virtual machines as well as bare-metal infrastructure. The first hypervisor to be supported is the Kernel-based Virtual Machine (KVM), on which OpenStack is based, says Bertucio.
Looking ahead, the Kata Containers community has committed to adding support for device pass-through and hardware accelerators.
Kata Containers is, of course, not the only effort under way to marry containers and hypervisors. Other projects aim to deploy hypervisors within a container, and Google has launched gVisor, an open source lightweight sandbox for isolating Docker containers running on Kubernetes clusters that provides an alternative to relying on virtual machines.
Where to run containers has become an issue because developers would like to eliminate the overhead that comes with running workloads on hypervisors that require a full guest operating system. Containerized applications running on bare-metal servers simply run faster. The trouble is that most IT organizations have security concerns about deploying containers without any ability to isolate workloads, while the tooling they have in place is optimized for one hypervisor or another. During development, most developers are content to build applications on top of hypervisors. But once an application moves into production, interest in bare-metal servers increases significantly.
It’s still too early to say where containers will most often run in production environments. Containers are already ubiquitous in application development, but as those applications become ready for production, the debate over where best to deploy them will only grow more heated. Most IT operations teams today have little to no experience running workloads on bare-metal servers; their expertise lies with legacy virtual machines such as VMware. But as the cost of commercial virtual machine software gets factored into the overall container equation, alternative approaches become that much more attractive.