One of the issues IT organizations routinely face is the assumption that certain trade-offs must be made when choosing to deploy containers on bare metal servers instead of virtual machines. Most IT organizations today opt to deploy containers on virtual machines because they already have the tooling in place to manage virtual machines. The virtual machine is also perceived to be more secure because it serves to better isolate container workloads.
CoreOS, however, stood much of that thinking on its proverbial head by unveiling a version of its Quay Container Registry offering that is now 80 percent faster in terms of startup time. Based on the open-source Kubernetes container orchestration framework, the latest version of Quay is much faster, CoreOS CTO Brandon Philips says, because it actually makes use of virtual machines that have been deployed on top of containers.
To achieve that, CoreOS deploys instances of containers hosting virtual machines on the Packet public cloud service, which makes bare metal servers available as cloud infrastructure. That instance of Quay can then use container files and source code to create a container image, which can in turn be deployed on, for example, an Amazon Web Services (AWS) cloud.
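As a rough sketch of the build-and-push flow described above (the image and registry names here are purely illustrative, not taken from CoreOS's actual setup), the workflow of turning a container file plus source code into an image hosted in a registry such as Quay typically looks like this:

```shell
# Hypothetical example: build an image from a container file and source,
# then push it to a Quay repository so it can be pulled from, say, AWS.
docker build -t myapp:latest .                        # build from the container file + source in this directory
docker tag myapp:latest quay.io/example/myapp:latest  # retag for the Quay registry namespace
docker push quay.io/example/myapp:latest              # push; any cloud host can now pull the image
```

The same three steps apply regardless of where the build itself runs, which is why speeding up that build environment matters.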
Beyond making it faster to create those images using Kubernetes, this implementation of Quay serves as an example of how IT organizations can have their proverbial container cake and eat it, too. The virtual machines provide isolation from an IT security perspective, while the containers running on bare metal servers provide the means of increasing IT infrastructure utilization rates.
Naturally, it remains to be seen just how much this approach to running virtual machines on containers might be employed in other use cases. But it does serve to illustrate how two forms of virtualization technologies can be combined to advance container adoption. Most IT organizations probably will continue to run containers on top of virtual machines for the foreseeable future. But many independent software vendors might opt to run virtual machines inside containers to address both security and performance scalability concerns.
Many IT organizations are already coming to the conclusion that just about every legacy technology they have can be wrapped in a container to make an application both more accessible and easier to port. While that container might add some processing overhead, the legacy application becomes a lot more accessible because it can be invoked using standard container application programming interfaces (APIs). Down the road, an IT organization can then decide whether it wants to use containers to further deconstruct that legacy application into a series of more granular microservices. Given that capability, deploying a virtual machine inside a container is simply one more step along the same path.
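The wrap-a-legacy-app pattern above can be sketched in a few commands (a minimal, hypothetical example; the image name, container name, and API version are illustrative assumptions):

```shell
# Hypothetical sketch: wrap a legacy application in a container image,
# then manage it through the standard container APIs instead of the
# app's own tooling.
docker build -t legacy-app:1.0 .            # the container file describes the legacy runtime
docker run -d --name legacy legacy-app:1.0  # start the wrapped app as an ordinary container

# Any tool that speaks the standard Docker Engine API can now inspect
# and manage the legacy app like any other container:
curl --unix-socket /var/run/docker.sock http://localhost/v1.41/containers/legacy/json
```

The point is that once the app is behind the container API, orchestration and monitoring tools need no knowledge of the legacy technology inside.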
IT organizations these days spend a lot of time trying to figure out how to get one virtual machine format or another to run on various types of cloud platforms. The deployment of virtual machines themselves on containers may one day soon render that entire conversation moot.