Tom Phelan, former lead software engineer at VMware and now co-founder and Chief Architect at BlueData, believes that despite the disruption containers are causing, these software separators work best alongside VMs, not as their substitutes (BlueData applies both VMs and containers to the complexities of Big Data).
ContainerJournal challenges Phelan’s assertion, Q&A style.
ContainerJournal: Why do containers work best alongside VMs? What are some prime examples of containers and VMs working side by side?
Tom Phelan: There are usage scenarios where containers by themselves can do the job quite well. Here is some context that will help answer your questions.
If you are using containers as they are generally available via open source, they do have some shortcomings. VMs have had 10 years to mature, and they now provide a stable and secure platform. Containers are relatively new and have not yet had that time to mature. There are still gaps in network support, fault isolation, and security boundaries for containers.
Today, one way to give the user the fault isolation and security that they need for enterprise software deployments, while still minimizing the CPU overhead of VMs, is to run containers within VMs. In other words, rather than running 20 different applications in 20 different VMs on the same server, run the 20 applications in 20 different containers, with 10 of those containers in one VM and the other 10 in a second VM.
The CPU overhead of 2 VMs is much less than that of 20 VMs. Meanwhile, the security of all the containers is increased. If a user (whether through accident or intent) exploits one of the existing security holes in containers and causes another user's applications to fail, the extent of that exploit is limited to the containers running within a single virtual machine. The containers running within the other virtual machine on the server are unaffected.
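The blast-radius argument above can be sketched in a few lines of Python. This is an illustrative model, not a real scheduler: it simply assumes containers are divided evenly across VMs and that a container-level failure can reach everything sharing its VM, as in the scenario Phelan describes.

```python
def blast_radius(num_containers, num_vms, failed_container):
    """Return the set of container indices affected when one container's
    isolation fails: everything sharing that container's VM.
    Assumes containers are split evenly across the VMs (illustrative only)."""
    per_vm = num_containers // num_vms
    vm = failed_container // per_vm
    return set(range(vm * per_vm, (vm + 1) * per_vm))

# 20 containers in 2 VMs: a bad container can affect at most the 10
# containers in its own VM, never the 10 in the other VM.
affected = blast_radius(20, 2, failed_container=3)
print(len(affected))    # 10
print(19 in affected)   # False: the other VM's containers are unaffected
```

With 20 separate VMs the blast radius shrinks to 1, but at the cost of 20 VMs' worth of overhead; the two-VM layout is the middle ground the passage describes.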
There is always a trade-off between cost and the risk of data loss or time loss due to software failure. Typically, the more you pay, the more you reduce the risk of loss. The same formula holds true for containers and VMs.
Consider the real life scenario of multiple software developers sharing a single physical server. As the developers work, their code will be buggy. Giving each developer a VM to test their code wastes a lot of CPU and increases the cost of development. Giving each developer a container to test their code exposes all developers to the errors made by any developer. If the programming error of one developer brings down the whole server, then all developers will lose time – increasing the cost of development.
The solution is to determine how much you can afford to spend on software development, and then partition the developers and their containers into just enough VM groups that the added cost of one developer's mistake impacting other developers is offset by the savings from the improved CPU utilization of running containers instead of VMs.
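This partitioning trade-off can be made concrete with a back-of-the-envelope model. All of the numbers below (per-VM overhead, crash probability, outage cost) are made-up illustrative assumptions, not measurements; the point is only that total expected cost is minimized at some group size between "one VM per developer" and "everyone in one VM."

```python
import math

def expected_cost(num_devs, group_size, vm_overhead, crash_prob, outage_cost):
    """Expected cost of a layout where developers share VMs in groups:
    fixed per-VM overhead, plus the expected cost of one developer's
    crash disrupting every *other* developer in the same VM group."""
    num_vms = math.ceil(num_devs / group_size)
    overhead = num_vms * vm_overhead
    # Each developer crashes with probability crash_prob; each crash
    # costs outage_cost for every other developer sharing the VM.
    expected_outage = num_devs * crash_prob * (group_size - 1) * outage_cost
    return overhead + expected_outage

# Sweep group sizes for 20 developers and pick the cheapest layout.
best = min(range(1, 21),
           key=lambda s: expected_cost(20, s, vm_overhead=50,
                                       crash_prob=0.1, outage_cost=40))
print(best)  # 4 -- neither 20 single-dev VMs nor one shared VM wins
```

Larger groups save VM overhead but raise the expected cost of a mistake; the optimum moves toward smaller groups as crashes get more likely or more expensive, which is exactly the budgeting exercise described above.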
Containers also have some limits in terms of network connectivity. Some container versions are hampered by the lack of IP address persistence and the need to use NAT to provide access to applications running in the containers. These limitations are being addressed in newer container releases, but those releases are not supported on the older versions of the Linux operating system that are the norm in the enterprise.
CJ: What kinds of debate about container orchestration options will increase?
TP: The whole field of container resource managers and data center operating systems is still like the "wild west." The debates will continue as the industry shakes out the weaker offerings and zeroes in on one or two of the strongest options over time. I've seen this happen time and again with other technologies.
Currently, Mesos seems to have a lot of mind share. The arguments over which technology is best revolve around security, networking, complexity, and scalability. For instance, Kubernetes has some limitations in how network connectivity is handled and applications may require some re-architecture, while Mesos has been accused of being overly complex for smaller deployments.
For data center operating systems, the list includes CoreOS, RancherOS, Mesosphere DCOS (which is different from Apache Mesos), Red Hat Atomic (a stripped-down version of RHEL with SELinux), VMware Photon, and Ubuntu Snappy. Here the debates are around performance, scalability, and suitability for enterprise deployments.
CJ: How do you answer any naysayers who would argue that containers alone will suffice?
TP: It would be too broad a statement to say that containers should never be used by themselves. I'm bullish about containers and the container revolution. The proponents of containers as the sole solution (without VMs) are not wrong: there are situations and environments where today's container technology by itself will suffice. Indeed, containers are being used on their own in many different use cases and by many organizations.
I’m also pragmatic and realistic – especially when it comes to meeting the needs of our enterprise customers. Today, there are certainly situations where containers by themselves are not suitable.
And in other situations, code can be added to container-based systems so that they provide a good solution without requiring the use of VMs. That's the case here at BlueData, where we've incorporated additional functionality to run Hadoop and Spark clusters using containers in an enterprise-grade production deployment.
That having been said, the container ecosystem is very active and working to solve these issues and limitations. I expect that over the next few years we will see containers attain the same levels of security and stability that VMs enjoy – and then the “naysayers” will be right.
CJ: What else is important to discuss on this specific topic?
TP: Today, containers are focused on sharing CPU and memory resources. For most nontrivial applications, sharing network and storage resources is at least as important as sharing CPU and memory. We see a lot of room for improvement in how containers use and manage network and storage resources.