Beware the Captive Kubernetes

It’s common knowledge that the Kubernetes container orchestration framework is revolutionizing the data center. In many ways, the container revolution represented by Kubernetes is comparable in impact to that caused by the introduction of virtual machines by VMware two decades ago. What’s different this time, however, is how the application ecosystem is responding to the sea change.

Kubernetes is a distributed open source platform with a common API for deploying application software in a containerized environment, in which compute, memory, networking, and storage resources are pooled and easily shared. Resources can be shifted quickly to whichever application programs currently require them.
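As a rough sketch of this shared-resource model, here is a minimal Deployment manifest (the application name and image are hypothetical) that declares resource requests and limits, allowing the Kubernetes scheduler to place it alongside other workloads on the same cluster:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app          # hypothetical application name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: web
        image: example.com/example-app:1.0   # hypothetical image
        resources:
          requests:           # guaranteed baseline share of cluster resources
            cpu: "250m"
            memory: "256Mi"
          limits:             # ceiling; unused capacity remains available to other apps
            cpu: "500m"
            memory: "512Mi"
```

Because each workload declares only what it needs, the scheduler can bin-pack many applications onto the same nodes, and capacity one application is not using can be consumed by another. That is precisely the value a captive, single-application cluster gives up.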

Many vendors in the IT ecosystem are repackaging their applications to allow them to be deployed on existing Kubernetes environments. This allows their customers to realize the elasticity, flexibility, and cost savings that are the hallmarks of Kubernetes and the containerization value proposition.

Some vendors, however, are taking a different tack, and this is disturbing. Rather than changing their applications to work in a shared environment, they are embedding an instance of the Kubernetes container orchestrator into their application.

This means that Kubernetes is being used by the application, but no other application can share that Kubernetes deployment, or the compute resources given to it. It is a captive Kubernetes cluster, if you will. I see this as an attempt by the application vendor to claim Kubernetes compatibility while withholding the real value of Kubernetes from their customers: those customers can no longer share computing resources across multiple applications or rapidly shift those resources based on real-time application need.

When virtual machines were introduced, application vendors embraced the technology and worked to deploy their solutions within virtualized environments, using virtualization as a platform to reduce the inefficiencies, costs, and delays of traditional deployments on physical infrastructure. They did not seek to embed the virtual machine deployment environment into their applications.

Likewise, containerized applications should be designed to run on a shared Kubernetes environment. And they should work on any Kubernetes distribution, whether it's Google Kubernetes Engine, Red Hat OpenShift, Heptio, Canonical, or any other. Enterprises should be free to select the Kubernetes distribution of their choice, and application vendors should work with them all.

I hope what we are seeing today is an anomaly limited to a few rogue application vendors, and not the start of a widespread trend. It would be a shame to see the value of the Kubernetes container orchestrator vastly reduced by a proliferation of application-specific, captive Kubernetes clusters, effectively forming a balkanized collection of unshared compute resources.

Tom Phelan

Tom has spent the last 25 years as a senior architect, developer, and team lead in the computer software industry in Silicon Valley. Prior to co-founding BlueData, Tom spent 10 years at VMware as a senior architect and team lead in the core R&D Storage and Availability group. Most recently, Tom led one of the key projects, vFlash, focused on integrating server-based Flash into the vSphere core hypervisor. Prior to VMware, Tom was part of the early team at Silicon Graphics that developed XFS, one of the most successful open source file systems. Earlier in his career, he was a key member of the Stratus team that ported the Unix operating system to their highly available computing platform. Tom received his Computer Science degree from the University of California, Berkeley.