As a container orchestration platform, Kubernetes has a well-deserved reputation for being difficult to both deploy and master. But with the release of version 1.5, the platform is becoming considerably more accessible.
In addition to becoming available on both Linux and Windows Server, the latest release of Kubernetes, unveiled this week by the Cloud Native Computing Foundation (CNCF) arm of The Linux Foundation, includes a mechanism that makes it possible to set up container clusters using a single command. The release also adds a more sophisticated scheduler that can be employed to run jobs and tasks, along with tools for automating the setup of a network overlay within Kubernetes and for implementing policies across that overlay.
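To make the overlay-policy capability concrete, the sketch below shows what a Kubernetes network policy of that era might look like. The resource kind and fields are real Kubernetes API objects (NetworkPolicy was beta around the 1.5 timeframe, under the `extensions/v1beta1` API group), but the names and labels (`api-allow-frontend`, `app=api`, `app=frontend`) are illustrative assumptions, not taken from the article:

```yaml
# Hypothetical policy: only pods labeled app=frontend may send
# traffic to pods labeled app=api in the default namespace.
apiVersion: extensions/v1beta1   # beta API group in the Kubernetes 1.5 era
kind: NetworkPolicy
metadata:
  name: api-allow-frontend
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: api        # the pods this policy protects
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend   # the only pods allowed to connect
```

Note that a policy like this only takes effect when the cluster's network overlay plugin actually enforces NetworkPolicy resources; that enforcement is exactly the integration the 1.5 tooling is aimed at.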
Finally, Kubernetes 1.5 includes support for Helm, a tool that simplifies the deployment of applications on top of the platform.
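In practice, Helm packages an application as a "chart" that can be installed with a couple of commands. The following is a minimal sketch of the Helm 2.x workflow that was current alongside Kubernetes 1.5; it assumes a working cluster and the public `stable` chart repository, and the release name `my-jenkins` is illustrative:

```shell
# One-time setup: install Helm's server-side component (Tiller, Helm 2.x only)
helm init

# Refresh the list of available charts
helm repo update

# Find a packaged application, then deploy it as a named release
helm search jenkins
helm install stable/jenkins --name my-jenkins

# Inspect what is running
helm list
```

Because these commands operate against a live cluster, they are shown as a usage sketch rather than a runnable script.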
At the CoreOS Tectonic Summit this week, David Aronchick, a product manager at Google, also described some of the features being prepped for version 1.6. They include support for federated deployments of Kubernetes, stateful upgrades, support for far larger clusters in terms of node count, higher availability and integrated metrics application programming interfaces (APIs).
As the container orchestration platform matures, however, there is a fair amount of confusion concerning when to rely on Kubernetes to holistically manage compute, storage and networking versus relying on other frameworks such as OpenStack. CoreOS CEO Alex Polvi says OpenStack is really optimized for organizations that need to manage IT at a level of scale on par with a cloud service provider. The average IT organization should be able to get by using Kubernetes.
Of course, Kubernetes was developed by Google to be used by Google engineers rather than the average IT administrator. Polvi says it will fall to vendors such as CoreOS to make the platform more accessible to that administrator, as part of what CoreOS calls its Google Infrastructure for Everyone Else (GIFEE) initiative.
In the meantime, there are multiple efforts underway to containerize various elements of OpenStack. Mirantis, Intel, Google and CoreOS are cooperating on an effort to turn the OpenStack control plane into a set of modules that run inside containers on top of Kubernetes. At the same time, there is a separate OpenStack project, dubbed Kolla, that is also focused on embedding OpenStack technologies inside containers.
Just about everyone involved in these projects says the goal is to mesh Kubernetes and OpenStack together in a way that produces a whole greater than the sum of its parts. In the near term, IT organizations should expect Kubernetes to emerge as the dominant container orchestration platform in terms of the number of vendors that support it. However, that may not stop the average IT administrator from opting for rival approaches to managing container clusters that today offer more polished graphical interfaces.