The maintainers of the open source KubeSphere framework for managing Kubernetes clusters have released an update that makes it possible to also monitor and manage the consumption of graphics processing units (GPUs).
Feynman Zhou, head of developer relations for the project, says version 3.2 of KubeSphere will make it easier for IT teams building artificial intelligence (AI) workloads to consume the GPU resources used to train them more efficiently. The latest version of KubeSphere makes it possible to create workloads, schedule resources and manage GPU resource quotas by tenant via a graphical user interface, he notes.
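In Kubernetes, per-tenant GPU quotas of the kind described here are typically enforced with a ResourceQuota object scoped to the tenant's namespace. A minimal sketch follows; the namespace name and the limit are illustrative, and it assumes GPUs are exposed as the extended resource `nvidia.com/gpu` by the NVIDIA device plugin (a management UI such as KubeSphere's would generate an equivalent object on the tenant's behalf):

```yaml
# Illustrative example: cap total GPU requests for one tenant's namespace.
# Assumes the NVIDIA device plugin advertises GPUs as nvidia.com/gpu.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: gpu-quota
  namespace: team-ml            # hypothetical tenant namespace
spec:
  hard:
    requests.nvidia.com/gpu: "4"   # pods in this namespace may request at most 4 GPUs in total
```

Once applied, any pod whose GPU request would push the namespace past the limit is rejected at admission time.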
Other new capabilities include custom monitoring dashboards, support for converting Grafana dashboards into KubeSphere monitoring dashboards and cross-cluster scheduling by replicas or weight in multi-cluster and multi-cloud scenarios.
KubeSphere employs a set of wizards, based on the open source KubeKey installer, that are accessed via a web user interface. These wizards enable IT teams to deploy Kubernetes alongside other components needed to stand up a complete environment. This approach simplifies Kubernetes life cycle management by using a plug-and-play platform architecture to add support for additional components as IT teams see fit. KubeKey uses Docker as the default container runtime to install Kubernetes, but now also supports runtimes such as containerd, CRI-O and iSula after Dockershim’s deprecation.
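A KubeKey deployment is driven by a declarative cluster configuration file. The sketch below is illustrative only — host names, addresses, credentials and versions are placeholders — and shows the container runtime switched from the Docker default to containerd, assuming the `v1alpha2` config schema:

```yaml
# Hypothetical KubeKey cluster config (all values are placeholders).
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: node1, address: 192.168.0.10, user: root, password: "changeme"}
  roleGroups:
    etcd: [node1]
    control-plane: [node1]
    worker: [node1]
  kubernetes:
    version: v1.21.5
    containerManager: containerd   # Docker is the default; other runtimes can be named here
```

Feeding a file like this to the `kk` installer stands up both Kubernetes and the selected components in one pass, which is what makes the plug-and-play architecture possible.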
In effect, KubeSphere is a distributed operating system for Kubernetes environments that includes an instance of the open source Jenkins continuous integration/continuous delivery (CI/CD) software, an application store for deploying applications using Helm charts and observability tools, along with an instance of Porter load balancing software. It also supports GlusterFS, Ceph RBD, NFS and LocalPV storage, as well as other plugins compatible with the container storage interface (CSI). KubeSphere will soon add support for an operator to make it simpler to deploy stateful applications, notes Zhou.
KubeSphere also supports the Kubernetes CIS Benchmark and Docker CIS Benchmark to audit compliance and has a built-in auditing log system. The community also plans to add support for Open Policy Agent (OPA) Gatekeeper to help enforce policies and strengthen governance with fine-grained admission control along with support for KubeEye, an open source diagnostic tool for scanning and identifying various Kubernetes cluster issues.
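Gatekeeper expresses admission policies as constraints instantiated from constraint templates. As a hedged illustration of the kind of fine-grained admission control mentioned above, the following uses the `K8sRequiredLabels` template from the public gatekeeper-library; the template itself must already be installed in the cluster, and the required label is an arbitrary example:

```yaml
# Illustrative Gatekeeper constraint: require an "owner" label on every namespace.
# Assumes the K8sRequiredLabels ConstraintTemplate from the gatekeeper-library
# has already been applied to the cluster.
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: ns-must-have-owner
spec:
  match:
    kinds:
    - apiGroups: [""]
      kinds: ["Namespace"]
  parameters:
    labels: ["owner"]
```

With this constraint in place, the admission controller rejects any new namespace that lacks the label, turning governance policy into something enforced rather than merely audited.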
The KubeSphere project was launched by QingCloud, an infrastructure-as-a-service (IaaS) provider based in China. It is now administered by maintainers from multiple organizations as an open source alternative to other frameworks for managing Kubernetes environments. The maintainers of KubeSphere claim it has been downloaded more than 700,000 times by IT professionals located in more than 90 countries. In total, there are now more than 250 contributors to the KubeSphere project.
Zhou said KubeSphere is most often initially adopted by DevOps teams that need to manage Kubernetes environments at scale. However, the framework is designed to make those environments more accessible to the average IT administrator via a web interface. In many IT organizations, the number of Kubernetes clusters deployed has increased so much that IT administrators are now assuming more responsibility for managing them alongside site reliability engineers (SREs). In effect, the management of Kubernetes environments is becoming more of a team sport that requires increased collaboration between SREs, who are typically part of a DevOps team, and a centralized IT organization.
There are, of course, multiple frameworks for managing Kubernetes environments at scale. The maintainers of KubeSphere are making a case for a comprehensive approach rooted in open source software projects that are aggregated to make it easier for IT teams to consume a management platform. It's still too early to say how the battle over Kubernetes management platforms will play out, but as the number of open source options expands, the total cost of managing Kubernetes at scale continues to decline.