Although Kubernetes is the clear industry-standard container orchestrator, IT leaders continue to cite challenges in operating the platform. These challenges escalate when teams oversee multiple Kubernetes distributions across different environments. Organizations are also encountering novel difficulties as they seek to run Kubernetes at the edge and in bare metal environments.
Spectro Cloud recently released a study that benchmarks the current state of Kubernetes adoption throughout the market and showcases common barriers engineers are facing. Strikingly, 86% of respondents say challenges increase exponentially when using multiple Kubernetes environments. Multi-environment deployments will likely grow, as the study found rising multi-cluster deployments and increasing interest in using Kubernetes for edge computing.
Below, I’ll highlight the key findings from the report. I also checked in with Tenry Fu, CEO and co-founder of Spectro Cloud, and Dave Cope, the company’s chief revenue and marketing officer, to gather further insight. According to Fu, “Kubernetes has become a very clear choice for production applications.”
Kubernetes Explores The Edge
A full 43% of production workloads now run in Kubernetes, according to the report. This number is set to rise: the CNCF estimates that by 2023 the majority of applications will use cloud-native technologies.
One surprising finding from the study is the growing use of Kubernetes in edge environments: 35% of respondents already deploy Kubernetes at the edge. One reason the edge looks attractive, says Fu, is that it enables real-time AI/ML processing on data without the high ingress and egress fees associated with moving it to and from the cloud. Operators may also use K8s in bare metal environments with no cloud or data center. For example, Kubernetes has even found its way onto an F-16 jet!
In response to this newfound desire to run Kubernetes at the edge, infrastructure teams will likely require new platforms for centralized management. As a result, automation and configuration tools designed specifically to manage Kubernetes at the edge have emerged, such as KubeEdge, Akri and SuperEdge.
More Environments, More Problems
If we look at the Kubernetes stack, there’s more than just the infrastructure piece, says Fu. There are numerous additional add-ons for concerns like security, RBAC and stateful storage. Organizations must also increasingly deploy to multiple bespoke environments, whether various clouds, different cloud regions, edge, bare metal or hybrid on-premises setups. The report found that 89% of respondents use multiple distributions, and 70% run more than five clusters.
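To give a concrete sense of the add-on surface Fu describes, even a basic access-control policy is its own configuration artifact to author and maintain. A minimal Kubernetes RBAC sketch might look like the following (the namespace, role and group names are hypothetical, for illustration only):

```yaml
# Grant a hypothetical "app-team" group read-only access to pods
# in a "staging" namespace. Names here are illustrative.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: staging
  name: pod-reader
rules:
- apiGroups: [""]          # "" refers to the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: staging
subjects:
- kind: Group
  name: app-team           # hypothetical group
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Multiply this kind of policy across clusters, distributions and environments, and the consistency problem the survey describes starts to take shape.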
More than three-quarters of respondents already run Kubernetes across multiple environments, such as clouds and edge. As more distribution styles emerge, troubleshooting and maintaining consistency across these diverse distributions becomes challenging and can increase performance and availability issues. As a result, 86% say that the challenges grow exponentially when using multiple Kubernetes environments.
In a soup of thousands of vendors and open source utilities, all of these packages evolve and version at different rates, explains Cope. “You can piece together parts, sure, but in production, that doesn’t become manageable quickly,” he said. Operators want the promise of Kubernetes but need to manage it at scale. “People are looking for a platform that can take advantage of innovation and be able to run in production with guardrails.”
Kubernetes: High Optimism, Steep Learning Curve
83% of respondents say they’re satisfied overall with Kubernetes, citing benefits such as improved application performance and availability and increased business innovation. Executives are focused on business outcomes, and in recent years the outcomes of using Kubernetes have become apparent even for mid-size companies. Yet for people on the ground, Kubernetes presents a steep learning curve, says Fu.
According to the report, 80% say Kubernetes has a steeper learning curve than most technologies they use. Fu cites the complex ecosystem as particularly burdensome. Organizations facing a cloud IT skills shortage may lack developers with backend service operator knowledge or configuration skills. Institutionalizing operational knowledge with templates for service orchestration and app deployment flows is one way to improve developer experience and usability around the platform.
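One common way to institutionalize that knowledge is a shared deployment template, where a platform team encodes standards such as labels and resource limits, and application teams supply only a few values. A hypothetical Helm-style sketch (the chart layout and value names are assumptions, not from the report):

```yaml
# templates/deployment.yaml — hypothetical shared chart template.
# App teams set image and replicaCount in values.yaml; labels and
# resource limits are standardized by the platform team.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}
  labels:
    app.kubernetes.io/name: {{ .Release.Name }}
spec:
  replicas: {{ .Values.replicaCount | default 2 }}
  selector:
    matchLabels:
      app.kubernetes.io/name: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app.kubernetes.io/name: {{ .Release.Name }}
    spec:
      containers:
      - name: app
        image: {{ .Values.image }}
        resources:
          limits:
            cpu: "500m"
            memory: "256Mi"
```

The point of a template like this is that a developer who knows nothing about resource quotas or labeling conventions still ships a deployment that follows them.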
“When organizations are using more clusters across multiple environments, they see the challenge increase exponentially,” says Fu. Because clusters behave differently across environments, operating consistently between them remains a common challenge.
Organizations will require consistent methods to run Kubernetes in different environments, and the Kubernetes industry is still evolving to meet these requirements. As I’ve previously covered, the CNCF radar indicates an increase in multi-cluster deployments, highlighting new tools such as Cluster API and kOps to help manage these new paradigms. “Once this becomes manageable, it will truly advance the industry,” says Fu. Ideally, he adds, this sort of infrastructure should be transparent and behind the scenes, governed directly by Ops without developers having to operate it.
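To illustrate why Cluster API appeals to teams wrestling with many clusters: it treats a cluster itself as a declarative Kubernetes object that a management cluster reconciles. A trimmed-down sketch of a Cluster manifest (the cluster name and CIDR are illustrative; the referenced infrastructure and control plane objects depend on the provider in use):

```yaml
# Hypothetical Cluster API manifest: the cluster is just another
# declarative object, reconciled by a management cluster.
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: edge-cluster-01        # illustrative name
  namespace: default
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]
  infrastructureRef:           # provider-specific; Docker shown for illustration
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: DockerCluster
    name: edge-cluster-01
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KubeadmControlPlane
    name: edge-cluster-01-control-plane
```

Because clusters in every environment are described the same way, the same GitOps-style tooling can, in principle, manage cloud, edge and bare metal fleets alike.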
Conducted by Dimensional Research, the Spectro Cloud report surveyed 300 technologists at organizations with 250+ employees that use Kubernetes in production. The full report is available here.