Kubernetes in the Enterprise: A Primer

As Kubernetes moves deeper into the enterprise, its growth is having an impact on the ecosystem at large.

When Kubernetes came on the scene in 2014, it changed the way companies build software, and it continues to do so. Large companies have backed it, causing a ripple effect across the industry that has touched both open source and commercial systems. To understand how K8S will continue to affect the industry and change the traditional enterprise data center, we must first understand the basics of Kubernetes.

Containers and Why We Care

A container is a piece of software that packages code and its dependencies (system tools, runtime, libraries, binaries, etc.) and runs it on a host machine’s OS kernel in an isolated environment. Containers offer several benefits. The first is portability: the same software runs consistently on any infrastructure. Gone are the days of the excuse, “It works fine on my laptop.” Containers solve that issue because the same image behaves the same way on a laptop, in the data center or in the cloud.

Another benefit is resource efficiency and speed. Since containers don’t virtualize hardware, multiple containers can share the host OS’s resources. Essentially, this means many more containers can run simultaneously on the same machine, lowering costs considerably. Containers are also very fast to start up; if you’ve ever heard the “serverless” buzzword, this fast startup is a large part of what makes it possible.
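To make this concrete, here is a minimal sketch using the Docker SDK for Python; it assumes a local Docker engine and the docker package are installed, and the image and command are purely illustrative. Because the image carries its own dependencies, the same call behaves the same way on a laptop, a CI runner or a cloud VM.

```python
# Minimal sketch with the Docker SDK for Python (pip install docker).
# Assumes a local Docker engine is running; image and command are illustrative.
import time

import docker

client = docker.from_env()

start = time.time()
# The alpine image bundles everything the command needs, so this behaves
# identically wherever a container runtime is available.
output = client.containers.run(
    "alpine:3.19", ["echo", "hello from a container"], remove=True
)
print(output.decode().strip())
# Startup is typically well under a second once the image has been pulled.
print(f"container ran in {time.time() - start:.2f}s")
```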

While containers on their own bring a lot to the table, the industry-changing benefits become apparent when one takes the next logical step: container orchestration. This is where K8S comes in.

Container Orchestration and Why It’s Important

Modern applications, especially large ones, are often no longer monoliths; rather, they consist of several loosely coupled components that need to communicate and work in tandem. These could include, for example, services for ingesting social media data streams, ETL pipelines or APIs that serve analytics dashboards. These services can run in separate containers, allowing developers to release, deploy and scale each one independently. This offers a clean separation of concerns, enabling faster release cycles for key components as well as efficient resource allocation. The same architecture, however, raises challenges of its own: automated deployment and replication of containers, rolling updates, high availability when a container fails, secure communication between containers and more. Container orchestration is what addresses these challenges.
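As a rough illustration of what this looks like in practice, the sketch below uses the official Kubernetes Python client to declare a Deployment with three replicas and a rolling update strategy. The names, image and namespace are illustrative assumptions, it requires a reachable cluster configured in ~/.kube/config, and the manifest is passed as a plain dict that mirrors the YAML you would normally hand to kubectl.

```python
# Minimal sketch with the official Kubernetes Python client (pip install kubernetes).
# Assumes a cluster is reachable via ~/.kube/config; names and image are illustrative.
from kubernetes import client, config

config.load_kube_config()

deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "analytics-api"},
    "spec": {
        "replicas": 3,  # keep three copies running; failed pods are replaced automatically
        "selector": {"matchLabels": {"app": "analytics-api"}},
        # Roll out new versions one pod at a time instead of all at once.
        "strategy": {"type": "RollingUpdate", "rollingUpdate": {"maxUnavailable": 1}},
        "template": {
            "metadata": {"labels": {"app": "analytics-api"}},
            "spec": {
                "containers": [
                    {"name": "api", "image": "example.com/analytics-api:1.0"}
                ]
            },
        },
    },
}

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```

Once the Deployment exists, the cluster continuously reconciles toward it: replication, replacement of failed pods and rolling updates all happen without further manual steps.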

These features are among the pillars of the modern cloud and partly explain why K8S is now ubiquitous. It is equally easy to see why this approach is a perfect fit for stateless apps. But what about the needs of enterprise systems?

Kubernetes: Tackling Challenges in the Enterprise

Managing State

Let’s address the elephant in the room straight away. While managing state is not an enterprise-specific challenge, it is an important one. Stateful applications such as databases, caches and message queues are harder to make portable, since state must be preserved whenever a container starts, stops or is replicated. This becomes particularly difficult in a distributed or even multi-cloud environment.

K8S tries to address this mainly through Volumes, PersistentVolumes and StatefulSets. In practice, these options are great to have and cover many scenarios, but for the time being there are still many they do not, and the sheer complexity of containerizing stateful apps often outweighs the benefits in production. Managing storage and containerizing stateful apps remains a hot topic, and a lot of effort is currently underway (e.g., Ceph, Rook, KubeDirector, KubeDB, Red Hat’s Operator Framework).
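As a small illustration of the building blocks that do exist, the sketch below requests a PersistentVolumeClaim with the Python client; the claim name, size and StorageClass are illustrative assumptions. A StatefulSet can then mount such a claim so that each replica keeps its own data across restarts and rescheduling.

```python
# Minimal sketch: requesting persistent storage via a PersistentVolumeClaim.
# Assumes a reachable cluster and an existing StorageClass named "standard";
# the claim name and size are illustrative.
from kubernetes import client, config

config.load_kube_config()

pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "db-data"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "resources": {"requests": {"storage": "10Gi"}},
        "storageClassName": "standard",  # assumption: this StorageClass exists
    },
}

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)
```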

Security

This is a big deal in the enterprise world. Despite their many advantages, containers do not offer the same level of isolation as VMs, and multi-tenancy, in particular, can be a challenge. Many companies are working to make containers more secure; for example, Google has open-sourced gVisor, which integrates nicely with K8S, in a bid to bring better isolation to containers.
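As a rough sketch of what this looks like from the K8S side, the pod below opts into the gVisor sandbox by selecting a RuntimeClass. It assumes the cluster operator has already installed gVisor and registered a RuntimeClass named “gvisor”; the pod name and image are illustrative.

```python
# Minimal sketch: running a pod under gVisor via a RuntimeClass.
# Assumes gVisor is installed on the nodes and a RuntimeClass named "gvisor"
# has been registered; pod name and image are illustrative.
from kubernetes import client, config

config.load_kube_config()

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "sandboxed-api"},
    "spec": {
        # Route this pod to the gVisor sandbox instead of the default runtime.
        "runtimeClassName": "gvisor",
        "containers": [{"name": "api", "image": "example.com/analytics-api:1.0"}],
    },
}

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```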

High-Performance Computing (HPC)

Enterprise data centers typically run a variety of workloads on different types of servers. For example, GPU machines are meant for intensive compute operations such as ML/AI pipelines. To route workloads correctly, K8S uses taints and tolerations to ensure that pods are scheduled onto appropriate nodes. This essentially allows each workload to run on the appropriate infrastructure and can be useful in other cases too, such as running workloads on machines within a DMZ.
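As a brief illustration, the sketch below shows the toleration side. It assumes a GPU node has already been tainted (for example with kubectl taint nodes gpu-node-1 gpu=true:NoSchedule) and labeled; the names, label and image are illustrative.

```python
# Minimal sketch: a pod that tolerates a GPU taint and targets GPU nodes.
# Assumes the node was tainted (gpu=true:NoSchedule) and labeled hardware=gpu;
# names, label and image are illustrative.
from kubernetes import client, config

config.load_kube_config()

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "training-job"},
    "spec": {
        # Only pods that tolerate the taint may be scheduled onto the GPU nodes...
        "tolerations": [
            {"key": "gpu", "operator": "Equal", "value": "true", "effect": "NoSchedule"}
        ],
        # ...and the node selector steers this pod onto one of them.
        "nodeSelector": {"hardware": "gpu"},
        "containers": [{"name": "trainer", "image": "example.com/ml-trainer:1.0"}],
    },
}

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```

The taint keeps ordinary workloads off the expensive GPU machines, while the toleration and node selector together make sure the training job lands there.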

Multi-cloud

Enabling hybrid and multi-cloud deployments and avoiding vendor lock-in are key requirements for the modern enterprise. These pose significant technical challenges that cannot be addressed by a single tool; they typically require a combination of technologies and architectural approaches. This is one of the reasons we’ve seen a growth in enterprise K8S offerings such as OpenShift, Docker Enterprise and Google’s Anthos.

An Open Source Success Story

K8S is one of the top open source projects operating under the Cloud Native Computing Foundation (CNCF), which acts as an umbrella organization for K8S and is backed by some of the largest companies in the industry, including Apple, Microsoft, Google, Amazon, SAP and Oracle. As a result, a vast ecosystem of open source technologies has evolved around K8S, ranging from monitoring solutions such as Prometheus and container runtimes such as containerd to package managers such as Helm. Many of the biggest players in the industry are thus incentivized to take part in shaping the future of cloud computing by contributing to the CNCF and, in turn, leveraging the ecosystem for their commercial offerings.

Moving Forward

While enterprises have specific challenges and Kubernetes does a great job of addressing them, it’s worth mentioning that containers are not the answer to everything. That being said, K8S is currently considered the default way to manage containerized systems. It’s also constantly evolving, and companies are working to overcome the challenges outlined above. As K8S continues to evolve, we’ll see increasingly sophisticated technology emerge, especially from the big tech players, as they’re directly involved in enriching the ecosystem and building commercial offerings on top of it.

Clearly, we are at an exciting point in the evolution of cloud-native technologies. As for Kubernetes, it is making ever greater headway into the enterprise world, and this growth is likely to continue, shaping how the ecosystem adapts and evolves.

Kosmas Pouianou

Kosmas is a full-stack engineer with experience in both start-up and enterprise environments. He has a strong focus on user experience and a particular interest in cloud technologies and social entrepreneurship.
