k3OS Takes Kubernetes to the Edge

In the tradition of embedded Linux comes k3OS, an open source project for managing Kubernetes instances on embedded platforms at the edge. k3OS combines a lightweight Linux distro and the k3s Kubernetes distribution in a single package, simplifying the work of standing up clusters quickly and maintaining them over time. Let’s explore how these two paths meet at the edge, and how you can get started running k3OS today.

Smaller Footprint, Faster Boot

Embedded computing systems traditionally go where server-class or PC-class machines can’t. They offer smaller processors, memory, disk and I/O payloads to fit within resource constraints of size, power and cooling. They come in a wide variety of form factors, some with industrial or military levels of ruggedness. Embedded systems are typically pre-programmed with a specific application optimized for their intended use, and their configuration isn’t altered by users.

Programming embedded systems started with assembly language, then moved into compiled high-level languages targeting bare-metal hardware. Visionaries then developed multi-tasking executives with services around a hard-real-time kernel. These grew into full-fledged real-time operating system (RTOS) environments, complete with networking and file system capability.

Embedded programmers flocked to Unix-like platforms for RTOS application development. When Linux debuted, one of its strengths was portability to new processors and new boards. For the first time, self-hosting Linux became a possibility, where the development and target machine were one and the same. Working within embedded resource constraints was still a problem, though.

The answer was embedded Linux. By trimming out unnecessary pieces, a distro could be tightly configured for a specific embedded platform. For instance, if the system lacked a display, those drivers could be eliminated. Ditto for other things not needed for the target application. The resulting smaller footprint meant Linux booted faster, from a smaller local hard drive, network storage or local flash. As processors improved, context switching times and interrupt latency became less of a concern—“soft” real-time proved more than fast enough for many applications.

k3s: 5 Less Than K8s

Kubernetes, also known by its numeronym K8s, orchestrates containerized user workloads, coordinating the compute, networking and storage behind them. Since Google open sourced it in 2014, Kubernetes has become a DevOps favorite, enabling continuous deployment.

If you have server-class platforms, Kubernetes is straightforward to install and manage. What if you want to add embedded platforms to a Kubernetes cluster? As efficient as Kubernetes is, its standard deployment is still too big for many small form-factor computers out there. That’s because it contains everything anyone has thought of so far. Most edge use cases don’t need all that stuff.

Bringing Kubernetes to the edge poses the same problem and calls for the same kind of solution. In February 2019, Rancher Labs launched k3s, an optimized Kubernetes distribution that runs in environments with less than 512MB of RAM. The company stripped out deprecated code, non-default admission controllers, in-tree cloud providers, in-tree storage drivers and more. Server processes were combined into a single process, and the same was done for worker nodes. Instead of Docker, containerd is used as the container runtime. The company also added support for SQLite as an alternative to etcd. The result is a certified Kubernetes distribution in a single binary of less than 40MB. The k3s GitHub repository includes x86_64, ARMv7 and ARM64 ports.

Innovating With Kubernetes at the Edge

IoT, medical diagnostic equipment, industrial control, machine vision and many other applications need more compute power at the edge. As already mentioned, Linux is fast enough to meet many soft-real-time requirements. Certainly, one could grab an embedded Linux distro, put k3s on it and be up and running. That leaves a DevOps team with the task of coordinating patches for two distros. Two risks emerge: overlooked Linux patches leave security holes open, and patching Linux blindly can knock Kubernetes nodes offline, break quorum or trigger workload spikes.

k3OS puts an Ubuntu kernel, packaged with mostly Alpine userspace binaries, under the management of Kubernetes itself. When a k3OS node boots, it comes up in seconds, directly into Kubernetes. Add several k3OS nodes and they form a cluster. A kubectl command triggers upgrades of the combined Linux/k3s distro, draining nodes and sequencing reboots along the way. Wiping out unnecessary components in both Linux and Kubernetes minimizes the attack surface.
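
To make that drain-and-sequence behavior concrete, here is a minimal sketch of the cordon, evict and uncordon cycle a rolling node upgrade performs, written with the official Kubernetes Python client. k3OS automates this for you; the code below only illustrates the pattern, not k3OS’s actual implementation, and the node names are hypothetical.

```python
# Minimal sketch of a rolling node upgrade's cordon -> evict -> uncordon cycle.
# k3OS handles this automatically; this is an illustration, not its actual code.
from kubernetes import client, config

config.load_kube_config()  # assumes a kubeconfig that points at the cluster
v1 = client.CoreV1Api()

def drain_and_release(node_name):
    # Cordon: mark the node unschedulable so no new pods land on it.
    v1.patch_node(node_name, {"spec": {"unschedulable": True}})

    # Evict each pod on the node through the Eviction API, which respects
    # PodDisruptionBudgets (roughly what `kubectl drain` does; a real drain
    # also skips DaemonSet and mirror pods).
    pods = v1.list_pod_for_all_namespaces(
        field_selector=f"spec.nodeName={node_name}"
    ).items
    for pod in pods:
        v1.create_namespaced_pod_eviction(
            name=pod.metadata.name,
            namespace=pod.metadata.namespace,
            body=client.V1Eviction(
                metadata=client.V1ObjectMeta(
                    name=pod.metadata.name,
                    namespace=pod.metadata.namespace,
                )
            ),
        )

    # ... the node would upgrade and reboot at this point ...

    # Uncordon: allow workloads to schedule onto the refreshed node again.
    v1.patch_node(node_name, {"spec": {"unschedulable": False}})

# Upgrade one node at a time so the cluster never loses more capacity
# (or datastore quorum) than necessary.
for name in ["edge-node-1", "edge-node-2"]:  # hypothetical node names
    drain_and_release(name)
```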

The idea of running Kubernetes at the edge, using relatively small embedded platforms as edge nodes, is compelling. It softens the demarcation between an enterprise IT philosophy and an embedded OT philosophy. Teams can choose the right embedded platform, particularly Arm-based units that consume less power and offer unique SoC configurations for I/O. (There’s an easy Arm overlay installation process that can bring a custom bootable Arm image into k3OS.) Once in the Kubernetes cluster, edge nodes running k3OS behave just like bigger nodes in the cloud or on-premises.
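
One way to see that for yourself is to list the nodes with their architecture labels and readiness. The sketch below uses the Kubernetes Python client and the standard kubernetes.io/arch node label; nothing in it is k3OS-specific, and it’s roughly the Python equivalent of asking kubectl for nodes plus their architecture label column.

```python
# List every node's CPU architecture and Ready status; a mixed x86_64/Arm
# edge cluster shows up here like any other set of Kubernetes nodes.
from kubernetes import client, config

config.load_kube_config()  # assumes a kubeconfig that points at the cluster
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    arch = node.metadata.labels.get("kubernetes.io/arch", "unknown")
    ready = next(
        (c.status for c in node.status.conditions if c.type == "Ready"),
        "Unknown",
    )
    print(f"{node.metadata.name}\t{arch}\tReady={ready}")
```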

I’m excited to see how this develops, and if others find it equally compelling. Visit the k3OS site for more details, including a recorded meetup held May 8. Or, head straight to the k3OS GitHub repository to get started. I’d enjoy hearing your k3OS thoughts and experiences in the comments.

Don Dingee

A technologist who started out working on aircraft and missile guidance systems, Don Dingee now runs STRATISET, a strategy consultancy helping businesses navigate digital transformation. For a decade he covered embedded and edge computing, EDA, and IoT technology at Embedded Computing Design and SemiWiki.com. He’s co-author of “Mobile Unleashed”, a history of Arm chips in mobile devices. For fun, he debates sabermetrics and wrestles his Great Pyrenees dog.
