Kubernetes’ Move to the Edge: A Great Thing

Chick-fil-A, reported to be on track to become the third-largest U.S. fast food chain behind McDonald’s and Starbucks, is not known only for its addictive chicken sandwiches and waffle fries. Behind the scenes, the company is also at the forefront of adopting a potent technology combination: edge computing and Kubernetes.

According to a Medium post by the company, Chick-fil-A will be running Kubernetes at the edge on 6,000 devices in all 2,000 of its restaurants—part of the chain’s internet of things (IoT) strategy to collect and analyze more data to improve customer service and operational efficiency. One example is being able to predict how many waffle fries should be cooked every minute of the day.

This approach illustrates why Kubernetes has rapidly become a key ingredient in edge computing: a proven and effective runtime platform that helps solve unique challenges across telecommunications, media, transportation, logistics, agriculture, retail and other market segments.

The telco industry in particular has much to gain from edge computing. As competition among operators intensifies, it is essential for telco companies to differentiate themselves with new use cases such as industrial automation, virtual reality, connected cars, sensor networks and smart cities. Telcos are increasingly tapping into edge computing to make sure these applications work seamlessly while also driving down the costs of deploying and managing the network infrastructure.

With data being created at an unprecedented rate, telcos must consider how economical it is to transfer data from the edge to the core and whether it is cheaper to filter and pre-process data locally. Workloads that aren’t subject to demanding latency requirements should continue to be served by the most suitable cloud solutions. However, the coming wave of new use cases requires operators to rethink how the network is architected. And that’s where edge computing comes in.

Edge computing is a variant of cloud computing, with infrastructure services—compute, storage and networking—placed physically closer to the field devices that generate data, eliminating round trips to the data center and increasing service availability.

This provides three benefits. First, lower latency, which boosts the performance of field devices by enabling them not only to respond more quickly but also to respond to more events. Second, lower internet traffic, which helps reduce costs and increase overall throughput, allowing the core data center to support more field devices. Finally, higher availability for internet-independent applications, which can keep operating even during a network outage between the edge and the core.

Interest in edge computing is being driven by exponential data growth from smart devices in the IoT, the coming impact of 5G networks and the growing importance of performing artificial intelligence tasks at the edge, all of which require the ability to handle elastic demand and shifting workloads. As a result, Gartner predicts the share of enterprise-generated data that is created and processed outside a traditional centralized data center or cloud will soar from about 10% today to 75% by 2025.

Edge clouds should have at least two layers. Both maximize operational effectiveness and developer productivity, though each layer is constructed differently.

The first is the infrastructure-as-a-service (IaaS) layer. Besides providing compute and storage resources, the IaaS layer should satisfy the network performance requirements of ultra-low latency and high bandwidth.

The second involves Kubernetes, which has become a de facto standard for orchestrating containerized workloads in the data center and the public cloud, and has emerged as a hugely important foundation for edge computing.

While using Kubernetes for this layer is optional, it has proven to be an effective platform for those organizations getting into edge computing. Because Kubernetes provides a common layer of abstraction on top of physical resources—compute, storage and networking—developers or DevOps engineers can deploy applications and services in a standard way anywhere, including at the edge.
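To make that abstraction concrete, here is a minimal sketch using the official Kubernetes Python client. The workload name, container image and replica count are illustrative assumptions, not details from any deployment described above; the point is that the same few lines work against a data center cluster or an edge cluster, with only the kubeconfig changing.

```python
# A minimal sketch using the official Kubernetes Python client
# (pip install kubernetes). The workload name, image and replica
# count are illustrative assumptions.
from kubernetes import client, config

# Point the kubeconfig at any conformant cluster -- core data center
# or edge site -- and the same deployment spec applies unchanged.
config.load_kube_config()
apps = client.AppsV1Api()

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="sensor-ingest"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "sensor-ingest"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "sensor-ingest"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="ingest",
                        image="example.registry/sensor-ingest:1.0",
                        ports=[client.V1ContainerPort(container_port=8080)],
                    )
                ]
            ),
        ),
    ),
)

# Create the workload; Kubernetes handles scheduling onto whatever
# physical or virtual nodes back the cluster.
apps.create_namespaced_deployment(namespace="default", body=deployment)
```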

Kubernetes also enables developers to simplify their DevOps practices and minimize time spent integrating with heterogeneous operating environments, leading to happy developers and happy operators.

So how can an organization deploy these layers?

The first step is to think about the physical infrastructure and what technology can manage it effectively, converting raw hardware into an IaaS layer.

This requires operational primitives for hardware discovery, giving operators the flexibility to allocate compute resources and repurpose them dynamically.
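As one illustration of such a primitive (an assumed toolchain for this sketch, not one prescribed above), the libvirt Python bindings can report the hardware inventory of a KVM host and the virtual machines already carved out of it:

```python
# A sketch of hardware discovery against a KVM host, using the libvirt
# Python bindings (pip install libvirt-python). The connection URI is
# an assumption; any reachable hypervisor URI would do.
import libvirt

conn = libvirt.open("qemu:///system")

# getInfo() returns the host's hardware profile in one call.
model, mem_mb, cpus, mhz, numa_nodes, sockets, cores, threads = conn.getInfo()
print(f"host: {model}, {cpus} CPUs @ {mhz} MHz, {mem_mb} MiB RAM")
print(f"topology: {numa_nodes} NUMA node(s), {sockets} socket(s) x {cores} core(s) x {threads} thread(s)")

# Existing allocations are visible too, which is what makes it possible
# to repurpose compute resources dynamically.
for dom in conn.listAllDomains():
    print(dom.name(), "active" if dom.isActive() else "defined")
conn.close()
```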

Technology exists to automatically create edge clouds based on KVM pods, which effectively enable operators to create virtual machines with pre-defined sets of resources (RAM, CPU, storage and over-subscription ratios).
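Building on the discovery sketch above, composing a virtual machine with a pre-defined resource profile might look like the following; the VM name, disk path and resource values (2 vCPUs, 4 GiB RAM) are illustrative assumptions, and over-subscription policy is left to the hypervisor.

```python
# A sketch of composing a KVM virtual machine with a pre-defined
# resource profile, again via the libvirt Python bindings. The name,
# disk path and profile values are illustrative assumptions.
import libvirt

DOMAIN_XML = """
<domain type='kvm'>
  <name>edge-node-01</name>
  <vcpu>2</vcpu>
  <memory unit='MiB'>4096</memory>
  <os><type arch='x86_64'>hvm</type></os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/edge-node-01.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
  </devices>
</domain>
"""

conn = libvirt.open("qemu:///system")
dom = conn.defineXML(DOMAIN_XML)  # register the VM with the hypervisor
dom.create()                      # boot it
print(f"started {dom.name()}")
conn.close()
```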

Once discovery and provisioning of physical infrastructure for the edge cloud is complete, the second step is to choose an orchestration tool that will make it easy to install Kubernetes, or any software, on the edge infrastructure.

Then, voilà! It’s time to deploy the environment and start onboarding and validating the application.
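Validation can be as simple as asking the cluster whether its nodes and the newly onboarded workload are healthy. Here is a minimal sketch with the Kubernetes Python client, where "fry-predictor" is a hypothetical workload name riffing on the waffle-fry example earlier in the article:

```python
# A sketch of post-deployment validation with the Kubernetes Python
# client. "fry-predictor" is a hypothetical workload name.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()
apps = client.AppsV1Api()

# Every node in the edge cluster should report Ready.
for node in core.list_node().items:
    ready = next(c.status for c in node.status.conditions if c.type == "Ready")
    print(f"{node.metadata.name}: Ready={ready}")

# The onboarded application should have all of its replicas available.
dep = apps.read_namespaced_deployment("fry-predictor", "default")
print(f"{dep.metadata.name}: {dep.status.available_replicas}/{dep.spec.replicas} replicas available")
```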

Using Kubernetes, companies can run containers at the edge in a way that maximizes resources, makes testing easier and allows DevOps teams to move faster and more effectively as these organizations consume and analyze more data in the field.

It will be fascinating to watch as more and more organizations adopt this model in the years to come.


Carmine Rimi is a Kubernetes lead at Canonical, the company behind Ubuntu.
