Kubernetes Deployment for Edge Computing

Unlike cloud computing, which often requires thousands of remote physical servers to power various processes, edge computing needs significantly fewer resources. This is because edge computing processes are designed to run on the devices they power. You can implement edge computing on internet of things (IoT) devices, computers or servers.

A major benefit of edge computing is that it reduces long-distance communication between clients and servers. The result is often quick or immediate access to resources and data. Additionally, an edge process does not need to send raw streams of sensor data. Rather, the process sends only critical information, ensuring that IoT systems filter data before moving it to the cloud. Often, this ensures that sensitive and private information remains on the device and is not sent unnecessarily.

Why Use Kubernetes for Edge?

Even though the majority of edge processes occur on devices, edge nodes are essentially an additional layer in the IT infrastructure. Enterprises and third-party service providers need to manage this layer, whether it is added to their on-premises infrastructure, cloud ecosystem or any hybrid data center architecture.

To effectively manage edge processes, enterprises need a management layer that provides dynamic orchestration and automation. However, the entire edge architecture consists of different hardware and software resources. Kubernetes can meet these needs, for various reasons.

Kubernetes provides an agnostic infrastructure capable of seamlessly managing diverse workloads running on different compute resources. Additionally, Kubernetes can orchestrate and schedule resources as they move from the cloud to the edge and vice versa. You can also reuse cloud configurations when managing and deploying edge devices with Kubernetes.

Typically, control plane and analytics services are set up in the cloud, and data flows back and forth between cloud and edge locations. Kubernetes can provide a common paradigm for automating the processing and execution of instructions, and apply it across all relevant network deployments. This way, you can apply policies and rulesets to your entire infrastructure. You can fine-tune your policies to serve the unique needs of channels and edge nodes by defining specific configuration requirements.

Additionally, you can use Kubernetes to introduce horizontal scaling for your infrastructure and software development processes. This creates a centralized platform with high availability, which ensures edge nodes are ready to provide low-latency access to different IoT devices. Kubernetes also provides APIs for monitoring and tracking cluster nodes.
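As a sketch of the horizontal scaling mentioned above, the following HorizontalPodAutoscaler manifest scales a workload based on CPU utilization. The Deployment name `edge-gateway` is a hypothetical example, not something from a specific product:

```yaml
# Hypothetical example: autoscale an edge-facing workload on CPU load.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: edge-gateway-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: edge-gateway   # hypothetical Deployment name
  minReplicas: 2          # keep capacity for low-latency access
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # scale out when average CPU exceeds 70%
```

The same manifest works whether the target Deployment runs in the cloud or on edge nodes, which is part of what makes Kubernetes a common paradigm across both.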

Kubernetes Deployment Options for Edge Computing

To run Kubernetes at the edge, you can use the public cloud or server facilities located outside the data center. Alternatively, you can leverage specialized appliances.

Public Cloud

When you choose to run your edge processes in the public cloud, the cloud vendor hosts either your entire edge environment or certain components of it. When you set up the edge as an extension of the public cloud, you can increase the interactivity of cloud front ends.

Notable services offered by cloud vendors include on-premises edge devices, such as Amazon Snowball, which integrates with other public cloud services offered by AWS. Be sure to check compatibility between your Kubernetes practices and those of the cloud vendor before committing.

Typically, public cloud edge hosting is supported by the cloud vendor's hosting options, such as containers, serverless functions and virtual machines (VMs). This ensures that when Kubernetes is connected to the cloud and the edge, it does not see them as separate entities. You can implement this easily, but be sure to include Kubernetes hosting policies that associate each edge component with edge resources. You can do this using taints, tolerations and affinities. Make sure your edge resources are not spread too far from the components they control.
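The taints, tolerations and affinities mentioned above can be combined as in the following sketch. The taint and label key `node-role/edge` is an illustrative choice; use whatever convention your cluster follows. First, edge nodes would be tainted (for example with `kubectl taint nodes <node> node-role/edge=true:NoSchedule`) so that ordinary pods avoid them; the pod below both tolerates that taint and requires an edge node via node affinity:

```yaml
# Hypothetical example: pin a pod to tainted edge nodes.
# The key "node-role/edge" is an illustrative label/taint name.
apiVersion: v1
kind: Pod
metadata:
  name: sensor-aggregator
spec:
  tolerations:
  - key: "node-role/edge"      # allow scheduling onto tainted edge nodes
    operator: "Exists"
    effect: "NoSchedule"
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: node-role/edge   # require a node labeled as edge
            operator: In
            values: ["true"]
  containers:
  - name: aggregator
    image: busybox
    command: ["sleep", "infinity"]
```

The toleration alone only permits scheduling on edge nodes; the affinity rule is what guarantees the pod lands there and stays close to the resources it controls.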

Server Facilities Outside the Data Center

You can deploy your edge environment in one or more server facilities located outside of your data center. This model is typically implemented for industrial IoT (IIoT) workloads with specific requirements for edge processing, which often require servers to be located nearby, at warehouses or factories.

You can configure your edge hosting points as separate clusters or as part of the main data center cluster. When your edge hosting point serves as a resource pool, you might need to use a specialized Kubernetes distribution, such as KubeEdge, which is designed especially for edge-centric use cases. Edge applications can be tightly coupled with data center Kubernetes deployments. In some cases, you can configure the edge and data center to back up each other.
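When the edge hosting point is part of the main cluster, workloads can be steered to edge nodes with an ordinary node selector, as in the sketch below. KubeEdge labels the edge nodes it joins (commonly with `node-role.kubernetes.io/edge`), but verify the exact label your distribution applies before relying on it:

```yaml
# Sketch: schedule a workload onto edge nodes in a mixed cluster.
# The node label below follows KubeEdge's convention; confirm it
# matches what your distribution actually sets.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: edge-logger
spec:
  replicas: 1
  selector:
    matchLabels:
      app: edge-logger
  template:
    metadata:
      labels:
        app: edge-logger
    spec:
      nodeSelector:
        node-role.kubernetes.io/edge: ""   # only place pods on edge nodes
      containers:
      - name: logger
        image: busybox
        command: ["sleep", "infinity"]
```

Because the edge nodes are first-class members of the cluster, the same Deployment machinery (rollouts, rollbacks, health checks) applies at the edge as in the data center.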

Edge deployments often serve a role similar to that of a client running only specialized applications. In these cases, you might not need to integrate the Kubernetes clusters; instead, you can leverage Kubernetes federation to centralize the deployment policies of edge and data center processes.

Specialized Appliances

Some appliances are specifically designed for factories and processing facilities, and you can use them to set up your edge model. The majority of these specialized edge devices use ARM microprocessors instead of AMD or server-centric Intel chips. These devices often come with their own ecosystem of controllers and sensors and are tightly tied to IoT devices. This model is often used in smart buildings.

You can also use non-server edge devices, which typically run lightweight Kubernetes distributions such as K3s to achieve a small device footprint. Alternatively, you can leverage specialized edge devices that do not require any orchestration. Use K3s orchestration when a device can run multiple applications, either separately or concurrently, or when devices can host cooperating application components. Otherwise, you can load applications onto the devices using either network or local storage, according to your needs.
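On constrained devices of this kind, it helps to size workloads explicitly so the scheduler can pack multiple small applications safely. The following sketch shows resource requests and limits for a hypothetical application (the name `device-app` and the specific values are illustrative); the same manifest works under K3s or any standard kubelet:

```yaml
# Sketch: resource requests/limits sized for a small edge device,
# allowing several applications to share limited CPU and memory.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: device-app        # hypothetical application name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: device-app
  template:
    metadata:
      labels:
        app: device-app
    spec:
      containers:
      - name: app
        image: busybox
        command: ["sleep", "infinity"]
        resources:
          requests:          # what the scheduler reserves
            cpu: "50m"
            memory: "32Mi"
          limits:            # hard caps enforced on the device
            cpu: "200m"
            memory: "64Mi"
```

Setting both requests and limits keeps one misbehaving application from starving the others on a device with only a few hundred millicores and megabytes to share.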

Conclusion

There are plenty of benefits to using Kubernetes for edge computing. Most notable among these is the ability to centralize management and standardize policies across edge devices and cloud environments. There are three key ways to leverage Kubernetes for edge environments: using public cloud components, setting up your own server facilities, or using specialized appliances. Each method comes with its own unique benefits, which can serve your particular needs.

Limor Wainstein

Limor is a technical writer and editor with over 10 years' experience writing about topics like cybersecurity, big data, cloud computing and web development. She is the winner of the STC Cross-European Technical Communication Award (2008), and is a regular contributor to technology publications like DevOps.com, Database Trends Magazine, and Toolbox. She is studying towards a Master's Degree in Technology & Society from Bar Ilan University. Limor is a Senior Technical Writer at Cybereason and has written for Imperva, NetApp, RSA, Lexis Nexis and many other companies.
