OpenELB, a load balancer that plugs into Kubernetes clusters, has become a sandbox-level project under the auspices of the Cloud Native Computing Foundation (CNCF).
Originally developed by the KubeSphere community, OpenELB is designed to manage traffic flowing between Kubernetes clusters at the Layer 2 level.
Feynman Zhou, head of developer relations for the KubeSphere project, says that approach eliminates the need for a separate hardware platform to run load balancing software at a higher level of the networking stack.
Contributors to the OpenELB project are already working on adding support for high availability, based on the open source Keepalived software for managing pools of servers running load balancing software, and for the kube-apiserver interface defined by the Kubernetes technical oversight committee (TOC). Members of the OpenELB project also intend to add support for IPv6, a user interface and EIP/IP pool configuration.
In general, the KubeSphere community created OpenELB to make it simpler to load balance Kubernetes clusters running on a wide variety of bare metal and virtual machine platforms. However, as Kubernetes is deployed more frequently on edge computing platforms, the need for a load balancer that operates at Layer 2 of the networking stack will become more acute, notes Zhou.
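In practice, exposing a workload through OpenELB's Layer 2 mode involves defining a pool of addresses and annotating a standard Kubernetes Service. A minimal sketch follows, assuming OpenELB is installed in the cluster; the CRD, field and annotation names reflect the OpenELB documentation at the time of writing and may differ across releases:

```yaml
# Illustrative only: an address pool (Eip) and a Service using OpenELB's
# Layer 2 mode. Addresses and names here are placeholders.
apiVersion: network.kubesphere.io/v1alpha2
kind: Eip
metadata:
  name: eip-sample-pool
spec:
  address: 192.168.0.100-192.168.0.110  # range OpenELB may assign
  protocol: layer2                      # answer ARP for these IPs
  interface: eth0                       # NIC used to announce them
---
apiVersion: v1
kind: Service
metadata:
  name: demo-svc
  annotations:
    lb.kubesphere.io/v1alpha1: openelb
    protocol.openelb.kubesphere.io/v1alpha1: layer2
    eip.openelb.kubesphere.io/v1alpha2: eip-sample-pool
spec:
  type: LoadBalancer
  selector:
    app: demo
  ports:
    - port: 80
      targetPort: 8080
```

Because OpenELB answers ARP/NDP requests for the assigned address directly from a cluster node, no external load balancing appliance is required; the trade-off is that a single node serves as the ingress point for that address until failover occurs.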
The KubeSphere platform provides a set of wizards, built on the open source KubeKey installer and accessed via a web user interface. These wizards enable IT teams to deploy Kubernetes alongside the other components needed to stand up a complete environment. This approach simplifies Kubernetes life cycle management by using a plug-and-play platform architecture to add support for additional components as IT teams see fit. KubeKey uses Docker as the default container runtime when installing Kubernetes, but now also supports runtimes such as containerd, CRI-O and iSula.
In effect, KubeSphere is a distributed operating system for Kubernetes environments that, among other features, includes an instance of open source Jenkins continuous integration/continuous delivery (CI/CD) software, an application store for deploying applications using Helm charts and observability tools along with an instance of Porter load balancing software.
It may be a while before OpenELB officially graduates, but as fleets of Kubernetes clusters become more commonplace in the enterprise, networking all those clusters together will become more challenging. At this juncture, it’s not so much a question of whether Kubernetes will be used at the network edge as it is a question of degree. There are many classes of edge computing platforms that might host variants of a Kubernetes cluster. Most of them, however, will be running some form of a lighter-weight distribution of Kubernetes.
Regardless of the platform, the one thing edge computing applications will all require—though to varying degrees—is the ability to run a distributed application connected to a backend service. As more data is processed and analyzed at the point where it is created and consumed, the underlying platform needs to be light enough to run at the edge while still being robust enough to process data locally. Many of those clusters will be running distributed workloads that will need to be balanced across multiple clusters.
As 5G wireless networks become more widely available, IT teams should also expect the number of Kubernetes clusters deployed at the edge to explode. The challenge now is finding the best way to manage all those Kubernetes clusters at scale.