A10 Networks Addresses App Load Balancing on Kubernetes

A10 Networks is extending the application load balancing capabilities it makes available on Kubernetes clusters by adding an A10 Ingress Controller that continuously monitors the life cycle of containers associated with the delivery of any application service. In the event there is a change to the status of those containers, the A10 Ingress Controller shares that information with a Harmony Controller from A10 Networks, which in turn can dynamically adjust the A10 Lightning application delivery controller (ADC).

Kamal Anand, vice president of cloud for A10 Networks, says there’s no concept of an application load balancer within Kubernetes itself. There have been some efforts to create a service mesh for managing specific services. Technologies such as the A10 Ingress Controller, which also can apply rules to how network traffic is routed to applications, will prove complementary to service meshes that manage connections within the Kubernetes cluster, he says.
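The routing rules Anand describes are the kind expressed in a standard Kubernetes Ingress resource, which any ingress controller, including A10's, watches to learn how traffic should reach applications. A minimal generic sketch (the hostname, service name and port here are placeholders, not A10-specific configuration):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-app            # placeholder name
spec:
  rules:
  - host: app.example.com      # placeholder hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-service  # placeholder backend service
            port:
              number: 80
```

An ingress controller deployed in the cluster detects resources like this and programs the load balancer accordingly, which is how the A10 Ingress Controller can keep routing in sync as containers come and go.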

The A10 approach to providing load balancing capabilities is designed around a set of microservices enabled by containers. In turn, these containers make it possible to provide application load balancing using the A10 Ingress Controller within the context of a larger DevOps process, says Anand. The A10 Ingress Controller, the Harmony Controller and the A10 ADC can all be deployed directly on a Kubernetes node, all of which extends the base container orchestration capabilities enabled by Kubernetes, he notes. Lightning ADC provides a complementary set of load balancing capabilities for infrastructure and content-based switching, as well as a web application firewall and application-layer distributed denial of service (DDoS) protection. Lightning ADC's configuration is managed by the Harmony Controller, and it can be deployed as a Kubernetes DaemonSet, so adding a node to a Kubernetes cluster will automatically add an instance of Lightning ADC.
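The DaemonSet deployment model works because Kubernetes schedules exactly one pod from a DaemonSet onto every node, including nodes added later. A minimal sketch of what such a manifest looks like; the image name, labels and ports are illustrative assumptions, not A10's published manifest:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: lightning-adc          # hypothetical name
spec:
  selector:
    matchLabels:
      app: lightning-adc
  template:
    metadata:
      labels:
        app: lightning-adc
    spec:
      containers:
      - name: lightning-adc
        # hypothetical image reference for illustration only
        image: registry.example.com/a10/lightning-adc:latest
        ports:
        - containerPort: 80
        - containerPort: 443
```

Because the DaemonSet controller reconciles against the node list, scaling the cluster out automatically scales out the load balancing tier with it, which is the behavior the article describes.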

The analytics generated within the Harmony Controller can then be accessed via a graphical user interface or application programming interface (API) to inform other management applications. The expectation is that visibility into container-based application services running on Kubernetes will be more broadly required as a new generation of applications is deployed in production environments. A10 Networks is betting that the complexity of moving network traffic between application services will become a more significant challenge. Addressing that issue will require application load balancers that can dynamically scale up and down as traffic flows across a Kubernetes environment.

In many organizations, applying Layer 7 policies via an ADC to manage application delivery services in a container environment will finally force the DevOps issue. In environments dominated by monolithic applications, it's still possible to manage ADCs in relative isolation. But within the context of a Kubernetes environment, the ephemeral nature of containers requires a cloud-native approach that is accessible to developers and IT operations teams alike. The good news is that, in many cases, the developers and administrators working on those teams will, for the first time, see the same application delivery metrics at the same time.

Mike Vizard

Mike Vizard is a seasoned IT journalist with over 25 years of experience. He has contributed to IT Business Edge, Channel Insider, Baseline and a variety of other IT titles. Previously, Vizard was the editorial director for Ziff-Davis Enterprise, as well as editor-in-chief of CRN and InfoWorld.
