Modern expectations for performance and reliability are driving improved development and deployment methods. Envoy-based service meshes, for example, are part of this movement, operationalizing networking tasks that developers were once responsible for.
One newer entrant is the open source Kuma by Kong. Kuma is a universal, multi-tenant control plane built on top of Envoy. I recently spoke with Marco Palladino, CTO and co-founder of Kong, to learn what makes Kuma unique compared to other service meshes already on the market.
Below, we introduce the operational concerns that gave rise to the service mesh pattern, then take a closer look at Kuma to see where it shines.
Microservices Bring Connectivity Concerns
Interest in microservices has made architectures increasingly distributed and decoupled. Microservices bring reusability benefits, but they can severely complicate operational workloads such as CI/CD pipelines.
“On one end, microservices make it easier to create new services and deliver a more reliable experience,” says Palladino. “On the other end, they introduce new complexity.”
One significant issue teams confront once they transition to a distributed microservices model is managing service-to-service connectivity. Without streamlined connectivity, security and high performance, microservices can quickly lead to a poor end-user experience.
“How well we manage connectivity will determine how well we manage the business or not,” he says.
If teams use the same development approach they used for monolithic applications, they may face an unmanageable scale of operational tasks. For example, it used to be commonplace for application teams to build their own networking features. With microservices, however, the sheer amount of networking work required warrants an abstraction layer.
Enter Service Mesh
Service mesh is a pattern for improving how services are executed at runtime and how they are managed in a centralized way. “Service meshes improve something we have always been doing everywhere in our organizations, but abstract network connectivity away from applications,” Palladino says.
For example, say an enterprise needed to update its TLS across the entire organization. Without a service mesh, this would require serious effort and coordination with every development team across an organization.
The greatest benefit of service mesh, Palladino believes, is that it centralizes connectivity management. With an abstraction layer controllable from a unified place, upgrades such as this become much easier.
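As an illustrative sketch, this kind of mesh-wide change can be expressed in Kuma as a single Mesh resource. The backend name below is hypothetical, and field names follow Kuma's documented mTLS schema, which has varied slightly across versions:

```yaml
# Enable mutual TLS for every service in the mesh,
# using a certificate authority built into the control plane.
type: Mesh
name: default
mtls:
  enabledBackend: ca-1
  backends:
    - name: ca-1
      type: builtin
```

Rolling out or rotating TLS then becomes an edit to one resource, rather than a coordinated change across every application team.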
Gaps in the Service Mesh Landscape
Kong, the popular open source API gateway, wanted to integrate a service mesh into its offering for some time. Yet, the team found existing meshes to be “hard to use, hard to deploy and hard to configure,” Palladino says.
Kong wanted to build upon an Envoy-based service mesh. It considered Istio, but found it “hard to scale and configure.” Linkerd, a CNCF project, appeared more usable, but it is not built on Envoy.
In evaluating the service mesh landscape, Palladino says he was stunned by how meshes prioritized Kubernetes, with little thought given to other environments, such as VMs. He also notes a general lack of ease of use across service meshes.
In response to these issues, Kong developed Kuma, an open source control plane for modern connectivity. Since its release in September 2019, Kuma has had 100,000 downloads and earned 1,500 GitHub stars. So, what makes Kuma stand out compared to other service meshes? Here are some specific areas:
Universal Support
According to Palladino, service mesh should not be a Kubernetes-specific concern. Kuma supports platforms and clouds beyond Kubernetes, such as VM deployments. “Different teams use mesh at different times,” he says. “Service meshes need to support the entire organization in a simple, agnostic way,” regardless of whether a team has adopted Kubernetes. This appeals to companies that may not containerize every component but still want those components to integrate with containerized ones.
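On a VM, for instance, a service is attached to the mesh through a Dataplane resource that describes its inbound traffic. The sketch below uses a hypothetical address and service name; the tag key follows Kuma's documented schema, which has varied across versions:

```yaml
# Describes the sidecar proxy for a service running on a VM.
type: Dataplane
mesh: default
name: backend-1
networking:
  address: 192.168.0.1    # address of the VM
  inbound:
    - port: 8080          # port the sidecar proxy listens on
      servicePort: 8081   # port the application itself listens on
      tags:
        kuma.io/service: backend
```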
Centralized Control Plane
In addition, with Kuma, multiple independent meshes can be operated and controlled from one place. With this design, teams can run a single cluster and serve 100 meshes, reducing overall operational costs. Such isolated environments could benefit highly regulated use cases, such as financial applications.
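In this model, creating an additional isolated mesh is just another resource served by the same control plane. A minimal sketch, assuming universal mode and a hypothetical mesh name:

```yaml
# Each Mesh resource is a logically isolated mesh,
# managed by the same shared control plane.
type: Mesh
name: payments
```

A resource like this would be applied with `kumactl apply -f` in universal mode.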
L4 + L7 Policies
Kuma allows teams to set intuitive L4 and L7 policies for security, traffic control, observability, routing and other features. Policies are applied with kumactl in universal mode, or with kubectl on Kubernetes. For example, a Traffic Permissions policy, written in YAML, can be used to set security rules between services.
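A minimal sketch of such a policy in universal mode, assuming two hypothetical services, web and backend (the tag key follows Kuma's documented schema, which has varied across versions):

```yaml
# Allow traffic from the web service to the backend service;
# with mTLS enabled, traffic not matched by a permission is denied.
type: TrafficPermission
name: allow-web-to-backend
mesh: default
sources:
  - match:
      kuma.io/service: web
destinations:
  - match:
      kuma.io/service: backend
```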
Open Governance
Kong is taking an open governance approach to Kuma. At the time of writing, Kuma has just been proposed to the CNCF as a sandbox project, and Kong welcomes developers to contribute to its community.
The Ecosystem Play
Another interesting aspect of Kuma is that it is natively compatible with Kong, one of the world’s most popular open source API gateways, meaning teams can expose mesh services to external developers with full API management.
Kuma: The Switzerland of Service Meshes?
Service meshes can abstract connectivity away from developers, and Kuma seems to be an extensible option. Palladino describes several advancements in the pipeline for Kuma:
- Improvements to the GUI that ships with Kuma.
- New, user-friendly ways to expose native Envoy policies.
- Hybrid deployment of Kuma across VMs.
One final aspect that makes Kuma unique in the service mesh sphere is that it’s not directly affiliated with any cloud computing vendor. Whereas Istio has major IBM support and AWS owns its App Mesh, Kuma is the “Switzerland of service meshes,” Palladino jokes.
“Kuma has no other agenda but being the best service mesh control plane out there,” he says.