Linkerd, originally developed by Buoyant, Inc., has been gaining traction as a lighter-weight alternative for deploying a service mesh on top of Kubernetes.
William Morgan, Buoyant’s CEO, says the Linkerd Steering Committee will be made up of end users who employ the service mesh to manage application programming interfaces (APIs) across multiple Kubernetes clusters, rather than IT vendors looking to maintain control over an open source project. In fact, the first steering committee meeting is open to all interested parties, Morgan says.
Linkerd is currently used by Microsoft, H-E-B, EverQuote, HP, Inc., finleap connect, Subspace and Clover Health to provide a layer of abstraction that eases management of hundreds of microservices that expose APIs. Precisely when does an organization need a service mesh? That’s a matter of debate.
Many IT teams rely on proxy servers or a Kubernetes ingress controller to manage APIs. However, as complexity increases, employing a service mesh to manage APIs at a higher level of abstraction begins to make more sense.
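To make that contrast concrete, the sketch below shows how Linkerd is typically attached to an existing workload: rather than funneling all traffic through a central proxy or ingress, each pod receives its own sidecar proxy, enabled here via Linkerd's documented `linkerd.io/inject` annotation. The Deployment name and container image are placeholders for illustration.

```yaml
# Hypothetical Deployment; "web" and "web:1.0" are illustrative names.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
      annotations:
        linkerd.io/inject: enabled  # Linkerd's control plane injects the sidecar proxy
    spec:
      containers:
        - name: web
          image: web:1.0
```

Because the proxy is injected per pod, traffic management, mutual TLS and observability apply to every service-to-service call, not just traffic entering the cluster through an ingress.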
There is, of course, no shortage of available service mesh options. Linkerd provides a lighter-weight alternative to service meshes such as Istio, which was originally developed by Google and IBM and remains under Google’s control rather than being donated to an industry consortium. Linkerd, in contrast, is being developed under the auspices of the Cloud Native Computing Foundation (CNCF), which also oversees the Kuma service mesh originally developed by Kong, Inc. Both Kuma and Linkerd are more accessible to IT teams, while Istio provides a richer set of capabilities that tends to appeal to larger enterprises.
Microsoft, meanwhile, is leading an Open Service Mesh effort that provides an interface through which multiple service meshes might be integrated.
Regardless of which service mesh organizations choose, the impact of service mesh technology will go far beyond just managing APIs. A service mesh also provides a layer of abstraction above network underlays in a way that makes it easier for developers to programmatically invoke a network service. That capability means service meshes will soon play a key role in integrating network operations within a larger set of DevOps processes.
Of course, organizations are still working to master Kubernetes clusters, so it may take some time before deploying service meshes on top of those clusters becomes routine. In the longer term, however, service meshes enabled by Kubernetes may have an even more profound impact on IT. In the meantime, IT organizations should reevaluate how microservices, and their associated APIs, are managed today, with an eye toward how they might be managed tomorrow.