CNCF Advances Linkerd Service Mesh Project

The Cloud Native Computing Foundation (CNCF) this week released an update to the Linkerd project that makes the open source service mesh platform smaller, faster and more accessible to IT organizations.

William Morgan, CEO of Buoyant, which contributed Linkerd to the CNCF last year, says a new “sidecar” architecture makes it possible for DevOps teams to apply Linkerd to a single service rather than having to deploy it across an entire cluster. That’s important, Morgan says, because it reduces the risk DevOps teams take on as they begin to master Linkerd.
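In practice, that per-service adoption comes down to injecting the Linkerd proxy into a single workload’s manifest. The sketch below uses the Linkerd 2.x CLI with a hypothetical manifest name, and assumes the Linkerd control plane is already installed in the cluster (see the installation sketch further down).

```
# Add the Linkerd sidecar proxy to one service's Kubernetes manifest and apply it;
# every other workload in the cluster is left untouched.
# (my-service.yml is a hypothetical file name; substitute your own deployment manifest.)
linkerd inject my-service.yml | kubectl apply -f -
```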

Morgan also says Linkerd 2.0, now generally available, has a much smaller footprint and, as a result, is also significantly faster than the previous version of the service mesh. Those improvements were achieved by rewriting Linkerd from the ground up, with its lightweight data plane proxy written in the Rust programming language.

Contributors to the Linkerd project include Salesforce, Walmart, Comcast, Credit Karma, PayPal, WePay and Buoyant.

Other new capabilities include a zero-config, zero-code-change installation process; support for automatically generated Grafana dashboards and “golden metrics” collected using the open source Prometheus monitoring toolkit; and automatic TLS encryption between services, including generation and distribution of certificates.
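To make the zero-config claim concrete, the workflow below is a rough sketch using the Linkerd 2.x CLI, assuming a running Kubernetes cluster and a current kubectl context; it is meant as an illustration rather than an installation guide.

```
# Generate the control plane manifests with default settings and apply them;
# no configuration file or application code changes are required.
linkerd install | kubectl apply -f -

# Confirm the control plane is healthy before meshing any services.
linkerd check

# After a service has been injected, view its "golden metrics"
# (success rate, request volume, latency), which Prometheus collects automatically.
linkerd stat deploy

# Open the bundled web dashboard, which links to the auto-generated Grafana views.
linkerd dashboard
```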

In addition to making it easier to get started with a service mesh on a specific service, Morgan says he expects broader reliance on service meshes to soon result in owners of individual services being identified within DevOps teams. That shift will lead to major changes in how IT organizations are structured, Morgan predicts.

It may take a while before enough container-based microservices are deployed in production environments to force that level of organizational change. But, Morgan contends, in much the same way developers are now being held accountable for entire applications, we’ll soon see developers taking responsibility for individual services, which a service mesh separates more clearly from one another.

Having developers responsible for specific services also would go a long way toward making it easier for IT operations teams to keep track of dependencies between microservices. Right now, it’s challenging for IT operations teams to predict with any certainty what effect changes to the underlying physical IT infrastructure might have on any given set of microservices.

In general, most organizations don’t realize they need a service mesh until they’ve deployed large numbers of microservices. But as adoption of microservices based on containers continues to escalate, it’s now only a matter of time before service meshes become more commonly deployed on container clusters. The next big challenge, of course, will be connecting service meshes running on multiple clusters to one another. Those clusters, in turn, will need to be connected to legacy platforms that typically rely on load balancing software, also known as application delivery controllers (ADCs), to segment and isolate IT infrastructure resources.

In the meantime, DevOps teams these days have no shortage of options when it comes to service mesh platforms. Hopefully, there will be more collaboration between the various projects rather than efforts that only lead to ongoing reinventions of the same service mesh wheel.

Mike Vizard

Mike Vizard is a seasoned IT journalist with over 25 years of experience. He also contributed to IT Business Edge, Channel Insider, Baseline and a variety of other IT titles. Previously, Vizard was the editorial director for Ziff-Davis Enterprise as well as Editor-in-Chief for CRN and InfoWorld.
