Service mesh is increasingly seen as an essential tool for managing and orchestrating microservices in highly distributed containerized environments. According to a recent Cloud Native Computing Foundation (CNCF) survey, service mesh use in production at the organizations surveyed rose to 27% this year, a 50% increase over the previous year. CNCF expects adoption to keep growing: 23% of respondents were evaluating a service mesh, while another 19% said they plan to put one into operation within the next 12 months.
Among the leading open source service meshes, Istio and Linkerd have emerged as the most widely adopted. According to the CNCF survey, Istio accounts for 47% of all service meshes used in production, while Linkerd trails slightly behind with a 41% share.
The choice among the three leading alternatives (Istio, Linkerd and Consul) depends on the specific requirements and individual needs of the organization. For large-scale, multi-cloud deployments that extend beyond Kubernetes, for example, Istio is potentially the best choice, especially for organizations with the budget to support it. Linkerd, on the other hand, can be a viable choice for edge software deployments in environments that already run on Kubernetes.
Performance-wise, Linkerd proponents claim superior results, citing benchmarks that Linkerd.io recently released based on tests conducted by Kinvolk, a cloud-native software and services provider. In some tests, the benchmark results show that Linkerd consumed “an order of magnitude” less memory and CPU than Istio and delivered up to 400% better latency when running “real-world” applications at scale.
When it comes to resource consumption, William Morgan, CEO of Linkerd creator Buoyant, describes in a blog post how Linkerd’s and Istio’s CPU and memory consumption compare in the “highest-load scenario” of 2,000 requests per second (RPS).
“Starting with the control plane, we see that Istio’s control plane usage averaged 837mb, about 2.5x Linkerd’s control plane memory consumption of 324mb,” Morgan writes. Linkerd’s CPU usage was “orders of magnitude smaller,” 71ms of control plane CPU time versus Istio’s 3.7s, he notes.
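As a quick sanity check, the ratios implied by the figures Morgan quotes can be computed directly. This minimal Python sketch simply restates the blog post’s numbers; the variable names are illustrative:

```python
# Control-plane figures quoted from Morgan's blog post (2,000 RPS scenario).
istio_mem_mb = 837     # Istio control-plane memory (MB)
linkerd_mem_mb = 324   # Linkerd control-plane memory (MB)
istio_cpu_ms = 3700    # Istio control-plane CPU time (3.7 s, in ms)
linkerd_cpu_ms = 71    # Linkerd control-plane CPU time (ms)

mem_ratio = istio_mem_mb / linkerd_mem_mb
cpu_ratio = istio_cpu_ms / linkerd_cpu_ms
print(f"memory ratio: {mem_ratio:.1f}x")    # 2.6x, matching "about 2.5x"
print(f"CPU-time ratio: {cpu_ratio:.0f}x")  # 52x
```

The memory figure works out to roughly 2.6x, consistent with Morgan’s “about 2.5x,” while the CPU-time gap is around 52x.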
And it’s possible Istio supporters will challenge these benchmarks (though spokespersons from Istio provider Solo.io could not be reached for comment). “We’ve done our best to publish a clear, honest, and reproducible set of data,” Morgan says, while conceding, “But no benchmark is perfect.”
“We chose to stick to the existing Kinvolk harness, even though we know there are many ways it could be improved,” says Morgan. “I expect some criticism along those lines, and my response would be: rather than arguing about benchmarking minutiae, why don’t you come help us improve Linkerd?”
Linkerd is “ideal” for Kubernetes-centric organizations that “want to spend their time innovating on their product and their business rather than on the operational complexities of running a service mesh,” Morgan says. “For organizations that have complex needs from their mesh that Linkerd can’t address, then Istio is a better choice—as long as they are able to afford the team to maintain it,” he adds.
Linkerd’s benchmark results suggest that its low CPU and memory consumption can serve organizations looking to conserve resources, such as for edge applications that run on low-power devices. A relatively low barrier to adoption can be a factor as well.
“For us, picking Linkerd was a question of mean-time-to-value; service mesh technologies are a very fast-moving part of the landscape,” says Steve Gray, head of the trading solutions team at Entain. “Something we can become proficient at in minutes versus days while making our application faster and occupying only a very modest footprint was an easy proposition to come to terms with.”