Service Mesh Not 100% Ready for Wide Adoption

A panel of service mesh experts weighs in on how service meshes are maturing

Service mesh is like an electric car—your friends talk about it, it’s hip, but you don’t have one yet. You’d probably rather wait and see how others manage it first. Nonetheless, it feels like the next tech evolution.

Service meshes have garnered plenty of attention in recent months; they help solve many of the problems that arise after adopting a microservices architecture. But is service mesh ready for all enterprises to use? It turns out its own proponents aren’t convinced this technology is 100% ready for broad adoption.

KubeCon + CloudNativeCon North America 2020’s Service Mesh panel discussion, held Nov. 18, featured thought leaders in the field: Matt Klein, Lyft; Manish Chugtu, VMware; Dan Berg, IBM; and Idit Levine, Solo.io. On the whole, the panelists acknowledged positive current developments in the service mesh landscape, such as consolidation around Envoy. They also highlighted its many benefits, which include increased networking, observability and routing capabilities for microservices architectures.

However, we have yet to see enough service mesh production use cases, and each mesh framework is still maturing. Trade-offs to using a service mesh include increased latency, steep learning curves and ongoing maintenance overhead. A lack of clear standardization for inter-mesh interoperability could be another concern (SMI was one proposed standard, but the industry now seems to be rallying behind Envoy’s xDS API).

All this is not to say it won’t be a pivotal part of a company’s future infrastructure or that it won’t offer helpful capabilities for the use cases that require it. Simply put, service mesh is a burgeoning field (which was why this panel was so fascinating to attend). Follow below as we dive into the latest updates in the landscape. Let’s see where the top luminaries believe the technology is heading.

New Service Mesh Updates

What are the recent developments in the service mesh space? Well, most panelists now view Envoy as the established data plane proxy for most meshes. We’re also seeing it fit quite nicely as an observability mechanism for the new multi-cloud paradigm.

“Envoy is the data plane layer we’re seeing emergence on,” says Klein. The service mesh and API gateway space are crowded with many solutions, and applications are leaning more and more toward serverless. The future is hard to predict, yet “foundational tech like Kubernetes and Envoy will be the plumbing,” he says.

Levine also champions Envoy for its extensibility, citing recent strides to extend Envoy with WebAssembly. “Envoy is the way to go,” she says. “It’s the building block—everything not embracing it will be fading away.” Nevertheless, there is a lot of marketing hype, and executing without the proper guardrails and governance could be dangerous.
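To make the extensibility point concrete, here is a minimal sketch of what loading a WebAssembly module into Envoy’s HTTP filter chain can look like. The module name and file path are hypothetical, and in practice a mesh usually injects this configuration through its own extension resources rather than a hand-written listener:

```yaml
# Illustrative only: an HTTP filter chain entry that loads a custom
# WebAssembly module into Envoy. The module name and path are
# hypothetical placeholders.
http_filters:
- name: envoy.filters.http.wasm
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.http.wasm.v3.Wasm
    config:
      name: example_auth_filter          # hypothetical extension name
      vm_config:
        runtime: envoy.wasm.runtime.v8   # Envoy's built-in V8 Wasm runtime
        code:
          local:
            filename: /etc/envoy/example_auth_filter.wasm
- name: envoy.filters.http.router        # router stays last in the chain
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
```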

Berg similarly believes Envoy is the future and that multi-cloud is the present day. Yet, he describes how difficult it is for teams to implement service mesh. “They are trying to do it, but struggling a lot,” he says. The notions of guardrails and governance are incredibly important. Compounding this, many companies require only a fraction of the capabilities that mature technologies, such as Istio, have to offer. He calls for simpler usability and deployment: we need “simple solutions to complex problems.”

Chugtu extols service mesh’s benefits for addressing new hybrid and multi-cloud patterns. These realities are emerging, bringing the need for a common observability framework across all patterns of deployment. In terms of security, Chugtu sees practical benefits in extending the mesh with capabilities such as threat detection. According to Chugtu, the more intelligence a service mesh gathers, the more it can help define access attributes. Thus, it helps create a zero-trust model.
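As a rough illustration of what those zero-trust building blocks look like in one mesh (Istio, which the panel also mentions), the following sketch enforces strict mutual TLS for a hypothetical “payments” namespace and only authorizes calls from a “frontend” service identity. The names and namespaces are illustrative; other meshes express the same ideas with their own resources:

```yaml
# Sketch only: strict mTLS plus identity-based authorization for a
# hypothetical payments workload.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: payments
spec:
  mtls:
    mode: STRICT               # reject plaintext, non-mesh traffic
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: payments-allow-frontend
  namespace: payments
spec:
  selector:
    matchLabels:
      app: payments
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/frontend/sa/frontend"]
```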

Trade-Offs Using Service Mesh

Nonetheless, there are still significant hang-ups. In general, the panel recognized the considerable human effort required for service mesh implementation: steep learning curves, usability challenges and ongoing maintenance. Plus, every sidecar proxy consumes compute resources and adds a hop to each request, so operational overhead and application latency are practical downsides to consider. Lastly, it may simply introduce too much complexity for scenarios that don’t require it.

“It’s a BIG learning curve,” Berg says. “It’s not a gradual improvement—you start from nothing, and then it’s a big jump.” Companies may adopt it for observability and enterprise-wide mutual TLS. As they mature and realize there is more value, Berg adds, teams must come to understand many new resources and adapt their workflows to the mesh. Operationally speaking, sidecars do consume resources over time, too. “A lot of people don’t take that into account when they first start building,” he says. He foresees managed solutions opening up more possibilities for easier adoption.
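One practical way teams account for that sidecar cost is to budget for it explicitly per workload. The sketch below uses Istio-style pod annotations to request and cap the proxy’s CPU and memory alongside the application container; the workload name, image and values are hypothetical, and other meshes expose equivalent knobs of their own:

```yaml
# Sketch: reserving and capping sidecar resources on a hypothetical
# Deployment, so proxy overhead is planned rather than discovered later.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: checkout
spec:
  replicas: 3
  selector:
    matchLabels:
      app: checkout
  template:
    metadata:
      labels:
        app: checkout
      annotations:
        sidecar.istio.io/proxyCPU: "100m"         # proxy CPU request
        sidecar.istio.io/proxyMemory: "128Mi"     # proxy memory request
        sidecar.istio.io/proxyCPULimit: "500m"    # cap proxy CPU bursts
        sidecar.istio.io/proxyMemoryLimit: "256Mi"
    spec:
      containers:
      - name: checkout
        image: example.com/checkout:1.0           # hypothetical image
        resources:
          requests:
            cpu: 250m
            memory: 256Mi
```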

“We’re trying to adopt something that is not 100% ready to be adopted,” says Levine. Service mesh is inherently one more thing to operate, manage, upgrade and maintain. She also acknowledged the added latency intrinsic to the design. Regardless of the potential downsides, Levine still believes the benefits of a centralized configuration plane outweigh the negatives.

“Service mesh is not meeting customers where customers are right now,” Chugtu says. He acknowledges the complexity and inherent learning curve. Yet, he still believes it is worth the effort, as it helps balance team dynamics. Fundamentally, introducing service mesh could shift the ownership of infrastructure back to platform and DevOps teams—allowing developers to focus on core competencies and application development.

Decoupling the platform team from the developers is a benefit Marco Palladino (Kong, Kuma) stressed in a recent interview: “Application teams become consumers of connectivity, as opposed to the makers of this connectivity,” he says.

Going to the root of the issue, perhaps the question companies really should be asking is: Should we be adopting microservices in the first place? Adopting microservices brings many unintended consequences that are vastly different from those of running a monolithic architecture.

Essentially, the industry should not be pushing complexity when it’s not warranted. When it is justified, “the sidecar proxy solution is an elegant solution to these problems,” Klein says.

Standards and Service Mesh Interface

As I recently covered, Service Mesh Interface (SMI), a specification backed by vendors including Solo.io, could be an exciting proposition to bring interoperability and extensibility benefits to a burgeoning service mesh field. However, the general mood on the panel was one of skepticism regarding its use in practice.

Levine backs SMI to form a vendor-neutral standard interface between meshes. In a future multi-mesh world, a common configuration pattern for service meshes across the market could empower companies. However, she acknowledges roadblocks for SMI, notably the “lowest common denominator” effect of a standard and politics barring its introduction.
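For a sense of what that vendor-neutral interface looks like, here is a minimal TrafficSplit sketched from the SMI spec: a single resource that shifts a share of traffic to a canary, regardless of which conforming mesh carries it out. The service names are hypothetical:

```yaml
# Sketch of an SMI TrafficSplit (v1alpha2-style API) for a hypothetical
# checkout service; any SMI-conforming mesh is expected to honor it.
apiVersion: split.smi-spec.io/v1alpha2
kind: TrafficSplit
metadata:
  name: checkout-canary
spec:
  service: checkout            # root service that clients call
  backends:
  - service: checkout-v1
    weight: 90
  - service: checkout-v2
    weight: 10                 # send 10% of traffic to the canary
```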

Others are not as confident that SMI is necessary. “If you’re swapping out different meshes, that may be a tough pill to swallow,” Berg says. “How many implementations are we actually going to have at the end of the day?” he asks. If the main benefit is only to avoid vendor lock-in, SMI may not be worth the investment.

Klein describes SMI as a good idea in theory, yet does not see it being useful in practice. “I’m very skeptical that anything will be portable,” he says. At the end of the day, all service mesh vendors will have their own interpretations. Klein stresses the industry should avoid adding unnecessary layers of abstraction.

Instead of focusing on SMI, some panelists argue convergence around the Envoy xDS API makes far more sense. “If everything is based on Envoy, why not converge there?” asks Klein.
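Converging on xDS means a proxy can take its configuration from whichever control plane speaks that protocol. As a rough sketch, the Envoy bootstrap below pulls listeners and clusters dynamically over the v3 xDS (ADS) API; the control-plane address and port are hypothetical:

```yaml
# Sketch: an Envoy bootstrap that sources its config from an xDS
# control plane over gRPC. Only the xDS cluster is defined statically.
node:
  id: sidecar-example
  cluster: demo
dynamic_resources:
  ads_config:
    api_type: GRPC
    transport_api_version: V3
    grpc_services:
    - envoy_grpc:
        cluster_name: xds_cluster
  lds_config:
    resource_api_version: V3
    ads: {}
  cds_config:
    resource_api_version: V3
    ads: {}
static_resources:
  clusters:
  - name: xds_cluster
    type: STRICT_DNS
    connect_timeout: 1s
    typed_extension_protocol_options:
      envoy.extensions.upstreams.http.v3.HttpProtocolOptions:
        "@type": type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions
        explicit_http_config:
          http2_protocol_options: {}    # xDS is served over gRPC/HTTP2
    load_assignment:
      cluster_name: xds_cluster
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: control-plane.mesh.svc.cluster.local  # hypothetical
                port_value: 15010
```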

Readying Service Mesh

So, knowing the hurdles involved, when will service mesh be 100% ready? Which situations is it a good fit for? Well, the answer varies.

One way to look at it is from the point of view of your existing tech stack. Klein reiterates that companies should have already adopted microservices before taking on a service mesh. Don’t embrace service mesh out of “tech envy,” he says. Instead, it should be a customer-first decision based on minimizing operational pains and maximizing productivity.

Another perspective is to consider where you are headed in the future. Service mesh may be part of your company or customer vision going forward, says Chugtu. In this case, your historical or current architecture may hold less importance. “A solution is ready as much as the use case you want to solve,” he says.

A third perspective is that no great technology is ever complete. When will service mesh be 100% ready? “Never,” Berg says. There will always be faults. Take Kubernetes, for example: each release either patches existing issues or introduces new ones, and the community is continually improving it. Likewise, each service mesh on the market is maturing. They each have their own strengths and weaknesses, and “none of them are perfect,” he says.

When will service meshes mature? To Levine, service meshes first need to be put through the wringer. “None of them are mature enough,” she says. Only when people run them in production can they provide feedback to improve the mesh. Though “none of them is completely ready,” Levine does foresee a market fit quickly opening up for more organizations.

Service mesh is still a relatively new concept, and more production use cases are required to advance meshes and Envoy extensibility. Early-stage adopters should make an educated guess on what is right for their scenario, evaluating all the service meshes on the market before choosing one. For later-stage adopters, it appears that future usability improvements and managed service mesh offerings could easily motivate an organization to get meshy.

Bill Doerrfeld

Bill Doerrfeld is a tech journalist and analyst. His beat is cloud technologies, specifically the web API economy. He began researching APIs as an Associate Editor at ProgrammableWeb, and since 2015 has been the Editor at Nordic APIs, a high-impact blog on API strategy for providers. He loves discovering new trends, interviewing key contributors, and researching new technology. He also gets out into the world to speak occasionally.
