Red Hat this week announced the general availability of Red Hat OpenShift Service Mesh, which provides a control plane to ease the management and orchestration of the distributed proxy servers required to deploy container-based microservices at scale.
Red Hat OpenShift Service Mesh combines the Istio service mesh with Kiali, a tool for visualizing the overall environment, and the Jaeger distributed tracing tools. In addition, Red Hat has developed Kubernetes Operator software that automates the provisioning and ongoing management of the service mesh. The Red Hat OpenShift Service Mesh Operator will be available via the OpenShift 4 OperatorHub in the coming weeks.
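In practice, an Operator of this kind reduces provisioning to applying a single custom resource that the Operator then reconciles into a running control plane. The sketch below is illustrative only; the API group and field names are assumptions drawn from the upstream Maistra project and may differ by release:

```yaml
# Hypothetical sketch: a ServiceMeshControlPlane custom resource. The
# Operator watches for resources of this kind and deploys the control
# plane components (Istio, plus Kiali and Jaeger) to match the spec.
apiVersion: maistra.io/v1
kind: ServiceMeshControlPlane
metadata:
  name: basic-install
  namespace: istio-system
spec:
  istio:
    tracing:
      enabled: true   # deploy Jaeger for distributed tracing
    kiali:
      enabled: true   # deploy Kiali for mesh visualization
```

The point of the pattern is that day-2 operations (upgrades, configuration drift) are handled by editing this one resource rather than by managing the mesh components individually.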
What’s more, Red Hat OpenShift Service Mesh includes an integrated application programming interface (API) gateway based on the 3scale platform Red Hat acquired in 2016. That gateway makes it easier to manage traffic flows between application endpoints and the service backend.
Brian “Redbeard” Harrington, principal product manager for Red Hat, says a service mesh makes it easier for teams that don’t have deep Kubernetes expertise to manage a microservices environment. However, the service mesh might impact overall performance, depending on the number of connections between microservices that need to remain open, he says.
While Red Hat joins what has become a cavalcade of vendors throwing their support behind Istio, the service mesh itself has yet to be adopted formally by any standards body. The Cloud Native Computing Foundation (CNCF) oversees the Envoy proxy on which Istio is based; the rival (and lighter-weight) Linkerd service mesh, itself a CNCF project, relies on its own purpose-built proxy. There is also a separate effort to establish a Service Mesh Interface (SMI) that defines a set of common APIs to foster service mesh interoperability. Led by Microsoft, that effort has the support of Buoyant (the company behind Linkerd), HashiCorp, Solo.io, Kinvolk, Weaveworks, Aspen Mesh, Canonical, Docker, Pivotal, Rancher, Red Hat and VMware. Notably absent are Google, which has led the development of Istio along with IBM and Lyft, and Amazon Web Services (AWS).
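To make the SMI idea concrete: the interface defines a small set of vendor-neutral Kubernetes resources that any conforming mesh can implement. The example below sketches a TrafficSplit, one of the proposed APIs; the service names and weights are illustrative, and the fields follow the early alpha specification, which may change:

```yaml
# Hypothetical sketch of an SMI TrafficSplit: a mesh-agnostic resource
# that shifts a share of traffic between two versions of a service.
# Whichever conforming mesh is installed carries out the split.
apiVersion: split.smi-spec.io/v1alpha1
kind: TrafficSplit
metadata:
  name: checkout-rollout
spec:
  service: checkout        # root service that clients address
  backends:
  - service: checkout-v1
    weight: 900m           # roughly 90% of traffic
  - service: checkout-v2
    weight: 100m           # roughly 10% canary
```

The appeal for users is portability: a rollout defined this way would not need to be rewritten if an organization moved from one SMI-conformant mesh to another.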
It’s too early to say how widely service meshes will be employed. As long as there is no consensus on a service mesh platform, however, many organizations will wait before deploying one in a production environment. The trouble is that as the number of microservices running on top of Kubernetes increases, so do the challenges of managing them. The lack of consensus may lead some organizations to limit the scale at which they deploy microservices on Kubernetes until the current debate is resolved.
Obviously, most organizations would prefer to see the best attributes of Istio, Linkerd and other service mesh technologies combined into a single platform. It’s not at all clear there is enough differentiated value among the service mesh projects to warrant separate initiatives. As is often the case with competing open source projects, however, there is sometimes too much of a good thing.