While some progress has been made in creating a standard network interface for containers, the level of interoperability between containers and other types of computing platforms still leaves much to be desired. At the Gluecon 2017 conference this week, IBM and Google moved to fill that void by launching an open-source project, dubbed Istio, that enables organizations to seamlessly connect, manage and secure networks made up of different microservices.
Developed in collaboration with Lyft, Istio combines elements of container networking technologies that all three organizations had been developing separately, says Angel Diaz, vice president of cloud technology and architecture for IBM. For example, IBM contributed Amalgam8, an open-source unified service mesh that creates a traffic routing fabric and a programmable control plane to support A/B testing and canary releases and to test the resilience of services against failures. Google, meanwhile, contributed its Service Control service mesh, which has a control plane focused on enforcing policies such as ACLs, rate limits and authentication, in addition to gathering telemetry data. Finally, Lyft developed the Envoy proxy, which is capable of spanning more than 10,000 virtual machines processing more than 100 microservices.
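To make the traffic-routing idea concrete, here is a minimal sketch of how a canary release can be expressed declaratively in a service mesh. The resource shapes shown (`VirtualService` and `DestinationRule`) come from Istio releases after this launch, and the `reviews` service with its `v1`/`v2` subsets is a hypothetical example, not something described in the article:

```yaml
# Illustrative canary rule: send 90% of traffic to the stable v1
# workload and 10% to the new v2 workload of a hypothetical
# "reviews" service.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 90
    - destination:
        host: reviews
        subset: v2
      weight: 10
---
# The subsets referenced above are defined by pod labels.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
```

Because the rule lives in the control plane rather than in application code, the traffic split can be shifted gradually (10%, 50%, 100%) without redeploying either version of the service.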
IBM and Google have combined these projects to address not only traffic flow management, access policy enforcement and the aggregation of telemetry data between microservices, but also the networking between containers running on bare-metal servers, virtual machines and platform-as-a-service (PaaS) environments such as Cloud Foundry. Those capabilities, says Diaz, are critical if IT organizations are to overcome the horizontal scaling and automation challenges associated with deploying microservices at scale.
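Access policy enforcement of the kind described above can also be expressed as declarative configuration. A hedged sketch, using the `AuthorizationPolicy` resource that appeared in later Istio releases (the service names, namespace and service account here are hypothetical):

```yaml
# Illustrative ACL: allow only the "productpage" workload's service
# account to call the "reviews" workload; all other callers are denied
# by the mesh's sidecar proxies, with no application code changes.
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: reviews-viewer
  namespace: default
spec:
  selector:
    matchLabels:
      app: reviews
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/default/sa/productpage"]
```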
Istio in its current form runs on top of Kubernetes clusters, which Google and IBM use extensively in their public clouds. But Diaz says the goal is to make it possible to run Istio anywhere; Google and IBM opted to make Istio available as a standalone project at least until it becomes sufficiently hardened to present as a potential standard. Right now, the standards body taking the lead on container networking issues is the Cloud Native Computing Foundation (CNCF), which announced last week that it is working on a standard based on the Container Networking Interface (CNI) originally developed by CoreOS.
There are many more industry stakeholders in the container networking space than just IBM and Google, but so far progress on container networking standards has been sluggish. Each type of container cluster has its own internal network, and each provider of those clusters has its own approach to networking them together. There is also a small army of networking vendors that have developed network virtualization overlays spanning multiple computing platforms, including virtual machines and containers. Eventually, enough pressure will be brought to bear by IT organizations that eschew anything deemed too proprietary.
In the meantime, most cloud service providers have their own way of networking those container environments together. But it may take months, perhaps even years, before container networking interoperability standards are completely fleshed out.