Rancher Labs has launched an open source project intended to make it easier to build a network spanning multiple Kubernetes clusters.
Company CEO Sheng Liang says the Submariner project makes it possible to establish a Layer 3 network between pods residing on different Kubernetes clusters using encrypted IPsec tunnels. IPsec tunnels are the default networking option, but support for other network protocols will be added in the near future via plug-ins.
The Submariner project will also enable IT teams to register the gateway nodes on their clusters that have been dedicated to running the Submariner software. In addition, Submariner provides service discovery across multiple Kubernetes clusters and is compatible with the Container Network Interface (CNI) defined by the Cloud Native Computing Foundation (CNCF), which oversees the development of Kubernetes.
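As a rough sketch of what registering a gateway node could look like (the node name and label key below are assumptions for illustration, not confirmed details of the project), a dedicated node might be marked with a label that the Submariner software watches for:

```yaml
# Hypothetical example: dedicate a node as a cluster's Submariner gateway
# by labeling it. The label key is an assumption for illustration only.
apiVersion: v1
kind: Node
metadata:
  name: worker-node-1              # hypothetical node name
  labels:
    submariner.io/gateway: "true"  # assumed gateway-registration label
```

In practice, such a label could be applied with a command like `kubectl label node worker-node-1 submariner.io/gateway=true`, and the Submariner deployment would then run its tunnel endpoints only on nodes carrying the label.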
Liang says Submariner is designed specifically to address networking requirements in Kubernetes environments as part of Rancher Labs’ ongoing focus on enabling IT organizations to more easily deploy and manage Kubernetes clusters residing on multiple clouds.
Most organizations that have embraced Kubernetes are just now getting to the point where networking is becoming a significant issue. In fact, beyond defining CNI, the CNCF thus far has not focused much of its efforts on Kubernetes networking. Making it easier to provision and manage a single Kubernetes cluster has been the organization’s primary focus. That said, most enterprise IT organizations are going to be conservative when it comes to deploying Kubernetes in production environments unless they know for certain how to manage instances of Kubernetes at scale.
Liang says that, thanks to the rise of service mesh projects such as Istio, it’s only a matter of time before lower-level network connectivity issues between Kubernetes clusters need to be addressed. In fact, he notes that any multi-cloud computing strategy based on Kubernetes needs to address network connectivity issues in a way that would, for example, allow some form of software-defined wide area networking (SD-WAN) software to be layered on top of gateway software running in a Kubernetes pod. Those SD-WAN platforms would be able not only to connect Kubernetes clusters to either a public internet connection or a leased line, but also to provide WAN optimization capabilities.
It’s not clear who will be responsible for creating those network connections. Historically, networking specialists have jealously guarded their domain. But Kubernetes unifies the management of compute, storage and networking within the cluster. It remains to be seen whether the same individuals who set up the Kubernetes cluster will also look to programmatically create networks between multiple instances of Kubernetes residing on multiple clouds. In some cases, network administrators will look to provide individual DevOps teams with the ability to self-service their networking requirements via a portal that defines, for example, how much bandwidth they can access. In other cases, network operations might be completely subsumed into the DevOps team.
Of course, DevOps teams will also have to figure out how to connect Kubernetes clusters to legacy application platforms. Regardless of the approach taken, however, it’s clear that as a network node, Kubernetes is here to stay.