Cloud-native is more than a trend—it’s estimated that by 2023, the majority of new applications will adopt cloud-native components. One aspect of cloud-native is connectivity: networking is essential to enabling containers to talk to one another, especially since a single application may comprise multiple containers. And it’s not only containers within a single cluster you have to worry about—architects may want to combine components from multiple clusters, which might sit in various clouds or on-premises locations.
Cloud-native networking can help ease this tricky process by creating virtual overlay networks on top of existing networks. And if you’re creating an overlay network for your cloud-native communication, many open source tools are at your disposal. Below, we review some of the projects hosted by the Cloud Native Computing Foundation (CNCF) around cloud-native networking. The tools below use the Container Network Interface (CNI), a CNCF-hosted project that standardizes how network interfaces are configured for containers.
Antrea
Kubernetes networking based on Open vSwitch
Antrea is an open source Kubernetes-native project that can be used to create overlay networks and enforce comprehensive policies around Kubernetes cluster communication. The tool uses Kubernetes NetworkPolicies and is built upon Open vSwitch, an open multilayer virtual switch designed to enable network automation at massive scale. The most recent Antrea release adds TrafficControl capabilities and new certificate support, along with other features. Antrea is an extensible and programmable option for creating cloud-native overlay networks. At the time of writing, Antrea is a sandbox project hosted by the CNCF.
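Because Antrea enforces the standard Kubernetes NetworkPolicy API, a minimal policy gives a feel for the kind of rules it applies. The namespace, labels and port below are illustrative, not from the Antrea docs:

```yaml
# Illustrative NetworkPolicy: allow ingress to "backend" pods
# only from "frontend" pods in the same namespace on TCP 8080.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
  namespace: demo
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```

Antrea translates policies like this into Open vSwitch flow rules on each node.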
Cilium
eBPF-based networking, security, and observability
Cilium is open source software that provides networking between container workloads and excels at cross-cluster connectivity. It also comes with observability and security features. Cilium’s data plane uses eBPF for load balancing, and Cilium Cluster Mesh provides connectivity between nodes across multiple clusters. Cilium is highly scalable and powers cloud-native networking in offerings such as Google Kubernetes Engine (GKE) and AWS EKS Anywhere, and is used by companies such as GitLab. At the time of writing, Cilium is an incubating-level project within the CNCF.
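As one sketch of Cluster Mesh in action, Cilium lets you mark a Service as “global” so that identically named services in connected clusters are load-balanced together. The service name, selector and port below are illustrative:

```yaml
# Hypothetical Service marked global for Cilium Cluster Mesh:
# endpoints of same-named services in all connected clusters
# are load-balanced together when this annotation is set.
apiVersion: v1
kind: Service
metadata:
  name: demo-backend
  namespace: default
  annotations:
    service.cilium.io/global: "true"
spec:
  type: ClusterIP
  selector:
    app: demo-backend
  ports:
    - port: 80
```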
Container Network Interface (CNI)
Interface specification for container networking
As briefly mentioned above, Container Network Interface (CNI) is a networking interface for Linux containers. This incubating-level CNCF project provides libraries and a specification for writing plugins that configure network interfaces in containers. CNI is not explicitly tied to Kubernetes; instead, it was created as an agnostic interface for any container runtime or network. Many different container runtimes use CNI, such as Mesos, Cloud Foundry and CRI-O. A cloud-native infrastructure will typically work with CNI through one of many plugins, which the runtime calls to add a network interface to a container. For further information, read the CNI specification or watch one of the project’s introductory talks.
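Under the CNI specification, a network configuration is a small JSON document that the runtime hands to the plugin. A minimal sketch using the reference `bridge` and `host-local` IPAM plugins might look like the following; the network name, bridge name and subnet are illustrative:

```json
{
  "cniVersion": "1.0.0",
  "name": "examplenet",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16",
    "routes": [{ "dst": "0.0.0.0/0" }]
  }
}
```

The runtime invokes the named plugin with this configuration (plus environment variables such as the container’s network namespace path) on ADD and DEL operations.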
CNI-Genie
A tool that allows multiple CNI plugins to co-exist at runtime
Now that we understand CNI, check out CNI-Genie. This tool helps operators work with multiple CNI plugins at deployment time, making it possible to connect with whatever CNI plugin is installed on the host. This might include reference plugins, third-party CNI plugins or other specialized CNI plugins. Using CNI-Genie, a container orchestrator is no longer bound to a single CNI plugin. With this flexibility, engineers can optimize for unique performance or application requirements. This layer also helps eliminate redundancy in working with multiple CNI plugins at runtime. CNI-Genie is a sandbox project within the CNCF.
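As a sketch of how this looks in practice, CNI-Genie’s documentation describes selecting plugins through a pod annotation. Assuming Flannel and Weave are already installed on the host, a pod could request an interface from each:

```yaml
# Sketch of CNI-Genie plugin selection via a pod annotation,
# per the CNI-Genie docs; the listed plugins must already be
# installed on the host for Genie to delegate to them.
apiVersion: v1
kind: Pod
metadata:
  name: multi-cni-demo
  annotations:
    cni: "flannel,weave"  # one interface per listed plugin
spec:
  containers:
    - name: app
      image: nginx
```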
Kube-OVN
A Kubernetes network fabric for enterprises
Kube-OVN is open source software that provides a scalable network fabric for large organizations. As its name suggests, the package integrates Kubernetes with Open Virtual Network (OVN) and Open vSwitch (OVS) based networking, which is popular in traditional virtualization. Using Kube-OVN, cloud-native operators can assign a unique subnet to each Kubernetes namespace and then accept or deny traffic from specific IP addresses. Kube-OVN allocates IP addresses, supports multi-cluster networking and multi-tenancy, and comes with many other networking capabilities. It also integrates with Cilium, mentioned above, to bring security and observability. Intel, Huawei, ByteDance and others are using Kube-OVN in practice. At the time of writing, Kube-OVN is hosted by CNCF as a sandbox project.
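Kube-OVN models subnets as a Kubernetes custom resource. As a sketch, assuming the `kubeovn.io/v1` Subnet CRD, binding a dedicated subnet to one namespace might look like this; the CIDR, gateway and namespace name are illustrative:

```yaml
# Sketch of a Kube-OVN Subnet dedicated to one namespace;
# pods created in "team-a" would receive IPs from this range.
apiVersion: kubeovn.io/v1
kind: Subnet
metadata:
  name: team-a-subnet
spec:
  cidrBlock: 10.66.0.0/16
  gateway: 10.66.0.1
  namespaces:
    - team-a
```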
Network Service Mesh (NSM)
A hybrid multi-cloud IP service mesh
Network Service Mesh (NSM) is defined as a hybrid multi-cloud IP service mesh. The overarching problem NSM attempts to solve is that different runtime environments typically use different networking styles. Kubernetes, for example, handles intra-cluster communication through CNI; virtual machines rely on virtual networking; and on-premises data centers use data center networking. This is further complicated by multi-cloud setups that take varying networking approaches.
To connect these disparate networks, NSM provides a network service that is agnostic of the runtime. This enables pods in separate environments to be networked together, and it can extend to on-premises services and virtual machines. At the time of writing, Network Service Mesh is a sandbox project within the CNCF.
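As a rough sketch based on the project’s examples, a client pod can request attachment to a network service through a pod annotation; the service name and connection identifier below are hypothetical:

```yaml
# Heavily hedged sketch of an NSM client pod requesting a kernel
# interface into a named network service, following the annotation
# style used in NSM's examples; names here are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: nsm-client
  annotations:
    networkservicemesh.io: kernel://my-vpn-service/nsm-1
spec:
  containers:
    - name: app
      image: alpine
      command: ["sleep", "infinity"]
```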
Submariner
Direct, multi-cluster networking for hybrid and multi-cloud
Submariner is another project that aims to provide a standard networking layer for multiple Kubernetes clusters, regardless of location. This can enable a single application to span numerous clouds or on-premises data centers. Submariner is also CNI plugin-agnostic, meaning it can work with most CNI cluster network providers, such as Flannel, Calico, Weave and OpenShiftSDN. At the time of writing, Submariner is a sandbox project within the CNCF.
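As a sketch of the workflow, Submariner’s `subctl` CLI first deploys a central broker and then joins each cluster to it; the kubeconfig paths and cluster IDs below are illustrative:

```shell
# Deploy the broker to one cluster; this writes broker-info.subm locally.
subctl deploy-broker --kubeconfig broker-cluster.kubeconfig

# Join each participating cluster to the broker.
subctl join broker-info.subm --kubeconfig east-cluster.kubeconfig --clusterid east
subctl join broker-info.subm --kubeconfig west-cluster.kubeconfig --clusterid west
```

Once joined, gateways in each cluster establish encrypted tunnels so services can reach pods in the other clusters.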
Networking for distributed cloud-native architecture
Using the above tools, engineers can construct CNI-compatible networks and overlay networks on top of pre-existing cloud-native and legacy network architectures. Organizations can reap the rewards of increased connectivity in a distributed architecture by uniting these modes. Such networking strategies will likely continue to be a pressing goal for many groups as they mature their distributed container-based development practices.
Above, we’ve compared various tools that can help construct a container network for your Kubernetes environment. If you enjoyed this article, be sure to check out our roundups of other CNCF areas, such as CI/CD, persistent storage, orchestration, configuration and key management.