Sidecars can help streamline application development, but each method has its own security considerations
Typical applications share the need for common functionalities such as logging, monitoring, tracing, configuration and security. These functionalities can either be implemented as part of the application code or run outside the application code as separate processes.
There are pros and cons to each choice, but in a modern cloud-native approach the tendency is to decouple these common tasks from the application's core functionality code. The rationale for this decoupling is to create consistency in how common tasks are handled across the stack, which is essential in large, distributed applications. It also allows flexibility in the choice of programming languages, as it removes the need to maintain a dedicated library for each language.
A container architecture designed for microservices that are maintained separately and written in different languages saves developers from rewriting similar code to fulfill a single function. If, for example, a development team is writing a primary application in Go and an existing functionality for collecting logs and metrics is written in Python, offloading that Python code into a sidecar is more efficient than asking the team to rewrite the functionality in Go. This decoupling of common tasks into an independent, unified service deployed alongside any core application service is known as a “sidecar” architecture.
Sidecars are heavily dependent on the primary application. The peripheral tasks packaged in the sidecar are only realized when it is attached to the main application, so for each instance of the application, a sidecar instance is deployed alongside it. Each peripheral task loaded on the sidecar is a separate functionality that can be added or removed independently, written in any language and updated individually, without affecting the main application code. Sidecars are independent of the main application's runtime and programming language, yet they can access the same resources as the main application. In Kubernetes clusters, sidecars can be deployed as Kubernetes DaemonSets or as sidecar proxies. Each option has pros and cons.
The classical approach in Kubernetes is to use a DaemonSet. A DaemonSet ensures that every node in the cluster (or a selected subset of nodes) runs a copy of a given pod. When you create a pod or container that contains the shared functionalities, such as logging, metrics, performance or configuration, it will run on every node in the cluster and provide those functionalities to the other pods sharing that node.
In practice, when collecting metrics, for example, one DaemonSet pod services all the pods sharing the same node, regardless of their type or functionality, and regardless of whether they belong to a replica set or run independently of each other.
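As a minimal sketch, a node-level log and metrics collector deployed this way might look like the following manifest (the `log-collector` name and image are hypothetical placeholders, not a specific product):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-collector        # hypothetical shared-functionality agent
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: log-collector
  template:
    metadata:
      labels:
        app: log-collector
    spec:
      containers:
        - name: log-collector
          image: example.com/log-collector:1.0   # hypothetical image
          volumeMounts:
            - name: varlog
              mountPath: /var/log
              readOnly: true                     # read node-level logs only
      volumes:
        - name: varlog
          hostPath:
            path: /var/log                       # host path shared by all pods on the node
```

Because the controller is a DaemonSet, Kubernetes schedules exactly one copy of this pod on each node, and every application pod on that node is serviced by the same collector.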
Sidecar proxies provide a more granular approach. The functionalities in the sidecar proxy are delivered to each microservice individually, within its own pod. The proxy container runs inside the pod containing the microservice and carries only the functionalities that microservice needs, keeping the proxy lightweight.
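A hedged sketch of the sidecar-proxy pattern, assuming a hypothetical `orders` microservice and a generic proxy image (both names are placeholders): the two containers share the pod's network namespace, so the proxy can intercept the service's traffic on localhost.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: orders
  labels:
    app: orders
spec:
  containers:
    - name: orders                          # primary microservice container
      image: example.com/orders:1.0         # hypothetical image
      ports:
        - containerPort: 8080
    - name: proxy                           # sidecar proxy, scoped to this pod only
      image: example.com/proxy:1.0          # hypothetical image
      ports:
        - containerPort: 15001              # proxy listener (placeholder port)
```

In a service mesh, this proxy container is typically injected automatically into each pod rather than declared by hand, but the resulting pod spec has the same shape.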
DaemonSet vs. Sidecar Proxy
Structural considerations: In an environment where sidecar containers are highly compartmentalized—for example, one container for logging, another for metric collection and another for performance—each pod has to carry three sidecar containers. This leads to inefficient utilization of resources, as the bulk of the resources are spent duplicating the same common tasks instead of serving the core application. Using a DaemonSet instead of multiple containers per pod is more efficient in such cases.
Availability: Deploying a new sidecar container requires restarting the entire pod. With multiple containers in each pod, maintaining delivery efficiency requires tight synchronization between the DevOps team focused on the core services and the team working on the common tasks. Such synchronization is difficult to achieve, and when development cycles fall out of step, deploying a new DaemonSet or updating an existing one can cause downtime.
Security With DaemonSets
With DaemonSets, security settings can be configured in detail at the container level: privilege definitions, volume access rights, resource allocation, binary authorization and anything else related to container deployment.
However, in a DaemonSet environment, containers run as privileged containers. Although this simplifies validating container deployment and protecting the container's host, it complicates monitoring container behavior, as identical pods (i.e., ReplicaSets) may run on different nodes. When similar policies are applied to multiple containers, lateral movement between containers by malicious actors is a permanent risk. DaemonSets also fail to address network-based isolation and tunnel encryption for inter-node communication.
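The container-level settings described above map directly onto the pod template's `securityContext` and `resources` fields. A hedged fragment of a DaemonSet pod template (the `node-agent` name and image are hypothetical), showing a typical privileged node agent with its blast radius reduced where possible:

```yaml
# Fragment of a DaemonSet pod template (names are placeholders)
spec:
  containers:
    - name: node-agent
      image: example.com/node-agent:1.0
      securityContext:
        privileged: true                # common for node-level DaemonSet agents
        readOnlyRootFilesystem: true    # lock the container's own filesystem
      resources:
        limits:                         # cap the agent's resource allocation
          cpu: 200m
          memory: 256Mi
      volumeMounts:
        - name: host-logs
          mountPath: /var/log
          readOnly: true                # restrict volume access rights
```

Even when `privileged: true` is unavoidable, constraining filesystem writes, volume access and resource limits narrows what a compromised agent can do on the node.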
Security With Sidecar Proxies
To secure the network layer, sidecar proxies are ideal. Offloaded from the main application, they:
- Are language-agnostic, removing the need to adapt encryption libraries to every language in the stack.
- Enable the creation of unified and/or target-specific policies and privileged access.
- Manage tunneling encryption.
- Manage intra-cluster communications.
However, these proxies lack the DaemonSet ability to monitor and validate security settings at the container level.
Maximizing Container Security
To benefit from the best of sidecar proxy and DaemonSet security features, you can use a Kubernetes native mechanism called an admission controller. Combining a dedicated admission controller with a sidecar proxy can create a holistic security suite that addresses the full spectrum of container threats.
With a Kubernetes admission controller, users can set fine-grained authorizations for pod creation and deployment. At the container level, it can be leveraged to block containers from running as root or to ensure that the container's root filesystem is locked in read-only mode. It can also limit image pulls to specific approved registries and deny unknown image registries.
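Recent Kubernetes versions can express such rules natively through a `ValidatingAdmissionPolicy` with CEL expressions (older clusters would use a validating webhook or a policy engine instead). A sketch of the non-root and read-only-filesystem checks described above; the policy name is a placeholder, and a separate `ValidatingAdmissionPolicyBinding` is still required to put it into effect:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: pod-hardening                    # hypothetical policy name
spec:
  matchConstraints:
    resourceRules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["pods"]
  validations:
    # Block containers from running as root
    - expression: >-
        object.spec.containers.all(c, has(c.securityContext) &&
          has(c.securityContext.runAsNonRoot) && c.securityContext.runAsNonRoot)
      message: "Containers must not run as root."
    # Ensure the container's root filesystem is read-only
    - expression: >-
        object.spec.containers.all(c, has(c.securityContext) &&
          has(c.securityContext.readOnlyRootFilesystem) &&
          c.securityContext.readOnlyRootFilesystem)
      message: "Container root filesystems must be read-only."
```

Image-registry restrictions follow the same pattern, with an expression matching each container's `image` field against an approved-registry prefix.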
Utilizing a Kubernetes Admission Controller and Service Mesh Controller
To enhance runtime security, the use of a dedicated admission controller provides for the management of critical security features such as:
- Binary Authorization: A policy-enforcement chokepoint limiting deployment in your environment to signed and authorized images
- Continuous vulnerability scanning: Before and after deployment, continuous scanning checks for vulnerabilities beyond a predefined threshold
- Configuring Pod Security Policies (PSPs) as part of pod deployment settings
- Governing pod deployment with SELinux, seccomp and AppArmor
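For illustration, a restrictive PSP of the kind referenced above looked roughly like the sketch below (note that PSPs have since been deprecated and removed from Kubernetes in favor of Pod Security admission, so this is historical syntax; the `restricted` name is a placeholder):

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted                      # hypothetical policy name
  annotations:
    # Default seccomp and AppArmor profiles governing pod deployment
    seccomp.security.alpha.kubernetes.io/defaultProfileName: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false                     # forbid privileged containers
  readOnlyRootFilesystem: true
  runAsUser:
    rule: MustRunAsNonRoot              # block root containers
  seLinux:
    rule: RunAsAny                      # rely on the node's SELinux config
  fsGroup:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:                              # restrict allowed volume types
    - configMap
    - secret
    - emptyDir
```

On current clusters, the same intent is expressed with the Pod Security Standards (`restricted` profile) enforced per namespace, or with an admission-controller policy as shown earlier.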
Next-generation Kubernetes workload protection solutions start upstream, in the CI/CD pipeline, automatically identifying legitimate workloads. Runtime policies ensure only these workloads are deployed to clusters. As such, app security is simplified and accelerated by replacing multiple fragmented firewalls, security groups and ACLs with automated identity-based workload security that is decoupled from the network infrastructure.