Creating the right Kubernetes cluster and container environment is a must for security. Here’s how to do it properly.
Kubernetes is an effective tool for constructing highly scalable systems. As a result, numerous organizations have started or plan to use Kubernetes to orchestrate production services. The complexity of Kubernetes, however, makes this task easier said than done.
It’s important to understand how to properly set up Kubernetes clusters and containers to ensure it’s safe to “flip the switch” and open the network floodgates to your services. We’re presenting here a two-part guide for preparing your Kubernetes cluster and container environments for production traffic. Part 1 provides best practices for setting up and organizing your environments. Part 2 dives deeper into using advanced protocols that will enhance stability and security.
Minimal Base Images
Containers are application stacks built into a system image. They include everything from your business logic down to the OS userland (the kernel itself is shared with the host). Minimal images strip out as much of the OS as possible, and you add back only the components you need. Limiting your container to only the necessary software reduces network traffic when images are copied, uses less storage and shrinks your attack surface. A popular choice with broad support is Alpine Linux.
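As a sketch of this pattern, a multi-stage Dockerfile can build on a full toolchain image and ship only the compiled artifact on an Alpine base (the Go version, binary name and paths here are illustrative, not from the original article):

```dockerfile
# Build stage: full toolchain, discarded after the build
FROM golang:1.22-alpine AS build
WORKDIR /src
COPY . .
RUN go build -o /server ./cmd/server

# Final stage: minimal Alpine base containing only the binary
FROM alpine:3.19
RUN adduser -D app          # create and use a non-root user
USER app
COPY --from=build /server /usr/local/bin/server
ENTRYPOINT ["/usr/local/bin/server"]
```

Because the toolchain never reaches the final image, the result is smaller to store and copy, and exposes fewer packages to attack.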
You can find best practices for hardening your containers and images here.
Registries for Images
Clusters need images in order to run workloads, so they rely on registries to store those images and make them available for download and launch. When you write your deployment configuration, you specify where to pull each image from with a registry path:
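For example, a Deployment references an image by its full registry path (the registry host, repository and tag below are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          # <registry host>/<repository>:<tag>
          image: registry.example.com/myteam/web:1.4.2
```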
Image Protection with ImagePullSecrets
A private registry should require authentication before it serves images to your cluster. ImagePullSecrets are Kubernetes objects that allow your cluster to authenticate with your registry, so the registry can be selective about who is able to download your images.
Here is a link to a useful guide to configure your ImagePullSecrets.
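In outline, you create a docker-registry secret with `kubectl create secret docker-registry` and then reference it from the pod spec; the secret and registry names below are placeholders:

```yaml
# Created beforehand with, e.g.:
#   kubectl create secret docker-registry regcred \
#     --docker-server=registry.example.com \
#     --docker-username=<user> --docker-password=<token>
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  imagePullSecrets:
    - name: regcred          # secret holding registry credentials
  containers:
    - name: web
      image: registry.example.com/myteam/web:1.4.2
```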
Organizing Your Environments
The primary benefit of using microservices comes from enforcing separation of duties at the service level, effectively creating abstractions for the various components of your back end. Some good examples are running a database separately from business logic, running separate development and production versions of software, or separating out horizontally scalable processes. The downside of having different services perform different duties is that they cannot be treated as equals. Fortunately, Kubernetes offers several tools to manage this separation:
Namespaces
Namespaces are the most basic, yet most powerful, grouping mechanism in Kubernetes. Most objects are namespace-scoped, so you will use namespaces whether you plan to or not. Namespaces are the perfect option for isolating environments with different purposes. They also allow you to separate the service stacks supporting a single application, such as keeping your security solution's workloads apart from other applications. Divide namespaces by resource allocation: if two sets of microservices will require different resource pools, place them in separate namespaces.
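A namespace, optionally paired with a ResourceQuota, gives each environment its own resource pool. The names and limits here are illustrative:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: staging
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: staging-quota
  namespace: staging       # quota applies only within this namespace
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
```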
Importantly, don’t rely on Kubernetes’ defaults—they’re typically optimized for the lowest amount of friction for developers, which often means omitting all security measures.
Labels
Labels are the most flexible way to organize your cluster, allowing you to attach arbitrary key:value pairs to your Kubernetes objects. Label selectors then let other objects, and your queries, reference groups of objects within a namespace. Since labels are such an open-ended form of organization, keep them simple: only create labels where you require the power of selection.
Labels are a simple metadata field you can add to your YAML manifests:
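For instance, a Service can select pods by their labels; the `app` and `tier` keys below are just examples:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
  labels:
    app: web
    tier: frontend
spec:
  containers:
    - name: web
      image: nginx:1.25
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web        # routes traffic to pods carrying this label
  ports:
    - port: 80
```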
Annotations
Much like labels, annotations are arbitrary key-value metadata you can attach to your pods. However, Kubernetes itself doesn't read or act on annotations, so the rules about what you can annotate a pod with are fairly loose, and annotations can't be used for selection.
Annotations help you track important features of your containerized applications (e.g., version numbers). In the context of Kubernetes alone they are a somewhat inert construct, but they can be an asset to your developers and operations teams when used to record important system changes.
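Annotations live alongside labels in an object's metadata. The keys and values below are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
  annotations:
    # free-form metadata for humans and tooling, not for selection
    example.com/build-version: "1.4.2"
    example.com/last-config-change: "2024-03-01"
spec:
  containers:
    - name: web
      image: nginx:1.25
```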
After the various components of the environment are set up and organized, it’s critical to ensure that these components are secured as they move into production with the necessary controls to maintain stability throughout. In Part 2 of this series on building Kubernetes clusters and containers, we will outline several proactive actions you can take to secure and stabilize your environment.