With the rise in popularity of containers, development and DevOps paradigms are experiencing a massive shift. As a result, security admins increasingly struggle to figure out how to secure this new class of assets and the environments they reside in. Unfortunately, introducing good security hygiene into the container ecosystem, such as Docker containers for a SaaS application, is not a simple task. It means integrating security into the container life cycle and ensuring that security is considered and implemented at each stage of the container pipeline.
What Are Containers, and What Benefits Do They Provide?
At a high level, containers are lightweight, self-contained application bundles that include everything needed to run the application: the code, runtime, system tools, system libraries and settings. Unlike virtual machines, containers share the kernel of the host OS they run on, so they don’t need to bundle a full operating system and tend to be small. Also unlike VMs, they start up almost instantly.
Containers themselves are instances (running or stopped) of container images, which consist of multiple image layers. The bottom layer is usually a parent image layer that includes the OS, such as Ubuntu, CentOS, or Alpine Linux. Each subsequent layer in the image is a set of differences from the layers before it. All image layers are read-only.
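This layered structure is easy to see with Docker’s own tooling. A quick sketch (the `ubuntu:22.04` image name is purely illustrative):

```shell
# Show the read-only layers that make up an image; each line
# corresponds to an instruction in the image's Dockerfile.
docker image history ubuntu:22.04

# List the layer digests directly from the image metadata.
docker image inspect --format '{{json .RootFS.Layers}}' ubuntu:22.04
```

These commands require a local Docker daemon and the image to be present (or pullable).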
A container’s life cycle consists of three phases: build, distribute and run. As a security admin, you must understand the container life cycle at your organization and dig into each of these phases at a granular level. I would recommend mapping out the life cycle for your own organization, including the specific tools and technologies used to build, distribute and run your containers. You may choose a pipeline-based build approach, which means that container creation, distribution and deployment are automated from the moment code is checked in to when it is deployed in production.
Securing the Pipeline
To secure the pipeline, the first thing we can do is bring a security assessment tool into the build process. All images should be pushed to that utility for an assessment of vulnerabilities and misconfigurations. Based on a policy of your organization’s choosing, the image should pass or fail; only passed images should be pushed into a production-ready private image registry. All connections to that registry should be TLS-enabled to protect images in transit. It’s also a good idea to set up authentication on, and continuous monitoring of, those registries.
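As a sketch of that gating step, a build job might scan each candidate image and push it only when the scan passes. Trivy is used here purely as an example scanner, and the registry hostname and tag scheme are hypothetical:

```shell
#!/bin/sh
set -e

# Hypothetical private registry and build tag.
IMAGE="registry.example.com/myapp:${BUILD_ID:-dev}"

# Build the candidate image.
docker build -t "$IMAGE" .

# Fail the pipeline if the scanner finds HIGH or CRITICAL issues;
# --exit-code 1 makes trivy exit non-zero on findings, which aborts
# the script because of `set -e`.
trivy image --exit-code 1 --severity HIGH,CRITICAL "$IMAGE"

# Only images that passed the gate reach the private registry.
# docker push talks to the registry over TLS (HTTPS) by default.
docker push "$IMAGE"
```

The severity threshold is where your organization’s pass/fail policy would be encoded.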
The key lesson here is separation of duties: don’t use your CI tool for container deployment. Use it to build images and push them to a registry, then use a separate orchestrator to pull those images and deploy them. You should also assess your orchestrator against benchmarks such as those from CIS to ensure that it’s configured securely.
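One hedged sketch of that split, assuming Kubernetes as the orchestrator and a hypothetical deployment and image name:

```shell
# CI's job ends at the registry: build and push only.
docker build -t registry.example.com/myapp:v1.2.3 .
docker push registry.example.com/myapp:v1.2.3

# A separate orchestrator step (Kubernetes here) pulls and deploys
# the image; "myapp" and the tag are illustrative.
kubectl set image deployment/myapp myapp=registry.example.com/myapp:v1.2.3
kubectl rollout status deployment/myapp
```

Keeping the deploy credentials out of the CI system limits the blast radius if the build environment is compromised.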
Securing the Stack
Next up is ensuring that you’ve secured the entire container stack. What I’m calling the stack in this case is all of the layers or components involved with a running container on a host system. This starts with securing the platform itself: ensuring that your AWS or Azure accounts, for example, are configured securely. You should ideally use an automated assessment tool that can continually check your accounts for compliance with best practices.
While it’s probably not surprising to know you need to secure the host OS that the containers are running on, it’s critical all the same.
To reduce the attack surface as much as possible, your host OS should be designed for the sole purpose of running containers. It should be lean. That means no services running and no packages installed that aren’t specifically used for running your containers.
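A quick way to check how lean a host actually is (these commands assume a systemd-based Linux host; the RPM equivalent is noted in a comment):

```shell
# List every service currently running; on a dedicated container host
# this list should be short and container-related.
systemctl list-units --type=service --state=running

# Count installed packages (Debian/Ubuntu shown; use `rpm -qa | wc -l`
# on RHEL-family hosts).
dpkg -l | grep -c '^ii'

# Anything listening on the network that isn't the container runtime
# or an orchestrator agent is worth questioning.
ss -tlnp
```

Purpose-built container operating systems keep these lists minimal by design.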
Additionally, as Docker itself is composed of multiple components, all of these elements need to be hardened. CIS has worked with the Docker community to create a benchmark that includes best practices for securing both the Docker daemon and the container runtime.
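One way to check a host against that benchmark is the community’s Docker Bench for Security script, which runs the CIS Docker Benchmark tests against the local daemon and its containers:

```shell
# Fetch and run Docker Bench for Security
# (github.com/docker/docker-bench-security).
# It must run on the Docker host itself, with root privileges.
git clone https://github.com/docker/docker-bench-security.git
cd docker-bench-security
sudo sh docker-bench-security.sh
```

The script reports pass/warn results per benchmark item, which maps cleanly onto a hardening checklist.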
Securing the Life Cycle
How do you fix discovered vulnerabilities or misconfigurations in your running containers?
Ideally, you would strive to take an immutable approach to your container strategy. This means that you should never make changes directly to your running containers. Don’t change configuration settings, don’t install new packages, and don’t upgrade existing packages (even to fix a security vulnerability). The containers you have running in production should be exactly what you expect them to be based on the images that went through your container pipeline. You should also consider making your containers read-only, tracking the uptime of your running containers in production, and periodically destroying running containers and replacing them with newly spun-up ones to keep them fresh.
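Docker supports this approach directly. A hedged sketch with illustrative container and image names:

```shell
# Run the container with a read-only root filesystem. Applications that
# need scratch space get a tmpfs mount instead of a writable rootfs.
docker run -d --read-only --tmpfs /tmp \
  --name myapp \
  registry.example.com/myapp:v1.2.3   # illustrative image

# To "upgrade", never exec in and patch; destroy and replace instead,
# using a new image that went through the pipeline.
docker rm -f myapp
docker run -d --read-only --tmpfs /tmp \
  --name myapp \
  registry.example.com/myapp:v1.2.4
```

The `--read-only` flag makes in-place tampering fail loudly, which turns many compromises into visible errors.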
Once you have running containers in production, you need to continually assess those containers to check for drift from a known baseline.
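For a simple drift check, Docker can report every file added (A), changed (C) or deleted (D) in a running container relative to the image it was started from; the container name here is illustrative:

```shell
# Show filesystem drift of a running container versus its image.
# Output lines look like "A /etc/new.conf" or "C /var/lib/app".
docker diff myapp
```

An empty result means the container’s filesystem still matches its image; any unexpected output is drift worth investigating, or grounds for replacing the container outright.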
I’ve touched on a lot of different components to consider when securing your own container stacks, life cycles and pipelines. Hopefully, I’ve got you thinking far beyond securing just the containers themselves so that you can start the process (if you haven’t already) of mapping out your own container life cycles and pipelines so you can begin developing a more comprehensive strategy for securing your containers.
About the Author / Gabe Authier
Gabe Authier is a senior product manager at Tripwire, a leading provider of security, compliance, and IT operations solutions for enterprises, industrial organizations, service providers, and government agencies. He has over 15 years of experience in product management and information technology, with certifications in Agile practices and the Pragmatic Marketing methodology, and is passionate about software development that brings solutions to the marketplace to solve customer problems. Gabe received a BS in Systems Engineering from the University of Arizona and an Executive MBA from the University of Oregon.