Containers promise to help software development organizations save time and money by providing a mechanism for making applications much more portable across development, testing, and deployment environments. The ubiquitous nature of containers also introduces challenges for enterprise adoption, including those around container security. To this point, a Forrester thought leadership study commissioned by Red Hat revealed that 53 percent of IT operations and development decision-makers identified security as their highest concern for adopting containers.
Companies looking to adopt containers should examine closely how they plan to secure them, focusing on provenance, container contents, isolation, and trust.
Use trusted sources
More than 30 percent of official images in the Docker Hub contain high-priority security vulnerabilities, according to a May 2015 study by BanyanOps. Certification (by digital signatures, for example) adds a level of security by confirming who created a container and for what purpose.
When you have a record of ownership for a work of art or an antique, you can validate its authenticity and quality. Introducing untrusted container contents into your datacenter can do far more damage than hanging a forged piece of artwork in the office. This is why Red Hat and other industry leaders are working to establish standards and practices around container certification. Certification helps to ensure that:
- All components come from trusted sources
- Platform packages have not been tampered with and are up-to-date
- The container image is free of known vulnerabilities in the platform components and layers
- Containers are compatible and will run across certified host environments
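At the core of signature- and digest-based certification is a simple check: the content you pulled must hash to the value a trusted party published. The sketch below illustrates that idea in Python; the function name and inputs are illustrative, not any particular registry's API.

```python
import hashlib

def verify_image_digest(image_bytes: bytes, published_digest: str) -> bool:
    """Compare a locally computed SHA-256 digest against the digest
    published by a trusted source (e.g., a signed registry manifest)."""
    local_digest = "sha256:" + hashlib.sha256(image_bytes).hexdigest()
    return local_digest == published_digest

# Example: a layer whose trusted digest was obtained out of band.
layer = b"example layer contents"
trusted = "sha256:" + hashlib.sha256(layer).hexdigest()

print(verify_image_digest(layer, trusted))        # unmodified content passes
print(verify_image_digest(b"tampered!", trusted)) # tampered content fails
```

In practice this comparison is wrapped in a signature scheme, so the published digest itself can be traced back to the publisher's key rather than taken on faith.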
What’s INSIDE the container matters
Verifying where a container came from is part of the battle, but verifying what is inside the container image is as important, if not more so. Similar to the way that deep packet inspection looks inside network packets for malicious content, Deep Container Inspection (DCI) looks beyond the container image format to the container content. Having visibility into the code inside your containers is critical to maintaining security during and after development.
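The core of content inspection can be sketched simply: enumerate the packages inside an image and match them against an advisory feed. The feed and package data below are hypothetical stand-ins, not a real vulnerability database.

```python
# Hypothetical advisory feed mapping (package, version) to a known issue.
KNOWN_VULNERABLE = {
    ("openssl", "1.0.1e"): "CVE-2014-0160 (Heartbleed)",
    ("bash", "4.2.45"): "CVE-2014-6271 (Shellshock)",
}

def inspect_packages(installed: dict) -> list:
    """Return advisories matching packages installed in an image."""
    findings = []
    for name, version in installed.items():
        advisory = KNOWN_VULNERABLE.get((name, version))
        if advisory:
            findings.append(f"{name}-{version}: {advisory}")
    return findings

# Packages discovered inside a (hypothetical) image.
image_packages = {"openssl": "1.0.1e", "bash": "4.3.30", "glibc": "2.17"}
for finding in inspect_packages(image_packages):
    print(finding)
```

A production inspector must also walk image layers, since a patched package in an upper layer can mask a vulnerable one below, which is why DCI looks past the image format to the actual content.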
Once you have container-based applications made up of trusted containers, you need to ensure that they are not compromised by other container images on the same host. The reality is that containers do not actually “contain” applications the way armored cars contain money. It’s more accurate to say that containers package an application’s code along with its dependencies.
If you imagine containers as having walls, realize that those walls are thin. Malicious content in one container can break through to another container or the host operating system. Every single process running inside a container talks directly to the host kernel. For all containers on that host, the kernel acts as a single point of failure. A vulnerability inside the Linux kernel could allow those with access to one container to take over the host OS and all other containers on that host.
This is why it’s important to rely on a host OS that is maintained by trusted kernel engineers and frequently updated with the latest security fixes. Containers built on a weak host inherit the compromised security model of that host. The kernel should include functionality that provides appropriate levels of isolation and separation, such as SELinux, seccomp, and namespaces.
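To make the separation concrete: a seccomp profile tells the kernel which system calls a container's processes may invoke, shrinking the attack surface a single kernel presents to every container. The fragment below is a toy illustration in the shape of Docker's seccomp profile JSON; a usable allowlist would be far longer.

```json
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "syscalls": [
    {
      "names": ["read", "write", "exit", "exit_group", "futex"],
      "action": "SCMP_ACT_ALLOW"
    }
  ]
}
```

With a default action of `SCMP_ACT_ERRNO`, any system call not explicitly allowed fails with an error instead of reaching the shared kernel.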
Containerized applications require the same security precautions as traditional applications. Many organizations are looking at the combination of containers and virtualization for securing multi-tenant environments.
Container trust is temporal
While you might have trusted a container image when it was first produced, that same container and its contents become stale over time. New vulnerabilities are identified daily, and your container image is only as secure as the code and dependencies it contains. For example, Red Hat identified and fixed 66 critical, important, and moderate vulnerabilities in the Java Runtime Environment over a 315-day period. It only takes one vulnerability to compromise your container and, potentially, your entire infrastructure stack.
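One simple policy that follows from this is to flag images that have not been rebuilt recently, since an old build cannot contain fixes for vulnerabilities disclosed after it was made. The threshold and image names below are hypothetical examples, not a recommended standard.

```python
from datetime import date, timedelta

MAX_AGE = timedelta(days=30)  # example rebuild policy, not a standard

def stale_images(images: dict, today: date) -> list:
    """Return names of images last built more than MAX_AGE before today."""
    return [name for name, built in images.items()
            if today - built > MAX_AGE]

# Last-build dates for two hypothetical images.
builds = {
    "web-frontend": date(2015, 6, 1),
    "auth-service": date(2015, 3, 12),
}
print(stale_images(builds, today=date(2015, 6, 15)))  # → ['auth-service']
```

Age is only a proxy, of course; pairing a rebuild policy with regular content inspection catches vulnerable images regardless of when they were built.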
While speed and agility are key drivers for container adoption in the enterprise, they should not be achieved at the expense of security. This is why enterprise-level DCI, combined with certification, policy, and trust, will be integral to the development, deployment, and management of containers. To make the most of the benefits that containers offer while ensuring the security of the containers and their contents, organizations need better ways to determine container:
- Provenance. Before moving a container onto your network, be sure you know what’s inside and where it originated. Investigate validation technology and certification options for trusting container sources.
- Contents. Deep container inspection can look beyond the container image format to the container content to identify and mitigate vulnerabilities.
- Isolation. Consider isolating the container’s execution path using SELinux. In multitenant environments, consider coupling containers with virtualization for an added layer of protection.
- Trust over time. Inspect your container contents regularly to validate and mitigate security risks. Use runtime tools like those in Red Hat OpenShift Enterprise to detect and patch vulnerabilities.
Kimberly Craven, senior product marketing manager, Platforms and Containers, Red Hat