Regardless of the IT infrastructure element in question, if it connects to a network – even through proxies or human intermediaries – someone will eventually find a security flaw and exploit it. Compatibility issues with new applications or infrastructure elements will emerge. For these reasons, all IT infrastructure elements need to be managed, without exception.
Despite the sometimes extraordinary hype it receives, containerization is not a magic technology. Like every other IT infrastructure element, containers need to be managed.
Management of containers is about more than simply deploying and destroying containers. Containers and their contents have diverse management needs across the entire lifecycle:
- Imaging time: Before a container is deployed, the containerization software itself needs to be managed. Administrators need to know that the containerization software is up-to-date and stable, that it is running on an adequately secured and properly configured host, and that monitoring is in place to detect if any of these variables change.
The contents of containers (a.k.a. images) need to be examined. Images might include third-party libraries as well as custom-developed code. The libraries can be an attractive attack surface – for well-known libraries, there are many individuals and organizations dedicated to uncovering and exploiting vulnerabilities within them, some purely for nefarious purposes.
The libraries may need to be updated. Custom code needs to be examined for errors and vulnerabilities. Checks need to be run on the images prior to deployment. Administrators and/or developers need to be made aware of any issues and updates need to be performed.
All this needs to be done with extraordinary care so as not to impose a “security tax” on the DevOps efficiency enabled by containers.
- Runtime: For containers in production, you need to run checks periodically (or as needed) to ensure that libraries are still up to date and that newly discovered vulnerabilities do not affect your running containers.
In addition, you must monitor production containers to detect anomalies and compromises, a task made more complex because containers obscure the applications inside and have, for all intents and purposes, ruled out agent-based approaches.
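Both the imaging-time and runtime library checks described above reduce to the same core operation: comparing the library versions pinned in an image against a feed of known-vulnerable releases. A minimal sketch follows; the advisory data, manifest format, and function names here are illustrative assumptions, not any real vulnerability feed or scanner API.

```python
# Sketch: flag image libraries with known-vulnerable versions.
# ADVISORIES is a stand-in for a real vulnerability feed (e.g. CVE data).

# Hypothetical advisories: package name -> set of vulnerable versions
ADVISORIES = {
    "openssl": {"1.0.1f", "1.0.1g"},   # e.g. Heartbleed-era releases
    "libpng": {"1.6.19"},
}

def scan_image_manifest(manifest):
    """Return (package, version) pairs in the image that match an advisory."""
    findings = []
    for package, version in manifest.items():
        if version in ADVISORIES.get(package, set()):
            findings.append((package, version))
    return findings

# Example image manifest: package name -> pinned version
manifest = {"openssl": "1.0.1f", "nginx": "1.9.9", "libpng": "1.6.21"}
print(scan_image_manifest(manifest))  # flags only the vulnerable openssl build
```

Run at imaging time, this gates deployment; run periodically in production against a refreshed advisory feed, it catches vulnerabilities discovered after the image shipped.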
Automation Is Key
Managing and securing containers is simple when there are only a handful of containers in play. But automation becomes key when you have hundreds of thousands of containers across thousands of servers.
Even if most of the containers in use are copies of a single central master, automation still plays a central role, because a simple change in the version of a library can have dramatic effects. As one example, consider changing the encryption a container uses to communicate with clients.
Changing the encryption in use could alter the resource consumption per request or transaction. It could also change the amount of data sent with each request or transaction. If a security layer is in place serving as a man-in-the-middle inspection point – decrypting the secure data streams and scanning them for compromises – the change in encryption could radically alter the resource consumption of those services or even break connectivity with them altogether.
For container security, you need to consider automation in two aspects:
Automated monitoring/testing: Live monitoring of containers and automated analysis of the monitoring results are crucial for security. Because containerized applications can be compromised and new vulnerabilities can arise, you must continuously monitor containers in production for out-of-date libraries, binaries that don’t match their expected hashes, and unexpected network traffic patterns.
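The hash check mentioned above can be sketched with nothing more than Python’s standard `hashlib`. The baseline format and file paths below are illustrative assumptions; a real system would capture the baseline at image build time and read the running container’s filesystem.

```python
# Sketch: detect binaries whose contents drift from the image-build baseline.
import hashlib

def sha256_bytes(data: bytes) -> str:
    """Hex SHA-256 digest of raw file contents."""
    return hashlib.sha256(data).hexdigest()

def detect_drift(baseline, running):
    """Compare running file contents against baseline hashes; return changed paths."""
    drifted = []
    for path, expected_hash in baseline.items():
        if sha256_bytes(running[path]) != expected_hash:
            drifted.append(path)
    return drifted

# Baseline captured when the image was baked (path -> expected hash)
baseline = {"/usr/bin/app": sha256_bytes(b"original binary")}
# Contents observed in the running container; here the binary was tampered with
running = {"/usr/bin/app": b"tampered binary"}
print(detect_drift(baseline, running))  # ['/usr/bin/app']
```

Because images are immutable once baked, any drift from the baseline is suspicious by definition, which is what makes this check so cheap and so decisive for containers.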
Detecting compromise is not easy. Containers make it harder still, because the whole purpose of containers is to isolate both executable code and the data that code operates on from all other elements of the infrastructure. Traditional methods like running an agent inside the container won’t work, as you can’t easily update such an agent with new content.
One saving grace of containerized computing is that once an image is baked, it’s immutable (save for a small amount of metadata and runtime overhead). This means that for standard application images, such as MongoDB or NGINX, one can predict runtime behaviour fairly accurately. That can lead to more targeted security monitoring at runtime and potentially more precise results. However, you will still need technology to automate the derivation of expected behaviour and incorporate it into runtime monitoring.
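Deriving expected behaviour can be as simple as recording the events a clean image produces during observation runs and flagging anything outside that set in production. The event labels, image, and thresholds below are purely illustrative assumptions; real systems profile syscalls, processes, and network flows.

```python
# Sketch: derive an expected-behaviour profile for a standard image and flag
# deviations at runtime. Event names here are illustrative only.

def build_profile(observed_runs):
    """Union of events seen across clean observation runs of the image."""
    profile = set()
    for run in observed_runs:
        profile.update(run)
    return profile

def anomalies(profile, live_events):
    """Events in the live container that the clean profile has never seen."""
    return sorted(set(live_events) - profile)

# e.g., a MongoDB-like image observed during staging
clean_runs = [
    {"listen:27017", "exec:mongod", "read:/data/db"},
    {"listen:27017", "exec:mongod", "write:/data/db"},
]
profile = build_profile(clean_runs)

# Live container spawns a shell and makes an unexpected outbound connection
live = {"listen:27017", "exec:mongod", "exec:/bin/sh", "connect:198.51.100.7:4444"}
print(anomalies(profile, live))
```

The tight, predictable behaviour of a single-purpose image is what keeps the false-positive rate of such set-difference monitoring manageable; the same approach on a general-purpose VM would drown in noise.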
Automated remediation/update: Once a compromise is detected, administrators and/or developers need to be alerted. Compromised containers need to be isolated, disabled, or taken down immediately, with clean or patched containers returned to service in their place.
Automated update is easier said than done. Rolling out updated containers across thousands or tens of thousands of servers is no trivial feat. For these reasons, container security must integrate tightly with container orchestration capabilities so that security remediation tasks can be carried out seamlessly.
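The quarantine-and-replace loop that such integration enables can be sketched against a minimal in-memory stand-in for an orchestrator. The `Orchestrator` class and its `replace` method are hypothetical; a real deployment would drive the orchestrator’s actual API (e.g. a Kubernetes rolling update) instead.

```python
# Sketch: remediation loop that swaps compromised instances for patched ones,
# driven through a (hypothetical) orchestrator interface.

class Orchestrator:
    """Minimal stand-in tracking which image version each instance runs."""
    def __init__(self, instances):
        self.instances = dict(instances)   # instance id -> image version

    def replace(self, instance_id, new_image):
        # A real orchestrator would drain traffic, start the replacement,
        # and only then terminate the old container.
        self.instances[instance_id] = new_image

def remediate(orch, compromised_ids, patched_image):
    """Pull compromised instances and roll patched replacements into service."""
    for cid in compromised_ids:
        # In practice: quarantine first so forensics can run on the old copy
        orch.replace(cid, patched_image)
    return orch.instances

orch = Orchestrator({"web-1": "app:1.0", "web-2": "app:1.0", "web-3": "app:1.0"})
remediate(orch, ["web-2"], "app:1.0-patched")
print(orch.instances["web-2"])  # app:1.0-patched
```

Because the fix is a fresh image rather than an in-place patch, the remediation step is the same operation as an ordinary deployment, which is precisely why it can be automated at scale.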
Even with these considerations, automated remediation for containerized and microservices applications is still much simpler than trying to update a traditional monolithic application. Pantheon, a web management platform company that routinely manages millions of containers and runs tens of thousands of Drupal and WordPress sites simultaneously, was able to patch the Heartbleed vulnerability across its infrastructure in 12 hours. Their CTO’s post on the subject perfectly illustrates the power of the container infrastructure.
It’s Not as Scary as It First Appears
Containers are predominantly implemented by organizations that have embraced DevOps automation. The automation of IT infrastructure elements – including containers – is part of that approach. The automation of container security is thus a natural consideration, though the tools to accomplish this at any reasonable scale are only just now emerging.
Docker made deploying containers easy. As a result, an entire ecosystem is emerging to take care of the various management tasks around creating, testing, deploying, managing, and securing containers. Despite this, container computing is still a young practice. There is much innovation ahead of us on container management and security in general.
That said, it is absolutely the right time to be thinking about container security. Organizations that can bring together that rare combination of security and container expertise will put themselves at the forefront of the innovation wave and will be able to innovate at scale faster than the rest of the industry.
(Chenxi is the Chief Strategy Officer for Twistlock, a container security provider. She writes for Container Journal, the RSA conference, and a number of other venues.)