Does the word “container” intimate containment, suggesting that containers are inherently secure? If it does, any such assumption of security may be the broadest Docker vulnerability to date.
“One of the biggest threats I see with Docker is its positioning and the implied security in the language. The reality is that these containers don’t contain anything,” says Aaron Cois, Researcher, CERT Division, Software Engineering Institute, Carnegie Mellon University. Yet, that is the implication.
Just as those who thought Linux or VMs were secure enough on their own were mistaken, so those who believe that containers put a lid on security will be sorely disappointed. Today, Linux environments require network, OS/host, Internet, and web application security measures similar to those used on other platforms. Tools such as security auditing and penetration testing, firewalls and WAFs, anti-virus and anti-malware tools, DLP, IDS/IPS, remediation tools, and really the gamut of security measures that Microsoft environments require are increasingly needed to defend data in Linux environments. “Likewise, operations can give developers tools to log into the vSphere console to create and change VMs while limiting their privileges,” says Cois.
And so, containers also require appropriate security measures. “Developers and non-admin operations staff don’t need to log into the host command line to work, and no one in security wants them to,” says Cois. But today’s Docker workflow not only permits but requires it.
The Root of the Problem
Docker security has a significant flaw: because the Docker daemon runs as root, users in the docker group can create containers, mount any host file system inside them, and access it as root, explains Cois. Docker’s mitigation is documentation guidance that “only trusted users should be allowed to control your Docker daemon.” That is not sufficient, since an attacker could use social engineering to steal the credentials of a trusted user. With Docker uptake high and climbing, this constitutes a serious threat to the enterprise right now.
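A minimal sketch of why docker-group membership is effectively root on the host (this assumes a Linux host with a running Docker daemon; the `alpine` image is just an illustrative choice):

```shell
# Any user in the docker group can bind-mount the host's root
# filesystem into a container and operate on it as root --
# no sudo and no password required.
docker run --rm -it -v /:/host alpine chroot /host /bin/sh

# Inside this shell, the user is root on the host's filesystem:
# reading /etc/shadow, planting SSH keys, or editing sudoers are
# all possible, which is why docker-group access equals root access.
```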
According to a survey of 750 respondents working in enterprise virtualization and cloud computing (http://stackengine.com/docker-vmware-survey/), over 70 percent are either using Docker or evaluating it in their organizations. Forty-nine percent of that same survey population believe that one of the biggest challenges to adopting Docker is its security model. Another 49 percent cite the lack of operational tools for production, tools that would restrict unreasonable access.
Without the proper tools to structure IT operations around Docker, Docker is open to administrative control by anyone who can get to the host.
Use Only Trusted Repositories
Another big issue is that containers come from repositories, which serve as the version control workflow for container images. “So a developer has a repository that contains their container images, with versions going back,” says Cois. But how should people working in devops approach repositories? A repository that cannot be trusted is dangerous.
Users must treat random Dockerfiles and images from repositories as they would random downloads from the Internet: someone could have injected something malicious into them. “They have to vet those images,” says Cois; “using those images is the same as running random code on your system.”
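One way to start vetting an untrusted image, sketched with standard Docker CLI commands (the image name `someuser/service-a` is hypothetical, and these commands assume a working Docker installation):

```shell
# Pull the untrusted image without running it.
docker pull someuser/service-a

# List every layer and the command that produced it; suspicious
# RUN, CMD, or ENTRYPOINT lines show up here.
docker history --no-trunc someuser/service-a

# Examine the image's configured entrypoint, command, environment,
# and exposed ports before deciding to run it.
docker inspect someuser/service-a

# Once vetted, pin the image by content digest so it cannot be
# silently replaced in the repository later.
docker pull someuser/service-a@sha256:<digest>
```

Pinning by digest matters because a tag like `latest` is mutable: the vetted image and the image pulled next week under the same tag need not be the same bits.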
A coder could easily download and use an infected Docker container image. “A developer wants a Dockerfile to run standard service A, for example. Let’s say Docker Hub doesn’t have a container image for service A yet, but they find someone on GitHub who has built and published one,” says Cois. The coder downloads the image, executes it, and moves on quickly, having eliminated a potentially disruptive lag in development time. It’s all good, or so he thinks.
But now he has trusted that developer who created that image, without even considering the consequences. The image could be running other commands in the background while it runs the desired service, and the coder might never know.
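A hypothetical Dockerfile shows how easily this can happen; every name here (`service-a`, `backdoor.sh`) is invented for illustration:

```dockerfile
# Hypothetical malicious Dockerfile: it really does provide the
# service the coder wanted, so everything appears to work.
FROM debian:stable
RUN apt-get update && apt-get install -y service-a

# The attacker's extra payload, copied in alongside the service.
COPY backdoor.sh /usr/local/bin/backdoor.sh

# Starts the payload in the background, then the expected service
# in the foreground -- the container behaves exactly as advertised
# while something else runs unseen.
ENTRYPOINT ["/bin/sh", "-c", "/usr/local/bin/backdoor.sh & exec service-a"]
```

Nothing in normal use of this container would reveal the extra process, which is why vetting the image contents, not just its behavior, matters.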
And as lines blur between who controls the containers, whether IT operations or development, so the lines blur as to which of these two groups has responsibility for vetting these Docker images. “Hopefully everyone is doing it together. Otherwise you’re opening up a new vector for pulling random stuff from the Internet that no one has vetted and running it on systems in a way that people may not think of as running code,” says Cois.
David Geer’s work has appeared in Scientific American, The Economist Technology Quarterly, CSO & CSOonline, FierceMarkets, TechTarget, InformationWeek, Computerworld, Byte.com, ITWorld.com, IEEE Computer Society’s Computer magazine, IEEE Distributed Systems Online, Government Security News, Laptop, Smart Computing, Technical Support, The Hosting Standard (Canada), TechWorld.com (UK), SIGnature, Processor, and the Engineering News-Record. David served as a technician at CoreComm in Cleveland, OH prior to venturing into writing.