There is an ongoing discussion today about how containers impact security. Almost every development team is implementing (or at least talking about implementing) containers, because they offer so many benefits for an enterprise's agility and for building a modern microservices architecture. But how do containers affect security? And how can IT teams do everything they need to do to ensure the containers in use are as secure as they reasonably can be?
At RSA Conference 2016, David Mortman presented "Docker: Containing the Security Excitement," so we thought he'd be the perfect expert to ask such a question. Mortman has been involved in information security for about 20 years and is currently a contributing analyst at the security research firm Securosis; he is also a former chief security architect and distinguished engineer at Dell Software. Mortman also recently served as the director of security and operations at C3. Previously, Mortman was the CISO at Siebel Systems.
Mortman speaks regularly at Black Hat, DEF CON, RSA and other conferences. Let’s talk with him briefly about containers and security.
Container Journal: What do you see as the impact of containers on security efforts? Is it a hindrance, a help, a little bit of both?
Mortman: It's a little of both. The big question I often start with is: "What's the big deal about containers? In a lot of ways, containers are technology we've had for a long time in various forms. Why are they suddenly making such a splash?"
The reason they're a big deal is the metadata. It's not just the container in and of itself; it's a container that has all this associated metadata, which tells you what's in the container and defines its contents. These days, with Docker especially, it's a secure listing of what's in the container, so you actually know what you're getting. What you now have is a next-generation package management system for applications.
This is huge from an operational perspective, and actually from an app development perspective. Rather than building your application and creating a tar file or a Debian package or an RPM or whatever packaging format you're using, and possibly having multiple of these artifacts, you now have a single package that has all of its dependencies associated with it. What it means is that when someone builds something in development and hands it off to quality assurance and eventually production, you have significantly reduced the chances of breaking underlying dependencies, because it's all in a neat package.
Dependency management is an intractable problem, generally, but a lot of modern approaches like DevOps and virtualization make it easier to solve and lower the dependency pain points. Containers reduce that even further, because now you're handing off everything, all of the dependencies, in one nice package. Operationally, this is enormous. It accelerates your speed and your quality by a wide margin. That's why containers matter so much.
Container Journal: A lot of these concerns sound similar to the same challenges we've faced for some time?
Mortman: Right. From a security perspective, the vast majority of the security concerns you're going to have with containers are the exact same concerns you're going to have with cloud, virtualization and standard operating systems. You still have to deal with all of that. The good news is that there are ways you can actually improve security, especially from a non-local-user perspective. As soon as you deal with a local user, everything goes haywire, because containers are explicitly designed to keep things in, not keep people out.
But it's still not container-specific issues that are the concern; it's operating system issues you have to worry about. The great news is that, from an application perspective, you've just dramatically reduced the attack surface, because you only put the little bits you need into that container to make that application run.
Container Journal: What are some of the steps enterprises can take to secure their containers?
Mortman: What are some things enterprises can do to produce a reasonably secure (but still usable) container to run? Last year, CIS and Docker published a security best-practices benchmark and released an open-source tool that compares containers against the benchmark to rate how your efforts are doing.
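(The open-source tool Mortman mentions is most likely Docker Bench for Security, published in Docker's GitHub organization. A typical invocation, assuming Docker is installed and the daemon is running locally, looks something like this:)

```shell
# Fetch and run the CIS Docker Benchmark checks against the local host.
# Requires a running Docker daemon; some checks need root privileges.
git clone https://github.com/docker/docker-bench-security.git
cd docker-bench-security
sudo sh docker-bench-security.sh
```

The script prints a PASS/WARN/INFO result for each benchmark item, covering host configuration, daemon settings, image builds and running containers.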
One of the focuses of the benchmark is reducing the container attack surface. For example, one of the things the benchmark advises is to never run SSH inside the container, unless providing an SSH server is the purpose of the container. SSH is a secure protocol, but it's complex. For instance, it escalates itself to root-level privileges to do things and then drops those privileges when it doesn't need them anymore. Behaviors like that expand the attack surface of a container. For the most part, you never need interactive access to a container if you've architected it properly.
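(When you do need a one-off interactive look inside a running container, Docker's built-in exec command avoids shipping an SSH daemon in the image; the container name below is a placeholder:)

```shell
# Open a shell in a running container from the host, no sshd required
docker exec -it mycontainer sh
```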
Also, one of the things that enterprises should be doing is logging everything that occurs in the container, both from the container perspective and from the application perspective. You can use introspection to see the actual processes running in a container; you don't even need to be logged into the container itself to see what's going on.
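(As a sketch of that host-side introspection, again with a placeholder container name:)

```shell
docker top mycontainer                # processes running inside the container
docker logs --tail 50 mycontainer     # recent stdout/stderr from the app
docker stats --no-stream mycontainer  # snapshot of CPU, memory and I/O usage
docker inspect mycontainer            # full metadata: mounts, network, config
```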
Additionally, one of the important recommendations, in my view, is one process type per container. The idea is that you run either a web server in the container, or your app server, or your database, or whatever you need to run, but only one. You don't run your entire multi-tier application inside one container. You could, but it defeats the purpose of separating processes.
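(A minimal sketch of that separation, one container per tier; the network name and the application image are hypothetical:)

```shell
# One process type per container: web, app and database tiers run separately
docker network create app-net
docker run -d --name db  --network app-net postgres
docker run -d --name app --network app-net my-app-image   # hypothetical image
docker run -d --name web --network app-net -p 80:80 nginx
```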
Container Journal: This strikes me as, fortunately, also being aligned with microservices and modern architecture.
Mortman: Exactly, and it fits nicely with DevOps, microservices and distributed-systems philosophies. This immutable-infrastructure notion isn't that new; Netflix has been doing it for quite a while. When they started doing it, and I think they still do today, they would build new AMIs every time, because then they know exactly what's in them and you don't have a configuration-drift problem. One of the dirtiest secrets in the configuration management space is that if you're doing anything at massive scale, you can't use configuration management tools, because they just can't keep up; they can't converge all the systems at the right rate. That's why folks like Netflix and Facebook have more of an A/B system: they spin up the new systems, and once everything is up, running and stable, they shift all the load balancers to point to the new ones.
This is great guidance as well, because it means that should you migrate over to a new system and it fails on you, you're not rolling back configurations; you just change your load balancers to point back to the old servers, you're back on the old code, and you know exactly what it looks like on both sides.
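(That cutover-and-rollback pattern can be sketched in a few commands; the image tag, port and health endpoint here are all hypothetical:)

```shell
# Spin up the new ("green") version alongside the running old one
docker run -d --name app-green -p 8081:8080 my-app:v2   # hypothetical image
# Wait until the new instance reports healthy before shifting any traffic
until curl -sf http://localhost:8081/health; do sleep 1; done
# Now repoint the load balancer at app-green (mechanism varies by balancer);
# rolling back is simply pointing it at the old instance again
```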
Container Journal: Are there things security-wise that are unique to containers that are worth considering?
Mortman: Yes, and I'm glad you brought that up. There are a couple of key aspects. One of the things about containers is that their boundaries are much less rigid than they are with virtualization. Containers are a kind of virtualization, but with traditional hypervisor-based virtualization the level of separation is greater and introspecting across the boundary is much harder; the walls are much thicker. One of the big differences is that when you're using containers, they're all talking to the same kernel at the operating system level, and this exposes you to some interesting problems that you need to deal with.
Some of this is dealt with by namespaces. Namespaces are really cool because they let you have a separate network stack, separate process trees and things like that, so you continue that level of virtualization: I can't write some malicious code that says, "Show me process ID 329," and get the actual host-level process, which may belong to someone else's container. So namespaces are an important piece of container security. Until a couple of months ago, there was no working user namespace within Docker, which meant that if you could convince Docker you were user ID 0 and you could break out of the container, then you would be root on the host operating system as well, or you'd hold the user ID of another user elsewhere in the system.
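(The user-namespace support Mortman refers to shipped as Docker's userns-remap option; a rough sketch of enabling and verifying it follows, though exact flags can vary by version:)

```shell
# Start the daemon with user-namespace remapping enabled (Docker 1.10+),
# so UID 0 inside a container maps to an unprivileged UID on the host
dockerd --userns-remap=default

# Run a container as "root" and check how its process appears on the host:
docker run -d --name demo alpine sleep 300
ps -o user,pid,cmd -C sleep   # the sleep process runs as a remapped, non-root host user
```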
Which is why one of the recommendations is to run one container per host operating system, or, if you're going to run multiple containers, to have them all be of the same type, like all web servers that have access to the same class of data, so that should someone get out, you're not compromising additional data. It's kind of like the old days, when we first started doing virtualization.