Container Security is Dead (At Least, As You Probably Know It)

The container realm requires new thinking about security. Legacy tools that enterprises try to bring forward to secure their new container environments simply are not up to the challenge. Worse yet, many of the new container-specific security products are limited in scope, which means organizations that go that route will end up with a patchwork of new siloed tools that require too much manual correlation.

Container environments are opaque, change continually and have many more moving parts than traditional environments. Where legacy systems are narrow, deep and static, container environments are shallow, wide and dynamic, and they grow wider and more dynamic still as containers spread across multiple clouds and expand and contract as needs shift. All of this changes the security calculus on many levels.

Regardless of where you stand in terms of container adoption, given the tremendous resources cloud providers are putting into containers, the writing is on the wall: Containers are the next logical step beyond virtual machines, the next big shift. But most approaches to container security are piecemeal fixes. Make the wrong decisions now, and you’ll be throwing those investments away in a few years when you have a better understanding of the new challenges.

Container Security: The Key Differences

As enterprises start out with containers, the natural inclination is to try to leverage existing security, monitoring and forensics tools to manage the new environments. But legacy tools can’t see inside containers, they can’t keep up with the pace of change and they can’t help you figure out what went wrong after an incident, because most of the time the containers involved will have since evaporated (85% of containers live less than a day).

Besides being short-lived, containers differ from virtual machines in that they tend to be smaller and more task-specific, which means you need more of them to accomplish the same work. When replacing virtual machines, it isn’t uncommon to end up with 10x the number of containers. If you’re going all-in on a full microservices architecture, the multiplier is greater still and your attack surface is even larger.

And of course, container environments have many components, such as image registries, orchestration tools and build environments, adding complexity that further complicates the security picture.

All of which has led to the emergence of specialized container security tools. But the last thing you want in container environments is a collection of tools for individual use cases, because it takes too much correlation work to get an adequate view of what happened and how. The particular nature of containers (dramatically larger attack surfaces, greater complexity and constant churn) requires a holistic approach to security, different from the niche approaches that have been common to date. That’s why container security as you know it is dead.

Delivering Visibility

The first thing that becomes clear is that you won’t want to instrument each container multiple times to support different tools. Given the number of containers you are likely to end up with, that simply doesn’t scale. A lighter-weight approach is to run one container on each physical host whose job is to watch the system calls of every container on that box, then collect that data centrally to feed different use cases.

Done right, this enables you to see in detail everything every container is doing, simplifying the job of securing your containers.
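To make the pattern concrete, here is a minimal sketch of the one-agent-per-host idea. Capturing every system call requires kernel-level instrumentation (a kernel module or eBPF probe), so as a simplified stand-in this sketch samples per-container stats through the Docker SDK; the collector endpoint and sampling interval are illustrative assumptions, not part of any particular product:

```python
# Hypothetical per-host agent: one process samples every container on the
# box and ships the data to a central collector. Real syscall capture needs
# a kernel module or eBPF probe; Docker API stats stand in here to show the
# shape of the design.
import json
import time

import docker    # pip install docker
import requests  # pip install requests

COLLECTOR_URL = "http://collector.internal:9000/ingest"  # assumed endpoint

def sample_host():
    client = docker.from_env()
    samples = []
    for c in client.containers.list():
        stats = c.stats(stream=False)  # one-shot stats snapshot
        samples.append({
            "ts": time.time(),
            "container": c.name,
            "image": c.image.tags[0] if c.image.tags else c.image.id,
            "cpu_total_ns": stats["cpu_stats"]["cpu_usage"]["total_usage"],
            "mem_bytes": stats["memory_stats"].get("usage", 0),
        })
    return samples

if __name__ == "__main__":
    while True:
        requests.post(COLLECTOR_URL, data=json.dumps(sample_host()),
                      headers={"Content-Type": "application/json"}, timeout=5)
        time.sleep(10)  # sampling interval
```

Because a single agent watches the whole host, adding a new use case means adding a consumer of the central data stream, not re-instrumenting every container.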

Consider the job of homing in on indicators of compromise, which you typically identify by piecing together various data points when you believe you’re under attack. Many indicators are things you should be monitoring anyway to understand the health of your system, such as network utilization. If you see a large spike in network utilization, something may be amiss.

In the container world, a similar indicator might be utilization of container shares, such as how much CPU a given container is allowed to use. If you see a large spike for a particular piece of software or service, it could be a sign you have been compromised.

To identify indicators of compromise, you need to collect metrics (numerical values tracked over time). You also need to be able to trend, analyze and visualize the metrics, which means you need a monitoring tool.
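Here is a minimal sketch of that kind of analysis: flagging a container CPU sample that jumps well above its rolling baseline. The window size, threshold factor and sample series are illustrative assumptions:

```python
# Flag a metric sample that exceeds `factor` times its rolling mean.
from collections import deque

def spike_detector(window=30, factor=3.0):
    history = deque(maxlen=window)
    def check(value):
        baseline = sum(history) / len(history) if history else None
        history.append(value)
        return baseline is not None and value > factor * baseline
    return check

check = spike_detector()
cpu_series = [5, 6, 5, 7, 6, 5, 6, 48]  # % of the container's CPU share
for t, v in enumerate(cpu_series):
    if check(v):
        print(f"t={t}: possible indicator of compromise (cpu={v}%)")
```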

Container security tools generally don’t collect metrics; they focus instead on events (what occurred, when and why). Events are discrete things that happen on your systems, they occur irregularly, and typically there is little value in aggregating them for mathematical analysis. That is the exact opposite of metrics, which are sampled continually and aggregated numerically for comparison.
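To make the distinction concrete, here is a sketch of the two record shapes (all field names are illustrative): a metric is a numeric sample you aggregate over time, while an event is a discrete, contextual fact you inspect individually:

```python
from dataclasses import dataclass, field

@dataclass
class Metric:
    name: str         # e.g. "container.cpu.shares_used_pct"
    value: float      # numeric, so it can be averaged, summed, trended
    timestamp: float
    labels: dict = field(default_factory=dict)  # container, host, service

@dataclass
class Event:
    kind: str         # e.g. "process_spawned_in_container"
    timestamp: float
    context: dict     # who, what, where; read individually, rarely averaged
```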

The point is you need both, and even though the types of systems you build for monitoring and security are different, data is at the root of everything. If you continually collect data, you can use it to support metrics requirements, event needs, forensics efforts and even some use cases you probably haven’t thought of yet (more on that later).

Once you have that “aha” moment and realize that so many enterprise operations problems are simply data problems (you need to collect metrics, watch events, build insights and observe what is going on), you’ll see that many problems addressed by specialized, standalone container security tools can be solved more easily in other ways.

Power of the DevSecOps Approach

Once you concentrate on data as the answer, ancillary benefits unfold.

Scanning software for vulnerabilities, for example, is typically thought of as a security function. But while you’re at it, you can check for other things you might want to control. For instance, are developers pulling in GPLv2-licensed components? If so, stop the build. You can also catch practices that invite trouble, such as building a large image with inconsistent dependencies, which can cause configuration drift.
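A hypothetical build-gate sketch shows how simple that check can be. The scan report format is assumed here (a JSON list of packages with license fields, as your scanner might emit); a nonzero exit code is what stops the build in most CI systems:

```python
import json
import sys

BLOCKED_LICENSES = {"GPL-2.0", "GPL-2.0-only"}  # policy choice, adjust to taste

def gate(report_path):
    with open(report_path) as f:
        packages = json.load(f)  # assumed: [{"name": ..., "license": ...}, ...]
    violations = [p for p in packages if p.get("license") in BLOCKED_LICENSES]
    for p in violations:
        print(f"blocked: {p['name']} is licensed {p['license']}")
    return 1 if violations else 0

if __name__ == "__main__":
    sys.exit(gate(sys.argv[1]))
```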

So, as the DevOps person in charge of deploying this software, you can use scanning to make it not only more secure, but also more reliable once it is pushed into production.

Another example involves auditing, which is also typically thought of as a security function: if something goes wrong, you go back and audit what happened to determine the root cause. With the right tooling, however, auditing can play a larger role. DevOps teams can leverage the same audit trail to add context to what is happening in their environment, not just to reconstruct incidents after the fact.
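As a small illustration (all field names here are hypothetical), joining a raw audit record with orchestrator metadata turns an opaque fact into something a DevOps reader can act on:

```python
# Attach service, image and deploy metadata to a raw audit event.
def enrich(audit_event, inventory):
    meta = inventory.get(audit_event["container_id"], {})
    return {**audit_event,
            "service": meta.get("service", "unknown"),
            "image": meta.get("image", "unknown"),
            "deployed_by": meta.get("deployed_by", "unknown")}

inventory = {"abc123": {"service": "checkout", "image": "shop/checkout:1.4",
                        "deployed_by": "ci-pipeline"}}
raw = {"container_id": "abc123", "action": "outbound_connection",
       "dest": "198.51.100.7:4444", "ts": 1700000000}
print(enrich(raw, inventory))
```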

Containers demand a new approach to security, but don’t box yourself in with one-off tools that provide limited functions. Data is key to understanding what is happening in the ephemeral container realm, and insight is key to securing these dynamic environments.

Apurva Dave

Apurva is the Chief Marketing Officer at Sysdig. He’s been helping people analyze and accelerate infrastructure for the better part of two decades. He previously worked at Riverbed on both WAN acceleration and Network Analysis products, and at Inktomi on infrastructure products. He has a computer science degree from Brown University and an MBA from UC Berkeley.
