June 25, 2017

As technology becomes better, it tends to become more complex and harder to manage. That’s certainly true of Docker containers, which are arguably among the most complex software frameworks yet devised. Here’s why.

If you think about the history of technology, it’s easy to see the inverse relationship between complexity and ease of management.

Take houses, for example. Two hundred years ago, your main maintenance responsibility in your house was making sure water didn’t get in. Then along came plumbing, centralized heating systems, electricity and so much more. These systems make today’s houses much more comfortable and sophisticated. They also give homeowners a lot more to worry about.

You could make the same point about cars. Many decades ago, cars had relatively little under the hood. In contrast, today’s cars are computerized and rely on complex networks of sensors and tiny components. These improvements make cars more reliable and efficient, but they also make it harder for someone without extensive knowledge of their intricate systems to maintain them.

The same trend holds true for computers. Before hard disks, monitors, networking, audio devices and so on, there was less code to write and fewer components to maintain.

Modern computers might be more stable than their predecessors because hardware has grown more reliable and software more robust. But today’s computers have many more moving parts.

In some cases, your underlying software systems or libraries can abstract away much of this complexity; this is why you can write a web application without having to think about how the user’s computer will acquire an IP address. Still, if you dig down deeply enough, or want to write something sophisticated enough, you have to wrangle with all of the complexity that modern computers entail.

Containers and Complexity

That complexity reaches its pinnacle if you are trying to develop or deploy software for a containerized environment. Containers require you to address the following types of challenges:

  • Everything in a containerized environment is ephemeral. Containers spin up and down. Services and container images are updated. Network and security configurations are overwritten.
  • Persistent storage is not built into container frameworks. You have to figure out a way to add it on.
  • Containers are generally immutable. You can’t update them using the approach you would take to update a traditional application.
  • Containers take scalability to a whole new level. You have to plan for thousands of containers in your environment, rather than the few dozen servers that you would have in most large legacy environments.
  • Traditional security paradigms break down in containerized environments. You can’t count on there being strict isolation between different containers, or between a container and the host. Plus, security tools that were not designed with containers in mind usually can’t peer inside them; from the host, process information alone won’t tell you what is running inside a container.
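The first three challenges above — ephemerality, bolt-on persistent storage and immutability — can be sketched with the Docker CLI. This is a minimal illustration, assuming a local Docker daemon; the volume name `appdata`, the container name `db` and the `postgres:15` image tag are arbitrary examples, not anything prescribed by Docker itself.

```shell
# Create a named volume. Volumes are the "add-on" persistence layer:
# they outlive any container that mounts them.
docker volume create appdata

# Run a container with the volume mounted. The container is ephemeral,
# but data written under /var/lib/postgresql/data lands in the volume.
docker run -d --name db -v appdata:/var/lib/postgresql/data postgres:15

# Destroy the container entirely...
docker rm -f db

# ...and a fresh replacement picks up the same data from the volume.
docker run -d --name db -v appdata:/var/lib/postgresql/data postgres:15
```

The same sketch shows what immutability means in practice: you never patch the running `db` container the way you would patch a traditional server. You delete it and start a new one from an image, while the state that matters lives outside the container.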

These challenges are the tradeoffs for the enormous degree of scalability and agility that containers enable. They’re challenges worth solving, but they are considerable.


You wouldn’t want to live in a house without plumbing or drive a car that doesn’t warn you when you’re running out of fuel. Even though those features present extra maintenance burdens, they’re worth it.

For the same reason, you don’t want to run your apps on legacy software that lacks the agility of containerized environments, despite the unique challenges that containers create.

Christopher Tozzi

Christopher Tozzi has covered technology and business news for nearly a decade, specializing in open source, containers, big data, networking and security. He is currently Senior Editor and DevOps Analyst with Fixate.io and Sweetcode.io.