Containers: Passing fad or tech nirvana?

Linux containers are hot right now in software development. But are they just a passing fad or something more substantial? I’m reminded of a childhood experience that might help make the answer clearer.

One year a craze of outsized proportions swept over my small-town middle school, immediately capturing the hearts and minds of what seemed like every prepubescent boy: pogs. Pogs were amazing–they were modeled after cardboard juice caps: small, coin-sized, and perfect for collecting. As your collection grew, you could put them in cool little plastic cylinders and show them off to your friends. You could even use pogs to play against each other in a kind of stacking battle.

In such a fad, I was usually a laggard. I realized, a little late, that pogs were cool. Inevitably, I bought some. But as a late adopter, I joined right when it became illegal to play pogs; in middle school a trend reaches mass adoption when the principal cracks down and regulates all joy associated with the fad.

After I had owned pogs for only a few weeks, they were no longer cool, and they sat untouched in my dresser drawer. A new collectible was about to dominate our imaginations and wallets–Magic: The Gathering cards. And so the cycle repeated.

Are Linux containers a grown-up version of a middle school fad? They are clearly the talk of the software industry right now and dominate many of the conference talks, blog posts, and new open source projects. They seem to have definite benefits, but is that enough to stand the test of time? Or are Linux containers going to leave us no better off than we were in the beginning . . . or worse off, saddled with yet another layer of ever-increasing abstraction that we have to maintain? What are the downsides?

You might also be wondering if you’re too late. Has the adoption curve already crested, so that you’re just getting started with containers when the next big thing is about to start? Why waste your time if containers are going to be a blip in time?

Having learned the hard lessons of life as a boy, I asked myself these questions. After attending industry conferences, reading copious amounts, and using the technologies myself, here are some of the answers I’ve come up with.

What are the main benefits of Linux containers?

There are two main categories of containers: host-based and application-based.

Host-based containers are akin to lightweight VMs. They run an entire “host” of applications inside of them and function like little machines. Their initial benefit is they have less overhead than virtualization technologies like KVM or VMware. You can more fully utilize your hardware resources by cramming more containers onto a host box than you could with VMs.

Application-based containers run just a single process or, for the slightly impure, a tight collection of processes that provide a single service. These containers have been hailed by enthusiasts as perfect for the equally trendy microservices. One of their benefits is isolating services’ deployment and implementation. The application developer has complete control over the library versions and code in the container, and being encapsulated, the containers run on any capable host machine. Equally important, the developer doesn’t have to worry about extra dependencies or software conflicts. Need to use Python 3 vs 2? No problem! It runs in its own container so you won’t clash with all the other Python 2 applications. Think of an application container as an ultimate packaging system.
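As a sketch of that packaging story, here is what a minimal Dockerfile for a hypothetical Python 3 service might look like (the file names and base image tag are illustrative, not from any particular project):

```dockerfile
# The image carries its own Python 3 runtime, so it can never
# clash with the Python 2 applications running elsewhere on the host.
FROM python:3

WORKDIR /app

# Install the service's pinned dependencies inside the image,
# not on the host.
COPY requirements.txt .
RUN pip install -r requirements.txt

COPY . .

# One container, one process: the service itself.
CMD ["python", "app.py"]
```

Everything the service needs travels with the image, which is exactly the “ultimate packaging system” idea: the host only needs a container runtime, not your language stack.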

Since a container’s root filesystem contains only the code that your application or host needs to do its job (e.g., forget about the extraneous OS files), its disk footprint can be much smaller. This is a clear win for higher-density workloads.

Another benefit that I have perceived with both types of containers is their portability and speed. While there has been a lot of work with VMs to make them portable using packaging standards, it is still a non-trivial process to get a VM to run under multiple hypervisor technologies. Containers are much simpler, some consisting of just metadata and a tarball of the root filesystem. Popular container formats are already being passed around from one public cloud provider to another quite easily. Try this with a VM by exporting an EC2 AMI and importing it into GCE–it isn’t seamless.
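To make the “metadata and a tarball” point concrete, here is a rough transcript of moving a Docker image between hosts (assuming Docker is installed; the image and file names are made up for illustration):

```shell
$ docker save my-service:1.0 -o my-service.tar   # the whole image as a single tarball
$ tar tf my-service.tar | head                   # peek inside: layer tarballs plus JSON metadata
$ docker load -i my-service.tar                  # re-import it on any other Docker host
```

Compare that to exporting a VM disk image, converting its format, and re-registering it with a different hypervisor: the container path is one file and two commands.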

Containers are fast because there is no virtualized hardware to boot. You start the container and it is almost instantly running. In many cases we are talking sub-second start-up times. Because there is no virtual hardware to pass through, they don’t experience a CPU or I/O penalty.

A final benefit in my mind is that containers are being realized foremost by open-source developers and tools. Unlike other data center trends of past years, open source appears to have the momentum as the leading force behind Linux containers.

What are the downsides to Linux containers?

The kernel tech used to enable Linux containers has been around for a while, but the tools to easily manage them are nascent and rapidly changing. Best practices are still being formed–some may even turn out to be anti-patterns. Betting important projects on a toolset that may soon become extinct is a definite downside.

Some areas in which there are still no battle-tested solutions for containers include networking and persistent storage. There are lots of ways to address both, but they are all very new, and there are no clear winners. And what about security? When VMs first came out, there were (still are?) concerns about lurking vulnerabilities that allow people to break out of the guest OS and infiltrate the host. Likewise, containers and related tools are still building up their security defenses. There are known security concerns and likely more than a few undiscovered risks.

Lastly, how does one sanely manage a data center filled with tens of thousands of containers? These things, by design, are supposed to run in higher densities than VMs. Think about multi-tenancy, resource management, service discovery, advanced scheduling, data locality, troubleshooting, etc. In my estimation, we don’t yet have tools that help us maintain this volume of complexity.

Don’t get me wrong, these problems are being worked on right now by very motivated people. Most of these downsides will be addressed sooner rather than later.

Are Linux containers here to stay?

Yes, they are. For one reason, containers have already been around for a long time. Upstream kernel support for Linux containers may be new to the party, but AIX LPARs, Solaris zones, and BSD jails have been in use for years. Even on Linux, OpenVZ has enabled containers as a staple of multi-tenant web hosting.

These non-Linux platforms have proven the utility of this technology. Containers, if nothing else, will be another tool that will always be used to help develop, test, deploy, and isolate software. Time will tell if they become indispensable staples like VMs or are relegated to lighter use. My prediction is they will be heavily utilized and, over time, supplant VMs for many use cases.

There are a lot of tools popping up to help conquer the container landscape (e.g., LXC, Docker, Kubernetes, rkt, LXD, and Mesosphere). Some of these will not survive; they will fail to gain the necessary adoption or will suffer from anemic investment. Eventually, one methodology may win out over another. It is still much too soon to foresee how things will play out.

Should I jump on this bandwagon–or is it too late?

Although you may feel like a late adopter by getting started with containers now, we are still in the early adoption phase. It is, in some ways, almost too early to fully jump on the bandwagon. If you do, expect the tooling and best practices for using containers to shift around a lot over the next year or two. In other words, get ready for further disruption.

If you are learning about containers to stay current in your skills, now is a great time to dig in. If you are adopting containers for your enterprise, it may be wise to experiment with smaller greenfield projects first. Going all in on any of these not-quite-yet-1.0 tools might prove chaotic down the road.

Now is also an exciting time to join an open-source project around containers and help contribute! Since the projects are new and there is a frenzy of interest, there is much to be done. Maintainers are eager for assistance from newbie and veteran contributors alike.

Unlike my traumatic experience from youth, I feel relatively safe about getting into containers. The bubble is not going to burst right after you buy in, like it did with my beloved pogs. There is real value to containers, even though some of the benefits may morph over time.

The hype cycle tells us that at some point we will hit a peak of “inflated expectations” before diving into a “trough of disillusionment.” There is a good chance we haven’t yet gone over the edge into this trough, so the container bandwagon may have some rough roads ahead. But on the other end is the “slope of enlightenment” and finally the pleasant “plateau of productivity.” If this is the case, I’m betting on the excitement of helping the industry out of the ravine and experiencing firsthand the possible container nirvana.

Josh Butikofer

Josh Butikofer has been developing software for the past two decades. He has held a variety of engineering and management roles over the years. Although he considers himself a full-stack engineer, his specialty interests include high-performance computing, large-scale distributed systems, and data center automation. Josh is currently a Sr. Software Architect at Adobe where he helps scale the industry-leading Digital Marketing Cloud. He is also a committed husband and father. Josh graduated summa cum laude from Brigham Young University with a BS in Computer Science. When elusive spare time finally comes his way, he enjoys reading, writing, and cycling.


3 thoughts on “Containers: Passing fad or tech nirvana?”

  • Why do you say, “Because there is no virtual hardware to pass through, they don’t experience a CPU or I/O penalty”? Can you elaborate?

  • Right, but it’s not containers or VMs right now, it’s a mixture of both depending upon your needs; there is even still space for dedicated without containers right now, and obviously VPS on dedicated is always going to be an option.

    Containers are no magic hat; the machine will still only have X power. It’s just a really novel way of being able to distribute and utilize as much power as possible, or manage distribution of that power with potentially under-powered apps.

  • >Think of an application container as an ultimate packaging system.

    Why not actually use a good package manager that solves the dependency hell, like Nix?

    Containers are great for cheap isolated processes, but they are no real solution for dependency hell, more a bad workaround that introduces additional problems.
    The current Docker containers are built from base images that are whole Linux distributions. So you still have a lot of files you never need, and they may have vulnerabilities.

    With Nix you can create Docker images that contain just the application and its dependencies. And these images can be just 25 MB!

    But you could also use native containers on NixOS (a Linux distribution built around Nix) and profit from the declarative configuration. That is also better than Ansible/Puppet/Chef/… because you always get the same result. When you delete the configuration for a user, that user will not be recreated on a system rebuild.

    Nix/NixOS is not perfect, but it gets better every day because of its active community. So try it if you are interested, and maybe contribute!
