How process isolation became viable for production deployment

When we first conceived of what later became Kentik Detect — our clustered SaaS for network visibility using BGP and NetFlow — we thought of it as being hosted in the cloud. But it soon became apparent that a number of important customers would require that their deployment be private. That meant that we needed a seamless and flexible way to deploy and update Kentik Detect for on-premises customers without completely bifurcating our development and deployment process.

At the time, Docker was just gaining traction as more than a buzzword. Was it production-ready? And just as important, could it address the security concerns of our on-premises customers by running our microservices architecture in isolation? To better understand the conclusions we reached, let’s take a look back at the history of process isolation and the development of Docker as a solution.

Mass Process Incarceration

Maintaining security while allowing multi-tenancy on a single machine has always been a source of frustration for systems administrators. When you allow anyone else to run code on a machine, you must take strict precautions to avoid a security breach. Case in point: as a teenager in 1999, I had one of my first experiences with BSD through a CGI Perl script I wrote to give myself command-line access to a shared hosting provider from a browser. Around this same time, as shared hosting providers were popping up in droves, an active FreeBSD committer named Poul-Henning Kamp came up with an answer: why not limit the access of each process to the underlying resources? It sounds simple in concept, but until this point fine-grained restriction of inter-process communication had not been fully implemented in most flavors of UNIX (a holdover from an era when a network of trust was implicit in accessing and running code on a UNIX machine).

Kamp proposed “Jail,” his process isolation approach, in 2000. Constraints on files and user environments had been in place since the chroot utility was first implemented by Bill Joy in 1982, most likely to facilitate a build system for the OS using only files contained under a given directory tree. Chroot logically sets the top-level working directory, or “root,” to a specified directory. Jail took this a few steps further. By applying root instantiation, network, and filesystem resource restrictions on a per-process or per-process-group level, the ability of any one process to negatively affect the system could be precisely curtailed. The following sample shows a chroot into a basic busybox-based environment:

root@werewolf:/home/ian/work/initrd# tree
.
├── bin
│   ├── busybox
│   └── sh -> busybox
├── etc
│   └── mdev.conf
├── init
├── newroot
├── proc
├── sbin
└── sys
root@werewolf:/home/ian/work/initrd# chroot ./ /bin/sh
/ #
/ # ls -R
.:
bin etc init newroot proc sbin sys
./bin:
busybox ls sh
./etc:
mdev.conf
./newroot:
./proc:
./sbin:
./sys:

As with most great applied abstractions, Jail created a small but useful ecosystem of utilities, such as ezjail and qjail, that helped operators implement and maintain the new feature. Other notable implementations of Jail soon followed in other UNIX-derivative operating systems, each extending the idea a bit further.
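To give a flavor of the interface those utilities wrap, here is a minimal sketch of starting a jail by hand using jail(8) parameter syntax, assuming a FreeBSD host with a userland already installed under /usr/jail/demo (the path, hostname, and address are all illustrative):

jail -c name=demo path=/usr/jail/demo \
    host.hostname=demo.example.com \
    ip4.addr=192.0.2.10 \
    command=/bin/sh   # start a shell confined to the jail
jls                   # from the host, list running jails

Processes inside the jail see only the files under the jail root and the single assigned address — exactly the per-process resource restriction described above.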

Containers, zones, and cgroups (oh my!)

In 2004, Solaris released containers, which applied a more rigorous definition to the concept of jails and allowed an administrator to easily create an entirely isolated virtual server. Instead of targeting just a few processes within a running system, this methodology contended that it was useful to run an entire system under a single jail. Not long after the term container was coined, the approach was renamed Zones. Solaris Zones enjoyed quite a bit of success among early adopters and in the enterprise. Companies like Joyent were founded around the thesis that this approach could help enable Platform as a Service (PaaS) and hybrid cloud.
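The administrative workflow reflected that whole-system view. As a rough sketch (the zone name web and zone root /zones/web are hypothetical), defining and booting one of these virtual servers looked like this:

zonecfg -z web "create; set zonepath=/zones/web; commit"   # define the zone
zoneadm -z web install                                     # populate its root filesystem
zoneadm -z web boot                                        # boot the isolated virtual server
zlogin web                                                 # log in from the global zone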

In September of 2006, Rohit Seth of Google upstreamed a set of patches that introduced containers to the Linux kernel. Seth’s goal was to improve resource utilization by letting disparate workloads run side by side on the same machine, each partitioned off from the others. With hardware becoming ever faster, the wasteful use of machine resources could be minimized by consolidating software processes onto fewer machines, with containers providing the resource partitioning that made doing so safe.

By 2007, the Linux community had coined the term “cgroups” (control groups) to describe the set of improvements previously referred to as containers. At the same time, Linux emerged as a clear victor in adoption among Unix variants and was establishing dominance in the server market as well. In the process, containerization became available to an ever-wider audience. But it still existed with little acknowledgement or fanfare outside of a few early adopters.
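The kernel interface itself is just a filesystem. As a minimal sketch using the cgroup v1 memory controller (the mount point and group name demo are assumptions about a typical setup), limiting the memory of a group of processes looks like this:

mkdir /sys/fs/cgroup/memory/demo                    # create a new control group
echo $((64 * 1024 * 1024)) > /sys/fs/cgroup/memory/demo/memory.limit_in_bytes   # cap the group at 64 MB
echo $$ > /sys/fs/cgroup/memory/demo/cgroup.procs   # move the current shell into the group
                                                    # children started from this shell inherit the cap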

Humble Beginnings

In 2010, a small company named dotCloud, fresh out of Y Combinator, was struggling to find market fit for their next-generation PaaS product. As founders Solomon Hykes and Sebastien Pahl discovered, a PaaS is notoriously difficult to get off the ground; the mere mention of one may cause a venture capitalist to palpitate. Key to dotCloud’s approach to providing an extensible and flexible platform was a utility that blended copy-on-write (COW) storage, AuFS, NAT, and containerization on Linux. When it became clear that dotCloud’s PaaS would never quite take off, Hykes decided (to the dismay of the board) to open-source this utility. Docker was born.

The initial commit of Docker, around 600 lines of code by Andrea Luzzardi in January of 2013, consisted of a light wrapper written in Go to set up, manage, and execute LXC containers on Linux. Similarly, the original implementation of Jail in FreeBSD was surprisingly simple, modifying roughly 350 lines of code (largely around the suser API) and adding about 200 more.
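Thin as that wrapper was, it already presented the now-familiar workflow. A minimal sketch of the early usage (image and command choices are illustrative):

docker pull ubuntu                  # fetch a base image from the central registry
docker run -i -t ubuntu /bin/bash   # launch an interactive shell in a new container
docker ps                           # list running containers from the host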

Over the next year, Hykes worked with Luzzardi to rapidly add features to the project, particularly in two main areas: revision management, enabled by the COW properties of AuFS, and the registry, a centralized image repository that enables an administrator to easily control the dependencies of the applications running in containers. These additions increased the portability of containerized applications while making the underlying technologies more accessible to a wider audience.
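Both features are visible in the build workflow itself. As a rough sketch (the Dockerfile contents, image name myuser/myapp, and tag are all illustrative), each build instruction becomes a copy-on-write layer, and the result can be published to a registry:

cat > Dockerfile <<'EOF'
# each instruction below becomes a copy-on-write layer
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y curl
COPY app.sh /usr/local/bin/app.sh
CMD ["/usr/local/bin/app.sh"]
EOF
docker build -t myuser/myapp:1.0 .   # assemble the layered image
docker push myuser/myapp:1.0         # publish it to a registry for reuse elsewhere

Because unchanged layers are cached, rebuilding after an application change only recreates the layers above the change.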

Docker Killed the PaaS Star

Perhaps most importantly, Docker addressed one of the biggest pain points of Linux: sprawl. For years Microsoft caught flak for what became known as “DLL hell,” where application dependencies would build up and drift on a system, rendering a program in an ambiguous or non-functioning state. Cruft like this naturally accumulates across OS version releases as programmers fail to adapt or users want to run legacy code. Linux was not immune to this issue by any means. There are as many dogmas and differing implementation views on Linux as there are system calls. Anyone with a new idea is free to start a distribution of Linux and run headlong in any direction, with or without the community. As a result, the chance that any dynamically compiled program will run for any given Linux user is less than it should be. Docker addressed this problem head-on by decoupling the kernel from stateful application dependencies.
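That decoupling is easy to demonstrate. In this sketch, a host running one distribution executes an image built from another, because the image carries its entire userland and shares only the host’s kernel (image and command choices are illustrative):

docker run --rm centos:7 cat /etc/centos-release   # the container carries its own CentOS userland
docker run --rm centos:7 uname -r                  # ...yet reports the host's kernel version
uname -r                                           # identical to the line above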

Instead of trying to create a walled garden where services are neatly served up by a PaaS, dotCloud altered the paradigm of the day by embracing the development community to help them solve their own problems. The great beauty therein is the parallel between the technological abstractions of Docker that made it so successful and the successfully executed abstraction, or pivot, of dotCloud’s own business model. By 2014 Docker had over 2.75 million downloads, even while releases prior to 1.0 were advised as not production-ready.

The initial reaction to Docker from a large segment of the community was that it was a passing fad. But what most of this segment failed to grasp was the significance of the improvements and the value of hegemony. Despite sentiments that the backing technology was immature, the reality was that the underlying kernel features had matured over the years. As such, Docker was primed to seize the day, as evidenced by over 100 million downloads and the communities of developers it now holds. Carpe diem crās.

Logically, the next step for Docker would be to cut out the kernel middleman for some applications. Why have a single monolithic kernel which may fail for reasons unrelated to your application? Why introduce the overhead of many interdependent subsystems where you don’t need any? As Moore’s law fades away and CPU functions scale out horizontally onto faster interconnects, the kernel will have to be efficient and mutable. (For further context, read any of the research by Ron Minnich on Microkernels and 9P.) Docker is already taking these steps with its acquisition of Unikernel Systems.

 

About the Author / Ian Applegate

Ian Applegate is the co-founder and chief architect of Kentik. Ian was previously at CloudFlare, where he focused on operational efficiency and researched new applications of congestion control and RDMA-to-HTTP. Long an avid Unix and Linux hacker, Ian joined a team out of Lawrence Berkeley National Lab at age 16 that contributed to the Warewulf and Perceus HPC provisioning utilities.