One way of measuring Docker’s impact is to consider which IT Ops problems have become irrelevant thanks to containers. The yum versus apt debate is one such problem that admins no longer have to worry about because of Docker.
Yum vs. Apt
In the days of my youth—which is to say, the 2000s—Linux users spent a long time worrying about the differences between yum and apt.
For the uninitiated, here’s a brief description of those differences: Yum was a package manager used by Red Hat and related Linux distributions. It worked with software packaged in the RPM format. (Yum has since been replaced by Red Hat’s new dnf package manager, but dnf is still designed to work only with RPMs.)
Apt, meanwhile, was the package manager on Ubuntu and other Debian derivatives. It worked only with Debian, or deb, packages.
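To make the split concrete, here is roughly what installing the same piece of software (nginx, as an illustrative example) looked like on each side of the divide:

```shell
# On Red Hat, CentOS, or Fedora: yum installs RPM packages
sudo yum install nginx

# On Ubuntu or Debian: apt works only with .deb packages
sudo apt-get update
sudo apt-get install nginx
```

Two different commands, two incompatible package formats, and no guarantee that a given application was packaged for both.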
This created a problem for Linux users because it meant that the Linux universe was divided into two halves. In some cases, an application you wanted to install might only be available as an RPM, but you were running Ubuntu and apt, so you couldn’t install it. Or you could only find a deb package but needed to install it with yum.
There were some tricks that you could try to get around these problems; for example, there was a tool called alien that theoretically could convert RPMs to Debian packages, although it rarely worked perfectly. You also could try compiling your application from source, but that was a huge pain, and it made updates difficult.
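The alien workaround mentioned above looked something like the following (the package names here are placeholders, and as noted, the results were often imperfect):

```shell
# Convert an RPM into a .deb so it can be installed on a Debian-based system
sudo alien --to-deb some-package.rpm
sudo dpkg -i some-package*.deb

# Or go the other direction: convert a .deb into an RPM
sudo alien --to-rpm some-package.deb
```

Even when the conversion succeeded, dependency metadata and maintainer scripts frequently did not survive the trip intact.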
If I had a dime for every hour I spent in the frigid cold of my employer’s over-air-conditioned basement data center dealing with problems caused by the differences between yum and apt, I would have been able to afford much nicer sweaters to keep me warm while I was down there.
Docker: The Universal Package Manager
Back in the 2000s, almost no one was talking about containers. Certainly no one was talking about Docker, which did not yet exist.
And that’s a shame, because if Docker had been around, it would have made the lives of Linux users everywhere much easier.
The reason is that Docker essentially serves as a universal package manager for Linux.
With Docker, you can pull a container image and run a containerized application on any Linux distribution. You don’t have to worry about whether the container image was created to support your particular flavor of Linux, because Docker works on any type of Linux.
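In practice, that means the same two commands work identically on Fedora, Ubuntu, Debian, or any other distribution running the Docker daemon (nginx is again used here purely as an example image):

```shell
# Pull the image and start a container; the commands are the same
# regardless of which Linux distribution the host is running
docker pull nginx:1.25
docker run -d --name web -p 8080:80 nginx:1.25
```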
Plus, Docker makes updates trivially easy. Arguably, updating containerized applications is even easier than updating applications using yum or apt.
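A minimal sketch of that update flow, assuming a container named web was started earlier from an older nginx image (orchestration tools automate this further, but even by hand it is just three steps):

```shell
# Pull the newer image, then replace the running container with it
docker pull nginx:1.26
docker stop web && docker rm web
docker run -d --name web -p 8080:80 nginx:1.26
```

There is no dependency resolution to fight with and no risk of a half-applied upgrade: the old container is simply swapped for a new one.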
For IT Ops folks, this is a big deal. It means that the days of trying to shoehorn Debian packages onto Red Hat servers, or vice versa, are long gone.
True, there are some limitations to Docker’s ability to function as a universal package manager. The biggest is that Docker doesn’t work especially well for applications with graphical user interfaces, so it’s not ideal for installing software on desktop Linux. But in Linux server environments, where you typically don’t use GUIs, this is not really an issue.
In general, Docker introduces huge efficiencies to the way software is installed and managed on Linux. That’s probably not the first benefit that comes to mind when people think about Docker, but for server administrators, it is a huge advantage.