“Oh, Docker may be able to rule the commercial container ecosystem”
That was the conclusion that Bernard Golden and I came to after discussing immutable infrastructure over breakfast at the 24 Diner in Austin. In the past, we’d assumed that Docker would not be able to monetize its container tools and would be forced to create for-pay orchestration; however, there’s a less obvious and more pervasive monetization avenue: the Dockerhub.
How does a download site become a paywall? It’s related to how immutable infrastructure works.
The fundamental idea of immutable infrastructure is that we never build, upgrade or install application components in situ; instead, we retrieve a ready-to-go image from a central repository (by default, Dockerhub). How can we run software without an install? We don’t. The actual install ends up being more of a compile process where all the components are statically linked together.
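A minimal sketch of that build-time “install” makes the point concrete. The application name and base image here are invented stand-ins, not a specific project’s build:

```dockerfile
# Hypothetical image build: the "install" happens here, once, at
# packaging time. Nothing is built or upgraded in situ afterwards.
FROM ubuntu:16.04

# Dependency resolution and install are baked into the image...
RUN apt-get update && apt-get install -y --no-install-recommends python \
    && rm -rf /var/lib/apt/lists/*

# ...along with the application itself, producing a ready-to-go artifact.
COPY app/ /opt/app/
CMD ["python", "/opt/app/main.py"]
```

Any later configuration happens at launch time (for example, environment variables passed to `docker run`), never by modifying the image itself.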
Immutable sounds pretty simple: you use a master build to create a pristine copy and then clone it. That’s what we call immutable because the contents are generated as a complete artifact and then never changed. That does not mean zero configuration; there are still pre-execution steps required, but most of the install, dependency resolution and generic configuration is done during container creation (aka packaging).
The reality is much more complex and interconnected because container images are layered, so there’s no true “single source” for a container. A well-designed container will draw from other container layers as a shared history. The best way to reduce image footprint is to leverage shared layers. For example, the 12+ Digital Rebar containers all start from a shared base, with common libraries and clients, that is itself built on top of other common layers.
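As an illustration of that layering (the image names here are invented stand-ins, not the actual Digital Rebar build):

```dockerfile
# Hypothetical service image; "myorg/base" stands in for a shared base
# image that carries the common libraries and clients.
FROM myorg/base

# Only this service's own files become new layers; everything in
# myorg/base (and the layers beneath it) is shared with sibling images.
COPY service-a/ /opt/service-a/
CMD ["/opt/service-a/run"]
```

A sibling service that also starts `FROM myorg/base` adds only its own layers; the base layers are stored and downloaded exactly once, which is what keeps the aggregate footprint small.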
The fact that containers are highly interconnected layers does not, by itself, make them monetizable. It’s the frequency of change that creates the opportunity.
In the immutable pattern, we are constantly (re)creating instances from our sources. This is not necessarily image churn; it’s instance churn, and faster instance cycles make a system more resilient. So our ideal container platform will be starting and destroying container instances at an incredibly fast pace. This intentional churn helps ensure that our application always has the latest code because old images are quickly cycled out of the system.
That also means that our system is constantly checking to see if there are newer images. We may rarely have to download an update compared to our cycle rate; however, we still have to check the registry before every instantiation. In fact, the registry runs the danger of becoming the critical path for every cycle.
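A toy model of that check-before-instantiate loop makes the ratio visible. The registry here is just a dict standing in for a real registry API (a real client would issue an HTTP request against the manifest endpoint), and the image name is invented:

```python
# Toy model of the "check the registry before every instantiation" pattern.
registry = {"myorg/app": "sha256:aaa"}   # image -> current digest (simulated)
local_cache = {}                          # digest we already have on disk
stats = {"checks": 0, "downloads": 0}

def instantiate(image):
    """Start one instance; always consult the registry first."""
    stats["checks"] += 1
    latest = registry[image]              # the freshness check (critical path)
    if local_cache.get(image) != latest:
        stats["downloads"] += 1           # only pull when the digest moved
        local_cache[image] = latest
    return f"running {image}@{latest}"

# Rapid instance churn: many cycles, but the image itself rarely changes.
for _ in range(100):
    instantiate("myorg/app")
registry["myorg/app"] = "sha256:bbb"      # one upstream update
for _ in range(100):
    instantiate("myorg/app")

print(stats)   # 200 checks, only 2 downloads
```

Even in this tiny sketch, the registry sees two hundred freshness checks for two actual downloads; the checks, not the bytes served, dominate the relationship.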
The entire container/microservice architectural model makes registries an essential keystone.
Since this immutable model requires constantly checking with the registry, the registry owner gets a lot of data and even a measure of control. In addition, the layered, composite nature of containers means that secondary registries also get data and control.
We get data because the frequency of “has my image changed” requests correlates directly with actual use of the container. While popular but stable containers may not require many downloads, they will get a lot of “updated?” pings. That translates into real data for the provider.
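From the registry operator’s side, that looks something like the sketch below: every freshness ping is a usage datapoint even when no bytes are served. The handler and image name are hypothetical:

```python
from collections import Counter

# Registry-side view: "updated?" pings become usage telemetry.
pings = Counter()
downloads = Counter()

def handle_check(image, current_digest, client_digest):
    """Handle a freshness check; count it whether or not a pull follows."""
    pings[image] += 1                     # usage signal, download or not
    return current_digest != client_digest

# A popular but stable image: heavy ping traffic, zero downloads needed.
for _ in range(1000):
    if handle_check("popular/stable", "sha256:aaa", "sha256:aaa"):
        downloads["popular/stable"] += 1

print(pings["popular/stable"], downloads["popular/stable"])   # 1000 0
```

The download counter stays flat while the ping counter tracks real usage, which is precisely the data asymmetry the paragraph describes.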
We get control because the registry has the option to say, “yes, there’s an update” but “no, you cannot download it.” That type of control can become a paywall or usage license. I see this as the new software delivery model: access to registries becomes the new vendor control point.
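A minimal sketch of that control point, with invented image and customer names, separates the two answers the registry can give:

```python
# Sketch of the registry-as-paywall: the registry confirms an update
# exists but only serves it to licensed clients.
registry = {"vendor/app": "sha256:v2"}
licensed = {"customer-a"}                 # who has paid for access

def check_update(image, client_digest):
    """Anyone may learn that an update exists."""
    return registry[image] != client_digest

def pull(image, client_id):
    """...but only licensed clients may download it."""
    if client_id not in licensed:
        raise PermissionError("update available, license required")
    return registry[image]

assert check_update("vendor/app", "sha256:v1")          # "yes, there's an update"
assert pull("vendor/app", "customer-a") == "sha256:v2"  # paid: download allowed
try:
    pull("vendor/app", "customer-b")                    # unpaid: paywall
except PermissionError as e:
    blocked = str(e)
```

Splitting “is there an update?” from “may I have it?” is the whole mechanism: the first answer keeps users checking in, the second is where the license lives.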
The very nature of the “freshness required” system ensures an ongoing relationship between users and vendors.