June 29, 2016

Does the container ecosystem need more standards? If you look at the list of different container platforms and products — from Docker and CoreOS to LXD and Project Atomic — it can be easy to think so. Here’s a look at the importance of standards for container adoption, and what stakeholders are doing to increase standardization of containers.

First off, we should make clear that the container platforms mentioned above are not all completely different. Docker and CoreOS are distinct container platforms, but in some ways — such as at the orchestration level — they remain compatible. In addition, LXD and Atomic Host aren’t competitors with Docker. They’re environments on which you can run Docker containers.

And, of course, the Linux kernel features that make containers possible — namespaces and control groups, which LXC exposes through userspace tools — remain at the core of all of this. In that sense, the container ecosystem still shares a common foundation.

Still, as container products have matured over the past two years, the trend has been toward less interoperability and a lack of standardization. Docker and CoreOS used to use the same types of containers, but CoreOS has now implemented its own, homegrown solution in the form of Rocket (or rkt for people who don’t like long words). Container deployment platforms like LXD (Canonical’s solution for Ubuntu Linux hosts) and Atomic Host (Red Hat’s enterprise container solution) are growing increasingly distinct. Microsoft is building its own container solution in the form of Hyper-V containers, which will make LXC less important as a common foundation for all types of containers.

In light of all of this, it’s clear that different types of container products and platforms are diverging from one another. And with divergence comes proprietary standards, lack of interoperability and the risk of vendor lock-in. Usually, none of this is good for encouraging adoption of a new type of product.

But that doesn’t mean companies will soon have to commit to a particular container solution with no way to reconsider later. Developers are already at work on the growing standardization challenges of containers. Complete solutions have not yet arrived, but they are likely to soon.

The most important of these container standardization efforts is the Open Container Initiative (OCI). Launched in June 2015, this is a project sponsored by the Linux Foundation to build a standard, open-source industry specification for containers. The specification in the works is based on runtime code (runc) that Docker contributed to the OCI last summer, but the standard itself is vendor-neutral: any container product or solution, at any level of the container software stack, can be designed to implement it.
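To make that concrete: the runtime side of the specification describes a container as a filesystem bundle plus a JSON configuration file. The sketch below shows the general shape of such a `config.json`. The spec was still in development at the time of writing, so the exact fields shown here follow later drafts and are illustrative rather than definitive.

```json
{
  "ociVersion": "1.0.0",
  "root": {
    "path": "rootfs",
    "readonly": true
  },
  "process": {
    "terminal": true,
    "user": { "uid": 0, "gid": 0 },
    "args": ["sh"],
    "cwd": "/"
  },
  "hostname": "example"
}
```

The point of the standard is that any compliant runtime — Docker’s runc among them — should be able to take a bundle like this and run it, regardless of which vendor’s tooling produced it.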

The OCI has broad industry support. It includes container-focused companies like Docker and CoreOS, but also much bigger organizations that are interested in deploying containers on a massive scale, like Facebook and Google. Software providers developing container products, such as Red Hat, SUSE and Microsoft, are also part of the effort. (Interestingly, Canonical is currently only a silver-level member of the project, in contrast to these other container stakeholders, which support the OCI at the highest level.)

This commitment from a variety of backers provides a decent level of assurance that the OCI will succeed in developing a standardized container specification. That’s a good thing for container adoption.

Of course, naysayers may point out that, for now, the OCI has yet to exert much actual impact on the industry. Its specification remains in development, and solution providers are not waiting for it to be finished before creating their own container products. They are releasing them without necessarily worrying about conforming to the OCI specification. But that doesn’t mean their products can’t be brought into line once the specification is complete.

Plus, the fact that so much of the code being used to build container products is already open source is encouraging. Whether the OCI succeeds or not, the open-source nature of most container solutions means that it won’t be especially difficult for developers to make one container product compatible with another vendor’s solution, even if they don’t collaborate directly with that vendor.

That makes the container ecosystem different in a fundamental way from, for instance, the virtualization market. When virtualization took off a decade ago, most of the enterprise-ready solutions, such as VMware’s, were primarily closed-source. (Open-source hypervisors like KVM didn’t mature until somewhat later.) That led to different types of virtual machine images and environments that were difficult to port, and that encouraged vendor lock-in. Over time, the virtualization ecosystem grew more standardized. But it’s still not necessarily easy today to migrate a data center’s virtual servers from one type of virtualization environment to another.

Things are different with containers. Whether you adopt Docker or CoreOS rkt as your container format — and whether you choose to deploy containers using LXD, Atomic Host or whichever other type of environment you like — most of the code you are relying on will still be open-source. With a little work, you can build in integrations if you need to, since you have access to the underlying code.

This is to say that container compatibility is unlikely to become a serious issue in the long run. But for now, organizations interested in adopting containers should think hard about the potential challenges that could arise from the current lack of container standardization — and keep their fingers crossed that the OCI specification reaches production quality soon.

Christopher Tozzi

Christopher Tozzi has covered technology and business news for nearly a decade, specializing in open source, containers, big data, networking and security. He is currently Senior Editor and DevOps Analyst with Fixate.io.