Now that the containerd project has formally reached 1.0 status under the auspices of the Cloud Native Computing Foundation (CNCF), one of the hallmarks of 2018 will be a significant expansion of the number and types of platforms that can host container images.
The containerd project provides a mechanism for transferring container images in addition to controlling how a container is executed and supervised on both Linux and Windows systems. Thanks to ongoing contributions from Docker Inc., Google, NTT, IBM, Microsoft, AWS, ZTE, Huawei and ZJU, containerd now includes a storage and distribution system that supports both OCI and Docker image formats, an events system and a more sophisticated snapshot model for managing container filesystems.
Within the container community, containerd is already being employed by the cri-containerd project, which enables users to run Kubernetes clusters with containerd as the underlying runtime. The containerd project also exposes an application programming interface (API) over gRPC, along with metrics in a format that can be consumed by the Prometheus monitoring system.
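The metrics mentioned above use the standard Prometheus text exposition format, which is plain line-oriented text. As an illustrative sketch only, the snippet below parses a scrape body in that format; the metric names and labels in the sample are hypothetical placeholders, not containerd's actual metric names.

```python
# Minimal parser for the Prometheus text exposition format (the format a
# metrics endpoint such as containerd's emits). The sample metric names
# below are hypothetical, not taken from containerd's real output.

SAMPLE_SCRAPE = """\
# HELP example_containers_running Number of running containers (hypothetical).
# TYPE example_containers_running gauge
example_containers_running{namespace="default"} 3
example_containers_running{namespace="system"} 1
"""

def parse_metrics(text):
    """Return a list of (name, labels, value) tuples from a scrape body."""
    samples = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):  # skip blank, HELP and TYPE lines
            continue
        body, value = line.rsplit(" ", 1)
        if "{" in body:
            name, raw = body.split("{", 1)
            # naive split: assumes no commas inside quoted label values
            labels = dict(
                (k, v.strip('"'))
                for k, v in (pair.split("=", 1)
                             for pair in raw.rstrip("}").split(","))
            )
        else:
            name, labels = body, {}
        samples.append((name, labels, float(value)))
    return samples

for name, labels, value in parse_metrics(SAMPLE_SCRAPE):
    print(name, labels, value)
```

Because the format is this simple, any tool in the monitoring pipeline, not just Prometheus itself, can consume a containerd metrics scrape.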
The containerd project is intended to ensure a vibrant ecosystem emerges around a consistent implementation of a container runtime. But Patrick Chanezon, chief developer advocate at Docker Inc., says the most significant aspect of the containerd project might prove to be the degree to which it fosters development of a wider variety of OEM systems capable of running multiple types of container images. As part of that trend, Chanezon says Docker Inc. expects containerd to play a significant role in expanding the number of containerized applications deployed in internet of things (IoT) environments. Collectively, those hardware platforms should greatly expand opportunities for developers with container expertise.
The amount of money being invested in IT platforms that go well beyond traditional data center environments is expected to explode in 2018. Organizations of all sizes are discovering that being able to run analytics applications at the edge of the network, for example, enables them to respond more adroitly to changing business conditions. However, the amount of compute resources available on edge devices tends to be limited. There’s also a lot more diversity in those environments in terms of the types of processors being employed.
Containers provide a means to build lightweight applications that can be deployed equally well on top of either a hypervisor or a bare-metal IoT gateway. Best of all, containers make it easier to build distributed applications spanning everything from the edge of the network to a public cloud computing service.
It’s too early to say how significant a role containers will play in IoT applications in 2018. But containers provide an ideal mechanism for deploying and updating IoT applications. The challenge and opportunity now will be determining to what degree the providers of these systems can optimize them to support multiple stacks of container infrastructure that are all based on a common containerd runtime.