Best of 2022: 3 Steps to Prepare for Dockershim Removal From Kubernetes

As we close out 2022, we at Container Journal wanted to highlight the most popular articles of the year. Following is the latest in our series of the Best of 2022.

By now you’ve probably heard the news: the Kubernetes v1.24 release removes the dockershim component. If you haven’t already, it’s time to communicate with your dev teams and map out a plan. Will the removal impact you? Potentially, but even if it does, it’s nothing to panic about. Let’s look at how you can identify any impacts and keep the removal from breaking your clusters.

First, some history. Originally, Kubernetes was compatible only with Docker. When cluster operators wanted to use other container runtimes, the Kubernetes project designed the Container Runtime Interface (CRI) to provide that flexibility. Docker Engine, however, predates that interface and doesn’t implement it, which led to the creation of dockershim as an adapter component. Dockershim was originally intended as a stopgap solution, and it became a burden to maintain as time went on; some newer features didn’t work well with it. Its removal lets developers build out those features more extensively. Those are just a few of the reasons the dockershim removal makes sense.

Let’s review how the removal could affect you and how to check whether your workloads are affected. The good news: even if they are, this is not a difficult fix. You’ll just need to work through these three steps.

Step One: Check Your Clusters for Docker Dependencies

Start your checklist by assessing any privileged pods. You’ll want to confirm that they don’t modify Docker-specific files or restart the Docker service. Also, confirm that they (and any scripts and apps running on nodes outside Kubernetes) don’t execute Docker commands. Next, check for private registry or image mirror settings in the Docker configuration file; you’ll need to reconfigure those for your new container runtime.
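One quick way to surface direct Docker dependencies is to look for pods that mount the host’s Docker socket through a hostPath volume. Here’s a minimal sketch, assuming jq is installed and your nodes use the default /var/run/docker.sock path:

  # List pods that mount the host's Docker socket via a hostPath volume.
  kubectl get pods --all-namespaces -o json \
    | jq -r '.items[]
        | select(any(.spec.volumes[]?; .hostPath.path? == "/var/run/docker.sock"))
        | "\(.metadata.namespace)/\(.metadata.name)"'

Any pods this turns up are talking to Docker Engine directly and will need attention before the runtime changes.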

After that, identify any indirect dependencies on dockershim behavior, such as tooling that reacts to Docker-specific behaviors. If you find any, you’ll want to test the behavior before migration. Also, examine any third-party tools that perform similar operations. For instance, a monitoring agent might collect logs and metadata through Docker.
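Two quick checks help here: the CONTAINER-RUNTIME column in the node listing shows which runtime each node reports, and a simple grep over your DaemonSets (where monitoring and log agents usually live) flags anything that mounts Docker’s socket or data directory. This sketch assumes the common default paths:

  # Show the container runtime each node reports (docker://, containerd://, cri-o://).
  kubectl get nodes -o wide

  # Flag DaemonSets (often monitoring or log agents) that mount Docker paths.
  kubectl get daemonsets --all-namespaces -o yaml | grep -nE 'docker\.sock|/var/lib/docker'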

At this point, you should have an accurate picture of how affected you’ll be and whether your team needs to tackle changes in your product’s codebase.

Step Two: Move to Another CRI

After the removal, you’re still free to use Docker locally on your own device to develop or test containers, even while using other container runtimes for your Kubernetes clusters. The good news is that the container images you build with Docker will work with all CRI implementations, because technically they’re not so much Docker images as they are Open Container Initiative (OCI) images, and every CRI runtime can pull and run OCI images. If you want to keep Docker Engine as the runtime under Kubernetes, you can also turn to a replacement adapter: cri-dockerd.
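If you do stay on Docker Engine, cri-dockerd exposes a CRI socket that the kubelet can use in place of the removed dockershim. A minimal sketch, assuming a kubeadm-provisioned node and the adapter’s default socket path (verify both on your distribution):

  # Kubelet flags on kubeadm nodes typically live in this file.
  cat /var/lib/kubelet/kubeadm-flags.env
  # Point the kubelet at cri-dockerd's CRI socket, for example:
  #   --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock
  # Then restart the kubelet to pick up the change.
  sudo systemctl restart kubelet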

However, you’ll still want to migrate your clusters to a compliant container runtime, and you have several options. Moving to containerd is a straightforward choice and one that could improve performance while trimming overhead. The CRI-O runtime is another popular choice. (You can find out more about using either one on the Kubernetes documentation’s Container Runtimes page.) But before you make a decision, explore your options to see which works best for you.
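As a rough sketch of what a containerd migration looks like on a single node, after you’ve installed containerd and drained the node (exact steps vary by distribution and by how your cluster was provisioned):

  # Generate a default containerd configuration with the CRI plugin enabled.
  containerd config default | sudo tee /etc/containerd/config.toml
  sudo systemctl restart containerd

  # Confirm the runtime answers on its CRI socket before pointing the kubelet at it.
  sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock info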

Step Three: Tweak Your Kubernetes Infrastructure as Needed

After you migrate, you’ll want to test and confirm that everything is working as it should. The underlying containerization technology is largely the same between Docker, containerd and the other CRI runtimes, but you could still encounter a few issues. Keep an eye on elements like runtime resource limits, logging configuration and tools that require direct access to Docker Engine. Double-check that any special hardware integrates correctly with your runtime and with Kubernetes; finally, test any plugins that require the docker CLI or the Docker control socket, as well as any node provisioning scripts that call Docker directly.
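A few basic post-migration checks, assuming a node named worker-1 (substitute your own node names):

  # Confirm the node now reports the new runtime and stays Ready.
  kubectl get nodes -o wide
  # Review the pods scheduled on the migrated node.
  kubectl get pods --all-namespaces --field-selector spec.nodeName=worker-1
  # Watch the kubelet logs for runtime-related errors on that node.
  sudo journalctl -u kubelet --since "1 hour ago" | grep -i error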

Did you customize your dockerd configuration? You’ll need to adapt that configuration to your new container runtime. And if you depended on the Docker socket as part of a cluster workflow, you won’t be able to use it after moving to a different runtime. Instead, try solutions like kaniko, img and buildah for image builds, or the crictl tool as a drop-in replacement in system maintenance workflows.
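For node-level debugging, crictl covers the most common docker commands. A few rough equivalents, run on the node against whichever CRI runtime is configured:

  sudo crictl ps                    # running containers (roughly `docker ps`)
  sudo crictl images                # pulled images (roughly `docker images`)
  sudo crictl pods                  # pod sandboxes (no direct docker equivalent)
  sudo crictl logs <container-id>   # container logs (roughly `docker logs`)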

Building a Foundation for a Dockershim-Free Future

Hopefully, none of this sounds daunting; it doesn’t need to be. You only need to be thorough. Talk to your dev leads now, read Kubernetes blogs, make sure your teams are informed and do all the right testing. Then you can be confident in your clusters going forward—whether you turn out to be affected by the dockershim deprecation or not.