5 Common Container Mistakes to Avoid

With the increase in containers comes an increase in container mistakes. Here are some worth highlighting.

Containers are all the rage as organizations strive for faster software development and more efficient infrastructure management to carry out their digital transformations. According to a recent 451 Research report, about half of enterprises are either using containers now or plan to in the next two years.

That number is likely to keep growing, and why not? Containers package up and isolate all the artifacts needed to run an application and work uniformly regardless of the runtime environment. This increases portability among public clouds, hybrid cloud and on-premises infrastructure, reduces costs and fosters an agile DevOps culture.

The emergence of Kubernetes, open source software that automates the deployment, orchestration and management of containerized applications and is by far the most popular tool of its kind, is aiding the adoption of containers at scale.

So what could go wrong? Plenty, if you’re not careful. Here are the most common container mistakes we are seeing.

  1. User misconfiguration of containers or applications in those containers.

When moving from traditional workloads to containers, many things change: IP addresses, privilege levels and more. The application maintainer has to be aware of all of these alterations; otherwise, misconfigurations can happen, leading to unexpected results such as application failures, performance degradation and security exposure.

To avoid misconfiguration, application maintainers should familiarize themselves with the container orchestration platform and take the necessary steps to prepare their applications for the migration.
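As one concrete illustration of the privilege-level changes mentioned above, a Kubernetes Pod spec lets the maintainer state the application's privilege expectations explicitly rather than leaving them implicit in the image. This is a minimal sketch; the names, image and UID are hypothetical:

```yaml
# Hypothetical Pod spec: force the application to run unprivileged.
apiVersion: v1
kind: Pod
metadata:
  name: web-app                                 # hypothetical name
spec:
  containers:
    - name: web
      image: registry.example.com/web-app:1.0   # hypothetical image
      securityContext:
        runAsNonRoot: true             # refuse to start if the image defaults to root
        runAsUser: 10001               # explicit non-root UID
        allowPrivilegeEscalation: false
```

An application that silently assumed root access in a VM will fail fast here instead of misbehaving later, which makes the migration gap visible early.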

  2. Overcommitment of resources because of a lack of capacity management and planning around containers.

Although containers usually consume fewer resources than traditional virtual machines, this doesn’t mean that users can create thousands of containers on a single physical server. Resource management and capacity monitoring should always be part of a container strategy; otherwise, the available physical resources can be overcommitted relatively easily.

Fortunately, leading container orchestration platforms such as Kubernetes provide resource management capabilities that allow users, for example, to specify how much CPU and RAM each container gets. Various open source tools can then be used for capacity monitoring.
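In Kubernetes, this is expressed with resource requests and limits on each container. The sketch below is illustrative; the names, image and values are made up:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: api-server                            # hypothetical name
spec:
  containers:
    - name: api
      image: registry.example.com/api:2.3     # hypothetical image
      resources:
        requests:          # what the scheduler reserves on a node
          cpu: "250m"      # a quarter of a CPU core
          memory: "256Mi"
        limits:            # hard caps enforced at runtime
          cpu: "500m"
          memory: "512Mi"
```

The scheduler will not place pods on a node whose capacity is already fully committed by requests, and a container exceeding its memory limit is terminated, which is exactly the overcommitment guard described above.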

  3. Lack of consideration for Day 2 operations.

We have seen some customers trip up over a lack of consideration for Day 2 container operations such as patching, upgrades, scaling and infrastructure-as-a-service (IaaS) integration.

It is essential to think about this before using containers, because each container effectively carries an operating system’s worth of libraries and binaries. Over time, these may become susceptible to security vulnerabilities and other bugs related to the specific pieces of software running in each container.

Most container runtimes share the kernel among containers, providing only limited isolation, which means that if one container is exploited, there is a high probability that other containers and the underlying host will be compromised as well. Containers need to be updated and patched regularly.

There are ways to mitigate this challenge: CI/CD pipelines with automated tests can be used to rebuild, test and redeploy applications automatically with the latest fixes. Moreover, container runtimes such as the open source Kata Containers can provide stronger isolation by using the functionality of the underlying hardware (i.e., CPU virtualization instructions). Third-party products can help with security scanning not just during deployment, but also at the development stage.
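The rebuild-test-redeploy loop described above can be sketched as a scheduled CI workflow. This is a hypothetical GitHub Actions configuration, not a prescribed setup; the registry name is an assumption, and the test step assumes a pytest suite baked into the image:

```yaml
# Hypothetical workflow: a weekly rebuild picks up base-image patches
# even when the application code itself has not changed.
name: rebuild-and-redeploy
on:
  schedule:
    - cron: "0 3 * * 1"          # every Monday at 03:00 UTC
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: docker build -t registry.example.com/app:${{ github.sha }} .
      - run: docker run --rm registry.example.com/app:${{ github.sha }} pytest
      - run: docker push registry.example.com/app:${{ github.sha }}
```

Rebuilding on a schedule, rather than only on code changes, is what keeps patched base-image layers flowing into running containers.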

  4. Attempting to port workloads that are not suitable for containers or a microservices architecture.

It is very common for organizations to decide on a migration to containers while completely ignoring the fact that their existing workloads are not ready for it. Attempts to put legacy monolithic applications inside containers usually do not end well. The migration to containers should be a strategic decision of the organization and cover both the infrastructure and the applications. Before the migration, existing workloads should be redesigned around a microservices architecture.

Moreover, some applications are simply not suitable for either containers or a microservices architecture. Examples include services that require certain hardware extensions (e.g., virtualization) or stateful services that store data locally.

  5. Underestimating the possibility of container incompatibility.

Despite the portability of containers, in some circumstances there may be incompatibilities between a container and the underlying operating system or platform-as-a-service (PaaS) an enterprise is using. One example is an application within the container that invokes or relies on kernel functionality not available in the kernel provided by the underlying host OS or PaaS.

Also, certain vendors have proprietary CLI tools and extensions to open APIs such as the one provided by Kubernetes. That makes moving workloads from one PaaS to another difficult, and it makes multi-cloud setups even harder if a given PaaS solution can only be used in a single cloud.

We recommend following the principles of the Twelve-Factor App, a methodology for building modern software-as-a-service apps. Platforms such as Kubernetes are designed around these principles, so gaining a good working knowledge of them is recommended.
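As a small illustration of one of those principles (factor III: store config in the environment), a containerized application can read its settings from environment variables instead of files baked into the image. The variable names and defaults below are hypothetical:

```python
import os

def load_config(env=os.environ):
    """Read configuration from environment variables (Twelve-Factor, factor III).

    The variable names and defaults are hypothetical; a real application
    would use whatever its deployment platform injects.
    """
    return {
        "database_url": env.get("DATABASE_URL", "postgres://localhost:5432/app"),
        "log_level": env.get("LOG_LEVEL", "INFO"),
        "worker_count": int(env.get("WORKER_COUNT", "4")),
    }

# The same image then runs unchanged in every environment; only the
# injected environment differs between dev, staging and production.
config = load_config({"DATABASE_URL": "postgres://db:5432/prod", "WORKER_COUNT": "8"})
print(config["worker_count"])   # 8
print(config["log_level"])      # INFO (falls back to the default)
```

Because configuration lives outside the image, the same container is portable across clouds and on-premises infrastructure without rebuilds.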

Containers are the future, and increasingly the present, of application development. It’s important to develop a set of best practices and avoid container mistakes so every enterprise can benefit from the technology’s unique value.

Tytus Kurek

Tytus Kurek is Product Manager at Canonical, the publishers of Ubuntu, for OpenStack and Kubernetes with a primary focus on the telco industry. Prior to joining Canonical, Tytus held positions at Pegasystems and Antenna Software in engineering roles. He holds a PhD in cloud computing and a Master of Science in Telecommunications.
