A closer look at container monitoring

Along with orchestration and security, monitoring is a key challenge facing organizations that have adopted containerization technologies. Especially in production environments, you have to ensure that all your microservices perform well and provide their services as designed. But beyond that, how much do you really need to know about what’s going on inside your containers? The answer varies from organization to organization. In this article I’ll give you an overview of your container-monitoring options.

By design, containers run programs in isolation, which makes it difficult to monitor what’s going on inside them. Most traditional Linux monitoring tools don’t work well with containers because they are designed to run on a single host and to analyze log files on disk. Containerized apps require a different approach because a container’s disk contents aren’t persisted once it shuts down (unless they’re written to persistent storage).

Docker Stats: The black-box monitoring approach

The docker stats command live-streams a container’s runtime metrics. Do you only need to know about CPU usage, memory usage, memory limits, and network I/O? If so, Docker Stats is all you need. It’s a great first step toward understanding the dynamics of the containers in your environment.
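The same metrics are also available programmatically. Here’s a minimal sketch using the Docker SDK for Python (pip install docker), which wraps the same API the CLI uses; it takes one-shot snapshots rather than a live stream:

```python
import docker

client = docker.from_env()  # connect to the local Docker daemon

for container in client.containers.list():
    stats = container.stats(stream=False)  # single snapshot instead of a stream
    mem = stats["memory_stats"]
    cpu = stats["cpu_stats"]["cpu_usage"]["total_usage"]
    print(container.name,
          "cpu(ns):", cpu,
          "mem:", mem.get("usage"), "/", mem.get("limit"),
          "net:", stats.get("networks", {}))
```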

Docker Remote API

Many monitoring tools use the Docker Remote API to capture host-resource consumption metrics for each container. This is valuable information that operators can use when allocating host resources to containers. By querying the API of each Docker engine in your environment, you can capture details of your container dynamics. For example, you can learn which hosts run containers based on a certain image.
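As a rough sketch of that last example, the following queries a few Docker engines and reports where containers built from a given image are running. The endpoints and image name are assumptions for illustration, and in production you’d secure the API with TLS rather than exposing plain TCP:

```python
import docker

HOSTS = ["tcp://node1:2375", "tcp://node2:2375"]  # hypothetical engine endpoints
IMAGE = "myorg/payment-service"                   # hypothetical image name

for host in HOSTS:
    client = docker.DockerClient(base_url=host)
    for container in client.containers.list():
        # image.tags is a list like ["myorg/payment-service:1.4"]
        if any(tag.startswith(IMAGE) for tag in container.image.tags):
            print(f"{host}: {container.name} runs {IMAGE}")
```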

Microservices bring special needs

With the current trend toward microservices, this becomes even more valuable. A Docker image is built for each service, and you need to know which machines run the containers for each service.

Microservices and Docker are a perfect fit: Docker containers and orchestration technologies offer a means of deploying, running, and scaling applications and microservices, and the Docker ecosystem is an ideal enabler for running microservices in dynamic cloud-based environments. This is one of the reasons why all major public cloud providers now support Docker.

But how can you know if the services you’ve deployed are available and performing well? And how can you know if they’re working as designed?

Cargo scanning: It’s what’s inside that counts

So, with the Docker Remote API you can see the resources that containers consume, but how can you know that your containerized microservices are working as designed? To gain deeper insights, you have to monitor the cargo within containers (i.e., your applications and services) in addition to the containers themselves. Such in-container monitoring provides deep insights into how your services perform. You can leverage these insights to improve your software architecture and deployment.

Why you need an agent in each container

For in-container monitoring you need agent software that collects data from within the container. There are two ways to achieve this: (1) modify your Docker images by adding agents to your build scripts, or (2) bake your monitoring agents into the base images from which you build your containers. Either way, you have to touch and rebuild your Docker images each time you update your monitoring agents.
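A sketch of option (1) might look like the following Dockerfile fragment; the agent path and start command are placeholders, not a specific product:

```dockerfile
FROM python:3.12-slim

# Bake the (hypothetical) monitoring agent into the image next to the service
COPY monitoring-agent/ /opt/monitoring-agent/

COPY app/ /app
WORKDIR /app
RUN pip install -r requirements.txt

# Start the agent in the background, then the service; note that every agent
# update now forces an image rebuild and redeploy
CMD ["/bin/sh", "-c", "/opt/monitoring-agent/agent & exec python main.py"]
```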

How to monitor without breaking the seal

A more sophisticated way of monitoring applications in Docker containers is to integrate the monitoring solution with your existing Docker environment. In such scenarios, the monitoring solution runs on the Docker host and detects the creation of new containers. The required monitoring agent then automatically hooks into each container.
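The detection part can be built on the Docker event stream. Here’s a minimal sketch, again with the Python SDK; attach_monitoring stands in for whatever injection mechanism your monitoring solution uses and is purely hypothetical:

```python
import docker

def attach_monitoring(container_id: str) -> None:
    # Placeholder for the vendor-specific step that hooks an agent into
    # the newly started container
    print(f"attaching monitoring to {container_id}")

client = docker.from_env()
# Block on the event stream and react to every container start
for event in client.events(decode=True,
                           filters={"type": "container", "event": "start"}):
    attach_monitoring(event["id"])
```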

With a monitoring agent placed inside each container, your monitoring solution can detect new containers automatically and monitor dynamic multi-container applications.

Higher levels of container monitoring

If you’re running highly distributed clusters within Docker containers, you’ll also need to monitor your cluster-management solution (Docker Swarm, Kubernetes, etc.) and each cluster node. You also need to ensure that the Docker daemon running on each cluster node remains available.
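A daemon liveness check can be as simple as pinging each engine. The node addresses below are hypothetical, and a real setup would use TLS-protected endpoints:

```python
import docker

NODES = ["tcp://node1:2375", "tcp://node2:2375"]  # hypothetical cluster nodes

for node in NODES:
    try:
        docker.DockerClient(base_url=node, timeout=5).ping()
        print(f"{node}: Docker daemon is available")
    except Exception as exc:  # connection and API errors both mean "unavailable"
        print(f"{node}: Docker daemon unreachable ({exc})")
```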

As you can see, the level of insight you need determines the required sophistication of your monitoring solution. Monitoring at the container level is easy thanks to Docker Stats and the Docker Remote API, but application-centric monitoring of what’s running inside your containers requires a more sophisticated solution. Your own requirements should be your guide.

Gerald Haydtner

Gerald Haydtner works for Dynatrace, the leader in digital performance management that enables companies to manage thousands of servers with very small IT monitoring teams. Gerald is a trained software developer with a passion for performance optimization.


One thought on “A closer look at container monitoring”

  • Microservices should emit statsd-style statistics, and every host should run a statsd-like daemon that forwards them. That way you can easily add statistics to your code. You could also use something like New Relic. If the containers run a single service, then you can just watch container start/stop events (assuming the Docker daemon is working properly). And if you want an agent for each container, instead of adding it to the container you can deploy it using the ‘sidecar’ pattern (multiple containers sharing the same namespaces), so you don’t need to touch the container at all.
