Containers have revolutionized software delivery and deployment: their ease of configuration and deployment has increased development speed and agility, and DevOps culture is changing along with the speed revolution containers have brought. But while containers provide plentiful advantages, they also create visibility and monitoring challenges that make it difficult (or even impossible) for DevOps teams to keep their applications humming.
For example, application monitoring within containerized environments is much more challenging than monitoring applications deployed on virtual machines or bare-metal servers, because container stacks add layers of their own: orchestrators, registries, runtimes and more.
From automatic deployment and configuration to machine-assisted service level management and troubleshooting, everything has to change for DevOps to effectively manage the complex, dynamic modern application. DevOps teams require full visibility and immediate feedback whenever a change to an application is deployed, especially when that change introduces new containers.
DevOps teams need to know answers to questions including:
- Which microservice is running where?
- Was the latest deployment “successful”?
- Did the latest deployment impact service quality?
- If quality was impacted, what is the root cause?
- Is my compute infrastructure over-allocated (endangering performance) or under-allocated (wasting money)?
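The first of these questions is answerable directly from orchestrator metadata. As a minimal sketch, the following builds a "which microservice runs where" view from pod records shaped like those an orchestrator such as Kubernetes reports; the pod data, labels and field names here are illustrative, not real API output:

```python
from collections import defaultdict

def map_services_to_nodes(pods):
    """Group running pods by their service label, listing the nodes hosting each."""
    placement = defaultdict(list)
    for pod in pods:
        if pod["phase"] == "Running":  # ignore pods not yet scheduled or already gone
            placement[pod["labels"]["app"]].append(pod["node"])
    return dict(placement)

# Illustrative pod records, mimicking orchestrator-reported metadata
pods = [
    {"labels": {"app": "checkout"}, "node": "node-a", "phase": "Running"},
    {"labels": {"app": "checkout"}, "node": "node-b", "phase": "Running"},
    {"labels": {"app": "payments"}, "node": "node-b", "phase": "Pending"},
]

print(map_services_to_nodes(pods))  # → {'checkout': ['node-a', 'node-b']}
```

Answering the remaining questions (service quality, root cause, right-sizing) requires far more than placement data, which is exactly the gap discussed below.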
These questions (and more) must be answered as quickly as possible, yet maintaining visibility and optimizing application performance within containerized environments is uniquely challenging, making it difficult for DevOps teams to answer them.
Before outlining an effective approach for DevOps to monitor containerized applications, we must first understand why visibility and application performance optimization within containers are so challenging.
Container Monitoring Challenges
Here are some of the more obvious challenges with container monitoring:
- Container platforms ship with only basic monitoring functionality, which is insufficient for monitoring large-scale production environments. Container environments require more sophisticated monitoring than other application technologies, due to the speed and frequency of structural application updates.
- While container orchestrators make excellent provisioning tools, they are not performance monitoring tools. Orchestrators focus on container and host resources only; they don’t provide insight into the performance and quality of APIs, microservices, middleware and applications.
- Traditional application monitoring tools don’t support distributed microservices environments. Containerized environments are composed of services that are hosted on clusters of servers and usually employ a microservice architecture. Traditional application monitoring tools are designed to handle monolithic apps that are mapped to static individual servers, so they won’t work across dynamic environments and applications consisting of containers and microservices.
- Containers host a wide variety of workloads and are dynamic and unpredictable. A container sometimes lives only a few seconds before it is shut down. Traditional tools aren’t able to trace problems if a container no longer exists. With all this unpredictable behavior, it’s difficult to effectively set up, configure and maintain a monitoring tool in this environment.
- Containerized environments can be built using diverse technologies and languages. Organizations can choose from a range of different orchestrators, registries, runtimes and languages. Traditional tools that only support specific constructs of a containerized environment simply won’t work.
As outlined above, the world of applications running in containerized environments is quite dynamic and poses many challenges. DevOps teams need precise, real-time visibility into how applications are performing, along with the situational awareness and suggestions needed to optimize the complex technical structures found in today’s systems. Conventional monitoring tools won’t be effective in this dynamic container world, so a new and different approach is needed.
A New Approach to Application Monitoring
The sheer number of components, the complexity of dependencies and the dynamic nature of containerized apps each create problems for traditional tools on their own, yet all three exist together in container environments.
Monitoring tools for these environments must automatically visualize and monitor an application’s performance, require zero human configuration, deliver a precise picture of the application’s structure, and provide an immediate understanding of its health. Achieving this requires complete monitoring automation and the application of true artificial intelligence (AI) to handle the dynamism of containerized environments.
The new monitoring tool also must be container-aware with the ability to automatically look inside containers, understand the context of the environment in which they are running and be able to accommodate rapid changes within that environment. The tool also should discover every application component and map the dependencies and interactions between them. And the map must be updated automatically and continuously with new dependencies, while visualizing exact service quality information as variations transpire.
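The dependency map described above can be derived from traffic rather than configured by hand. Here is a minimal, hypothetical sketch of the idea: edges are rebuilt from recently observed service-to-service calls, so the map updates automatically as the topology changes (service names are invented for illustration):

```python
from collections import defaultdict

def build_dependency_map(observed_calls):
    """observed_calls: iterable of (caller, callee) pairs seen on the wire."""
    graph = defaultdict(set)
    for caller, callee in observed_calls:
        graph[caller].add(callee)  # duplicates collapse into a single edge
    # Sorted lists make the map stable and easy to diff between refreshes
    return {service: sorted(deps) for service, deps in graph.items()}

# Illustrative traffic sample; a real monitor would capture this continuously
observed = [
    ("frontend", "cart"), ("frontend", "catalog"),
    ("cart", "redis"), ("frontend", "cart"),
]
print(build_dependency_map(observed))
```

A production system would additionally age out edges that stop appearing, so that removed dependencies vanish from the map as quickly as new ones appear.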
Every request made between a service and microservice must be monitored and traced end to end, capturing performance and flow detail, with dependency information about what middleware and hosts were involved for every segment of the request.
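The end-to-end tracing described above rests on a simple mechanism: every hop records a span carrying a shared trace ID, its parent span and its timing, so the full request path can be reconstructed afterward. A bare-bones sketch of that mechanism, with invented names (real systems use a standard such as OpenTelemetry and ship spans to a backend rather than a list):

```python
import time
import uuid
from contextlib import contextmanager

SPANS = []  # stand-in for a tracing backend

@contextmanager
def span(name, trace_id, parent_id=None):
    """Record one hop of a request: who ran, under which trace, for how long."""
    span_id = uuid.uuid4().hex[:8]
    start = time.perf_counter()
    try:
        yield span_id
    finally:
        SPANS.append({
            "trace": trace_id, "span": span_id, "parent": parent_id,
            "name": name, "ms": (time.perf_counter() - start) * 1000,
        })

# Simulated request crossing two services in one trace
trace_id = uuid.uuid4().hex
with span("api-gateway", trace_id) as root:
    with span("inventory-service", trace_id, parent_id=root):
        pass  # downstream work would happen here

print([s["name"] for s in SPANS])  # inner span completes (and is recorded) first
```

The parent links are what allow a monitoring tool to attribute a slow segment to the exact middleware or host involved.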
Another monitoring requirement is speed. I don’t mean response time (of course we capture that); I mean how quickly the monitoring tool informs you of issues. Short-lived performance spikes are critical pieces of information that traditional tools miss, and their absence hinders the operations team from recognizing bad rollouts when they occur. This information must be accurate and actionable within seconds after a change occurs. This is where applying real-time AI to the problem has a powerful impact: Alerts must be aligned to problems affecting the business and predict impending service outages.
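To make the speed point concrete, here is an illustrative sketch of per-sample spike detection on a latency stream: each new observation is compared against a rolling baseline immediately, rather than waiting for minute-level averages that smooth short-lived spikes away. The window size, threshold and sample values are arbitrary, for demonstration only:

```python
from collections import deque
from statistics import mean, stdev

def detect_spikes(samples, window=5, sigmas=3.0):
    """Return indices of samples far above a rolling baseline of recent values."""
    baseline = deque(maxlen=window)
    alerts = []
    for i, value in enumerate(samples):
        if len(baseline) == window:
            mu, sd = mean(baseline), stdev(baseline)
            if sd and value > mu + sigmas * sd:
                alerts.append(i)  # flagged the moment it is observed
        baseline.append(value)
    return alerts

latencies_ms = [100, 102, 99, 101, 100, 480, 100, 98]
print(detect_spikes(latencies_ms))  # index 5 is the spike
```

A real system layers smarter baselining (seasonality, per-endpoint models) on top, but the principle is the same: evaluate every sample as it arrives, so a bad rollout surfaces in seconds rather than after the next aggregation interval.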
Even troubleshooting and problem resolution need updating in the container world. Artificial intelligence should be applied to both critical capabilities to help the system—and the human operators—better understand what is happening in production.
The revolution that containers have created must also produce a revolution in monitoring containerized environments, because the visibility challenges demand a new and different approach. To gain complete control and visibility of their environments, regardless of language and infrastructure, organizations must manage the performance of complex applications running across distributed environments with modern solutions that are automatic and AI-powered, built to deal with the ever-changing world of dynamic applications running in containerized microservices.
About the Author / Pavlo Baron
Pavlo Baron is a co-founder and the chief technology officer at Instana. Pavlo has spent 20 years radically changing the IT world one step at a time with new innovations. He has spoken at international conferences such as QCon and GOTO, and has written four books: “Big Data for IT decision makers,” “Erlang/OTP,” “Pragmatic IT Architecture” and “Fragile Agile.” You can connect with Pavlo on LinkedIn and Twitter.