As microservices adoption increases and the technology matures, Weaveworks is betting that most IT organizations will need to support container applications running on two or more container platforms.
The latest release of the Weave Cloud service makes it possible to monitor Docker, Kubernetes and Mesosphere DC/OS deployments on-premises, as well as Amazon EC2 Container Service (ECS) deployments on a public cloud, via what Weaveworks CEO Alexis Richardson describes as a single operational backplane for every major container platform.
To achieve that goal, Richardson says, Weaveworks has implemented the open-source Prometheus monitoring tools across multiple container platforms. The latest release adds support for automated alerts that can be routed to specific developers or members of the IT operations staff based on rules defined by the IT organization. Those rules can be as broad or as narrow as an IT organization prefers. For example, only high-priority alerts pertaining to sustained problems might be sent, to minimize “alert burnout.”
Richardson says Weave Cloud is unique in that it aggregates metrics across a cluster and from all layers of the stack, including the network, container, host and microservice. The alerts can be sent via multiple notification systems, including email or applications such as PagerDuty and Slack.
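The article doesn't show Weave Cloud's own rule syntax, but in stock Prometheus this kind of severity-based routing to channels such as Slack and PagerDuty is expressed in an Alertmanager configuration. A minimal sketch, assuming hypothetical receiver names and placeholder credentials:

```yaml
# Alertmanager routing sketch: only 'critical' alerts page the on-call
# rotation; everything else goes to a team Slack channel, which is one
# way to limit the "alert burnout" described above.
route:
  receiver: team-slack            # default receiver for lower-severity alerts
  routes:
    - match:
        severity: critical        # label attached by the alerting rule
      receiver: oncall-pagerduty

receivers:
  - name: team-slack
    slack_configs:
      - api_url: https://hooks.slack.com/services/EXAMPLE   # placeholder webhook
        channel: '#container-alerts'
  - name: oncall-pagerduty
    pagerduty_configs:
      - service_key: PLACEHOLDER_PAGERDUTY_KEY              # placeholder key
```

The `route` tree is what makes rules "as broad or as narrow" as desired: additional `match` blocks can fan alerts out by team, cluster or service label.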
As much as IT organizations prefer to set standards, multiple teams within the same organization are likely to employ different container management platforms. In fact, Richardson notes, it’s just as likely a container platform will be deployed on-premises as it will be on a public cloud. Weaveworks provides developers and IT organizations with a centralized monitoring service that can be invoked as required, says Richardson.
In the rush to embrace microservices in containers, many developers overlook what’s required to maintain a container-based environment in production. Given the number of containers that can be deployed on any given cluster, the opportunity for resource contention is high. Regardless of whether the developer or a separate IT operations team is accountable for the performance of a microservice, someone needs to be able to identify performance degradation before it becomes a larger systemic issue.
Of course, there’s no shortage of tools for monitoring containers these days, as demand for this capability is expected to increase considerably. But while containers provide developers with a lot more flexibility, they also increase the number of moving parts within the IT environment. Containers are notoriously ephemeral; the containers driving a specific microservice at any given moment might be replaced by others at any time. A container monitoring service must be able to dynamically detect when new containers are injected into the IT environment—especially when the processes being used to update that environment may not be especially formal.
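Prometheus handles this churn through service discovery rather than static target lists. The article doesn't detail Weave Cloud's mechanism, but a plain Prometheus scrape configuration against Kubernetes would look roughly like this (the job name and scrape annotation are illustrative conventions, not requirements):

```yaml
# Prometheus service-discovery sketch: scrape targets are discovered from
# the Kubernetes API, so newly scheduled containers are picked up
# automatically and terminated ones drop out, with no config changes.
scrape_configs:
  - job_name: kubernetes-pods        # illustrative job name
    kubernetes_sd_configs:
      - role: pod                    # discover every pod in the cluster
    relabel_configs:
      # Keep only pods annotated prometheus.io/scrape: "true"
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"
```

Because discovery runs continuously, this sidesteps the problem of informal deployment processes: a container doesn't need to be registered anywhere beyond the orchestrator itself to be monitored.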
Arguably, monitoring is foundational to any approach to DevOps. The challenge is deciding to what degree an organization wants to dedicate its own IT infrastructure to monitoring containers versus relying on a service optimized specifically for that task.