Weaveworks Adds Prometheus Monitoring to CaaS

Although the open-source Prometheus project has gained a lot of traction and is fast becoming an industry standard for monitoring containers, many DevOps teams need to ask themselves whether it's worth the trouble to deploy, since, by all accounts, setting up Prometheus requires a fair amount of engineering skill.

As a primary contributor to the Prometheus project, Weaveworks is betting that most IT organizations will instead want to consume container monitoring as a service. To that end, the company has set up a Prometheus Monitoring service within Weave Cloud, which makes use of open-source extensions to Prometheus developed by Weaveworks to push monitoring data from a Kubernetes cluster out to Weave Cloud.

The open-source extensions developed by Weaveworks take the form of agent software dubbed Weave Cortex. Weaveworks CEO Alexis Richardson says Weave Cortex makes it possible to benefit, via a managed service, from the collective innovation being poured into Prometheus, rather than take on a Prometheus implementation that can be fairly intimidating for the average IT organization.
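To give a sense of what that push model involves, the sketch below generates a minimal Prometheus remote_write stanza, which is the standard mechanism Prometheus uses to forward scraped samples to a Cortex-style hosted backend. The endpoint URL and token are placeholders for illustration, not Weaveworks' published values.

```python
# Minimal sketch, assuming the hosted backend ingests data via Prometheus's
# standard remote_write feature. The URL and token below are placeholders.
import yaml  # PyYAML

remote_write_config = {
    "remote_write": [
        {
            # Placeholder endpoint for a Cortex-style push API.
            "url": "https://example-cortex.invalid/api/prom/push",
            "basic_auth": {
                # A service token issued by the hosted service would go here.
                "password": "<YOUR_SERVICE_TOKEN>",
            },
        }
    ]
}

# Appending this stanza to an existing prometheus.yml tells the local
# Prometheus server to stream its time series to the hosted backend.
print(yaml.safe_dump(remote_write_config, default_flow_style=False))
```

The appeal of this model is that the local Prometheus server keeps doing the scraping it already does, while long-term storage, querying and dashboarding move to the managed service.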

These days there's no shortage of commercial container monitoring offerings. Prometheus itself is officially managed by the Cloud Native Computing Foundation (CNCF), which is also encouraging the development of a managed service provider ecosystem around Kubernetes.

The issue confronting many IT organizations in 2017 is that adoption of containers is far outpacing their ability to manage them. Of course, one of the first steps in managing anything is being able to monitor it. That means over the next few months many IT organizations will be making decisions about the best way to monitor the containers that are showing up ever more frequently in production applications.

However, once containers are introduced, the potential for I/O contention issues increases dramatically. Where before there might have been only 10 or 20 virtual machines contending for storage resources, soon there may be hundreds of containers competing for the same limited pool of storage. IT operations teams will have a keen interest in invoking monitoring tools to better understand which sets of containers are generating the most I/O requests.
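As an illustration of how that kind of question might be answered, the sketch below queries a Prometheus server over its standard HTTP API for the containers generating the highest filesystem write rates, using the container_fs_writes_bytes_total metric exposed by cAdvisor. The server address is a placeholder, and the query is just one example of ranking containers by I/O.

```python
# Illustrative sketch: rank containers by filesystem write rate using a PromQL
# query against the standard Prometheus HTTP API. The address is a placeholder.
import requests

PROMETHEUS_URL = "http://localhost:9090"  # placeholder Prometheus endpoint

# Top 10 containers by filesystem write rate over the last five minutes.
query = "topk(10, rate(container_fs_writes_bytes_total[5m]))"

resp = requests.get(f"{PROMETHEUS_URL}/api/v1/query", params={"query": query})
resp.raise_for_status()

for result in resp.json()["data"]["result"]:
    labels = result["metric"]          # label set identifying the container
    value = float(result["value"][1])  # instantaneous value of the rate
    print(f'{labels.get("name", "<unknown>")}: {value:.0f} bytes/s written')
```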

Other aspects of the container environment that bear monitoring include interaction with various network virtualization overlays, as well as the various types of databases expected to be deployed within the context of stateful applications based on containers. In fact, before too long many IT operations teams will find themselves trying to manage containers that have sprawled rapidly across an enterprise embracing microservices architectures.
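For completeness, the snippet below collects a couple of further illustrative PromQL expressions covering the network and database angles mentioned above. The metric names come from cAdvisor's standard container metrics; how database containers are identified in practice will depend on each environment's own naming and labels.

```python
# Further illustrative PromQL expressions, written as Python strings so they
# can be submitted to the same /api/v1/query endpoint as in the earlier sketch.

# Per-container transmit rate across network interfaces over five minutes,
# a starting point for spotting heavy traffic over overlay networks.
network_query = (
    "sum by (name, interface) "
    "(rate(container_network_transmit_bytes_total[5m]))"
)

# Memory working set per container, a useful signal for database containers
# backing stateful applications (filter by your own names or labels as needed).
memory_query = "topk(10, container_memory_working_set_bytes)"

print(network_query)
print(memory_query)
```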

Very few IT organizations are currently prepared to meet that challenge. Many of them, for the immediate future at least, will find that relying on managed services to deploy and manage containers is the better part of valor. After all, developers who embrace containers will have little patience for being told they can't deploy their applications because the internal IT organization doesn't have access to the tools needed to keep up.

Mike Vizard

Mike Vizard is a seasoned IT journalist with over 25 years of experience. He also contributed to IT Business Edge, Channel Insider, Baseline and a variety of other IT titles. Previously, Vizard was the editorial director for Ziff-Davis Enterprise as well as Editor-in-Chief for CRN and InfoWorld.
