Making the Case for Plugging into Container Management

Any time a disruptive technology such as containers starts to gain mainstream adoption, a debate naturally ensues over how best to manage it. More often than not, that debate starts with how best to monitor the technology because, after all, no one can manage what they can't see.

A number of monitoring tools for containers have already been launched. But not everyone is convinced that building and deploying separate monitoring tools for containers is a good idea. In fact, Simon Taylor, senior vice president and general manager for Comtrade, a provider of software engineering services and plug-in software for IT management platforms, contends that most IT organizations will be far better off having someone build a plug-in that monitors containers from within their existing IT management framework.

At the root of that contention, says Taylor, is the fact that a separate management framework winds up creating another pane of glass through which the IT organization must try to make sense of the overall IT environment. In contrast, a plug-in provides information about container performance within the context of a management framework such as Microsoft System Center, which enables the IT organization to correlate container performance data against the rest of the IT infrastructure. From a DevOps perspective, Taylor says, that approach is fundamentally more efficient than trying to correlate information across two disparate IT monitoring tools.

As it is, Taylor notes, most organizations already have too many IT monitoring tools installed, and much of the chaos associated with IT operations today stems from that fact. Most of the debates between internal IT teams wind up being over who should be absolved of responsibility for a problem rather than over actually fixing it. Because containers are a disruptive technology that tends to be fairly ephemeral, Taylor says it's hard enough to keep track of which containers are running where, much less the impact any given container might actually be having on the user experience.

In fact, Taylor notes that while a dedicated container monitoring tool might be able to identify that there is a performance issue, its ability to pinpoint the location of the bottleneck is by definition going to be severely limited.

While there's a temptation to think of containers as an alternative to lower-level virtual machines, in reality, Taylor says, containers run higher up the application stack. But just because they run at that higher level, it doesn't follow that an IT organization should go to the trouble of acquiring, deploying and then learning to use a dedicated set of container performance monitoring tools.

Naturally, there may come a day when containers running on bare-metal servers are the dominant use case. But in the meantime, most IT organizations are opting to layer containers on top of what has gone before. As such, Taylor says it makes both more technical and more economic sense to extend existing IT monitoring frameworks to embrace containers than to add yet another console for IT managers to swivel between every time there might be a real or imagined problem.

Mike Vizard

Mike Vizard is a seasoned IT journalist with over 25 years of experience. He has contributed to IT Business Edge, Channel Insider, Baseline and a variety of other IT titles. Previously, Vizard was the editorial director for Ziff-Davis Enterprise as well as Editor-in-Chief for CRN and InfoWorld.
