New Relic announced this week that it is extending its monitoring capabilities to include the ability to peer inside Kubernetes clusters.
Ramon Guiu, director of product management at New Relic, says the company’s application performance monitoring (APM) service now provides full monitoring of the container orchestration software in addition to being able to monitor containers and the hosts they are running on. The service collects metrics and metadata for nodes, Namespaces, Deployments, ReplicaSets, Pods and containers.
Available as a public beta to organizations with professional-level accounts, APM currently supports instances of Kubernetes running on-premises as well as on Amazon Web Services, Microsoft Azure, Google Cloud Platform and IBM Cloud services.
Guiu says IT organizations often encounter multiple issues as they move to deploy Kubernetes in production environments, such as:
- Nodes running out of resources because pods are automatically scheduled to spin up;
- Containers crashing because of insufficient memory; and
- The master application programming interface (API) being unable to respond because it’s receiving too many requests.
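The second failure mode above stems from how Kubernetes enforces memory limits: a container that exceeds its limit is terminated (OOMKilled), while missing resource requests let the scheduler over-pack nodes, producing the resource exhaustion in the first. A minimal sketch of a Pod manifest that sets both, using hypothetical names and values, looks like this:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-app        # hypothetical name for illustration
spec:
  containers:
  - name: app
    image: example/app:1.0  # hypothetical image
    resources:
      requests:
        memory: "128Mi"     # the scheduler reserves this much on a node
        cpu: "250m"
      limits:
        memory: "256Mi"     # exceeding this gets the container OOM-killed
        cpu: "500m"
```

Monitoring tools of the kind described here surface exactly these signals: restarts with an OOMKilled status, or nodes whose summed requests approach capacity.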
When it comes to configuring Kubernetes, it’s still relatively easy to make a mistake, Guiu notes.
In fact, he says, the introduction of containers and Kubernetes requires organizations to rethink APM. Containers are ephemeral in that they tend to be replaced rapidly, so APM tools need to be able to instrument more entities and relationships. At the same time, says Guiu, IT organizations must be able to track the performance of the overall application rather than focusing only on containers or Kubernetes.
Guiu says one reason IT organizations are sometimes reluctant to deploy Kubernetes is the lack of visibility into the platform. In the event of an application issue, IT operations teams need to be able to rule out all potential causes; otherwise, Guiu says, the IT operations team is essentially flying blind. Providing visibility within the context of an application helps allay those concerns, he says.
DevOps teams, meanwhile, want to make certain that whatever APM tool is invoked is tightly integrated with the overall continuous integration/continuous delivery (CI/CD) platform being employed. That capability makes it simpler to prioritize fixes and updates to applications that might be encountering any number of issues.
It’s unclear what impact Kubernetes will have on DevOps. Kubernetes uniquely combines compute, storage and networking services in a single cluster. In many IT organizations, those individual infrastructure services are still often managed separately. Kubernetes makes it possible to holistically manage all those resources either via a graphical user interface or programmatically using an API.
There’s also a fierce debate emerging over how Kubernetes will function: as a complement to existing hypervisors, or, deployed on bare-metal servers, as a replacement for the compute, storage and network virtualization technologies provided by vendors such as VMware. Most IT organizations will end up managing a mix of applications deployed on top of hypervisors and bare-metal servers for years to come, with Kubernetes often providing a common layer of orchestration across private and public clouds based on both.
In the meantime, the first step to managing anything always starts with visibility. After all, no IT organization on the planet can manage what it can’t see.