This week, Granulate is adding a free gMaestro tool to its existing portfolio of tools for optimizing workloads on Kubernetes clusters. The new tool reduces overprovisioning of infrastructure by automatically rightsizing workloads. The addition is the first extension to the company’s portfolio since Granulate became a unit of Intel earlier this year.
Granulate CEO Asaf Ezra says the addition of gMaestro will enable IT teams to eliminate overprovisioning of Kubernetes clusters which, in turn, will reduce the cost of Kubernetes infrastructure by up to 60%. The gMaestro tool, which requires adding only a single line of code to a Kubernetes cluster, also surfaces recommendations to reduce costs, he adds.
In the wake of the economic downturn, organizations are becoming increasingly sensitive to IT costs, notes Ezra. As a result, more finance departments are applying pressure to reduce those costs, he adds. The gMaestro tool complements an existing open source profiling tool, dubbed gProfiler, that enables IT teams to investigate the behavior of multiple Kubernetes clusters. The gProfiler tool works across different regions and different versions of Kubernetes, runs in the cloud or on-premises, and can examine clusters down to the container name, hostname or Kubernetes deployment object level.
Despite Kubernetes’ autoscaling capabilities, overprovisioning of Kubernetes clusters remains an issue. Many developers are simply used to provisioning the maximum amount of IT infrastructure possible whenever they deploy an application. But as the number of Kubernetes clusters used continues to expand, the cost of all that unused IT infrastructure starts to add up. There is a clear need for greater visibility across what has become a complex distributed computing environment based on Kubernetes clusters. IT teams of all sizes are investing in a wider range of observability tools as part of an effort to better understand the interactions occurring across those environments. The expectation is that those tools will enable IT teams to identify and resolve issues before they become acute as the number of Kubernetes clusters being deployed in production environments grows.
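In practice, rightsizing comes down to the CPU and memory requests and limits declared on each container; requests are what the scheduler reserves (and what overprovisioning inflates), while limits cap actual usage. A minimal, hypothetical Deployment fragment illustrates the knobs a rightsizing tool would tune; the name, image and resource figures below are illustrative, not gMaestro output:

```yaml
# Hypothetical example: the "web-app" name, image and values are made up.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web-app
          image: example/web-app:1.0
          resources:
            requests:
              cpu: "250m"      # amount the scheduler reserves per pod
              memory: "256Mi"
            limits:
              cpu: "500m"      # hard ceiling for the container
              memory: "512Mi"
```

If actual usage hovers well below the requested 250m of CPU, the reserved-but-idle capacity across every replica and every cluster is where the unused infrastructure cost accumulates.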
Ultimately, it’s not clear whether those Kubernetes clusters will be managed by DevOps teams or traditional IT administrators armed with graphical tools. However, no matter who is tasked with managing Kubernetes clusters, the need to run them efficiently is becoming more pronounced. Each application owner may prefer to have their own cluster, but the underlying infrastructure is, in most cases, going to be increasingly shared. That’s especially challenging when multiple applications are sharing access to the same underlying storage I/O resources, notes Ezra.
The challenge, as always, is finding the best way to balance those workload requirements to reduce costs without adversely impacting application performance. Kubernetes may make it easier to dynamically scale resources up and down as required, but IT teams still need a tool that makes it simpler to automatically invoke that capability across multiple workloads deployed across fleets of Kubernetes clusters.
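The dynamic scaling mentioned above is typically expressed through a standard Kubernetes HorizontalPodAutoscaler, which adds or removes pod replicas based on observed utilization. A sketch of one such policy follows; it targets the same hypothetical "web-app" Deployment, and the replica counts and threshold are illustrative assumptions:

```yaml
# Hypothetical example: standard autoscaling/v2 HPA; names and values are made up.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out when average CPU exceeds 70%
```

The gap such tools aim to fill is that policies like this must still be authored and tuned per workload; doing that by hand across fleets of clusters is where the operational burden lies.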