Analyzing Kubernetes Workload Costs? Compare to the Benchmark

Moving to the cloud is a critical business initiative for many organizations. According to Flexera’s 2022 Tech Spend Pulse, 65% of respondents state that cloud and cloud migrations are one of their top priorities for the year ahead. As organizations press forward on digital transformation plans, they are moving more applications and services to the cloud. However, when adopting cloud and cloud-native technologies like Kubernetes, managing costs becomes increasingly difficult. The Cloud Native Computing Foundation’s most recent report on cloud financial management (FinOps) makes it clear—most organizations are seeing their Kubernetes costs increase (68%). Many either do not monitor Kubernetes spend (24%) or only review monthly estimates (44%). Conventional wisdom tells us that what gets measured gets managed, so isn’t it time to analyze the cost efficiency of your Kubernetes workloads and compare yourself to other organizations?

Using data gathered from over 150,000 workloads and hundreds of organizations, Fairwinds put together the 2023 Kubernetes Benchmark Report to look at trends from 2022 and compare them to the previous year. A year ago, a CNCF report indicated that 96% of respondents were using or evaluating Kubernetes, and adoption has continued to grow. For many organizations, however, aligning to best practices remains a challenge. This lack of alignment has real consequences: cloud cost overruns, heightened security risks and reduced reliability of cloud apps and services.

So, what can organizations do to ensure that Kubernetes clusters are as efficient as possible? Setting resource requests and limits correctly can make a significant difference, but this is easy to overlook if you are just getting started. When memory limits are set too low on an application, for example, Kubernetes kills its containers as soon as they exceed those limits. But when your requests and limits are set too high, you end up with over-allocated resources: your application is available and reliable, but you are also confronted with a higher cloud bill.
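
For reference, requests and limits are declared per container in a pod spec. The snippet below is a minimal, illustrative sketch; the names and values are placeholders, not recommendations, and the right numbers depend on your workload's observed usage.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-api           # hypothetical name, for illustration only
spec:
  containers:
    - name: api
      image: example/api:1.0  # placeholder image
      resources:
        requests:
          cpu: "250m"         # reserved scheduling share: a quarter of a CPU core
          memory: "256Mi"     # reserved memory, used for node placement
        limits:
          cpu: "500m"         # the container is throttled above this
          memory: "512Mi"     # the container is killed if it exceeds this
```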

As Kubernetes adoption rises in organizations of all sizes, is K8s cost efficiency trending in the right direction? Analysis of the data can help us find out more.

Setting CPU Requests and Limits

According to the benchmark data, 72% of organizations set CPU requests and limits too high on no more than 10% of their workloads. At the other end of the spectrum, just 1% of organizations had 91-100% of their workloads impacted by CPU limits that were set too high. When you set CPU requests and limits appropriately, the scheduler can pack more pods onto fewer nodes, saving time and money.

Very few organizations are setting CPU limits too low; 94% of organizations set CPU limits too low on just 0-10% of their workloads. This is good news, because CPU limits that are set too low throttle your containers and slow the response time of your applications.
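
One way to keep CPU settings from being skipped entirely is a namespace-level LimitRange, which applies default requests and limits to any container that does not declare its own. The manifest below is a sketch with placeholder values; the namespace name and figures are hypothetical.

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-defaults
  namespace: example-team     # hypothetical namespace
spec:
  limits:
    - type: Container
      defaultRequest:
        cpu: "100m"           # applied when a container omits a CPU request
      default:
        cpu: "500m"           # applied when a container omits a CPU limit
```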

Setting Memory Limits Too High

As in the previous year’s benchmark report, in 2022 organizations set memory limits too high for nearly 50% of workloads. In the last year, however, the percentage of workloads impacted has increased. Just 3% of organizations saw 51-60% of workloads impacted based on the data from 2021; that number has skyrocketed, with 30% of organizations now having at least 50% of workloads impacted by memory limits that are set too high. Unfortunately, that translates to a significant amount of wasted cloud resources. To address this waste, organizations need to adjust memory limits based on actual workload needs, which helps control and minimize an inflated cloud bill.

Setting Memory Limits Too Low

This year’s report shows that 67% of organizations (a slight dip from 70% in 2021) are setting memory limits too low on at least 10% of their workloads. Although the number of workloads impacted is relatively low, setting memory limits too low reduces the reliability of clusters. For more reliable applications, adjust memory limits so your applications do not fail under pressure; right-sizing these limits also helps minimize wasted cloud resources.

Setting Memory Requests Too Low

Another issue with Kubernetes workload configurations is setting memory requests too low, which can have a serious impact on application reliability. Once again, the analysis from this year showed consistent results: 59% of organizations see this issue affect no more than 10% of their workloads, compared to 55% in the previous year.

To avoid efficiency and reliability issues stemming from setting memory requests too high or too low, there are open source tools available that analyze usage and make suggestions for how to adjust memory requests appropriately.
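
One such open source option is the Kubernetes Vertical Pod Autoscaler (VPA) run in recommendation-only mode, which watches actual usage and suggests request values without changing running pods. The sketch below is illustrative and assumes the VPA components are installed in the cluster and that a Deployment named example-api (a hypothetical name) exists.

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: example-api-vpa       # hypothetical name
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-api         # hypothetical Deployment to analyze
  updatePolicy:
    updateMode: "Off"         # recommend only; do not modify running pods
```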

Setting Memory Requests Too High

Many more organizations are setting memory requests too high on workloads: 82% of organizations this year on at least 10% of their workloads, compared to 34% of organizations in the previous year. Requests are the minimum amount of a resource (memory, in this case) that is reserved for a container, and they are what the Kubernetes scheduler uses to decide which node to place a pod on. Setting appropriate memory requests helps Kubernetes distribute containers across multiple nodes effectively.
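
To make the scheduling math concrete: a node with roughly 8GiB of allocatable memory can fit at most four pods that each request 2GiB, even if each pod actually uses only a few hundred megabytes. The excerpt below is a hypothetical container-spec fragment showing such an over-sized request.

```yaml
# Hypothetical excerpt from a container spec (illustrative values only).
# A 2Gi request reserves 2Gi on the node for every replica, whether or
# not the container uses it, so fewer pods can be scheduled per node.
resources:
  requests:
    memory: "2Gi"
  limits:
    memory: "2Gi"
```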

Analyzing Your Kubernetes Workload Costs

Most of the efficiency-related Kubernetes workload settings were consistent with the previous year, but the trends are not as positive as you might hope. As more organizations build and deploy cloud-native applications, it is important to measure costs so they can be managed more effectively. The promise of the cloud is to deliver applications and services that are scalable and reliable while saving money; unless we put best practices in place to analyze and manage workload resource usage, we may not achieve that. The benchmark report contains more insights into Kubernetes workload security and reliability, so stay tuned for more details.

Danielle Cook

Danielle Cook is the vice president of marketing at Fairwinds, a Kubernetes governance and security company. She can be reached at [email protected]
