7 Best Practices for Kubernetes Cost Allocation

Surging Kubernetes deployments have been accompanied by surging Kubernetes spend—and all of the Kubernetes-specific complexities that come along with trying to understand that spend. In a recent survey from the Cloud Native Computing Foundation (CNCF) and the FinOps Foundation, 68% of respondents reported that their Kubernetes costs increased in the past year, with bills skyrocketing more than 20% year-over-year for a majority of organizations. The rising spend is also getting attention from more stakeholders—including finance, engineering and management teams—that want more (or more accurate) reporting on how the costs of multitenant clusters are being allocated to their respective owners and cost centers.

But the process of cost allocation can prove to be easier said than done. Kubernetes involves transient workloads—metered in seconds—that rely on shared and external resources. Furthermore, the powerful level of abstraction provided by Kubernetes makes tracking the resources used by a workload (and tying those resources to a line-item on a bill) even more difficult. To overcome these complexities, here are seven things to understand as you get more granular (and actionable) with your Kubernetes cost allocation:


1. The workload cost calculation formula

Allocating tenant costs in Kubernetes starts with determining the cost to operate a container. The formula for calculating the cost of a workload has three variables:

  • The units (or amount) of the consumed resource (e.g., 1000 millicores of CPU)
  • The price of the resource (e.g., the hourly price of a CPU core)
  • The time that a Kubernetes component consumes a resource (e.g., 1.2 hours)

You must multiply the three variables to get the cost of a given infrastructure resource used by a workload.
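As a minimal sketch, the three-variable formula can be written directly as a function. The specific numbers below (a $0.031 hourly core price, 1.2 hours of use) are illustrative examples, not real provider prices:

```python
def workload_resource_cost(units: float, unit_price_per_hour: float, hours: float) -> float:
    """Cost of one infrastructure resource consumed by a workload:
    units consumed x hourly unit price x hours of consumption."""
    return units * unit_price_per_hour * hours

# e.g., 1000 millicores (1 core) at a hypothetical $0.031/core-hour for 1.2 hours
cost = workload_resource_cost(units=1.0, unit_price_per_hour=0.031, hours=1.2)
# cost ≈ $0.0372
```

The total cost of a workload is then the sum of this calculation across every resource type it consumes (CPU, memory, storage, and so on).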

2. The units of resource consumption

The units of resource consumption differ by resource type, and each type requires a different method for deriving the units used in the cost calculation formula. The table below summarizes these methods:

 

  • CPU, Memory, GPU: the greater of requested resources and used resources
  • Storage: the capacity requested by persistent volume claims (PVCs)
  • Network: the bytes that ingress and egress cloud zones and regions
  • Load Balancer: the duration of use plus the volume of connections and bytes
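For compute resources, the "greater of requested and used" rule can be sketched in a few lines. This captures the intuition that an over-provisioned pod still reserves (and should be billed for) its full request, while a pod bursting above its request is billed for its actual use:

```python
def billable_units(requested: float, used: float) -> float:
    """For CPU, memory and GPU, the billable amount is the greater of
    what the workload requested and what it actually used."""
    return max(requested, used)

billable_units(2.0, 0.5)  # over-provisioned: billed for the full 2-core request
billable_units(1.0, 1.5)  # bursting above the request: billed for actual use
```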

3. The price of the resources

The second component of the cost calculation formula is price. The pricing models of public cloud providers are readily available and tend to be similar across providers; the differences are usually in the prices themselves rather than in the model. Kubernetes clusters are backed by virtual machines that are billed in detail: providers meter usage in seconds, charge only for provisioned and used resources, and document this data in a billing log. The information derived from the monthly billing logs can then appear in financial statements as an operating expense.

Things get trickier with Kubernetes clusters hosted in a data center (a private cloud or on-premises) since there isn’t a public pricing model to reference. An internal pricing model must be built on a combination of assumptions and calculations—such as the amortization of the purchased hardware assets and the labor used to install them—as well as networking and data transit costs.
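One simple way to build such an internal pricing model is to amortize the hardware purchase over its useful life and fold labor, power and networking into an overhead multiplier. The function below is a rough sketch under those assumptions; the overhead factor and amortization period are hypothetical inputs your finance team would supply:

```python
def internal_hourly_cpu_price(hardware_cost: float,
                              amortization_years: float,
                              cores: int,
                              overhead_factor: float = 1.3) -> float:
    """Derive an hourly core price for on-premises hardware by amortizing the
    purchase cost (plus an assumed overhead for labor, power and networking)
    over the hardware's useful life."""
    amortization_hours = amortization_years * 365 * 24
    return (hardware_cost * overhead_factor) / (amortization_hours * cores)

# e.g., a $50,000 server with 128 cores amortized over 4 years
price = internal_hourly_cpu_price(50000, amortization_years=4, cores=128)
```

Analogous formulas would cover memory, storage and network, each with its own amortization assumptions.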

4. Granular usage data

Kubernetes administrators and application owners must know their daily costs in real time so they can react quickly to overspending and avoid a costly surprise at the end of the billing cycle. Financial managers, on the other hand, often expect monthly cost reports, which they can use to “chargeback” Kubernetes costs to the teams incurring them and to feed their ongoing financial reporting and projections. When measuring the use of Kubernetes resources, the cost allocation model must factor in the granular reporting needs of application owners while also generating reports that are useful for higher-level stakeholders.

The Kubernetes concept of resource “requests” means that a cost allocation model must record both request data from the Kubernetes API and the actual use of the running workloads. Kubernetes workloads appear and disappear quickly and can have highly variable resource use over their lifespans. Therefore, the cost monitoring system must tally the seconds of use into hourly and daily values with sufficient precision. The cost allocation model must then be able to roll this data up into summaries (e.g., daily or monthly) that, when combined with the cost of the resources, display the overall cost of Kubernetes workloads. 
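The roll-up of second-level samples into coarser buckets can be sketched as follows. This example assumes per-second usage samples are already available as `(timestamp, cores)` pairs; a real monitoring system would pull these from a metrics pipeline:

```python
from collections import defaultdict
from datetime import datetime, timezone

def hourly_core_seconds(samples):
    """samples: iterable of (unix_timestamp, cores_in_use) pairs, taken once
    per second. Returns a {hour_start_iso: core_seconds} rollup that can later
    be summed into daily or monthly values and multiplied by a price."""
    rollup = defaultdict(float)
    for ts, cores in samples:
        hour = datetime.fromtimestamp(ts, tz=timezone.utc).replace(
            minute=0, second=0, microsecond=0)
        rollup[hour.isoformat()] += cores
    return dict(rollup)
```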

5. Allocating idle costs

A Kubernetes cluster will undoubtedly have some idle capacity that is requested but unused—resulting in financial waste. By tracking each workload’s resource use and comparing that with the cluster’s available resources, the cost allocation model can identify idle capacity. The cost associated with that idle cluster capacity may be distributed among the running workloads, or not. If the objective is to reduce wasted expenditure on idle resources, idle costs should be assigned to the team(s) managing the Kubernetes infrastructure. If the objective is to assign the total costs of the cluster to the workloads running on the cluster, the idle cost can be distributed by a chosen algorithm among the workloads on the cluster.
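One common choice of algorithm for the second objective is to distribute idle cost in proportion to each workload's own allocated cost. A minimal sketch of that proportional split:

```python
def distribute_idle_cost(workload_costs: dict, idle_cost: float) -> dict:
    """Distribute the cluster's idle-capacity cost across workloads in
    proportion to each workload's allocated cost (one possible algorithm;
    an even split or a requests-based split would work similarly)."""
    total = sum(workload_costs.values())
    if total == 0:
        return dict(workload_costs)
    return {w: c + idle_cost * (c / total)
            for w, c in workload_costs.items()}

# e.g., $5 of idle cost spread over two workloads costing $6 and $4
distribute_idle_cost({"api": 6.0, "batch": 4.0}, idle_cost=5.0)
# → {"api": 9.0, "batch": 6.0}
```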

6. Allocating external costs

More often than not, Kubernetes applications rely on resources that live outside the cluster that hosts them. For example, suppose an application relies on a hosted object storage service such as S3. When reporting on application costs, the application owners would appreciate a comprehensive view in a single report that includes resources beyond the Kubernetes cluster. In this case, the cost reporting system must access the billing details containing the costs of the external resources and include those expenses in the cost allocation report, matching the costs incurred with the workloads that used the resources.

7. Allocating shared costs

Kubernetes components such as namespaces and pods may share certain costs, such as cluster-level infrastructure tools that support all workloads (e.g., a Prometheus monitoring stack). Teams might wish to allocate the cost of running such tools among their workloads, since these tools benefit the entire cluster. Another example of a shared cost is a database instance: Applications that use a common database should each bear a portion of the cost of that database.
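A simple way to model this is a split function that divides a shared cost among tenants, either evenly or weighted by usage. This is an illustrative sketch, not a prescribed allocation policy:

```python
def allocate_shared_cost(shared_cost: float, tenants: list,
                         weights: dict = None) -> dict:
    """Split a shared cost (e.g., a monitoring stack or a common database)
    among tenants: evenly by default, or by usage weights if provided."""
    if weights is None:
        share = shared_cost / len(tenants)
        return {t: share for t in tenants}
    total = sum(weights[t] for t in tenants)
    return {t: shared_cost * weights[t] / total for t in tenants}

# e.g., a $100/month database split by query volume
allocate_shared_cost(100.0, ["app-a", "app-b"], weights={"app-a": 3, "app-b": 1})
# → {"app-a": 75.0, "app-b": 25.0}
```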

Aggregating all costs into a cost reporting system

With an understanding of the variables, intricacies and, perhaps, cultural shifts involved in allocating Kubernetes costs, you’re on the path toward eliminating cost inefficiencies and meaningfully reducing your spend without impacting performance. Your cost reporting system must aggregate the infrastructure cost categories (CPU, memory, GPU, storage, network and load balancer) in different dimensions that include the following:

  • Container
  • Pod
  • Cluster
  • Namespace
  • Controller
  • Service
  • Label

This level of aggregation is essential when allocating costs across business concepts such as applications, projects, products (SaaS products hosted in a cluster), teams, environments (production versus testing), departments or cost centers. Any business unit to which costs are being allocated should map to a Kubernetes component (such as a namespace) or a label. The resulting aggregated values and reports would then support the implementation of showback and chargeback.
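The aggregation step itself is a group-by over cost records keyed on the chosen dimension. The record shape below (flat dicts with a `cost` field) is an assumption for illustration; the mapping of a business unit to a namespace or label is exactly the `dimension` argument:

```python
def aggregate_costs(records: list, dimension: str) -> dict:
    """Sum per-workload cost records along one aggregation dimension
    (namespace, controller, label, ...). Records missing that dimension
    fall into an 'unallocated' bucket, which itself is a useful signal."""
    totals = {}
    for record in records:
        key = record.get(dimension, "unallocated")
        totals[key] = totals.get(key, 0.0) + record["cost"]
    return totals

records = [
    {"namespace": "team-a", "cost": 1.5},
    {"namespace": "team-a", "cost": 0.5},
    {"namespace": "team-b", "cost": 2.0},
    {"cost": 1.0},  # no namespace: surfaces as unallocated spend
]
aggregate_costs(records, "namespace")
# → {"team-a": 2.0, "team-b": 2.0, "unallocated": 1.0}
```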

As Kubernetes deployments accelerate, it’s increasingly critical that unnecessary spending is reined in—and executing on a comprehensive cost allocation plan is a critical step on the path toward improved efficiency and reduced waste for teams adopting Kubernetes.



Michael Dresser

Michael Dresser is a Full Stack Engineer at Kubecost. Prior to Kubecost, Michael worked at Google contributing to the Kubernetes project, focusing on tooling for Kubernetes developers.
