Kubernetes v1.23 Is Here. Are You Ready?

Kubernetes’ final release for the year 2021 is ready: Version 1.23.

The Christmas edition of Kubernetes comes with 45 new enhancements to make it more mature, secure and scalable. There are some critical changes grouped into the Kubernetes API, containers and infrastructure, storage, networking and security in this latest release.

Let’s start with the “face of Kubernetes,” which makes it scalable and expandable.

Kubernetes API

There are three significant changes, from api-machinery, CLI and autoscaling SIGs, that will be released as part of 1.23:

The Kubectl Events Command

Using kubectl get events makes it easier to monitor the cluster’s overall state and solve problems. However, it’s limited by the options and data-collection approach of the kubectl get command. That’s why a new command, kubectl events, is being released as an alpha feature in 1.23.

The new command will be beneficial for:

  • Viewing all events related to a particular resource
  • Watching for specific events in the cluster
  • Filtering events by their status or type in a specific namespace

Until feature graduation, you can check out the design document for the upcoming features in subsequent releases. You can start using the kubectl events command immediately after installing the new kubectl version.
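As a sketch of the three use cases above (while the command is in alpha it may be exposed under kubectl alpha, and flag names such as --for and --types come from the design document, so they may change before graduation):

```shell
# View all events related to a particular resource:
kubectl events --for=pod/my-service

# Watch for new events in the current namespace as they arrive:
kubectl events --watch

# Show only Warning-type events in a specific namespace:
kubectl events --types=Warning -n production
```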

Graduating the HPA API to General Availability

Horizontal Pod Autoscaler (HPA) is a central component of Kubernetes that automatically scales the number of pods based on metrics. HPA can scale many different resources up or down, including replica sets, deployments or stateful sets with well-known metrics like CPU utilization. It has been part of the Kubernetes API since 2015, and it’s finally graduating to general availability (GA).

If you’re already using HPA in your clients and controllers, you can start using the autoscaling/v2 API version instead of autoscaling/v2beta1. This graduation also means that you can rely on HPA long-term, since it’s production-ready and is now a core part of the Kubernetes API.
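Moving to the GA API is mostly a matter of changing the apiVersion. For example, a minimal CPU-based autoscaler for a hypothetical web deployment reads:

```yaml
apiVersion: autoscaling/v2          # previously autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                       # hypothetical deployment name
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80      # scale out when average CPU crosses 80%
```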

CRD Validation Expression Language

CustomResourceDefinition (CRD) is the robust abstraction layer that extends Kubernetes and makes it work with all possible custom-defined resources. Because users define the new custom resources and their specifications, the validation could be tricky with webhooks, controllers and client tools.

Thankfully, there is a proposal to facilitate using an inline expression language, such as Common Expression Language, that can be integrated into CRD for validation.

With the 1.23 release, validation rules are provided as an alpha feature, so you can add an x-kubernetes-validations section to your CRD schema, similar to the following example from the Kubernetes documentation:



openAPIV3Schema:
  type: object
  properties:
    spec:
      type: object
      x-kubernetes-validations:
        - rule: "self.minReplicas <= self.replicas"
          message: "replicas should be greater than or equal to minReplicas."
        - rule: "self.replicas <= self.maxReplicas"
          message: "replicas should be smaller than or equal to maxReplicas."
      properties:
        minReplicas:
          type: integer
        replicas:
          type: integer
        maxReplicas:
          type: integer
      required:
        - minReplicas
        - replicas
        - maxReplicas

Let’s assume you want to create the following custom resource instance, which violates the second rule:

apiVersion: "stable.example.com/v1"
kind: CronTab
metadata:
  name: my-new-cron-object
spec:
  minReplicas: 0
  replicas: 20
  maxReplicas: 10

The Kubernetes API will respond with the following error message: 

The CronTab "my-new-cron-object" is invalid:

* spec: Invalid value: map[string]interface {}{"maxReplicas":10, "minReplicas":0, "replicas":20}: replicas should be smaller than or equal to maxReplicas.

If you are using CRDs in your cluster, you currently have to implement validation in your OpenAPI schema and your controllers. With this new release, you can start migrating that logic to x-kubernetes-validations rules and let the Kubernetes API do the cumbersome work for you.

Containers and Infrastructure

In this release, we have found two noteworthy features from Windows and node SIGs. Those are ephemeral containers and Windows privileged containers.

Ephemeral Containers

Ephemeral containers are temporary containers designed for observing the state of other pods, troubleshooting and debugging. This new feature also comes with a CLI command to make troubleshooting easier, kubectl debug. The new command runs a container in a pod, whereas the kubectl exec command runs a process in the container itself.

With v1.23, you’ll be able to add ephemeral containers as part of the pod specification, under its ephemeralContainers field. They are similar to a regular container specification, but they do not have resource requests or ports because they’re intended to be temporary additions to the pods. For instance, you’ll be able to add a Debian container to the my-service pod and connect interactively for live debugging, as shown below:

$ kubectl debug -it --image=debian my-service -- bash

root@my-service:~# ps x

  PID TTY      STAT   TIME COMMAND
    1 ?        Ss     0:00 /pause
   11 ?        Ss     0:00 bash
  127 ?        R+     0:00 ps x

Ephemeral containers were already available in their alpha state in v1.22, and they’ll graduate to beta in the 1.23 release. If you haven’t tried them yet, it’s good to create your debugging container images and start including the kubectl debug command in your toolbox.

Windows Privileged Containers and Host Networking Mode

Privileged containers are potent container instances, as they can reach and use host resources—similar to a process that runs directly on the host. Although they pose a security threat, they’re beneficial for managing the host instances and are used heavily in Linux containers.

With the 1.23 release, privileged containers and the host networking mode for Windows instances will graduate to beta. If you have Windows nodes in your cluster or plan to include them in the future, review the design document for capabilities and the GA plan.
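As a sketch of what this looks like (field names follow the HostProcess containers design document; the pod name and image are hypothetical), a Windows privileged pod is declared through the Windows-specific security context:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: win-admin-pod
spec:
  securityContext:
    windowsOptions:
      hostProcess: true                       # run as a host process (beta in 1.23)
      runAsUserName: "NT AUTHORITY\\SYSTEM"   # host identity to run under
  hostNetwork: true                           # HostProcess pods must use host networking
  containers:
  - name: admin
    image: registry.example.com/win-admin:latest   # hypothetical image
  nodeSelector:
    kubernetes.io/os: windows                 # schedule only on Windows nodes
```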


Storage

There is one essential change that we want to emphasize for v1.23 from the storage SIG: volume ownership change during volume mounts.

Currently, before a volume is mounted, its permissions and ownership are recursively updated to match the fsGroup value in the pod specification. When the volume is large, changing ownership can lead to excessive wait times during pod creation. Therefore, a new field, pod.Spec.SecurityContext.FSGroupChangePolicy, has been added to allow users to specify how permission and ownership changes should operate.

In v1.23, this feature has graduated to GA and you can specify the policy using one of the two following options:

  • Always: Always change the permissions and ownerships to match the fsGroup field.
  • OnRootMismatch: Only change the permissions and ownerships if the top-level directory does not match the fsGroup field.

If you’re using applications sensitive to permission changes, such as databases, you should check the new field and include it in your pod specifications to avoid excessive wait times in pod creation.
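For example, a pod running a permission-sensitive database might skip the recursive ownership change whenever the volume root already matches (the fsGroup value and claim name here are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: db-pod
spec:
  securityContext:
    fsGroup: 2000                            # group ownership applied to volumes
    fsGroupChangePolicy: "OnRootMismatch"    # chown recursively only if the root differs
  containers:
  - name: db
    image: postgres:14
    volumeMounts:
    - name: data
      mountPath: /var/lib/postgresql/data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: db-data                     # hypothetical claim
```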


Networking

IPv6 is a long-awaited feature from the Kubernetes team, especially since it was added as an alpha feature in Kubernetes v1.9. In the latest release, dual-stack IPv4/IPv6 networking has finally graduated to general availability.

This feature provides awareness of multiple IPv4/IPv6 addresses for pods and services, and it supports native IPv4-to-IPv4 communication in parallel with IPv6-to-IPv6 communication to, from and within clusters.

Although Kubernetes provides dual-stack networking, you may be limited by the capabilities of the underlying infrastructure and your cloud provider. This is because nodes must have routable IPv4/IPv6 network interfaces and pods must have dual-stack networking attached. Thus, you also need a network plugin that is aware of dual-stack networking to assign IPs to pods and services.

Some network plugins, such as kubenet, already support dual-stack networking, and ecosystem support is on the rise with kubeadm and kind.
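Once the cluster and network plugin support it, dual-stack can be requested per service via the ipFamilyPolicy field; a minimal sketch (the service name and selector are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  ipFamilyPolicy: PreferDualStack   # assign both address families when available
  ipFamilies:                       # order expresses preference
  - IPv4
  - IPv6
  selector:
    app: my-app                     # hypothetical pod selector
  ports:
  - port: 80
```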


Security

There is one essential enhancement from the v1.23 release auth SIG that we’d like to note: graduation of pod security standards to beta.

In the previous release, pod security standards were provided as an alpha feature to replace the PodSecurityPolicy. They created a way to limit pod permissions with the help of namespaces and labels, and to implement policy enforcement—which we wrote about in our blog post for the v1.22 release.

Now that the feature has graduated to beta, it’s a good time to include it in your deployments for greater security in your pods and clusters.
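Enforcement is configured with namespace labels. For example, to enforce the baseline profile while warning about violations of the stricter restricted profile (the namespace name is hypothetical):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
  labels:
    pod-security.kubernetes.io/enforce: baseline   # reject pods that violate baseline
    pod-security.kubernetes.io/warn: restricted    # warn on restricted violations
```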


Conclusion

With the last release of 2021, Kubernetes comes with more scalable and reliable APIs and infrastructure enhancements. Furthermore, the improvements in storage, networking and security make Kubernetes faster and more future-proof, establishing it as the leading container orchestration platform in the industry.

To learn more about the latest enhancements, reference the Kubernetes blog and release notes.

Amir Kaushansky

Amir Kaushansky is an innovative and creative technology leader with 20+ years in all areas of the product life cycle: product management, business development, technical services, quality assurance, IT and software engineering. He excels at bridging the technical and business worlds, has excellent communication skills and interpersonal relationships with peers, customers and suppliers, and has a proven track record in the development and deployment of complex products and solutions, especially in the field of cybersecurity.
