Using and Managing Kubernetes DaemonSets 

Kubernetes (also known as K8s) is a portable, open source, extensible platform for managing containerized workloads and services. It provides both automation and declarative configuration. You can cluster multiple nodes, and Kubernetes helps you manage them efficiently and easily. It’s an ideal platform because Kubernetes clusters can span hosts across public, private, hybrid and on-premises environments. 

Additionally, Kubernetes offers many features and deployment options for running containers. One of these central resources is the DaemonSet. In this article, we’ll discuss what DaemonSets do and how to use them.

What is a Kubernetes DaemonSet?

A DaemonSet is a Kubernetes resource that ensures that all nodes (or a specific subset of nodes) run a copy of a pod. When you add nodes to a cluster to expand your ecosystem, the DaemonSet automatically adds pods to them; deleting a DaemonSet cleans up the pods it created.

DaemonSets are useful for deploying ongoing background tasks that do not require user intervention. Three typical use cases of DaemonSets involve:

  • Running a node-monitoring Daemon on each node
  • Launching a log collection Daemon
  • Running a cluster storage Daemon

If you want to run a single pod on all the nodes you created, a DaemonSet adds an instance of that pod to every node in the cluster. For example, you can define a pod with a logging component to run on all nodes of a cluster. After that, you create a DaemonSet, and the DaemonSet controller will monitor and manage the pods running on every node. 

How to Create a DaemonSet

You can configure DaemonSets using a YAML file. Let’s have a closer look at the key components the file contains:

  • apiVersion 
  • kind: should be DaemonSet
  • metadata 
  • spec.template: the pod definition to run on all nodes
  • spec.selector: a pod selector managed by the DaemonSet. This value must match the labels specified in the pod template and cannot be changed once the DaemonSet is created. 
  • spec.template.spec.nodeSelector: used to run the pods only on the subset of nodes that match the selector
  • spec.template.spec.affinity: used to run the pods only on the subset of nodes that match the affinity rules 

Every DaemonSet that you construct will contain these elements; how you specify them changes the utility of your DaemonSet.
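To make this concrete, here is a minimal sketch of a DaemonSet manifest for a hypothetical log-collection agent. The name, labels and image are illustrative placeholders rather than a recommendation for any specific product:

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: log-collector            # hypothetical name
      namespace: kube-system
      labels:
        app: log-collector
    spec:
      selector:
        matchLabels:
          app: log-collector         # must match the pod template labels below
      template:
        metadata:
          labels:
            app: log-collector
        spec:
          nodeSelector:
            kubernetes.io/os: linux  # optional: restrict the DaemonSet to a subset of nodes
          containers:
          - name: log-collector
            image: fluent/fluentd:v1.16-1   # example log agent image; substitute your own
            resources:
              requests:
                cpu: 100m
                memory: 200Mi

Applying this manifest (for example with ‘kubectl apply -f daemonset.yaml’) creates one log-collector pod on every Linux node, and the DaemonSet controller maintains that guarantee as nodes join or leave the cluster.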

How Do DaemonSets Work?

A DaemonSet is managed by a controller that operates on a reconciliation control loop: you declare the state you want, and the controller compares that desired state to the currently observed state. If a monitored node does not have a matching pod, the DaemonSet controller creates one on that node. 

Normally, the Kubernetes scheduler selects the nodes that a pod runs on. However, DaemonSet pods are scheduled and created by the DaemonSet controller, which can create some issues:

  • Inconsistency
  • Pod preemption
  • Communication errors

Let’s touch on these briefly.

Inconsistent Pod Behavior

This describes a state where normal pods waiting to be scheduled are created in a Pending state, but DaemonSet pods are not created in a Pending state, which can confuse users. 

Pod Preemption

The default scheduler handles pod preemption. However, when preemption is enabled, the DaemonSet controller makes scheduling decisions without considering preemption and pod priority. 

If you want the default scheduler to schedule DaemonSet pods instead, you can enable ‘ScheduleDaemonSetPods’. With it, a ‘NodeAffinity’ term is added to the DaemonSet pods rather than the ‘.spec.nodeName’ term. With this alteration, the default scheduler binds each pod to its target host.  
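For reference, the node affinity term that takes the place of ‘.spec.nodeName’ looks roughly like the following, where target-host-name stands in for the node the pod is bound to:

    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchFields:
            - key: metadata.name
              operator: In
              values:
              - target-host-name   # placeholder for the node this pod is pinned to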

Communicating With Daemon Pods

Here are the patterns involved in communicating with pods in a DaemonSet: 

  • DNS: Create a headless service with the same pod selector, then discover DaemonSet pods through the endpoints resource or retrieve multiple A records from DNS (a sketch of this pattern follows the list). 
  • Push: Pods in the DaemonSet are configured to send updates to another service, such as a stats database.
  • NodeIP and known port: Pods in the DaemonSet use a hostPort so they are reachable via the node IPs. 
  • Service: Create a service with the same pod selector and use it to reach a daemon on a random node. 
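As a sketch of the DNS pattern, a headless service that selects the hypothetical log-collector pods from the earlier manifest could look like this; the name and port are placeholders:

    apiVersion: v1
    kind: Service
    metadata:
      name: log-collector          # hypothetical service name
      namespace: kube-system
    spec:
      clusterIP: None              # headless: DNS returns the individual pod IPs
      selector:
        app: log-collector         # same labels as the DaemonSet pods
      ports:
      - name: metrics
        port: 24231                # example port; use whatever your daemon exposes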

How to Update a DaemonSet

When attempting to update a DaemonSet, you’ll use one of two main strategies:

OnDelete

OnDelete is the legacy update strategy, kept to ensure backward compatibility with older Kubernetes versions. When you change node labels, the DaemonSet adds pods to newly matching nodes and deletes them from nodes that no longer match. You can modify the pods that a DaemonSet creates, but not all pod fields can be updated. 

Moreover, the DaemonSet controller uses the original template the next time a node is recreated. With the OnDelete strategy, once the DaemonSet template is updated, new pods are only created after you manually delete the old DaemonSet pods. 
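In the DaemonSet spec, this strategy is selected with a single field, as in this minimal snippet:

    spec:
      updateStrategy:
        type: OnDelete   # new pods appear only after the old ones are deleted manually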

RollingUpdate

With the RollingUpdate strategy, when you update a DaemonSet template, the old pods are deleted and new DaemonSet pods are created automatically, in a controlled fashion. However, this automation comes with two limitations. 

First, DaemonSet rollout is not as thoroughly supported or documented as Deployment rollout. Second, in older versions of kubectl, DaemonSet rollback is not supported directly; users have to roll back by updating the DaemonSet template to match a previous version. 
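A minimal sketch of enabling this strategy, with maxUnavailable limiting how many DaemonSet pods may be down at once during the rollout:

    spec:
      updateStrategy:
        type: RollingUpdate
        rollingUpdate:
          maxUnavailable: 1   # replace one node's pod at a time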

How to Diagnose Unhealthy DaemonSets 

If a DaemonSet does not have one pod running per eligible node, it’s considered unhealthy. You can use the following steps to quickly diagnose an unhealthy DaemonSet.

Step One: Make a list of pods 

First, you need to make a list of pods in the DaemonSet using the command ‘kubectl get pod -l app=[label]’. Look for pods with the status Pending, CrashLoopBackOff or Evicted.

Next, gather more information about each problem pod you found using the command ‘kubectl describe pod [pod-name]’.

Step Two: Resolve the nodes running out of resources 

A pod that lacks sufficient resources on its node commonly ends up in CrashLoopBackOff. You have to identify which node the pod is running on; to do that, use the command ‘kubectl get pod [pod-name] -o wide’. 

Now to resolve the issue, follow these steps:

  • Free up space on the relevant nodes
  • Reduce the memory requested by the DaemonSet pods (see the sketch after this list)
  • Scale the nodes vertically
  • Use taints and tolerations to keep the DaemonSet off nodes that lack sufficient resources to run its pods 
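As a sketch of how the second item could look in the DaemonSet’s pod template, continuing the hypothetical log-collector example (the figures are illustrative, not recommendations):

    spec:
      template:
        spec:
          containers:
          - name: log-collector
            image: fluent/fluentd:v1.16-1   # same hypothetical agent as above
            resources:
              requests:
                cpu: 50m
                memory: 100Mi    # lowered request so the pod fits on constrained nodes
              limits:
                memory: 200Mi
          # Tainting under-resourced nodes (and not adding a matching toleration
          # here) keeps these DaemonSet pods off those nodes.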

With that, you’ll have successfully recovered your system.

Final Thoughts

The DaemonSet is an incredibly useful tool within the Kubernetes ecosystem. When used correctly, it lets you manage your storage with ease, enhance logging services and boost the reliability of your Kubernetes clusters.

Hazel Raoult

Hazel Raoult is a freelance tech writer and works with PRmention. She has more than six years of experience writing about technology, entrepreneurship and all things SaaS. Hazel loves to split her time between writing, editing and hanging out with her family.
