Intentional Kubernetes Pod Scheduling - NodeSelector

Michael Mekuleyi - Jul 31 '23 - Dev Community

Introduction

In this article, we will discuss deliberate pod scheduling in Kubernetes, with a focus on using Node Selectors. First, we will look at the default mechanism for pod scheduling in Kubernetes, then we will justify the need to deliberately schedule pods to specific nodes and outline the different methods of pod scheduling. Finally, we will walk through scheduling a pod to a specific node on an actual cluster using the nodeSelector method. This article assumes that you have a strong working knowledge of Kubernetes and that you are conversant with kubectl.

This is a 3-part series that focuses on different methods of pod scheduling. This is part 1, which covers the NodeSelector method; the other parts will cover Node Affinity with Inter-pod Affinity and Anti-affinity, and finally Taints and Tolerations.

How does Kubernetes Schedule Pods by default?

Kubernetes schedules pods with a control-plane component called the kube-scheduler. The kube-scheduler is responsible for selecting an optimal node to run newly created or not-yet-scheduled (unscheduled) pods. Kubernetes also allows you to write your own scheduling component and use it in place of the default scheduler when you need to.

Nodes that meet a pod's requirements are called feasible nodes. The kube-scheduler is responsible for finding the feasible nodes for a pod among all the nodes in the cluster (a process called filtering), running a set of scoring functions over the feasible nodes to pick the node with the highest score (a process called scoring), and finally assigning the pod to that node. If there are no feasible nodes for a pod, the pod remains unscheduled. If more than one node shares the highest score, the kube-scheduler selects one of them at random to run the pod.

At the end of this selection process, the kube-scheduler notifies the kube-apiserver of the final node selected to run the pod in a separate process called binding, and the pod is then deployed to that node.
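If you want to see this process in action, the scheduler records its decisions as events on the pod. A minimal check (the exact event text varies across Kubernetes versions, and <pod-name> here is a placeholder) is to describe the pod and read its Events section; a pod with no feasible nodes shows a FailedScheduling event there:

michael@monarene:~$ kubectl describe pod <pod-name> | grep -A 5 Events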

Why deliberate pod scheduling?

Deliberate pod scheduling matters most when you have multiple node pools in a customized cluster. For example, if you want a pod to run on a node backed by SSDs for faster disk access, you can schedule the pod onto just that node. Perhaps you want to co-locate pods on nodes in the same zone for lower latency, or you want strongly related services to run on the same node; deliberate pod scheduling lets you express these placement constraints and override the default filtering and scoring that the kube-scheduler would otherwise apply.

There are a number of distinct ways to deliberately schedule pods on nodes; below are a few of them:

  • NodeSelector
  • Node Affinity
  • Inter-pod affinity and Anti-affinity
  • NodeName
  • Taints and Tolerations

NodeSelector

In this article, we will focus on using a NodeSelector and explore the other options in later articles. Using nodeSelector is the simplest recommended form of node selection constraint. You add the nodeSelector field to your Pod specification and specify the node labels you want the target node to have.

Kubernetes then schedules the Pod only onto nodes that have every label you specify. In other words, nodeSelector is a Pod attribute that forces the kube-scheduler to place the pod on a node whose labels match each key and its corresponding value.
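Note that when you list more than one label under nodeSelector, a node must carry all of them to be eligible. Here is a minimal sketch of that behaviour; the disktype and zone labels are hypothetical, not labels from the cluster used below:

spec:
  nodeSelector:
    disktype: ssd       # the node must have this label...
    zone: us-east-2a    # ...and this one, or it is filtered out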

Setting up NodeSelector

The first thing to do when setting up nodeSelector is to list the nodes in your cluster so you can pick your intended node; you can use kubectl to do this:



michael@monarene:~$ kubectl get nodes



Result of kubectl get
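If you would rather see the labels alongside the node names in one step, kubectl get nodes also accepts a --show-labels flag:

michael@monarene:~$ kubectl get nodes --show-labels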

Next, you select the intended node and view the labels on that node using kubectl:



michael@monarene:~$ kubectl describe nodes ip-172-31-28-239.us-east-2.compute.internal



Describing the Selected node
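kubectl describe prints much more than the labels; if you only want the labels themselves, you can pull them out directly with jsonpath output:

michael@monarene:~$ kubectl get node ip-172-31-28-239.us-east-2.compute.internal -o jsonpath='{.metadata.labels}'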

You then add a label to the intended node using kubectl. Note that the structure of the command is kubectl label nodes <node-name> <label-key>=<label-value>:



michael@monarene:~$ kubectl label nodes ip-172-31-28-239.us-east-2.compute.internal platform=web



Verify that the label was added to the node using kubectl describe:



michael@monarene:~$ kubectl describe nodes ip-172-31-28-239.us-east-2.compute.internal



Label added to the node

You can see the new label added to the node.
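You can also confirm it with a label selector, which lists only the nodes that carry the label; the node you just labeled should be the one returned:

michael@monarene:~$ kubectl get nodes -l platform=web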

Now assign a pod to the node you just labeled. Save the spec below to a file named test-pod.yaml:



apiVersion: v1
kind: Pod
metadata:
  name: httpd
  labels:
    env: prod
spec:
  containers:
  - name: httpd
    image: httpd
    imagePullPolicy: IfNotPresent
  nodeSelector:
    platform: web  # schedule only onto nodes carrying this label



Deploy the pod using the kubectl create command:



michael@monarene:~$ kubectl create -f test-pod.yaml



Finally, verify that the pod is scheduled on the right node using the kubectl get command; the -o wide flag adds a NODE column showing where each pod is running:



michael@monarene:~$ kubectl get pods -o wide



Pod running on labeled node
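For a scriptable check, you can print just the name of the node the pod landed on; it should match the node you labeled earlier:

michael@monarene:~$ kubectl get pod httpd -o jsonpath='{.spec.nodeName}'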

Security Constraints

To prevent malicious users from steering pods onto their own nodes, choose label keys that the kubelet cannot modify. This prevents a compromised node from setting those labels on itself so that the scheduler schedules workloads onto the compromised node.

Kubernetes has a NodeRestriction admission plugin that prevents kubelets from setting or modifying labels with the node-restriction.kubernetes.io/ prefix. To take advantage of it, make sure the plugin is enabled, add labels under the node-restriction.kubernetes.io/ prefix to your Node objects, and use those labels in your node selectors.
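For example, with the NodeRestriction plugin enabled, an administrator (not the kubelet) can apply a restricted label and then select on it in the pod spec; the team=web key and value below are hypothetical:

michael@monarene:~$ kubectl label nodes ip-172-31-28-239.us-east-2.compute.internal node-restriction.kubernetes.io/team=web

Then, in the Pod specification:

  nodeSelector:
    node-restriction.kubernetes.io/team: web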

Conclusion

In this article, we discussed intentional Pod scheduling in Kubernetes and explored using a NodeSelector to schedule pods on specific nodes. If you enjoyed this article, feel free to like, share, and subscribe. Thank you!
