Task: Schedule a Pod Manually Without the Scheduler
In this task, we'll explore how to bypass the Kubernetes scheduler by assigning a pod directly to a specific node in the cluster. This is useful in scenarios where you need a pod to run on a particular node without going through the usual scheduling process.
Prerequisites
We assume you have a Kubernetes cluster running, created with a KIND (Kubernetes in Docker) configuration similar to the one described in previous posts. Here, we've created a cluster named kind-cka-cluster:
kind create cluster --name kind-cka-cluster --config config.yml
Since we’ve already covered cluster creation with KIND in earlier posts, we won’t go into those details again.
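If you have more than one KIND cluster, make sure kubectl is pointing at this one. KIND registers the kubectl context as kind- plus the cluster name, so for this cluster the context should be kind-kind-cka-cluster:
kubectl config use-context kind-kind-cka-cluster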
Step 1: Verify the Cluster Nodes
To see the nodes available in this new cluster, run:
kubectl get nodes
You should see output similar to this:
NAME                             STATUS   ROLES           AGE   VERSION
kind-cka-cluster-control-plane   Ready    control-plane   7m    v1.31.0
For this task, we'll be scheduling our pod on kind-cka-cluster-control-plane.
Step 2: Define the Pod Manifest (pod.yml)
Now, let's create a pod manifest in YAML format. Using the nodeName field in our pod configuration, we can specify the exact node for the pod, bypassing the Kubernetes scheduler entirely.
pod.yml:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
  nodeName: kind-cka-cluster-control-plane
In this manifest:
- We set nodeName to kind-cka-cluster-control-plane, which means the scheduler will skip assigning a node, and the kubelet on this specific node will handle placement instead.
This approach is a direct method of node selection, overriding other mechanisms like nodeSelector or affinity rules.
According to the Kubernetes documentation:
"nodeName is a more direct form of node selection than affinity or nodeSelector. nodeName is a field in the Pod spec. If the nodeName field is not empty, the scheduler ignores the Pod and the kubelet on the named node tries to place the Pod on that node. Using nodeName overrules using nodeSelector or affinity and anti-affinity rules."
For more details, refer to the Kubernetes documentation on node assignment.
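For contrast, here is a minimal sketch of the scheduler-driven alternative using nodeSelector (the pod name nginx-selector is just an illustrative choice). kubernetes.io/hostname is a standard label the kubelet sets on every node; note that on a multi-node cluster the control plane taint could leave such a pod Pending, which is exactly the kind of check nodeName skips:
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: nginx-selector
spec:
  containers:
  - name: nginx
    image: nginx
  nodeSelector:
    kubernetes.io/hostname: kind-cka-cluster-control-plane
EOF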
Step 3: Apply the Pod Manifest
With our manifest ready, apply it to the cluster:
kubectl apply -f pod.yml
This command creates the nginx pod and assigns it directly to the kind-cka-cluster-control-plane node.
Step 4: Verify Pod Placement
Finally, check that the pod is running on the specified node:
kubectl get pods -o wide
The output should confirm that the nginx pod is indeed running on kind-cka-cluster-control-plane:
NAME    READY   STATUS    RESTARTS   AGE   IP           NODE                             NOMINATED NODE   READINESS GATES
nginx   1/1     Running   0          28s   10.244.0.5   kind-cka-cluster-control-plane   <none>           <none>
This verifies that by setting the nodeName field, we successfully bypassed the Kubernetes scheduler and directly scheduled our pod on the control plane node.
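As an optional sanity check: because the scheduler never saw this pod, its event stream should contain no Scheduled event from default-scheduler, only kubelet activity such as Pulled, Created, and Started (treat the exact event names as an expectation to verify, not a guarantee):
kubectl get events --field-selector involvedObject.name=nginx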
Task: Log in to the control plane node, go to the directory of default static pod manifests, and try to restart the control plane components.
To access the control plane node of our newly created cluster, use the following command:
docker exec -it kind-cka-cluster-control-plane bash
Navigate to the directory containing the static pod manifests:
cd /etc/kubernetes/manifests
Verify the current manifests:
ls
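On a kubeadm-based node like the ones KIND creates, you should typically see the four control plane manifests (your output may differ slightly depending on the setup):
etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml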
To restart the kube-controller-manager, temporarily move its manifest file out of this directory; the kubelet watches /etc/kubernetes/manifests and stops a static pod as soon as its manifest disappears:
mv kube-controller-manager.yaml /tmp
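To watch the kubelet react, you can follow the pods from a second terminal on the host (outside the node shell); the kube-controller-manager pod should terminate shortly after the manifest disappears:
kubectl get pods -n kube-system -w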
After confirming the restart, move the manifest file back to its original location; the kubelet will recreate the pod:
mv /tmp/kube-controller-manager.yaml /etc/kubernetes/manifests/
With these steps, we demonstrated how to access the control plane node and manipulate static pod manifests to manage the lifecycle of control plane components.
Confirming the Restart of kube-controller-manager
After temporarily moving the kube-controller-manager.yaml manifest file to /tmp, we can verify that the kube-controller-manager has restarted. As mentioned in previous posts, I am using k9s, which clearly shows the restart; for readers without k9s, try the following command.
Inspect Events:
To gather more information, use:
kubectl describe pod kube-controller-manager-kind-cka-cluster-control-plane -n kube-system
Look for events at the end of the output. A successful restart will show events similar to:
Events:
  Type    Reason   Age                    From     Message
  ----    ------   ----                   ----     -------
  Normal  Killing  4m12s (x2 over 8m32s)  kubelet  Stopping container kube-controller-manager
  Normal  Pulled   3m6s (x2 over 7m36s)   kubelet  Container image "registry.k8s.io/kube-controller-manager:v1.31.0" already present on machine
  Normal  Created  3m6s (x2 over 7m36s)   kubelet  Created container kube-controller-manager
  Normal  Started  3m6s (x2 over 7m36s)   kubelet  Started container kube-controller-manager
The presence of "Killing," "Created," and "Started" events indicates that the kube-controller-manager was stopped and then restarted successfully.
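If you are still inside the node shell, you can also confirm the restart from the container runtime's perspective. KIND nodes run containerd and normally include crictl (its availability here is an assumption; skip this step if the binary is missing):
crictl ps | grep kube-controller-manager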
Cleanup
Once you have completed your tasks and confirmed the behavior of your pods, it is important to clean up any resources that are no longer needed. This helps maintain a tidy environment and frees up resources in your cluster.
List Pods:
First, you can check the current pods running in your cluster:
kubectl get pods
You might see output like this:
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          35m
Describe Pod:
To get more information about a specific pod, use the describe command:
kubectl describe pod nginx
This will give you details about the pod, such as its name, namespace, node, and other configurations:
Name:             nginx
Namespace:        default
Priority:         0
Service Account:  default
Node:             kind-cka-cluster-control-plane/172.19.0.3
Delete the Pod:
If you find that the pod is no longer needed, you can safely delete it with the following command:
kubectl delete pod nginx
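Alternatively, since the pod was created from pod.yml, you can delete it by manifest instead, which keeps the cleanup tied to the file you applied:
kubectl delete -f pod.yml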
Verify Deletion:
After executing the delete command, you can verify that the pod has been removed by listing the pods again:
kubectl get pods
Ensure that the nginx pod no longer appears in the list.
By performing these cleanup steps, you help ensure that your Kubernetes cluster remains organized and efficient.
Creating Multiple Pods with Specific Labels
In this section, we will create three pods based on the nginx image, each with a unique name and a label indicating a different environment: env=test, env=dev, and env=prod.
Step 1: Create the Script
First, we'll create a script that contains the commands to generate the pods. I am using a script for two reasons:
- I want to learn bash, and
- if I need to create these three pods again, I only have to run the file instead of typing it all out again.
Use the following command to create the script file:
vi create-pods.sh
Next, paste the following code into the file:
#!/bin/bash
# Create pod1 with label env=test
kubectl run pod1 --image=nginx --labels=env=test
# Create pod2 with label env=dev
kubectl run pod2 --image=nginx --labels=env=dev
# Create pod3 with label env=prod
kubectl run pod3 --image=nginx --labels=env=prod
# Wait for a few seconds to allow the pods to start
sleep 5
# Verify the created pods and their labels
echo "Verifying created pods and their labels:"
kubectl get pods --show-labels
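A note on the sleep 5 above: a fixed delay is fine for a demo, but if you want the script to block until the pods are actually ready, kubectl wait is a more robust alternative (a sketch; adjust the timeout as needed):
kubectl wait --for=condition=Ready pod/pod1 pod/pod2 pod/pod3 --timeout=60s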
Step 2: Make the Script Executable
After saving the file, make the script executable with the following command:
chmod +x create-pods.sh
Step 3: Execute the Script
Run the script to create the pods:
./create-pods.sh
You should see output indicating the creation of the pods:
pod/pod1 created
pod/pod2 created
pod/pod3 created
Step 4: Verify the Created Pods
The script will then display the status of the created pods:
Verifying created pods and their labels:
NAME   READY   STATUS              RESTARTS   AGE   LABELS
pod1   0/1     ContainerCreating   0          5s    env=test
pod2   0/1     ContainerCreating   0          5s    env=dev
pod3   0/1     ContainerCreating   0          5s    env=prod
At this point, you can filter the pods based on their labels. For example, to find the pod with the env=dev label, use the following command:
kubectl get po -l env=dev
You should see output confirming the pod is running:
NAME   READY   STATUS    RESTARTS   AGE
pod2   1/1     Running   0          4m9s
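Label selectors also support set-based expressions, which becomes handy once you have more environments. Both of the following use standard kubectl selector syntax:
# Pods whose env label is test or dev
kubectl get po -l 'env in (test,dev)'
# Pods whose env label is anything except prod
kubectl get po -l 'env!=prod'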
Tags and Mentions
- @piyushsachdeva
- Day 13: Video Tutorial