DevOps Task - Kubernetes Self-Hosted GitHub Runners

Damilare Ogundele - Aug 20 - Dev Community

This documentation outlines the process of setting up self-hosted GitHub Actions runners on a Kubernetes cluster as part of the "Stage 7 DevOps Task." The goal was to establish scalable and manageable CI/CD pipelines using the GitHub Actions Runner Controller.

1. Environment Setup

This section details the setup of the Kubernetes environment required to host the GitHub Actions runners.

1.1. Kubernetes Cluster Setup

We used Minikube to create a local Kubernetes cluster, chosen for its simplicity and ease of setup.

curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
minikube start --driver=docker

Cluster verification was performed using kubectl cluster-info to ensure it was operational.
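For reference, the verification step looks like this (the node name and versions will differ between machines):

```shell
kubectl cluster-info
kubectl get nodes -o wide   # the minikube node should report STATUS "Ready"
```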

1.2. Helm Installation

Helm simplifies Kubernetes application management. Installation was achieved through the following commands:

curl https://baltocdn.com/helm/signing.asc | gpg --dearmor | sudo tee /usr/share/keyrings/helm.gpg > /dev/null
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/helm.gpg] https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install helm

1.3. Docker Installation

Docker is essential for containerized applications within the Kubernetes cluster.

sudo apt-get update
sudo apt-get -y install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
sudo usermod -aG docker $USER
newgrp docker
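After re-logging in (or running newgrp as above), the installation can be sanity-checked; hello-world is Docker's standard test image:

```shell
docker run --rm hello-world                    # succeeds only if the daemon is up and group membership took effect
docker version --format '{{.Server.Version}}'  # prints the engine version
```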

1.4. kubectl Installation

kubectl is the command-line tool for managing Kubernetes clusters.

KUBECTL_VERSION=v1.29.0
wget https://dl.k8s.io/release/${KUBECTL_VERSION}/bin/linux/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin/
kubectl version --client

2. GitHub Actions Runner Controller Setup

This section focuses on deploying the GitHub Actions Runner Controller, which manages the runners.

2.1. Namespace Creation

A dedicated namespace, github-actions-runner, was created to isolate the runner controller and its resources.

kubectl create namespace github-actions-runner
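The namespace alone is not enough: the controller itself must be installed before RunnerDeployment resources are recognized. A minimal sketch using the official Helm charts (cert-manager is a prerequisite of the summerwind controller; the PAT placeholder is illustrative and must be replaced with a real token):

```shell
helm repo add jetstack https://charts.jetstack.io
helm repo add actions-runner-controller https://actions-runner-controller.github.io/actions-runner-controller
helm repo update

# cert-manager provisions the webhook certificates the controller depends on
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager --create-namespace \
  --set installCRDs=true

# install the controller into the namespace created above
helm install actions-runner-controller actions-runner-controller/actions-runner-controller \
  --namespace github-actions-runner \
  --set authSecret.create=true \
  --set authSecret.github_token=<GITHUB_PAT>
```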

2.2. Runner Deployment

The YAML configuration below defines the RunnerDeployment, specifying the desired number of replicas and the runner image to use.

apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: kahuna-runner
  namespace: github-actions-runner
spec:
  replicas: 1
  template:
    spec:
      repository: Stage7-GitHub-Runner/hng_boilerplate_nestjs
      labels:
      - Kahuna

Deployment was achieved using kubectl apply -f runnerdeployment.yaml, and the pod status was verified using kubectl get pods -n github-actions-runner.

2.3. Service Account and Permissions

A service account (runner-sa) was created and bound to the cluster-admin ClusterRole so the runner could access Kubernetes resources. Note that because a RoleBinding (not a ClusterRoleBinding) is used, these admin permissions are scoped to the github-actions-runner namespace only.

kubectl create serviceaccount runner-sa -n github-actions-runner
kubectl create rolebinding runner-rb --serviceaccount=github-actions-runner:runner-sa --clusterrole=cluster-admin -n github-actions-runner
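For the runner pods to actually use this service account, it can be referenced in the RunnerDeployment template; a sketch of the relevant excerpt (the serviceAccountName field follows the summerwind RunnerDeployment CRD):

```yaml
# excerpt of runnerdeployment.yaml
spec:
  template:
    spec:
      repository: Stage7-GitHub-Runner/hng_boilerplate_nestjs
      serviceAccountName: runner-sa
```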

3. Runner Configuration

This section details the configuration and deployment of the GitHub Actions runners.

3.1. Runner Registration Token

The runner registration token was retrieved from the GitHub repository. This token is used to authenticate the runners with GitHub.
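With the Actions Runner Controller, short-lived registration tokens are minted automatically: the controller exchanges a personal access token (or GitHub App credentials) for them. If the auth secret was not created by the Helm chart, it can be supplied manually; the secret name and key below are what the summerwind chart expects, and the PAT placeholder is illustrative:

```shell
kubectl create secret generic controller-manager \
  -n github-actions-runner \
  --from-literal=github_token=<GITHUB_PAT>
```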

3.2. Runner Spec Definition

The runner specification was defined in the RunnerDeployment YAML (Section 2.2), including the Kahuna label and resource requests.
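The resource requests can be expressed directly on the runner template; a sketch with illustrative values (the resources field is part of the summerwind runner spec):

```yaml
# excerpt of runnerdeployment.yaml
spec:
  template:
    spec:
      labels:
        - Kahuna
      resources:
        requests:
          cpu: "500m"
          memory: "1Gi"
        limits:
          cpu: "1"
          memory: "2Gi"
```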

3.3. Runner Deployment

The Kubernetes manifests were applied using kubectl apply -f runnerdeployment.yaml to deploy the runners.

3.4. Verification

Runner logs were monitored using kubectl logs -f kahuna-runner-jkbv6-lpdqp to ensure successful registration and operation.


4. Testing and Validation

This section describes the testing and validation process for the self-hosted runners.

4.1. Workflow Creation

A sample GitHub Actions workflow was created to utilize the self-hosted runners. This workflow included steps for code checkout, dependency installation, linting, building, and testing.

name: Lint, Build and Test

on: workflow_dispatch

jobs:
  lint-build-and-test:
    runs-on: Kahuna
    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - name: Set up Node.js
        uses: actions/setup-node@v3
        with:
          node-version: '20'

      - name: Install dependencies
        run: npm install --include=dev

      - name: Run lint
        run: npm run lint

      - name: Build project
        run: npm run build

      - name: Run tests
        run: npm run test

4.2. Workflow Execution

The workflow was executed to verify that the self-hosted runners were correctly processing jobs.
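Because the workflow is triggered by workflow_dispatch, it can be started from the Actions tab or from the command line (assuming the GitHub CLI is installed and authenticated; the run ID prompt is interactive):

```shell
gh workflow run "Lint, Build and Test" --repo Stage7-GitHub-Runner/hng_boilerplate_nestjs
gh run watch   # follow the run as it is picked up by the Kahuna runner
```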

4.3. Monitoring

Runner performance was monitored, and any issues were debugged.

4.4. Scalability Testing

Scalability was tested by varying the number of concurrent workflow runs and observing how the runner replicas scaled up and down.
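Manual replica tuning can also be replaced with a HorizontalRunnerAutoscaler, which the same controller provides; a sketch scaling on the fraction of busy runners (the thresholds and replica bounds are illustrative):

```yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: HorizontalRunnerAutoscaler
metadata:
  name: kahuna-runner-autoscaler
  namespace: github-actions-runner
spec:
  scaleTargetRef:
    name: kahuna-runner
  minReplicas: 1
  maxReplicas: 3
  metrics:
    - type: PercentageRunnersBusy
      scaleUpThreshold: "0.75"
      scaleDownThreshold: "0.25"
      scaleUpFactor: "2"
      scaleDownFactor: "0.5"
```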

5. Conclusion

Self-hosted GitHub Actions runners were successfully set up on a Kubernetes cluster, providing a scalable and manageable solution for CI/CD pipelines. The runners were thoroughly tested, validated, and documented, demonstrating proficiency with Kubernetes and CI/CD tools.
