Create your K3S lab on Google Cloud

Jérôme Dx - Oct 21 - Dev Community

If you are studying Kubernetes, it can be useful to have a small laboratory where you can experiment with the concepts you have learned.

You can use a local solution like Kind, which is fast and easy to handle, but it is also interesting to run your cluster on a cloud provider, closer to a production context.

Our choices

Google Cloud

Google Cloud is the only one of the major providers to offer an always-free tier (unless the terms and conditions change, which remains to be seen, but for now we can enjoy it).

Among the services eligible for the Free Tier, here are the ones that interest us:

  • Compute Engine: 1 non-preemptible e2-micro VM instance in South Carolina (us-east1), the region closest to France
  • Cloud Storage: 5 GB-months of regional storage (us-east1) per month, which is more than enough storage capacity for a homelab
  • Artifact Registry: 0.5 GB storage per month
  • Secret Manager: 6 active secret versions per month
  • Cloud Logging: Free monthly logging allotment
  • Cloud Monitoring: Free monthly metrics allotment
  • Cloud Shell: Free access to Cloud Shell, including 5 GB of persistent disk storage

There are three regions eligible for this free tier; living in France, I personally chose the geographically closest one: us-east1.

In any case, prices are attractive with this provider, and we will see that we can also use Spot instances to reduce costs.

This lab will also be an opportunity to improve your skills on Google Cloud.

K3S

K3S is a Kubernetes distribution from Rancher, designed to be as lightweight as possible while remaining compliant with Kubernetes production standards.

This is the perfect tool for our lab: we want the lightest possible instances to keep costs down, while still being able to test production concepts.

Even GKE (Google Kubernetes Engine) will cost you at least around $70 per month.

With K3S you can simply set up an e2-micro instance within the free tier (but it will be very limited).

If you want more capacity, you can create an e2-small instance, which will cost you less than $10 per month (as a Spot instance).

K3S will still be interesting for all the use cases where you want to keep instance sizes down.

OpenTofu

If you haven't been following the news, HashiCorp changed Terraform's license, which caused some friction with the community.

OpenTofu is a fork of Terraform that aims to remain open source, with a changelog driven by the community.

Currently at version 1.8, OpenTofu is nearly identical to Terraform: you just change the binary, which you can install with tenv (differences will appear as the respective roadmaps evolve).
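
For example, a minimal install sketch, assuming you use Homebrew (tenv also provides standalone binaries and packages):

# Install tenv, a version manager for OpenTofu/Terraform
brew install tenv

# Install and select an OpenTofu version matching the constraint used later (>= 1.8.0)
tenv tofu install 1.8.0
tenv tofu use 1.8.0

# Check the binary
tofu version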

This lab is therefore an opportunity to familiarize yourself with the use of this tool.

Setting up the lab

Now we will see how to implement this solution. The use case: deploy the resources on Google Cloud, create the K3S cluster on the VM, then expose an API accessible from outside.

Create the Google Cloud project

First, if you haven't already, you need to create a Google Cloud project.

When you create the project, you can activate the 90-day, $300 Free Trial offer, which covers a large range of services, with some limitations though.

After this period ends, you will still be eligible for the free tier we already mentioned.
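
If you prefer the command line, here is a rough sketch with gcloud (the project ID, project name and billing account are placeholders to replace; the APIs must be enabled before OpenTofu can create the resources):

# Create the project and set it as the default
gcloud projects create <my-project> --name="k3s-lab"
gcloud config set project <my-project>

# Link a billing account (this can also be done from the console)
gcloud billing projects link <my-project> --billing-account=<billing-account-id>

# Enable the APIs used in this lab
gcloud services enable compute.googleapis.com \
    artifactregistry.googleapis.com \
    secretmanager.googleapis.com \
    storage.googleapis.com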

Deploy the infrastructure

Here are the resources you will create with OpenTofu:

// Compute
// ----------------------------------

// The instance for K3S
resource "google_compute_instance" "k3s" {
  name         = "k3s-vm-1"
  machine_type = "e2-small" # This instance will have 2 GB of RAM
  zone         = var.zone

  tags = ["web"]

  // Set the boot disk and the image (10 GB)
  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-12"
      size  = 10
    }
  }

  // Configuration to be a Spot instance, to reduce costs
  // (Spot VMs must be preemptible and cannot use automatic restart)
  scheduling {
    preemptible                 = true
    automatic_restart           = false
    provisioning_model          = "SPOT"
    instance_termination_action = "STOP"
  }

  // attach a disk for K3S
  attached_disk {
    source      = google_compute_disk.k3s_disk.id
    device_name = "k3s-disk"
  }

  network_interface {
    network = "default"

    access_config {
      // Ephemeral public IP
    }
  }

  labels = {
    env       = var.env
    region    = var.region
    app       = var.app_name
    sensitive = "false"
  }

  metadata_startup_script   = file("scripts/k3s-vm-startup.sh")
  allow_stopping_for_update = true
}

// Firewall
// ----------------------------------
resource "google_compute_firewall" "allow_http" {
  name    = "allow-http"
  network = "default"

  allow {
    protocol = "tcp"
    ports    = [
      "80", "443", // http/https
      "30080"      // ports opened to access the API via NodePort
    ]
  }

  source_ranges = ["0.0.0.0/0"]
  target_tags   = ["web"]
}

// Storage
// ----------------------------------

// The disk attached to the instance (15 GB)
resource "google_compute_disk" "k3s_disk" {
  name = "k3s-disk"
  size = 15
  type = "pd-standard"
  zone = var.zone
}

// The bucket where you can store other data
resource "google_storage_bucket" "k3s-storage" {
  name     = var.bucket_name
  location = var.region

  labels = {
    env       = var.env
    region    = var.region
    app       = var.app_name
    sensitive = "false"
  }
}

// Registry
// ----------------------------------

// The Artifact Registry repository for our app
resource "google_artifact_registry_repository" "app-repo" {
  location      = "us-east1"
  repository_id = "app-repo"
  description   = "App Docker repository"
  format        = "DOCKER"

  docker_config {
    immutable_tags = true
  }
}

// Env vars
// ----------------------------------

variable "env" {
  type        = string
  default     = "dev"
  description = "Environment"
}

variable "region" {
  type        = string
  default     = "us-east1"
  description = "GCP Region"
}

variable "zone" {
  type        = string
  default     = "us-east1-a"
  description = "GCP Zone"
}

variable "app_name" {
  type        = string
  default     = "<name-of-your-cluster>"
  description = "Application name"
}

variable "project_name" {
  type        = string
  default     = "<name-of-your-gcp-project>"
  description = "GCP Project name"
}

variable "bucket_name" {
  type        = string
  default     = "<name-of-your-bucket>"
  description = "Bucket name"
}

// Provider
// ----------------------------------

// Connect to the GCP project
provider "google" {
  credentials = file("<my-gcp-creds>.json")
  project     = var.project_name
  region      = var.region
  zone        = var.zone
}

terraform {
  // Use a shared GCS bucket for the state (which allows collaborative work)
  backend "gcs" {
    bucket      = "<my-bucket-for-states>"
    prefix      = "k3s-infra"
  }

  // Set versions
  required_version = ">=1.8.0"
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = ">=4.0.0"
    }
  }
}

The startup script (in "scripts/k3s-vm-startup.sh"), which will install K3S automatically:

#!/bin/bash

# Format the disk only if it does not already contain a filesystem
# (checking the mount point alone would re-format the disk on every reboot)
if ! blkid /dev/disk/by-id/google-k3s-disk >/dev/null; then
    mkfs.ext4 -m 0 -F -E lazy_itable_init=0,lazy_journal_init=0,discard /dev/disk/by-id/google-k3s-disk
fi

# Mount the disk if it is not already mounted
mkdir -p /mnt/disks/k3s
if ! mountpoint -q /mnt/disks/k3s; then
    mount -o discard,defaults /dev/disk/by-id/google-k3s-disk /mnt/disks/k3s
    chmod a+w /mnt/disks/k3s
fi

# ensure only run once
if [[ -f /etc/startup_was_launched ]]; then exit 0; fi
touch /etc/startup_was_launched

# apt install
apt update
apt install -y ncdu htop

# helm install
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
/bin/bash get_helm.sh

# bashrc config
rc=/root/.bashrc
echo "alias l='ls -lah'" >> $rc
echo "alias ll='ls -lh'" >> $rc
echo "alias k=kubectl" >> $rc
echo "export dry='--dry-run=client'" >> $rc
echo "export o='-oyaml'" >> $rc

# Install k3s and configure it to use the persistent disk for data storage
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--data-dir /mnt/disks/k3s" sh -

Create a service account for OpenTofu to connect to GCP, restricted to these roles (a gcloud sketch follows the list below):

(For least-privilege compliance, you should further restrict these roles with IAM conditions.)

  • Artifact Registry Administrator
  • Artifact Registry Repository Administrator
  • Cloud Functions Admin
  • Compute Admin
  • Compute Instance Admin
  • Compute Network Admin
  • Compute Security Admin
  • Secret Manager Admin
  • Service Account User
  • Storage Admin
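
A minimal sketch of what this could look like with gcloud (the service account name "opentofu-sa" is just an example, and only one role binding is shown; repeat it for each role above):

# Create the service account used by OpenTofu
gcloud iam service-accounts create opentofu-sa \
    --project=<my-project> \
    --display-name="OpenTofu deployer"

# Grant one of the roles listed above (repeat for the other roles)
gcloud projects add-iam-policy-binding <my-project> \
    --member="serviceAccount:opentofu-sa@<my-project>.iam.gserviceaccount.com" \
    --role="roles/compute.admin"

# Create the JSON key referenced in the provider block
gcloud iam service-accounts keys create <my-gcp-creds>.json \
    --iam-account=opentofu-sa@<my-project>.iam.gserviceaccount.com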

Then, run the commands to create the infrastructure:

# OpenTofu setup
tofu init -get=true -upgrade
tofu workspace new dev
tofu workspace select dev

# Plan (to preview what will be changed)
tofu plan

# Apply (to create the infrastructure described in the IaC code)
tofu apply
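
Once the apply has finished and the VM has booted (the startup script takes a couple of minutes), you can connect to the instance and check that K3S is up; a quick sanity check, using the names defined above:

# SSH into the instance
gcloud compute ssh --zone "us-east1-a" "k3s-vm-1" --project "<my-project>"

# On the VM, check the K3S service and the node
sudo systemctl status k3s --no-pager
sudo k3s kubectl get nodes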

Build the app

Create a Dockerfile for your app; we will use the httpbin API, which lets you test all the requests you can make to a REST API:

FROM python:3.12-slim

# Install dependencies
RUN pip install --no-cache-dir gunicorn httpbin

# Expose the application port
EXPOSE 80

# Launch the application
CMD ["gunicorn", "-b", "0.0.0.0:80", "httpbin:app"]

Build the image and push it to Artifact Registry:

# Build
docker build -t httpbin .

# Push to the registry
gcloud auth configure-docker us-east1-docker.pkg.dev
docker tag httpbin us-east1-docker.pkg.dev/<my-project>/app-repo/httpbin:v1
docker push us-east1-docker.pkg.dev/<my-project>/app-repo/httpbin:v1
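
Optionally, you can smoke-test the image locally before pushing it (if you build on an ARM machine such as Apple Silicon, you may need to add --platform linux/amd64 to the build so the image runs on the e2 instance):

# Run the image locally, mapping port 8080 to the container's port 80
docker run --rm -d -p 8080:80 --name httpbin-test httpbin

# Check that the API answers
curl -I http://localhost:8080/get

# Clean up
docker rm -f httpbin-test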

Set the secret to connect to the registry

Create a service account with the "Artifact Registry Reader" role.

It will allow Kubernetes to pull the image from the registry.
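
One way to create it with gcloud (the name "registry-reader" is just an example; the downloaded JSON key is the <json_value> used below):

# Create the pull-only service account and grant it read access to Artifact Registry
gcloud iam service-accounts create registry-reader --project=<my-project>
gcloud projects add-iam-policy-binding <my-project> \
    --member="serviceAccount:registry-reader@<my-project>.iam.gserviceaccount.com" \
    --role="roles/artifactregistry.reader"

# Download the JSON key for this service account
gcloud iam service-accounts keys create registry_key.json \
    --iam-account=registry-reader@<my-project>.iam.gserviceaccount.com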

First, connect to your instance:

gcloud compute ssh --zone "us-east1-a" "k3s-vm-1" --project "<my-project>"
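
Depending on the user you connect with, kubectl may not be able to read the K3S kubeconfig (/etc/rancher/k3s/k3s.yaml is owned by root by default), and the k alias added by the startup script only lives in root's .bashrc, so the simplest option is to switch to root:

# Become root: gives access to the K3S kubeconfig and to the aliases added by the startup script
sudo su -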

Then, store the credentials in Kubernetes as a docker-registry Secret:

echo -n "<json_value>" > registry_key.json

k create secret docker-registry artifact-read \
    --docker-server=us-east1-docker.pkg.dev \
    --docker-username=_json_key \
    --docker-password="$(cat registry_key.json)" \
    --docker-email=valid-email@example.com
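
You can verify that the secret exists before referencing it in the Deployment:

# The secret should be listed with type kubernetes.io/dockerconfigjson
k get secret artifact-read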

Deploy the app on Kubernetes

Here is the code to deploy your app:

# The deployment that will pull the image on Artifact Registry
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpbin
spec:
  replicas: 1
  selector:
    matchLabels:
      app: httpbin
  template:
    metadata:
      labels:
        app: httpbin
    spec:
      imagePullSecrets:
      - name: artifact-read
      containers:
      - name: httpbin
        image: us-east1-docker.pkg.dev/<my-project>/app-repo/httpbin:v1
        ports:
        - containerPort: 80
---
# The Service, configured as a NodePort to allow external access on some ports
apiVersion: v1
kind: Service
metadata:
  name: httpbin-service
spec:
  type: NodePort
  ports:
    - port: 80
      nodePort: 30080
  selector:
    app: httpbin

Then deploy it:

# deploy the app
k apply -f httpbin.yaml

# Ensure the app is running
k get po -w
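
You can also check that the Service exposes the expected NodePort:

# The PORT(S) column should show 80:30080/TCP
k get svc httpbin-service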

Access the API

Once it's running, you can access the API inside the VM:

curl -I http://localhost:30080/get
# HTTP/1.1 200 OK

Or outside, from your local machine:

curl -I http://<my-ephemeral-ip>:30080/get
# HTTP/1.1 200 OK
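
If you need to look up the ephemeral IP, one way is to query it with gcloud (using the instance name and zone defined in the OpenTofu code):

# Print the external (ephemeral) IP of the K3S VM
gcloud compute instances describe k3s-vm-1 \
    --zone us-east1-a --project <my-project> \
    --format='get(networkInterfaces[0].accessConfigs[0].natIP)'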

Conclusion

We've just seen how to quickly deploy, at minimal cost, a modern lightweight Kubernetes cluster in your Google Cloud project.

There are other workshops that could improve this lab, like:

  • Add a Traefik reverse proxy to access several apps
  • Add Cloudflare to protect the exposed URLs
  • Set up a multi-master, multi-worker topology
  • Set up a backup/restore strategy
  • Set up Cilium
  • Add monitoring and alerting tools

And many other things.

Looking forward to exploring these different topics with you next time.
