MetalLB and KinD: Loads Balanced Locally

Tyler Auerbeck - Oct 8 - Dev Community

When You Need LoadBalancer Services On The Go, MetalLB and KinD Are There For You

There are some blogs that no matter how many times you do something, you always come back to. Whether it’s because it’s complicated OR the fact that they’re just good and you decide that means you don’t need to commit them to memory — they just end up as part of your toolbox.

I recently had to revisit one of these oldies but goodies, and it struck me: tech blogs tend to age more like a bad cheese than a fine wine. Things are constantly moving. And while the concepts hold up, the configs are generally a different story.

In my case, I wanted to spin MetalLB up on KinD to support some testing that I was doing. The task at hand was simple. All I needed was the following:

  • A Kubernetes Cluster
  • A CNI
  • The ability to create some LoadBalancer services

So back to my dear old friend I went. But this time, things didn’t just work. As someone who frequently uses MetalLB, I should have expected this: since version 0.13.2, MetalLB has moved its configuration from a ConfigMap to a set of Custom Resource Definitions (CRDs). And yet, there I was.

Stumped.

So, in order to bring my future self some peace when I need to refer back to how to do this again, I figured it would be helpful (at least to myself, if nothing else) to write this down.

So let’s dive in.

Step 1: Kubernetes Cluster

The first thing I needed to do to get started was spin myself up a Kubernetes cluster. Since this was just testing out some bad ideas, I didn’t need anything robust, so KinD was plenty. For the purposes of our example, we’ll spin up a two-node cluster by crafting the following config file (config.yaml) and passing it to kind.

---
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker

With our configuration ready, we can then get things fired up by running:

╰─❯ kind create cluster --config config.yaml --name metallb-kind

In this example, the above will create a cluster named metallb-kind, but you can give your cluster whatever name you’d like, or just accept the default. Whatever you choose, make sure you’re pointing at the appropriate cluster and not firing this into a cluster that actually matters.
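KinD prefixes its kubeconfig contexts with kind-, so a quick sanity check (assuming the cluster name above) looks like:

╰─❯ kubectl config current-context
kind-metallb-kind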

Once your cluster is up and running, you should have something that looks like the following:

╰─❯ kubectl get nodes
NAME                         STATUS   ROLES           AGE     VERSION
metallb-kind-control-plane   Ready    control-plane   4m57s   v1.31.0
metallb-kind-worker          Ready    <none>          4m41s   v1.31.0

Step 1.5: Reach For Your Kubernetes Nodes

Usually, just being able to access your Kubernetes cluster is enough. However, for the testing we want to do, we need to make sure that we can reach our Kubernetes nodes by address. Specifically, this is a test to make sure that our Docker network is reachable; otherwise, trying to make any additional addresses available to us would be pointless.

To begin this test, we’ll first grab the addresses associated with our KinD nodes. You can get these by running the following and grabbing the results under the INTERNAL-IP column:

╰─❯ kubectl get nodes -o wide
NAME                         STATUS   ROLES           AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION     CONTAINER-RUNTIME
metallb-kind-control-plane   Ready    control-plane   10h   v1.31.0   172.18.0.4    <none>        Debian GNU/Linux 12 (bookworm)   6.5.0-15-generic   containerd://1.7.18
metallb-kind-worker          Ready    <none>          10h   v1.31.0   172.18.0.5    <none>        Debian GNU/Linux 12 (bookworm)   6.5.0-15-generic   containerd://1.7.18
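If you’d rather pull the addresses out programmatically (handy if you’re scripting these checks), a jsonpath query against the node status is one way to do it. This is just a convenience; the wide output above has everything you need:

╰─❯ kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.addresses[?(@.type=="InternalIP")].address}{"\n"}{end}'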

Once you’ve got your addresses, a simple ping check will suffice to verify your connectivity.

╰─❯ ping 172.18.0.4
PING 172.18.0.4 (172.18.0.4) 56(84) bytes of data.
64 bytes from 172.18.0.4: icmp_seq=1 ttl=64 time=0.065 ms
64 bytes from 172.18.0.4: icmp_seq=2 ttl=64 time=0.062 ms
64 bytes from 172.18.0.4: icmp_seq=3 ttl=64 time=0.042 ms

If you’re seeing responses, you’re good to go! Otherwise you may need to troubleshoot whatever is in the way of your connectivity. This should work without much issue on Linux-based operating systems, but in the likely event that you’re using a machine with a nice fruit stamped on the lid, there may be some additional steps you need to take. If you’re searching for some assistance, taking a look at this article may help.

Step 2: Sourcing Some Network Blocks

At this point, we’ve got a cluster and we’ve verified that we can reach the nodes. But now we need to source a pool of addresses that MetalLB can use to advertise our services. Since we’re relying on the Docker networking that KinD sits on, we’ll need to grab some unused address space from there. By default, KinD uses a Docker network named kind, so we can use a tool like jq to poke at it a bit (unless you feel particularly inclined to eyeball the JSON directly). The value we’re looking for is buried under IPAM.Config in the Subnet field. There may be more than one entry here, so what you’ll want to grab is the IPv4 subnet. If you don’t have any complex configuration in your Docker network, running the following will yield what we’re looking for:

╰─❯ docker inspect kind | jq .[].IPAM.Config
[
  {
    "Subnet": "fc00:f853:ccd:e793::/64"
  },
  {
    "Subnet": "172.18.0.0/16",
    "Gateway": "172.18.0.1"
  }
]
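If you only care about the IPv4 block and don’t want to read the JSON yourself, filtering on the presence of a dot is a quick (if slightly lazy) way to pull it out:

╰─❯ docker inspect kind | jq -r '.[].IPAM.Config[].Subnet | select(contains("."))'
172.18.0.0/16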

Step 3: Bringing MetalLB To Life

At this point, we’ve got the values that we’ll need to get a basic MetalLB deployment up and running. To keep things simple, we’ll deploy the manifests provided by the MetalLB GitHub repo:

╰─❯ kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.14.8/config/manifests/metallb-native.yaml

If you prefer, you can also deploy this via Helm with the following:

╰─❯ helm repo add metallb https://metallb.github.io/metallb
╰─❯ helm install metallb metallb/metallb

Whatever path you choose, you should end up with a functioning MetalLB deployment: a single controller pod and two speaker pods (one per node of the cluster we created).
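If you’re scripting any of this, you can also block until MetalLB is actually ready before moving on to configuration. This assumes the app=metallb labels that ship with the upstream manifests:

╰─❯ kubectl wait --namespace metallb-system --for=condition=ready pod --selector=app=metallb --timeout=90s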

╰─❯ kubectl get pods -n metallb-system
NAME                          READY   STATUS    RESTARTS   AGE
controller-8694df9d9b-fhkp2   1/1     Running   0          1d
speaker-5xc9r                 1/1     Running   0          1d
speaker-as78d                 1/1     Running   0          1d

Step 3.5: Configuring MetalLB

Now that we’ve got a functional instance of MetalLB, we can make sure that it’s configured to use the network range that we dug up earlier. As I mentioned, in previous versions this was managed via a ConfigMap. However, in recent versions there are a number of CRDs available to us. The resources that we’re interested in for this example are the ipaddresspools and l2advertisements. If you’re interested in the other CRDs that are available, you can check out the metallb/metallb listing on doc.crds.dev.

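You can also list them straight from the cluster now that MetalLB is installed:

╰─❯ kubectl get crds | grep metallb.io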

To begin, we’ll need to create one of each of the above: an ipaddresspool containing a small portion of the subnet that we gathered earlier, and an empty l2advertisement to announce it.

---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: demo-pool
  namespace: metallb-system
spec:
  addresses:
  - 172.18.255.200-172.18.255.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: demo-advertisement
  namespace: metallb-system

This is probably a larger range than we’ll need to run locally, but we may as well give ourselves some room to grow while we’re here. Once we have the above in place, MetalLB will be ready to hand an address to any LoadBalancer service that asks for one, whether the service targets this pool explicitly via an annotation (as we’ll do below) or simply relies on auto-assignment.
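A quick check (using the fully qualified resource names to avoid any ambiguity) confirms that both resources made it in:

╰─❯ kubectl get ipaddresspools.metallb.io,l2advertisements.metallb.io -n metallb-system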

Step 4: Creating A LoadBalancer Service

Now that MetalLB is up and running, we can begin to create services that will utilize it. For a quick test, I’ll rely on an old favorite: http-echo. To test this out, you can create the following namespace, deployment, and service to get started.

---
apiVersion: v1
kind: Namespace
metadata:
  name: echo
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo
  namespace: echo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      labels:
        app: echo
    spec:
      containers:
        - name: echo
          image: hashicorp/http-echo
          args:
            - -listen=:8080
            - -text="hello there"
          ports:
            - name: http
              containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: echo
  namespace: echo
spec:
  ports:
    - name: http
      port: 80
      targetPort: http
      protocol: TCP
  selector:
    app: echo


With the above applied, you should be able to see a running echo pod in the echo namespace. You’ll also see that you’ve got yourself a regular ol’ service there as well.

╰─❯ kubectl get svc -n echo
NAME   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
echo   ClusterIP   10.96.173.217   <none>        80/TCP    2m40s

That’s great to get us started. But we want a LoadBalancer service to make use of that sweet MetalLB magic! So let’s make a few modifications. There are two things we’ll need to add/update in our service definition.

---
apiVersion: v1
kind: Service
metadata:
  annotations:
    metallb.universe.tf/address-pool: demo-pool
  name: echo
  namespace: echo
spec:
  ports:
    - name: http
      port: 80
      targetPort: http
      protocol: TCP
  selector:
    app: echo
  type: LoadBalancer

The two areas that we’re interested in live under the annotations and type fields.

The first, fairly self-explanatory change is to the type field. We want this service to be of type LoadBalancer so that an external IP can be assigned to it.

Next, we’ll want to add a MetalLB-specific annotation to our service so that the MetalLB controller knows which pool to draw from. MetalLB annotations let you target a specific pool, request a specific address, and more. In our case, we just want to target the pool that we created earlier (because we don’t care which specific address gets used). The pool we created was named demo-pool, so we’ll add the annotation metallb.universe.tf/address-pool: demo-pool to our service.
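If you’d rather not edit and re-apply the manifest, patching the existing service in the echo namespace should accomplish the same thing; consider this a sketch rather than the one true way:

╰─❯ kubectl patch svc echo -n echo -p '{"metadata":{"annotations":{"metallb.universe.tf/address-pool":"demo-pool"}},"spec":{"type":"LoadBalancer"}}'

Either way, once the new type and annotation are in place, our service should look more like the following, with a shiny new address that we can reach it at: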

╰─❯ kubectl get svc -n echo
NAME   TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)        AGE
echo   LoadBalancer   10.96.190.254   172.18.255.200   80:30216/TCP   2m21s

At this point, we can see that the service was updated to be of type LoadBalancer and that an address has been assigned from our pool under EXTERNAL-IP. From here, a quick curl should suffice to prove that this is all functioning as expected.

╰─❯ curl http://172.18.255.200
"hello there"
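If the EXTERNAL-IP ever sits at <pending> instead, describing the service is a good first stop; MetalLB records events there that usually explain what it’s unhappy about:

╰─❯ kubectl describe svc echo -n echo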

SUCCESS! We’ve got MetalLB running and configured, we’ve got an app running and we’ve advertised an address that we can hit it directly on. We’ve got the whole world in front of us! I mean really, we do. This is a pretty basic example and there’s a ton of more exciting things that we can do now that we’ve confirmed our basic configurations are in place.

But, for the purposes of this blog, we’ve accomplished what we were looking for. We’ve got a KinD cluster and MetalLB up and working together. Personally, this is a huge benefit to me: while KinD is more than capable of poking some holes and exposing things on specific ports, it’s always been much more valuable to me to run something that looks like my production environment. Being able to test out and play with the tools that I actually use has drastically reduced the number of lumps I need to take during my deployments and vastly sped up my development loops.

Hopefully this provides you some similar benefit for any local development you’ve got planned!
