Lighthouse Scanner: How To Setup a Local Kubernetes Staging Environment

Sebastian - Jul 13 '20 - Dev Community


With my Lighthouse-as-a-Service website scanner, you can quickly check a webpage for its performance, SEO, and best practices. You can use the scanner here: https://lighthouse.admantium.com/.

From an application perspective, my lighthouse scanner was a monolith until recently: one Docker container that delivers the webpage, provides an API for accepting scan requests, and executes the scans. Testing was simple: just start the Docker container. However, I redesigned the monolith into separate microservices, and now I have three Docker containers plus a database that communicate with each other.

In my production environment, I'm using Kubernetes. Containers are created with Deployments, persistent volumes are attached, and further application configuration is provided in the form of ConfigMaps and Secrets. I want an easy way of testing this setup in a staging environment. I want my staging environment to reflect the full complexity of the microservice communication. And ultimately, I want to change as little as possible in my build commands and configurations.

With some planning ahead, you can achieve the very same thing. In this article, I will highlight the most important aspects: setting up a local Kubernetes cluster, configuration management, container communication, and multiarch builds.

This article originally appeared at my blog.

Local Kubernetes Staging Environment

My production environment uses K3S, a lightweight Kubernetes distribution. Its setup consists of exactly one line per host. On top of that, I created a private Docker registry, an Nginx Ingress, and cert-manager for TLS encryption.

For the staging environment, I do not want to use just my MacBook, but rather an environment of multiple nodes. And for this, I can happily use my Raspberry Pi stack. It turns out that K3S can be installed just as easily on a Raspberry Pi. To convince you: for setting up a two-node cluster, you just need to execute the following two commands. That is all.

k3sup install --ip $SERVER_IP --user $USER

k3sup join --ip $NODE_IP --server-ip $SERVER_IP --user $USER

Now I have a production and a staging environment built on the same technology. To switch between the environments, I simply change the kubeconfig file in use, .kubeconfig-prod or .kubeconfig-stage. With this, most other artifacts can be reused without many changes.
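In practice, switching is just a matter of pointing kubectl at the right file. Here is a minimal sketch; the file names and paths are my own convention in this example, not something k3sup enforces:

# work against the staging cluster on the Raspberry Pis
export KUBECONFIG=~/.kubeconfig-stage
kubectl get nodes

# switch back to the production cluster
export KUBECONFIG=~/.kubeconfig-prod
kubectl get nodes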

Unified Configuration Management

In Kubernetes, application configuration can happen in several places:

  • Docker image: Copy config files into the container or define fixed environment variables
  • Docker container: Mount config files into the container, define runtime environment variables
  • Kubernetes Deployments: Mount config files or environment variables into the containers

Since I'm using Kubernetes as the execution environment for production and staging, I want to provide all configuration options with Kubernetes.

One example: The configuration for Redis moved from the Docker container to a Kubernetes ConfigMap and volume mount.

apiVersion: v1
kind: ConfigMap
metadata:
  name: redis-config
data:
  redis.conf: |
    save 60 1
    appendonly yes
...
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
spec:
  replicas: 1
  ...
  template:
    ...
    spec:
      containers:
        - name: redis
          image: docker.admantium.com/redis-multi:latest
          ...
          volumeMounts:
            - name: redis-config
              mountPath: /usr/local/etc/redis/
          ...
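The volumeMounts entry refers to a volume backed by the ConfigMap; that part is elided above. As an illustration, a volumes stanza for the redis-config ConfigMap sits at the same level as containers in the Pod spec and would look roughly like this (my sketch, not copied from the original manifest):

      volumes:
        - name: redis-config
          configMap:
            name: redis-config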

ConfigMaps and Secrets are defined declaratively in YAML files. Their values do not differ between environments; I just need to apply them to the production or the staging cluster.
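Applying the identical manifests then becomes a single command per environment. A short sketch, assuming the manifests above are stored in a redis/ directory (the directory layout is my own, hypothetical convention):

# deploy the same manifests to staging ...
KUBECONFIG=~/.kubeconfig-stage kubectl apply -f redis/

# ... and to production
KUBECONFIG=~/.kubeconfig-prod kubectl apply -f redis/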

Simplifying Container Communication

A short detour: I used Hashicorp Consul and Nomad to manage container applications before switching to Kubernetes. In order to achieve container communication with domain names like redis.local, I needed to set up a local DNS server on each node, configure this server to connect to the Consul service discovery, and provide an Nginx reverse proxy configuration to get TLS encryption. Read about these details in Service Discovery with Consul and Traefik and Nginx as a Reverse Proxy.

In Kubernetes, this is delightfully simple. A Deployment is exposed with a Service. The service name becomes the domain name. Here is an example for Redis.

apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  ports:
  - protocol: TCP
    port: 7139
    targetPort: 7139
  selector:
    app: redis

This Service definition creates the domain name redis, which is reachable from every other Pod running in the cluster. Delightfully simple.
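You can see the Service DNS in action by connecting from a throwaway Pod. A quick sketch; the Pod name and test image are arbitrary choices for illustration:

# start a temporary Pod and ping Redis via its Service name and port
kubectl run redis-test --rm -it --restart=Never --image=redis:6-alpine \
  -- redis-cli -h redis -p 7139 ping
# a healthy Redis answers with PONG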

Since all containers are in the same namespace, I can simply reuse the very same Service definitions, and therefore the same domain names, in the production and staging environments.

Docker Multiarch Build

Using Docker multiarch builds is an optional step. In my setup, the production environment runs on linux/amd64 and the staging environment on linux/arm/v7. With the goal of changing as few things as possible, building Docker images that run on multiple platforms is a necessity. This saves the trouble of using different machines for builds, maintaining different Dockerfiles, naming/tagging the images differently, and using these different images in your deployments.

The full steps and benefits of using multiarch builds are detailed in my article Building Images for Multiple Architectures. In a nutshell:

  • Install Docker Community Edition version 2.0.4.0 or higher
  • Enable experimental features
  • Create a multiarch builder object
  • Use docker buildx build --platform

Multiarch Docker images provide the very same execution environment for your application, a crucial aspect for testing. The only change I needed to make was to use the new buildx command.
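As an illustration, building and pushing the Redis image for both platforms looks roughly like this; a sketch assuming a builder named multiarch and the registry image used in the Deployment above:

# create a builder that supports multiple platforms and activate it
docker buildx create --name multiarch --use

# build for amd64 (production) and arm/v7 (staging) and push to the registry
docker buildx build --platform linux/amd64,linux/arm/v7 \
  -t docker.admantium.com/redis-multi:latest --push .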

Conclusion

Testing applications is important. In a complex microservice setup, you should create a staging environment that is as close as possible to your production environment. In the context of Kubernetes, this article outlined the important aspects: Use the same Kubernetes distribution. Move all application configuration from Docker to Kubernetes ConfigMaps and Secrets. Use consistent Service definitions to simplify container communication. And, optionally, use Docker multiarch builds to support different hardware execution environments. Overall, this approach requires time spent on environment setup and tooling modification, and rewards you with a near 100% reuse of build commands, application configuration, and deployment steps.
