Kubernetes with Kubespray

Sebastian - Oct 24 '22 - Dev Community

Kubespray is a Kubernetes meta distribution with impressive configurability and scalability. It can be used on various infrastructure types, on-premise or cloud, bare metal or VMs, and it provides fine-grained control over all aspects of the installation process. It also offers a wide range of choices for the control plane storage, CRI, CNI, and ingress. Kubespray uses the configuration management tool Ansible at its core and represents your Kubernetes cluster as complete and versioned infrastructure as code.

In this article, you will learn everything you need to know to start provisioning a Kubernetes cluster with Kubespray. First, we will get an overview of the distribution to see which Kubernetes components it supports. Then we will learn about the different installation architectures, the installation and upgrade process, and the customization with different Kubernetes components.

This article originally appeared at my blog admantium.com.

Distribution Overview

A default installation with Kubespray provides these Kubernetes components:

  • Control plane storage: ETCD
  • CRI: containerd
  • CNI: calico
  • Ingress: nginx

Other configuration options:

  • Optionally install kubeadm
  • Kubernetes components can be installed as binaries on the target infrastructure or run as containers
  • Service routing with kube-proxy, in either iptables or ipvs mode

Installation Architectures

Kubespray puts the choice in your hands: You can install a single-node cluster, a single-controller cluster, or a multi-controller cluster with any number of worker nodes. During the configuration step, this choice needs to be reflected in the configuration files.

  • Single Node Cluster: You deploy everything on just one node. This node should have substantial hardware capabilities - keep in mind that you are installing the official Kubernetes binaries, not an optimized version such as K3s.
  • Single controller, multi worker: You configure the cluster to have one controller node and several worker nodes. The same requirements apply: the controller node should have good hardware capabilities, and the workloads run on the additional worker nodes.
  • Multi controller, multi worker: This is the recommended way to set up a Kubernetes cluster. The number of controller nodes should conform to the formula 2n + 1 to allow a quorum in case a controller node goes down.

Other than that, there is a detailed guide on how a high-availability cluster works, with multiple etcd instances and a kube-apiserver on each controller node. Also check the official documentation for large clusters.
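As an illustration of the multi-controller layout, a minimal inventory file might look like the following sketch. The host names and IP addresses are hypothetical; the group names (`kube_control_plane`, `kube_node`, `etcd`, `k8s_cluster`) follow Kubespray's inventory conventions:

```shell
# Sketch: a minimal multi-controller inventory (hypothetical hosts and IPs).
# Three controllers satisfy the 2n + 1 quorum rule; etcd runs on the controllers.
mkdir -p inventory/mycluster
cat > inventory/mycluster/hosts.yaml <<'EOF'
all:
  hosts:
    ctrl1: {ansible_host: 192.168.1.11}
    ctrl2: {ansible_host: 192.168.1.12}
    ctrl3: {ansible_host: 192.168.1.13}
    worker1: {ansible_host: 192.168.1.21}
    worker2: {ansible_host: 192.168.1.22}
  children:
    kube_control_plane:
      hosts: {ctrl1: {}, ctrl2: {}, ctrl3: {}}
    kube_node:
      hosts: {worker1: {}, worker2: {}}
    etcd:
      hosts: {ctrl1: {}, ctrl2: {}, ctrl3: {}}
    k8s_cluster:
      children: {kube_control_plane: {}, kube_node: {}}
EOF
```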

Installation Process

To use Kubespray, you need an additional computer, called the Kubespray controller, from which the cluster installation and configuration is launched. Installing Kubernetes encompasses these steps:

  • Inventory Definition
    • Kubespray controller: The computer or server on which you install Ansible and all required libraries
    • K8S controller nodes: The node(s) designated as controller nodes
    • K8S worker nodes: The node(s) designated as worker nodes
  • K8S nodes setup
    • Controller nodes need at least 1.5 GB of RAM, worker nodes 1 GB of RAM
    • On the nodes, a compatible OS needs to be installed
    • Ensure SSH access to the nodes
  • Kubespray controller setup
    • The installation process uses an Ansible Galaxy role that does all the heavy lifting: defining a Python virtual environment in which the Ansible version is isolated, installing all required Python libraries, and cloning the actual Ansible playbooks that Kubespray uses
  • Inventory Configuration
    • The inventory is composed of three groups: control plane nodes, worker nodes, and etcd servers
    • Copy the sample inventory with cp -r inventory/sample inventory/mycluster and define the nodes and their roles as controller or worker
  • Kubernetes Component Configuration
    • Decide and define which Kubernetes Components to use
    • Define these components in the configuration files inventory/mycluster/group_vars/all/all.yml and inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml
  • Rollout
    • Run the Ansible playbook with ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml and your cluster will be created
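Put together, the steps above can be sketched as a short session on the Kubespray controller. The repository URL is the official kubernetes-sigs project; the inventory name is an example, and you should check out the release branch matching your target Kubernetes version:

```shell
# Sketch of the installation steps above, run on the Kubespray controller.
git clone https://github.com/kubernetes-sigs/kubespray.git
cd kubespray

# Isolate Ansible and the required Python libraries in a virtual environment
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt

# Copy the sample inventory and adapt it to your nodes
cp -r inventory/sample inventory/mycluster
# ... edit inventory/mycluster/hosts.yaml and the group_vars files ...

# Rollout: provision the cluster as root on all nodes
ansible-playbook -i inventory/mycluster/hosts.yaml \
  --become --become-user=root cluster.yml
```

The rollout step can take a while, since the playbook downloads and configures all Kubernetes components on every node.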

Upgrade Process

To upgrade the Kubernetes version used in your cluster, follow these steps, as outlined in the documentation:

  • Upgrade the controller nodes: The Ansible playbook upgrade-cluster.yml is called with the desired Kubernetes version, limited to the control plane and etcd nodes
   ansible-playbook upgrade-cluster.yml \
    -b -i inventory/sample/hosts.ini \
    -e kube_version=v1.25.0 \
    --limit "kube_control_plane:etcd"
  • Upgrade the worker nodes: The same playbook is used, but you specify which nodes to upgrade
   ansible-playbook upgrade-cluster.yml \
    -b -i inventory/sample/hosts.ini \
    -e kube_version=v1.25.0 \
    --limit "node5*"

There are several configuration flags to limit the upgrade to one node at a time, to pause the play for manual checks and/or reboots of the nodes, and much more - see the documentation mentioned above.
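As a sketch of such a restricted upgrade: the serial variable controls how many nodes are upgraded in parallel, and combined with --limit the upgrade can be rolled out one node at a time. The variable and node names here are examples; verify them against the upgrade documentation of your Kubespray version:

```shell
# Upgrade a single worker node, one node at a time instead of the
# default batch size (node name "worker2" is an example).
ansible-playbook upgrade-cluster.yml \
  -b -i inventory/mycluster/hosts.yaml \
  -e kube_version=v1.25.0 \
  -e serial=1 \
  --limit "worker2"
```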

You can also upgrade individual Kubernetes components of the cluster. This is mandatory if you upgrade the Kubernetes version, but you can also run it separately. All upgrades are applied in this order:

  • Docker
  • Containerd
  • etcd
  • kubelet and kube-proxy
  • network plugins
  • kube-apiserver, kube-scheduler, and kube-controller-manager
  • Add-ons (such as KubeDNS)

Component upgrades are triggered by setting new versions in the component configuration files and then running the cluster.yml playbook with tags for the components that you want upgraded, like this:

  ansible-playbook \
    -b -i inventory/sample/hosts.ini \
    --tags=docker \
    cluster.yml

Customization

Kubespray supports several customizations for the Kubernetes components. At the time of writing, these are:

  • Control Plane Storage
    • etcd
  • Container Runtime
    • Containerd
    • Docker
    • CRI-O
    • Kata Containers
    • gVisor
  • Container Networking Interface
    • cni-plugins
    • calico
    • canal
    • cilium
    • flannel
    • kube-ovn
    • kube-router
    • multus
    • weave
    • kube-vip
  • Ingress
    • Kube VIP
    • ALB Ingress
    • MetalLB
    • Nginx Ingress
  • Storage
    • cephfs-provisioner
    • rbd-provisioner
    • aws-ebs-csi-plugin
    • azure-csi-plugin
    • cinder-csi-plugin
    • gcp-pd-csi-plugin
    • local-path-provisioner
    • local-volume-provisioner
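The component choices above are expressed as variables in the inventory's group_vars files. As an illustration, switching the CRI and CNI and enabling the nginx ingress controller could look like the following sketch; the variable names follow Kubespray's sample k8s-cluster.yml and addons.yml, but the values and inventory name are examples:

```shell
# Illustrative component selection, appended to the copied inventory's
# configuration files (variable names from Kubespray's sample group_vars).
mkdir -p inventory/mycluster/group_vars/k8s_cluster
cat >> inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml <<'EOF'
container_manager: crio        # CRI: containerd (default), docker, crio
kube_network_plugin: cilium    # CNI: calico (default), cilium, flannel, ...
EOF
cat >> inventory/mycluster/group_vars/k8s_cluster/addons.yml <<'EOF'
ingress_nginx_enabled: true    # deploy the nginx ingress controller
EOF
```

After changing these variables, re-running the cluster.yml playbook applies the new configuration.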

Conclusion

In this article, you learned about the Kubernetes meta distribution Kubespray. Based on Ansible, it provides infrastructure-as-code comfort to configure, control, install, and update your cluster. It supports a wide range of Kubernetes versions and provides extensive customization of the CRI, CNI, ingress, and storage components. In addition to these features, it is infrastructure-agnostic and can be used on on-premise bare metal servers, VMs, or any cloud environment.
