Breaking Down The Kubernetes API

Michael Levan - May 11 '22 - Dev Community

When an engineer, a programming language, or a CI/CD system interacts with Kubernetes, it’s interacting with an API. Nearly every time you touch Kubernetes, you’re talking directly to that API. Although it’s one API, it’s split into many groups and versions so that all Kubernetes resources can be served and evolved cleanly.

In this blog post, you’ll learn about the Kubernetes API and how it works under the hood.

Where The API Runs

In the opening section of this blog post, you read that any time you interact with Kubernetes, you’re interacting with an API. That API lives on the Control Plane (historically called the Master Node) and is served by the API server. Depending on your Kubernetes environment, you may be running Kubernetes either in the cloud or on-prem.

If you’re running Kubernetes in the cloud, like AKS, GKE, or EKS, the Control Plane is completely abstracted from you. You have no control over it other than what version of the Kubernetes API it’s running. The scaling, management, and underlying infrastructure are handled by the cloud provider.

If you’re running Kubernetes on-prem, using something like k3s or Kubeadm, you’re deploying both the Control Plane(s) and the Worker Node(s). The Control Plane you deploy consists of the API server, amongst other components like etcd and kube-scheduler. The main implementation of the Kubernetes API is called kube-apiserver. It scales horizontally, meaning you can run more instances of it and balance traffic between them. The core job of kube-apiserver is to validate and process API objects like Pods and Deployments. You’ll learn more about that in the upcoming section.
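
As a quick illustration, here’s a minimal sketch (assuming the official kubernetes Python client is installed and a kubeconfig points at your cluster) that asks kube-apiserver which version it’s running:

from kubernetes import client, config

# Assumes ~/.kube/config points at a running cluster
config.load_kube_config()

# This call, like every other interaction, is handled by kube-apiserver
version_info = client.VersionApi().get_code()
print(version_info.git_version)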

How it Works

When you interact with Kubernetes, you’re most likely using the kubectl command, which is the Kubernetes CLI. Although kubectl is the most common tool, there are many other ways to interact with Kubernetes. For example:

  • If you’re on OpenShift, you can use the oc CLI or the odo CLI
  • If you use Terraform, you can create Kubernetes Deployments, Pods, etc.
  • If you use Python, Go, or any other programming language, you can create Kubernetes Deployments, Pods, etc.

The reason why this works is that even though they’re all different tools and methodologies for interacting with k8s, they’re all doing the same thing: interacting with the Kubernetes API.

When you use kubectl, Terraform, Python, or any other tool to create a resource in Kubernetes, all you’re doing is sending a POST request to the Kubernetes API.
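
To make that concrete, here’s a rough sketch using the official kubernetes Python client (names like nginx-demo are just placeholders); creating a Pod is ultimately a POST against the core API:

from kubernetes import client, config

config.load_kube_config()

# A throwaway Pod, purely for illustration
pod = client.V1Pod(
    api_version="v1",
    kind="Pod",
    metadata=client.V1ObjectMeta(name="nginx-demo"),
    spec=client.V1PodSpec(
        containers=[client.V1Container(name="nginx", image="nginx:latest")]
    ),
)

# Under the hood this is a POST to /api/v1/namespaces/default/pods
client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)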

When you’re reading resources that already exist, like when running kubectl get pods, you’re sending a GET request to the Kubernetes API.
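
Reads work the same way; here’s a minimal sketch with the same Python client:

from kubernetes import client, config

config.load_kube_config()

# Equivalent to `kubectl get pods` -- a GET to /api/v1/namespaces/default/pods
pods = client.CoreV1Api().list_namespaced_pod(namespace="default")
for pod in pods.items:
    print(pod.metadata.name, pod.status.phase)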

Almost everything in Kubernetes is API-centric.

API Groups and Versioning

Now that you know how the API works and a few different ways to interact with the Kubernetes API, let’s dive a bit deeper into how the APIs themselves work. First, let’s talk about API groups and versions.

In the previous section, you learned that running commands like kubectl get pods reaches out to the Kubernetes API. Even though it’s typically one API, there are several API groups. The primary groups are:

  • Core Group
  • Named Groups

Each group has a different API path, for example, /api/v1 or /apis/rbac.authorization.k8s.io. You’ll see examples of these in Kubernetes Manifests in a second, but first, let’s break down the Core Group and the Named Groups.

The Core Group (also called the legacy group) contains the original Kubernetes resources, such as Pods, Services, and ConfigMaps. It’s served under /api/v1 and shows up in manifests simply as apiVersion: v1. The Named Groups contain everything added since, for example apps (where Deployments live), rbac.authorization.k8s.io (RBAC), and networking.k8s.io (Ingress). They’re served under /apis/<group>/<version>.
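
If you’re curious which groups your own cluster serves, you can ask the API server directly. Here’s a small sketch with the Python client (again assuming it’s installed and a kubeconfig is configured):

from kubernetes import client, config

config.load_kube_config()

# The Core (legacy) group is served under /api
print(client.CoreApi().get_api_versions().versions)   # typically ['v1']

# Named Groups are served under /apis/<group>/<version>
for group in client.ApisApi().get_api_versions().groups:
    print(group.name, group.preferred_version.group_version)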

Below is an example of a Deployment manifest. Deployments are often assumed to be part of the Core Group, but they actually live in the apps Named Group:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginxdeployment
  replicas: 2
  template:
    metadata:
      labels:
        app: nginxdeployment
    spec:
      containers:
      - name: nginxdeployment
        image: nginx:latest
        ports:
        - containerPort: 80

Below is an example that uses another Named Group, networking.k8s.io:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-nginxservice-a
spec:
  ingressClassName: nginx-servicea
  rules:
  - host: localhost
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginxservice
            port:
              number: 8080

As you can see, the way you define the Kubernetes Manifest in YAML isn’t any different from a syntax perspective. The only difference is the API group and version you’re using.

You can check out the full Kubernetes API reference list here.

API Extensions

Let’s say there’s something you need that doesn’t currently exist in Kubernetes. What do you do? Well, since Kubernetes is an open-source project, you can create extensions. These extensions are usually built as Controllers or Operators (an Operator is essentially a Controller that encodes operational knowledge for a specific application). Paired with Custom Resource Definitions (CRDs), they extend the Kubernetes API with new resource types whose schemas are described with OpenAPI, just like the built-in ones.

As an example, here’s a spec from HashiCorp for the Vault provider of the Secrets Store Container Storage Interface (CSI) driver.

Per the code below, you can see that it’s using a completely separate API group (secrets-store.csi.x-k8s.io) compared to built-in resources like Pods or Deployments.

apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: vault-db-creds
spec:
  # Vault CSI Provider
  provider: vault
  parameters:
    # Vault role name to use during login
    roleName: 'app'
    # Vault's hostname
    vaultAddress: 'https://vault:8200'
    # TLS CA certification for validation
    vaultCACertPath: '/vault/tls/ca.crt'
    objects: |
      - objectName: "dbUsername"
        secretPath: "database/creds/db-app"
        secretKey: "username"
      - objectName: "dbPassword"
        secretPath: "database/creds/db-app"
        secretKey: "password"
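
Once an extension like this is installed, its resources are reachable through the same API machinery as everything else. Here’s a hedged sketch with the Python client; the group, version, and plural match the SecretProviderClass above, while the default namespace is just an assumption:

from kubernetes import client, config

config.load_kube_config()

# Custom resources are served under /apis/<group>/<version>, like any Named Group
objs = client.CustomObjectsApi().list_namespaced_custom_object(
    group="secrets-store.csi.x-k8s.io",
    version="v1alpha1",
    namespace="default",
    plural="secretproviderclasses",
)
for item in objs["items"]:
    print(item["metadata"]["name"])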

Here’s a cool step-by-step guide from Red Hat on how you can get started building your very own Kubernetes Operator.
