Engineers hear a lot of “management plane this” and “single pane of glass that”, but when push comes to shove, they often aren’t getting everything they expected out of the tool or product they’re using.
No tool will ever be perfect, but a great place to start is by thinking about how to perform actions in a “Kubernetes-centric” way. What’s meant by that is a tool that’s built for Kubernetes, used in a declarative fashion, and almost feels “homegrown” (like plugins).
Open Cluster Management is one of those tools.
In this blog post, you’ll learn how to install Open Cluster Management, register clusters, and deploy workloads to those clusters.
Prerequisites
To follow along with this blog post in a hands-on fashion, you’ll need the following:
- Two Kubernetes clusters running.
If you aren’t following along in a hands-on fashion, that’s totally fine! It’s still worth reading through to see the process of using Open Cluster Management.
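If you still need two clusters, one quick option is kind, which runs Kubernetes in Docker (an assumption about your setup; any two reachable clusters work). The sketch below only prints the commands so you can review them first; remove the echo to actually create the clusters.

```shell
# Sketch, assuming the kind CLI is installed (https://kind.sigs.k8s.io).
# Prints the create commands for review; remove 'echo' to run them for real.
for name in hub cluster1; do
  echo kind create cluster --name "$name"
done
```

kind names its kubeconfig contexts kind-&lt;name&gt;, so with the names above you’d end up switching between kind-hub and kind-cluster1.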
Quick Note On Open Cluster Management
If you haven’t heard of Open Cluster Management (OCM), it’s a way to manage clusters, workload deployment, and work distribution across a fleet from one place. Some of its most-used features are:
- Manage clusters from one location. OCM uses a “management/worker” model: one cluster is registered as the Management Cluster (the cluster that sends workloads to the clusters it manages), and the regular clusters you deploy your workloads to are registered to it as worker clusters.
- Deploy workloads (like Deployments, Pods, Services, etc.) from the management cluster to the worker clusters. This gives you the ability to deploy all application stacks from one location.
OCM is a great way to have a central cluster managing all other clusters from one location. It’s also a great combination with GitOps solutions like ArgoCD.
Management Cluster Installation
Now that you know a bit about the “why” behind OCM, let’s learn how to deploy it.
First, you’ll start by installing the management cluster.
- Install the OCM CLI.
curl -L https://raw.githubusercontent.com/open-cluster-management-io/clusteradm/main/install.sh | bash
- Initialize OCM on the Management Cluster (remember, this is the cluster that’s managing the worker clusters. It’s the brains of the operation).
clusteradm init --wait
You should see an output similar to the below.
flag wait has been set
[WARNING HubApiServer check]: Hub Api Server is a domain name, maybe you should set HostAlias in klusterlet
Preflight check: HubApiServer check Passed with 1 warnings and 0 errors
[WARNING cluster-info check]: no ConfigMap named cluster-info in the kube-public namespace, clusteradm will creates it
Preflight check: cluster-info check Passed with 1 warnings and 0 errors
CRD successfully registered.
Registration operator is now available.
ClusterManager registration is now available.
The multicluster hub control plane has been initialized successfully!
Nodes To Manage
Now that the Management Cluster is configured, you’ll want to add worker clusters to deploy application stacks to.
After the cluster is fully configured, you’ll see an output like the one below.
You can now register cluster(s) to the hub control plane. Log onto those cluster(s) and run the following command:
- Log into (switch Kube contexts to) the new cluster you want to register to be managed by your Open Cluster Management cluster and run the join command.
clusteradm join --hub-token \
REALLY_LONG_TOKEN --wait --cluster-name <cluster_name>
Replace <cluster_name> with a cluster name of your choice. For example, cluster1.
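If you’re registering several clusters, the join step is easy to parameterize. Below is a minimal sketch; the token and cluster name are placeholders, on a real hub clusteradm get token prints a ready-made join command, and depending on your setup the join may also need a --hub-apiserver flag pointing at the hub’s API endpoint. It echoes the command for review; remove the echo to run it.

```shell
# Sketch: composing the join command from variables. HUB_TOKEN and
# CLUSTER_NAME are placeholders; echoed for review, remove 'echo' to run.
HUB_TOKEN="REALLY_LONG_TOKEN"
CLUSTER_NAME="cluster1"
echo clusteradm join --hub-token "$HUB_TOKEN" --wait --cluster-name "$CLUSTER_NAME"
```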
You should see an output similar to the one below as the cluster is being registered.
W0623 11:16:53.349581 22810 exec.go:217] Failed looking for cluster endpoint for the registering klusterlet: configmaps "cluster-info" not found
Preflight check: HubKubeconfig check Passed with 0 warnings and 0 errors
Preflight check: DeployMode Check Passed with 0 warnings and 0 errors
Preflight check: ClusterName Check Passed with 0 warnings and 0 errors
CRD successfully registered.
Registration operator is now available.
You’ll also see that the Open Cluster Management Namespaces get created.
kubectl get ns
NAME                                  STATUS   AGE
default                               Active   6m46s
kube-node-lease                       Active   6m46s
kube-public                           Active   6m46s
kube-system                           Active   6m46s
open-cluster-management               Active   97s
open-cluster-management-agent         Active   96s
open-cluster-management-agent-addon   Active   86s
- If you’re switching contexts in the same terminal between the Open Cluster Management cluster and the cluster you’re registering, you’ll most likely need to run this command on the Open Cluster Management cluster:
clusteradm accept --clusters cluster_name_that_you_are_registering
You’ll then see an output similar to the one below.
Starting approve csrs for the cluster aksenvironment01
CSR aksenvironment01-x6xmw approved
set hubAcceptsClient to true for managed cluster aksenvironment01
Your managed cluster aksenvironment01 has joined the Hub successfully. Visit https://open-cluster-management.io/scenarios or https://github.com/open-cluster-management-io/OCM/tree/main/solutions for next steps.
A Namespace gets created for the newly registered cluster. The Namespace is how you send workloads to the registered clusters. For example, if you register a cluster called aksenvironment01, you’ll see a Namespace with the same name.
kubectl get ns
NAME                          STATUS   AGE
aksenvironment01              Active   2m18s
default                       Active   94m
kube-node-lease               Active   94m
kube-public                   Active   94m
kube-system                   Active   94m
open-cluster-management       Active   52m
open-cluster-management-hub   Active   52m
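Under the hood, each registered cluster is also represented on the hub by a ManagedCluster resource, which you can list with kubectl get managedcluster. A rough sketch of that object is below (fields abridged; hubAcceptsClient is the field that clusteradm accept sets to true):

```yaml
# Abridged sketch of the ManagedCluster object kept on the hub
apiVersion: cluster.open-cluster-management.io/v1
kind: ManagedCluster
metadata:
  name: aksenvironment01
spec:
  hubAcceptsClient: true
```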
The Brains Of The Operation
The OCM Management Cluster and the Worker Cluster are both deployed. Let’s test it out by deploying an application stack to the Worker Cluster from the Management Cluster.
💡 Make sure you’re using the right Kube Context. It should be the one for the Management Cluster.
- Create a Manifest that contains a workload. The example below contains a Deployment and a Service to deploy Nginx Pods. Save it as manifest.yaml.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginxdeployment
  replicas: 2
  template:
    metadata:
      labels:
        app: nginxdeployment
    spec:
      containers:
      - name: nginxdeployment
        image: nginx:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginxservice
spec:
  selector:
    app: nginxdeployment
  ports:
    - protocol: TCP
      port: 80
  type: NodePort
- Create an object/resource to deploy via Open Cluster Management.
clusteradm create work deploynginx -f manifest.yaml --clusters clustername
Breaking the command down:
- work: creates a new object/resource from the Open Cluster Management CRDs.
- deploynginx: just a metadata name. You can call it whatever you’d like, but ideally it reflects the resource being deployed for a proper naming convention.
- manifest.yaml: the name of the Manifest file from the previous step.
- clusters: looks for a Namespace with your cluster name. This Namespace was created when you registered the cluster in the previous section, Nodes To Manage.
- A new resource will be created called deploynginx. You can find it by running the following:
kubectl get manifestwork
- You can see the entirety of the workload by describing it.
kubectl describe manifestwork deploynginx
You should see an output similar to the below.
Name:         deploynginx
Namespace:    default
Labels:       <none>
Annotations:  <none>
API Version:  work.open-cluster-management.io/v1
Kind:         ManifestWork
Metadata:
  Creation Timestamp:  2024-06-23T14:54:15Z
  Generation:          1
  Resource Version:    14699
  UID:                 d5390040-16c7-4628-b3f1-32227563d217
Spec:
  Workload:
    Manifests:
      API Version:  apps/v1
      Kind:         Deployment
      Metadata:
        Name:  nginx-deployment
      Spec:
        Replicas:  2
        Selector:
          Match Labels:
            App:  nginxdeployment
        Template:
          Metadata:
            Labels:
              App:  nginxdeployment
          Spec:
            Containers:
              Image:  nginx:latest
              Name:   nginxdeployment
              Ports:
                Container Port:  80
      API Version:  v1
      Kind:         Service
      Metadata:
        Name:  nginxservice
      Spec:
        Ports:
          Port:      80
          Protocol:  TCP
        Selector:
          App:  nginxdeployment
        Type:  NodePort
Events:  <none>
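The clusteradm create work command is, in effect, shorthand for creating a ManifestWork object. If you’d rather stay fully declarative, a sketch of the roughly equivalent YAML is below (abridged to the Deployment; the namespace is an assumption based on the Namespace-per-cluster model described earlier), which could be applied with kubectl apply:

```yaml
# Sketch: roughly the ManifestWork that 'clusteradm create work' builds.
# The namespace is an assumption: the Namespace matching the cluster name.
apiVersion: work.open-cluster-management.io/v1
kind: ManifestWork
metadata:
  name: deploynginx
  namespace: aksenvironment01
spec:
  workload:
    manifests:
      - apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: nginx-deployment
        spec:
          replicas: 2
          selector:
            matchLabels:
              app: nginxdeployment
          template:
            metadata:
              labels:
                app: nginxdeployment
            spec:
              containers:
                - name: nginxdeployment
                  image: nginx:latest
```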
You can see if the workloads were deployed by checking the cluster. For example, if the workloads were sent to the cluster named aksenvironment01, you’d run the following.
kubectl get deployment --context aksenvironment01
If you’d like to update anything, you can use the --overwrite flag:
clusteradm create work deploynginx -f manifest.yaml \
--clusters clustername \
--overwrite
Congrats! You have successfully built an OCM environment.