Deploying and managing Kubernetes is at the top of many organizations' minds, from startups to the enterprise. When thinking about Kubernetes, there are two top-line items: how it's being deployed and what's being deployed to it (which applications). From a cloud management and infrastructure standpoint, the biggest question is how it's being deployed and what the architecture will look like.
In this blog post, you'll learn about step one: how it's being deployed. You'll go over the most popular cloud services for Kubernetes, including the pros and cons of each.
How To Deploy Kubernetes
There are many ways to deploy Kubernetes for both organizations and personal use. Some of the popular ways are:
- A Kubernetes service in the cloud
- Locally with Minikube or Docker Desktop
- With a Raspberry Pi running something like k3s
- With a cloud/platform-agnostic solution like OpenShift or Kubernetes running on OpenStack
You'll most likely also see raw Kubernetes clusters talked about, which are clusters running on virtual machines that you manage yourself. A popular way to bootstrap one is with kubeadm.
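As a rough illustration, bootstrapping a raw cluster with kubeadm might look something like the sketch below. It assumes a container runtime and the kubeadm/kubelet/kubectl packages are already installed on each VM, and the pod CIDR, control plane endpoint, token, and hash are all placeholder values.

```bash
# On the VM that will become the control plane node
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# Copy the admin kubeconfig so kubectl works for your user
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# On each worker VM, join the cluster using the command kubeadm init prints out
# (the endpoint, token, and hash below are placeholders)
sudo kubeadm join <control-plane-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```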
What To Think About
When you’re deploying Kubernetes in any environment, big or small, there are a few things that you’ll want to think about.
The first is who's managing it. If you're on a small team, running Kubernetes in the cloud most likely makes the most sense because you don't have to manage virtual machines or infrastructure. You'll primarily be focused on deploying applications and making sure networking works properly within the cluster and between the Kubernetes applications.
If you're on a larger team, something like OpenShift might make sense if your company is fully invested in enterprise support and avoiding vendor lock-in. Cloud- and platform-agnostic scenarios are a reality for a lot of large organizations, so tying into something like Azure or AWS might not be in the cards.
If you simply want to play around with Kubernetes and learn it, Minikube is a great option. If you want a little DIY action and enjoy playing with ARM devices, pick up a few Raspberry Pis and run k3s on them. You'll not only learn a lot about what it's like to manage a raw Kubernetes cluster, but you'll have a cool story to tell.
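If you want to try either route, a minimal sketch might look like this. It assumes Docker is installed for the Minikube path and that you're comfortable piping the official k3s install script on the Pis; the server IP and token are placeholders.

```bash
# Local lab: start a single-node cluster with Minikube using the Docker driver
minikube start --driver=docker
kubectl get nodes

# Raspberry Pi: install k3s on the Pi that will act as the server
curl -sfL https://get.k3s.io | sh -

# On each additional Pi, join it as an agent (the URL and token are placeholders;
# the join token lives at /var/lib/rancher/k3s/server/node-token on the server)
curl -sfL https://get.k3s.io | K3S_URL=https://<server-ip>:6443 K3S_TOKEN=<token> sh -
```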
Kubernetes Service Breakdown
Now that you've read about Kubernetes deployments and how to think about them, let's break down the various ways you can deploy Kubernetes in the cloud. One thing you'll notice is that, nine times out of ten, they're all doing roughly the same thing; the result, the outcome, and the reason you'd use a Kubernetes service in the cloud are pretty much the same. However, it is important to understand the differences and upsides of each.
Azure Kubernetes Service (AKS)
AKS is Azure's managed Kubernetes offering. Azure manages the control plane for you, so you only have to worry about the worker nodes, which run as Azure Virtual Machines. You can manage AKS via:
- Azure portal
- PowerShell
- Azure CLI
- Terraform
- ARM templates
and pretty much any other way you can connect to Azure's API.
When you’re deploying AKS, you have Azure-specific options available to you like Azure networking, Azure Active Directory, and out-of-the-box monitoring.
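For example, spinning up a small AKS cluster with the Azure CLI might look something like this; the resource group, cluster name, region, and node count are placeholder values.

```bash
# Create a resource group to hold the cluster (name and region are placeholders)
az group create --name k8s-demo-rg --location eastus

# Create a two-node AKS cluster with the out-of-the-box monitoring add-on enabled
az aks create \
  --resource-group k8s-demo-rg \
  --name k8s-demo-aks \
  --node-count 2 \
  --enable-addons monitoring \
  --generate-ssh-keys

# Pull down credentials so kubectl talks to the new cluster
az aks get-credentials --resource-group k8s-demo-rg --name k8s-demo-aks
```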
AKS is a great option if you want an easy-to-spin-up Kubernetes cluster. You won’t have much control over it compared to EKS in AWS, but it’s a great solution if you want something quick and easy.
In preview, there is also Azure Container Apps, which is positioned as a version of "serverless Kubernetes."
Elastic Kubernetes Service (EKS)
EKS is AWS's managed Kubernetes offering. Like AKS, you don't have to worry about the control plane/API server; you just run worker nodes as EC2 instances. You create the worker nodes via a node group, which contains the EC2 instances that run as Kubernetes worker nodes. You can manage EKS via:
- AWS Management Console
- Terraform
- AWS CLI
and any other API-driven or programmatic method.
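As a rough sketch, one common way to create an EKS cluster with a managed node group is eksctl, the official CLI for EKS (not covered above); the cluster name, region, instance type, and node count below are placeholders.

```bash
# Create an EKS cluster with a managed node group of two t3.medium EC2 instances
# (cluster name, region, and sizes are placeholder values)
eksctl create cluster \
  --name demo-eks \
  --region us-east-1 \
  --nodegroup-name demo-nodes \
  --node-type t3.medium \
  --nodes 2

# eksctl writes the kubeconfig for you, so kubectl should work right away
kubectl get nodes
```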
EKS is definitely known for its complexity, but arguably that complexity is a good thing. Some engineers want a lot of abstraction, and others want more control and customization. When you're deploying EKS, you have far more control than with, say, AKS.
From a “serverless Kubernetes” perspective, there is something called Fargate Profiles for EKS. When you use a Fargate Profile, you no longer have to worry about EC2 instances as worker nodes in node groups. Instead, you just worry about deploying the application.
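A minimal sketch of the Fargate route, again using eksctl and the same placeholder names as above:

```bash
# Add a Fargate profile so pods in the "serverless" namespace run on Fargate
# instead of EC2 worker nodes (cluster, profile, and namespace names are placeholders)
eksctl create fargateprofile \
  --cluster demo-eks \
  --name demo-fargate \
  --namespace serverless

# Or create a new cluster that runs its default and kube-system pods on Fargate
eksctl create cluster --name demo-fargate-eks --region us-east-1 --fargate
```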
Google Kubernetes Engine (GKE)
It's well-known that Kubernetes was created at Google. In fact, it grew out of roughly 15 years of Google running containerized workloads internally (on systems like Borg) before Kubernetes was released to the public. Because of that, if you assume that GKE is a top-notch Kubernetes service, you're correct. GKE is definitely one of the easiest ways to deploy Kubernetes, and you get a lot of management capability around the cluster itself.
You can manage GKE in all of the standard ways mentioned above, as well as with gcloud, the command-line tool for Google Cloud Platform (GCP).
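For example, creating a standard GKE cluster with gcloud might look like this; it assumes gcloud is already authenticated and pointed at a project, and the cluster name, zone, and node count are placeholders.

```bash
# Create a two-node GKE Standard cluster (name and zone are placeholder values)
gcloud container clusters create demo-gke \
  --zone us-central1-a \
  --num-nodes 2

# Fetch credentials so kubectl points at the new cluster
gcloud container clusters get-credentials demo-gke --zone us-central1-a
```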
If you want to go the "serverless Kubernetes" route, GKE has Autopilot. Autopilot allows you to, much like Azure Container Apps and Fargate Profiles, not have to worry about the underlying worker nodes. You just deploy an application, manage its networking, and you're on your way.
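An Autopilot cluster is close to a one-liner by comparison; a sketch with placeholder cluster name, region, and workload:

```bash
# Create an Autopilot cluster: Google manages the nodes, you only manage workloads
gcloud container clusters create-auto demo-autopilot --region us-central1

gcloud container clusters get-credentials demo-autopilot --region us-central1

# From here you only deploy applications; there are no node pools to size or manage
kubectl create deployment hello --image=nginx
```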
Which Service Should You Use?
The reality is that, in most cases, all of the managed Kubernetes services are much the same; it comes down to "pick your poison." If you're already in GCP, Azure, or AWS, choose the service that cloud offers. If you want a more cloud/vendor-agnostic solution, go with something like OpenShift or Kubernetes running on OpenStack.