What Problem Is Kubernetes Actually Trying To Solve?

Michael Levan - Aug 10 '22 - Dev Community

When you comb through the marketing around Kubernetes, the “this product or that product” pitches, and the countless buzzwords that come with it, you’re left with a platform. A platform that, as you dive deeper and deeper, isn’t as easy and straightforward as abstraction-based services like managed Kubernetes in the cloud make it out to be.

Kubernetes and all of its internals, along with everything under the hood of services like AKS, EKS, GKE, etc., are extremely complex.

With all of this complexity, what is Kubernetes actually trying to solve?

The purpose of this blog post is to explain exactly why Kubernetes exists in the first place.

Technology Over Platform

Kubernetes itself is a platform.

It’s an extremely large platform. It’s been called things like:

  • The datacenter of the cloud
  • The operating system of the cloud

and for good reason - there’s a lot that happens under the hood.

With that being said, let’s take a step back and think about what Kubernetes actually does for us.

Kubernetes scales and manages Pods, which consist of containers, and containers run applications. The applications could be frontend apps, backend apps, scripts, or pretty much anything you’d like.

Kubernetes orchestrates Pods for us (as sketched in the example manifest after this list) by:

  • Scheduling the Pods to run on certain nodes.
  • Auto-scaling the Pods up or down, horizontally or vertically, for us.
  • Allowing the same application to run in multiple Pods with replicas.
  • Exposing Pods in the form of services so users and other Pods running inside of the Kubernetes cluster can access them.
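
To make those bullet points concrete, here’s a minimal manifest sketch. The names (nginx-web), the nginx:1.25 image, and the port numbers are placeholder assumptions for illustration only: the Deployment asks for three replicas of the same app, and the Service exposes those Pods inside the cluster.

```yaml
# Hypothetical example: a Deployment running three replicas of a containerized app,
# plus a Service that exposes those Pods to users and other Pods in the cluster.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-web
spec:
  replicas: 3                 # the same application running in multiple Pods
  selector:
    matchLabels:
      app: nginx-web
  template:
    metadata:
      labels:
        app: nginx-web
    spec:
      containers:
        - name: nginx
          image: nginx:1.25   # the container that actually runs the application
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-web
spec:
  selector:
    app: nginx-web            # routes traffic to the Pods created above
  ports:
    - port: 80
      targetPort: 80
```

The scheduler decides which nodes those three Pods land on, and the replica count is the knob an autoscaler (or you) turns to scale horizontally.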

Kubernetes, in essence, is an orchestrator and manager of applications.

The core problem that Kubernetes is solving is the ability to manage containerized apps at scale.

However, Kubernetes isn’t the only platform doing this. Remember, “technology over platform” is the important idea to keep in mind. Kubernetes isn’t doing anything that other platforms haven’t done before. It’s just doing it in, arguably, a much better way.

The most important thing to keep in mind when thinking about what Kubernetes is trying to solve isn’t how to use Kubernetes, but what Kubernetes actually is: an orchestrator and manager of applications.

Complexities Of A Datacenter

When you have virtual machines running in a datacenter from a bare-metal perspective, you’ll have something that looks like this:

  • A datacenter itself
  • Racks of servers
  • Network equipment (firewalls, routers, switches, etc.)
  • Lots of cables

If you’re running Kubernetes on-prem, you’re still going to have the datacenter, racks of servers, network equipment, and cables. What you won’t have are applications running inside virtualized hardware, like ESXi hosts scaled across countless servers in and out of the datacenter.

Instead, you’ll have Kubernetes clusters scaled in and out of the datacenter.

Although this may sound just as complex in the beginning, it’s not. Kubernetes gives you one system to manage, with a fraction of the time spent compared to managing virtualized hardware. You’ll also have one API to interact with instead of many UIs, automation methodologies, and configurations spread across countless servers. The other reality is that you’re probably not going to run Kubernetes on a bare-metal server. You’ll most likely use something like OpenStack or a hybrid cloud model, which adds another layer of abstraction and removes the direct bare-metal management.

In short, you’ll manage your environment with an API instead of clicking buttons and maintaining tons of different automation scripts.
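
As a small illustration of what “one API” means, everything in the cluster, from the namespaces your teams work in to the quotas that constrain them, is a declarative object submitted to the same API server. A minimal sketch, with made-up names and numbers:

```yaml
# Hypothetical example: a Namespace and a ResourceQuota, declared through the same
# Kubernetes API you use for every other resource in the cluster.
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "4"       # total CPU the namespace can request
    requests.memory: 8Gi    # total memory the namespace can request
    pods: "20"              # cap on the number of Pods in the namespace
```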

Complexities Of The Cloud

When you move workloads to the cloud, the core difference between a data center and the cloud is you’re no longer managing the hardware. Everything else is still on you to figure out, including:

  • Scaling applications
  • Replicas for the applications
  • Auto-healing the applications
  • How you manage and interact with the applications
  • How you automate everything on this list

The cloud definitely made life easier, but aside from managing hardware, the complexities are the same in the cloud.

Kubernetes is meant to make this a bit easier. Let’s take the cloud as an example. If you’re running Kubernetes in the cloud and you need to scale out a node, you can do so automatically. A worker node automatically gets created and connected to your Kubernetes cluster. Then, if that worker node is needed for Pods to run on, the Kubernetes Scheduler places them there automatically.
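
At the workload level, the piece that drives this is the resource request on a Pod spec: the Kubernetes Scheduler uses requests to find a node with room, and a cluster autoscaler (where the managed service has one configured) adds a node when nothing fits. A minimal sketch, with made-up names and sizes:

```yaml
# Hypothetical example: the scheduler places this Pod on a worker node that can
# satisfy its requests; if no node has capacity, a cluster autoscaler can add one.
apiVersion: v1
kind: Pod
metadata:
  name: api-worker
spec:
  containers:
    - name: api
      image: ghcr.io/example/api:1.0   # placeholder image
      resources:
        requests:
          cpu: 500m        # what the scheduler uses to pick a node
          memory: 512Mi
        limits:
          cpu: "1"
          memory: 1Gi
```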

Another huge, and arguably the most important, aspect of running Kubernetes in the cloud is that the control plane is abstracted for you. This is another layer of abstraction and overall another complexity that you don’t have to worry about.

Without Kubernetes, you can still have auto-scaling groups for cloud virtual machines, but you still need to create the automation and repeatable processes to get your workloads over to the virtual machines.

In short, running Kubernetes, whether in the cloud or outside of it, helps resolve the scalability issues of infrastructure.

Complexities Of Scaling Servers

The ability to scale and have high availability throughout virtualized or bare-metal servers isn’t the easiest task in the world.

Two ways of scaling servers are:

  • Hot
  • Cold

“Hot” means that servers are up and running, but on standby. They’re ready to take the workload as it comes.

“Cold” means the servers possibly have a golden image on them of the application, binaries, and dependencies that are needed, but they’re turned off.

In either of the scenarios, you have to worry about updates to those servers, managing them, paying for them, and automating the upkeep of them.

With Kubernetes, you pretty much just have to add a new worker node or a new control plane node. Depending on where you’re deploying the worker node or control plane node, you’ll still have to do the upkeep (updates, management, maintenance, etc.) on the server, but it’s far less than having to manage a server that’s running an application.

Scaling in general is no light task. There are a lot of repeatable processes that need to occur to get an application, its binaries, and its dependencies running on a server so that it looks and feels the same as the other servers. With Kubernetes, when you’re adding a new control plane node or worker node with, say, Kubeadm, it’s a one-line command that needs to be run on the server. After a few minutes, you’ve officially scaled out.
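
The exact command is generated by the cluster itself (kubeadm init prints it on the control plane), so treat this as a sketch of the shape rather than something to copy. The endpoint, token, and hash below are placeholders, shown both as the one-liner and as the equivalent kubeadm JoinConfiguration file:

```yaml
# Equivalent one-liner (placeholders):
#   kubeadm join 10.0.0.10:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
#
# The same join expressed as a config file (kubeadm join --config join.yaml):
apiVersion: kubeadm.k8s.io/v1beta3
kind: JoinConfiguration
discovery:
  bootstrapToken:
    apiServerEndpoint: "10.0.0.10:6443"    # control plane endpoint (placeholder)
    token: "abcdef.0123456789abcdef"       # bootstrap token (placeholder)
    caCertHashes:
      - "sha256:<hash-of-the-cluster-ca>"  # CA certificate hash (placeholder)
```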

Complexities Of Scaling Apps

Kubernetes helps in creating a container-centric environment.

In other environments, you may have several other components like:

  • Bare metal
  • Virtual machines
  • A little bit of both

However, with Kubernetes, you have one type of environment - containerized.

A containerized environment comes with complexity of its own, of course, but one great thing is that you only have to create and plan for one type of environment. This goes for complexities like scaling applications as well.

In a traditional environment, there’s a lot of planning that goes into scaling an application. There are entire workflows built around how an application will keep working properly if the load on a server becomes too intense. The workflow (at a high level) looks something like:

  • A new server gets created in an auto-scaling group
  • The server gets configured
  • The server installs all application dependencies
  • The application gets deployed
  • Tests are run to ensure that the application works properly

Even at a high level, those steps appear to be complex. The amount of automation code alone that goes into configuring the server is a beast.

With Kubernetes, it’s pretty much done for you. If a Pod cannot obtain resources on one worker node, the scheduler places it on another worker node. If more replicas are needed for a Pod, you simply set a minimum and maximum count for how many replicas can be deployed (minReplicas and maxReplicas on a Horizontal Pod Autoscaler). If you realize that the maximum is too low, you simply update it and re-apply the Kubernetes Manifest.
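
In manifest form, that minimum and maximum live on a HorizontalPodAutoscaler. A minimal sketch; the target Deployment name (nginx-web) and the 70% CPU target are assumptions for illustration, not recommendations:

```yaml
# Hypothetical example: scale the nginx-web Deployment between 3 and 10 replicas
# based on average CPU utilization across its Pods.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-web
  minReplicas: 3        # the minimum replica count
  maxReplicas: 10       # raise this and re-apply if it turns out to be too low
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```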

Kubernetes is certainly complex, but how easy it makes scaling applications is amazing.

Upgrading Applications

Thinking about the previous section, there is also the need to upgrade applications. The workflow (at a high level) typically looks something like this on a server:

  • SSH into the server
  • Copy the new binary to the server
  • Shut down the service
  • Add in the new binary
  • Start the service
  • Hope that it all goes well

With Kubernetes, you don’t have to do that. Most of that work is done for you with Rolling Updates.

Rolling Updates are similar in spirit to canary deployments. Let’s say you have three replicas in a Deployment. A Rolling Update will roll out the new version to one Pod, confirm that it’s healthy, then move on to the next. This approach is much more straightforward than upgrading in place on a virtual machine or bare-metal server.
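
On the Deployment itself, that rollout behavior is just a strategy block. A minimal sketch, with placeholder surge and availability numbers: with these settings, Kubernetes brings up one new Pod, waits for it to pass its readiness check, and only then removes an old one.

```yaml
# Hypothetical example: a RollingUpdate strategy on a three-replica Deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-web
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most one extra Pod during the rollout
      maxUnavailable: 0    # never drop below the desired replica count
  selector:
    matchLabels:
      app: nginx-web
  template:
    metadata:
      labels:
        app: nginx-web
    spec:
      containers:
        - name: nginx
          image: nginx:1.26   # change this tag and re-apply to trigger the rollout
          readinessProbe:
            httpGet:
              path: /
              port: 80
```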

Manual Complexities

Manual efforts mean:

  • Slower build and deployment times
  • Human intervention
  • No repeatable process

Let’s break down each of these.

If you’re attempting to build and deploy an application on a server, it’s going to look something like this - create a server, deploy an operating system, run updates, install app dependencies, deploy the app, and troubleshoot the app. This is not only hours but perhaps days of effort. Instead, it makes sense to containerize an app, package up the dependencies, and deploy it out to be managed by an orchestrator. The feedback loop of what’s wrong with the app and what needs to be fixed is vastly faster than the original, manual effort of deploying a server.

Humans make mistakes, and that won’t ever stop. Human intervention when it comes to application deployments is awful, and truthfully, there’s no reason for it anymore. Twenty years ago, engineers needed to be at a keyboard deploying an app. Now, there are so many automation platforms and orchestrators that it doesn’t make sense to spend time doing things manually. Instead, train systems to do the work for you. Kubernetes is one of those systems.

If you’re manually doing work, that means no repeatable processes are being created. Not only is it not great for you because you’re putting out fires instead of creating value-driven work, but it’s not great for your team. If you’re on vacation or if you quit, the rest of your team has no idea what makes the system you were working on “tick”. Orchestration platforms and creating repeatable processes help mitigate this risk. It’s a huge lift off of any engineer's shoulders.

In short, Kubernetes is configuration management for your apps, and one of the reasons it exists is to remove the manual complexity of manually configuring apps.

Desired State

In any environment, there are two types of state:

  • Current state
  • Desired state

Sometimes, the current state of your platforms and applications is the desired state. Most times, it isn’t, whether because what was deployed didn’t work as expected or because someone SSH’d into a server and manually changed something. Then, if the environment is re-deployed, the state won’t match the manual change.

Kubernetes aims to solve this with Controllers, which look at the current state and reconcile it with the desired state. Take the Deployment Controller, for example. It will look at an app that’s supposed to have three replicas, and if, for whatever reason, only two exist, it will take action and ensure that the application gets a third replica (a third Pod).

There are other Kubernetes-centric systems that do this as well, for example, GitOps. GitOps takes the idea behind a Kubernetes Controller and applies the same method to a source control repository, making the repository the desired state. GitOps methodologies “check in” with the source control repo to confirm that what’s deployed to Kubernetes matches what’s in source control. If it doesn’t, the GitOps controller (like Flux) kicks into gear and deploys whatever is needed for the Kubernetes environment to match the source control repo.
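
As a rough sketch of what that looks like with Flux, two of its custom resources do the “checking in”: a GitRepository that points at the repo, and a Kustomization that keeps a path in that repo applied to the cluster. The URL, branch, path, and intervals below are placeholder assumptions:

```yaml
# Hypothetical example: Flux watches a Git repository and reconciles the cluster
# against the manifests stored in it.
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: app-repo
  namespace: flux-system
spec:
  interval: 1m                                 # how often to check the repo
  url: https://github.com/example/app-config   # placeholder repository
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: app
  namespace: flux-system
spec:
  interval: 5m              # how often to reconcile cluster state
  sourceRef:
    kind: GitRepository
    name: app-repo
  path: ./deploy            # directory of manifests in the repo to apply
  prune: true               # remove resources that were deleted from the repo
```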

The Push For Microservices

Monolithic applications are apps that are tightly coupled, which means if any update needs to occur on one part of the app, it’s going to impact the rest of the application.

Microservices are carved out pieces of an app. For example, let’s say you have a frontend, a middleware, and a backend. In a monolithic environment, all three of those pieces would be together in the same codebase. In a microservice, they’re split into three different parts. That means if you want to update one part, the rest of the app isn’t impacted.

Because of the way that Pods work, it essentially doesn’t make sense not to split up your application into microservices, or at the very least to split the codebase into separate repos and build a container image for each one.

Because of how straightforward Kubernetes makes it to create a Pod, it has given a ton of organizations the push they’ve needed to adopt microservices.

Closing Thoughts

When implementing Kubernetes, the primary thing to remember is that it’s complex in the beginning, but as you learn it, you realize it’s only complex because it’s a new way of managing applications. Once you truly go in-depth with Kubernetes, you understand that it actually removes a lot of the complexities that standard environments face and, in turn, lets you focus on more value-driven work instead of putting out fires or spending a ton of your time on manual intervention.
