Hi everyone, my name is Michael Levan. Over the course of three months, I’ve been working with OpenMetal to conduct research and perform tests around OpenStack, but primarily, Kubernetes on OpenStack.
I started this journey to really dive in and figure out if OpenStack was a good solution for Kubernetes to run on, especially with all of the Managed Kubernetes Services out in the wild.
In this blog post, I’m going to explain my findings and what stuck out to me, along with my experience with OpenMetal.
OpenMetal Overall
First and foremost, I had never really done anything with OpenStack prior to this engagement. Because of that, I felt I could come in with “fresh eyes,” take a look at the entire OpenStack experience, and give honest feedback on how Kubernetes runs on OpenStack.
The environment that was configured for me in OpenMetal’s private cloud was:
- Three physical servers, each with:
- Intel(R) Xeon(R) D-2141I CPU @ 2.20GHz
- 128GB of RAM
- 3.2TB of NVMe storage
As you can see, it was a pretty good stack that provided plenty of reliability, redundancy, and high availability. Throughout my OpenStack journey on OpenMetal, my environment didn’t go down once.
The idea of running OpenStack is daunting for many organizations. Quite frankly, it’s not the easiest thing in the world and requires a ton of engineering effort. With OpenMetal, the heavy lifting is abstracted away from you and all you have to worry about are the value-driven engineering efforts.
Kubernetes On OpenStack
The first task that I wanted to tackle when I dove into OpenStack was to understand the “why” behind organizations wanting to use Kubernetes on OpenStack instead of a Managed Kubernetes Service or rolling their own cluster. I’ll admit, I was a bit skeptical at first about finding a reason. As I dove deeper, I found several reasons.
During my research, I came across an article showing that the tech team at Mercedes (the car manufacturer) was running over 900 Kubernetes clusters, all on OpenStack. This piqued my interest and I had to take a closer look. I came to realize that the team runs Kubernetes in this fashion because combining automated workflows in OpenStack with control over the overall infrastructure and network implementation was crucial to their success. They needed certain specs, node sizes, and clusters on demand. They couldn’t wait for quota limits to get lifted. Kubernetes is a true platform in their environment and they needed to treat it as such.
Diving in a bit deeper, I also looked at Telco providers, and many of the same rules apply to Telco as they do to Mercedes. Telco providers want to manage their infrastructure in a cloud-native fashion while still keeping control of that infrastructure themselves. Think about Telco providers - they have a lot of network traffic going back and forth constantly. They can’t allow that traffic to be controlled by a third party. They must control it themselves, but at the same time, they don’t want to run plain bare metal. They want the cloudy feel while managing the underlying components. That’s where OpenStack shines.
In short, Kubernetes on OpenStack gives you a healthy combination of “managing it yourself” and feeling like you’re in a regular cloud. You get all of the automation and repeatability, along with the ability to truly manage your infrastructure the way you want to.
Kubernetes Installation Types
After I understood the use case for Kubernetes on OpenStack, I dove into installation methods. There are a ton of different ways to deploy Kubernetes on OpenStack. Popular options like Rancher and Kubespray work great, as do the other mainstream installation methods. I decided to go with two installation methods to get a feel for how it all worked:
- Magnum templates
- Kubeadm
Magnum is a way to create orchestration templates. It’s not just for Kubernetes - it also works for other orchestrators like Docker Swarm and Mesos. However, because I was primarily testing Kubernetes components, I used Magnum for Kubernetes. Overall, the experience was fantastic. Magnum templates give you the ability to create a golden image of sorts, except the “image” is defined in a text-based template. You can specify the base image you want to use (like Ubuntu), the node size, the CNI, and a lot more. Then, all I had to do was click a “create” button and, a little while later, my Kubernetes cluster was up and running. Pretty straightforward.
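For anyone curious what that looks like outside of the Horizon dashboard, the same flow can be driven from the OpenStack CLI. Treat this as a rough sketch - the template name, image, flavors, external network, and keypair below are placeholder assumptions you’d swap for whatever exists in your own project.

```bash
# Create a Kubernetes cluster template (image, flavors, and network names
# are placeholders; use values that exist in your OpenStack project).
openstack coe cluster template create k8s-template \
  --coe kubernetes \
  --image fedora-coreos \
  --external-network public \
  --master-flavor m1.medium \
  --flavor m1.medium \
  --network-driver calico \
  --docker-volume-size 20

# Spin up a cluster from that template.
openstack coe cluster create my-k8s-cluster \
  --cluster-template k8s-template \
  --master-count 1 \
  --node-count 2 \
  --keypair my-keypair

# Once the cluster reports CREATE_COMPLETE, pull down a kubeconfig for it.
openstack coe cluster config my-k8s-cluster
```

The nice part is that the template is reusable, so every cluster you stamp out from it comes up with the same base image, sizing, and networking choices.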
The next option I went with was Kubeadm, because I wanted a good mix of a raw Kubernetes cluster and automation. I spun up three servers in OpenStack running Ubuntu, created the control plane and the two worker nodes just like I would on any virtual machine, ran the standard Kubeadm installation process, and everything worked as expected. I had zero problems performing a Kubeadm installation and overall management on OpenStack. It was definitely a pleasant experience.
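The rough shape of that process, as a hedged sketch, looked something like the following. The instance names, image, flavor, network, and keypair are placeholders, and I’m skipping over installing a container runtime and the kubeadm/kubelet/kubectl packages on each node.

```bash
# Create three Ubuntu instances: one control plane and two workers.
# Image, flavor, network, and key names are placeholders for your environment.
for node in k8s-control-plane k8s-worker-1 k8s-worker-2; do
  openstack server create "$node" \
    --image ubuntu-22.04 \
    --flavor m1.large \
    --network private \
    --key-name my-keypair
done

# On the control plane node, initialize the cluster.
sudo kubeadm init --pod-network-cidr=192.168.0.0/16

# On each worker node, run the join command that `kubeadm init` prints,
# which looks roughly like this (token and hash come from your own output):
# sudo kubeadm join <control-plane-ip>:6443 --token <token> \
#   --discovery-token-ca-cert-hash sha256:<hash>
```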
Overall, OpenStack gives you a great out-of-the-box experience with Magnum templates, and you also have the ability to roll your own cluster with the popular Kubernetes deployment tools/platforms out there.
Key Kubernetes Components
The last primary part of Kubernetes on OpenStack I tested was key Kubernetes integrations, like secrets providers and the Container Storage Interface (CSI). As I expected after everything I had learned about Kubernetes on OpenStack so far, the experience was good.
First, I started by figuring out the CSI situation. Any app that needs to store data requires a CSI driver. Going through the implementation details, I found that there are two CSI drivers available for OpenStack - Cinder (block storage) and Manila (shared file systems). Cinder remains the most popular option at the time of writing, so I went with that solution. Luckily, the installation process was pretty straightforward. There’s a set of Kubernetes manifests on GitHub with everything needed to install the driver, so it was just a matter of applying them and getting the CSI driver running. Overall, a pleasant experience.
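For reference, here’s a minimal sketch of that installation, assuming the upstream cloud-provider-openstack manifests and the credential-secret layout from its examples (double-check both against the current repo, as names and paths may have shifted).

```bash
# Clone the repo that hosts the Cinder CSI manifests.
git clone https://github.com/kubernetes/cloud-provider-openstack.git
cd cloud-provider-openstack

# The driver needs a secret with your OpenStack credentials (a cloud.conf file);
# the secret name and key here follow the project's examples, so verify them.
kubectl create secret generic cloud-config \
  --from-file=cloud.conf \
  -n kube-system

# Apply the controller plugin, node plugin, RBAC, and CSIDriver objects.
kubectl apply -f manifests/cinder-csi-plugin/

# A StorageClass that provisions Cinder volumes through the new driver.
cat <<'EOF' | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cinder-csi
provisioner: cinder.csi.openstack.org
EOF
```

With the StorageClass in place, any PersistentVolumeClaim that references it should end up backed by a Cinder volume in OpenStack.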
Next, I dove into secrets management on OpenStack. You essentially have a couple of options - use a third-party service like HashiCorp Vault, or use Barbican, OpenStack’s built-in secrets manager. Because I wanted to stay within OpenStack-native tooling, I went with Barbican. This was a great solution to the overall Kubernetes Secrets problem because, by default, Kubernetes Secrets (the Opaque type) are only base64-encoded and stored unencrypted. As this is a serious problem for every organization, Barbican was needed.
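To make that a bit more concrete, the integration goes through the barbican-kms-plugin from the cloud-provider-openstack project plus the standard Kubernetes EncryptionConfiguration for encrypting Secrets at rest. Treat the following as a sketch only - the socket path and file locations are assumptions and depend entirely on how the plugin is deployed in your cluster.

```bash
# Write an EncryptionConfiguration that tells the API server to encrypt
# Secrets through a KMS provider backed by the Barbican KMS plugin.
# The provider name and socket path below are assumptions; match them to
# however the barbican-kms-plugin is actually running in your cluster.
sudo tee /etc/kubernetes/encryption-config.yaml > /dev/null <<'EOF'
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - kms:
          name: barbican
          endpoint: unix:///var/lib/kms/kms.sock
          cachesize: 500
          timeout: 3s
      - identity: {}
EOF

# Then point the kube-apiserver at it by adding this flag to its manifest:
#   --encryption-provider-config=/etc/kubernetes/encryption-config.yaml
```

Once the API server comes back up with that flag, newly created Secrets should be encrypted through Barbican before they ever land in etcd.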
One thing I’ll mention here is that there isn’t a lot of documentation around implementing certain aspects of Kubernetes with OpenStack. For example, going through the Barbican/Kubernetes implementation, I reached out to the OpenMetal engineering team a ton, and they had to write up documentation around the process because it didn’t exist anywhere else. For pieces of the puzzle like this, I believe that although the options exist, there’s an extremely high barrier to entry for newcomers to the OpenStack space. The documentation definitely needs to be improved. This has nothing to do with OpenMetal, but instead with the overall OpenStack community.
Wrapping Up
Closing out my engagement with OpenMetal, I’m happy to say that diving into Kubernetes on OpenStack was an incredibly successful experience. OpenStack certainly isn’t the most popular solution on the market right now when it comes to Kubernetes, but I truly believe it deserves a much closer look in the coming months and years, as it’s an incredible middle ground between managing the environment yourself and having the cloudy feel of automation and repeatability.