OpenShift For Managed Kubernetes: Up and Running Guide

Michael Levan - Apr 20 '22 - Dev Community

When you’re thinking about deploying Kubernetes, there are two primary methods:

  • A cloud-based service like EKS, AKS, or GKE
  • On-prem Kubernetes

However, those aren’t the only methods. There’s a middle ground that lets you use the same Kubernetes internals without having to worry about the cluster itself, in a Platform-as-a-Service (PaaS) kind of way. One such option is Red Hat’s OpenShift.

In this blog post you’ll learn about what OpenShift is, why you’d use it, and how to get a full environment up and running to deploy an application.

What is OpenShift?

OpenShift isn’t exactly a Kubernetes alternative, because it’s still Kubernetes under the hood, but it is an alternative to the standard way of running Kubernetes. OpenShift is considered both a PaaS and a container platform, so you get the usability of a PaaS along with the internals of a container platform like Kubernetes. The goal is more or less to get the best of both worlds: you don’t have to manage a cluster in the cloud or on-prem yourself, and you can deploy it anywhere (AWS, Azure, on-prem, etc.).

Out of the box, OpenShift offers a few components:

  • Monitoring
  • Policy management
  • Standard security practices
  • Compatibility with all Kubernetes workloads
  • Helm chart support
  • A pretty straightforward UI

and a lot more great features.

One of the best features, other than the application-specific workloads, is the ability to run OpenShift anywhere. If you’re using a cloud-based Kubernetes service like AKS or EKS, you’re locked into that cloud. With OpenShift, you can run a truly agnostic environment (other than having to use Red Hat’s enterprise product).

Why Would You Use OpenShift?

There are a few reasons why you’d want to use OpenShift:

  • You can run it anywhere in the cloud or on-prem
  • You want to use Kubernetes, but you don’t want to create and manage a Kubernetes cluster
  • You’re already in the Red Hat ecosystem
  • Monitoring and logging are available out of the box
  • Cost management directly from the OpenShift portal
  • Built-in security features like RBAC

To break the reasons down a bit more:

If you’re an engineer/developer, you’ll like the idea of using Kubernetes without having to manage Kubernetes. You may also love not being tied to a specific cloud or on-prem environment if you want to stay vendor-agnostic. Red Hat’s enterprise support is also a big help with troubleshooting whenever you run into a problem.

If you’re a manager/leader, you’ll like the enterprise support and cost management features. The security pieces probably play a big role for you too if there are certain compliance policies that need to be met.

Signing Up For Free

Now that you know what OpenShift is and why you may be interested in it, let’s get down to the hands-on goodness of this blog post. First things first, you’ll need to sign up.

The best way to get started in a development environment is with the sandbox.

You can sign up using an email address (you’ll have to create a free Red Hat account as well).

To sign up for the OpenShift Developer Sandbox, go to this link: https://developers.redhat.com/developer-sandbox/get-started

Once you’ve signed up, you’ll have four options:

  • Provision a dev cluster
  • Deploy a sample app
  • Edit the code in an IDE
  • Explore the developer experience

Choose the first option to provision a dev cluster, and once you’re done, click the red Start using your sandbox button.

You’ll then be brought to the OpenShift page.

A Look Around The OpenShift UI

The UI is broken up into different sections for different workloads. For example, when you deploy a Kubernetes Deployment spec, it’s in a different place than the Kubernetes Service spec.

Although the Kubernetes specs are in different places, they still all work together. For example, if you have a Kubernetes Deployment spec that needs a Service, you can create the Deployment manifest and then the Service manifest.
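
The glue between them is labels: the Service’s selector has to match the labels on the Pods that the Deployment creates. Here’s a minimal sketch of just that linkage (app: my-app is an illustrative placeholder; the full manifests appear later in this post):

# Deployment excerpt: the Pods it creates carry this label
spec:
  template:
    metadata:
      labels:
        app: my-app
---
# Service excerpt: traffic is routed to any Pod with a matching label
spec:
  selector:
    app: my-app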

The portal is broken up into two views:

  • Administrator
  • Developer

Poke around the UI a little bit. Click buttons, see how things work, and get familiar with what options are under each portal section.

Administrator Portal

The administrator portal is pretty much where all of the Kubernetes workloads live: deploying a Deployment spec, a Service spec, and Pods; performing networking and storage tasks like Ingress and PersistentVolumes; and preparing and deploying an application via Kubernetes manifests.
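
For example, here’s the kind of manifest you’d create under the storage section of the administrator portal. This is a minimal PersistentVolumeClaim sketch; the name, size, and namespace are placeholders rather than values from the sandbox:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc          # illustrative name
  namespace: mikel1992-dev   # use your own sandbox namespace
spec:
  accessModes:
    - ReadWriteOnce          # a single node can mount it read/write
  resources:
    requests:
      storage: 1Gi           # placeholder size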

Developer Portal

The developer portal is where you’ll find observability (monitoring), Helm charts, ConfigMaps, and Secrets. It can feel a little odd at first, because several of the sections in the administrator portal seem like they belong in the developer portal, but that’s the way it’s set up.

You can also import code bases via Git and import YAML manifests via the developer portal.
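
As a quick illustration, the import YAML screen accepts any standard Kubernetes manifest. Here’s a minimal ConfigMap sketch you could paste in (the name and values are made up for this example):

apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config       # illustrative name
  namespace: mikel1992-dev   # use your own sandbox namespace
data:
  APP_MODE: "development"
  LOG_LEVEL: "info"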

Deploying an App

Now that you’re familiar with the OpenShift UI, let’s learn how to deploy an application. To keep things simple, you can use a sample application. The sample application won’t do much, but you’ll be able to see how all of the Kubernetes specs (Deployments, Services, Pods, etc.) work together in OpenShift.

Deploying a Deployment Spec

First, go to the Administrator Portal and click Workloads —> Deployments —> Create Deployment.

You’ll see a Kubernetes manifest like the one below:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
  namespace: mikel1992-dev
spec:
  selector:
    matchLabels:
      app: httpd
  replicas: 3
  template:
    metadata:
      labels:
        app: httpd
    spec:
      containers:
        - name: httpd
          image: >-
            image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest
          ports:
            - containerPort: 8080

Take note of the app: httpd key/value pair; it’s the label that the Service you create in the next section will use to select these pods.

Click the blue Create button and you’ll see that 3 pods were created.

Deploying a Service

Now that the Deployment spec is created, think about this: what if you need to attach a Service spec to the deployment? You can’t do it via the same manifest under Workloads —> Deployments, so you need to create the Service spec another way.

To do that, under the Administrator Portal, go to Networking —> Services and click the blue Create Service button.

Notice on line 8 of the Kubernetes manifest that there’s a key/value pair for app: MyApp. Change it to app: httpd to match the label in the Deployment spec.

The manifest should now look like the below:

apiVersion: v1
kind: Service
metadata:
  name: example
  namespace: mikel1992-dev
spec:
  selector:
    app: httpd
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376

Click the Create button.

You’ll now see that the sample service is created and the three pods that were created in the Deployment spec are connected.
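
One thing worth pointing out: the default Service template uses targetPort: 9376, while the httpd container in the Deployment listens on port 8080. The selector alone is enough for the pods to show up as connected, but for traffic to actually reach the container, you’d typically point targetPort at the container’s port. A minimal sketch of the adjusted spec section, assuming the httpd Deployment from above:

spec:
  selector:
    app: httpd
  ports:
    - protocol: TCP
      port: 80          # port the Service exposes inside the cluster
      targetPort: 8080  # matches the containerPort in the Deployment spec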

Wrapping Up

The cool thing about OpenShift is that it’s Kubernetes without the whole “having to manage the cluster” piece. You get the best of both the PaaS and container platform worlds while being able to run it anywhere you want. You can use the standard Kubernetes manifests that you’ve been using and have one centralized UI to manage it all.

It is a Red Hat enterprise solution, so don’t expect it to be cheap. That may be the only downside for many small and mid-sized organizations, but if the money is available, at least take OpenShift for a test drive.
