Unlock Real-Time Data Streaming in 5 Minutes with Apache Kafka & Quarkus

Emily Johnson - Oct 11 - Dev Community

Building Quarkus microservices that use Apache Kafka within a Kubernetes cluster involves several moving parts and requires careful consideration. Fortunately, Quarkus provides built-in support for MicroProfile Reactive Messaging, which makes sending messages to and receiving messages from Apache Kafka straightforward.
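To give a sense of what this looks like in code, here is a minimal sketch of the sending side with MicroProfile Reactive Messaging. The class and channel names are hypothetical, and the imports assume a recent Quarkus release (jakarta namespace, MicroProfile Reactive Messaging 2.x):

import jakarta.enterprise.context.ApplicationScoped;
import jakarta.inject.Inject;
import org.eclipse.microprofile.reactive.messaging.Channel;
import org.eclipse.microprofile.reactive.messaging.Emitter;

@ApplicationScoped
public class ArticleProducer {

    // Hypothetical outgoing channel; it would be mapped to a Kafka topic via
    // mp.messaging.outgoing.article-created-out.* entries in application.properties.
    @Inject
    @Channel("article-created-out")
    Emitter<String> emitter;

    public void publish(String articleId) {
        // Sends the payload to the channel, and thus to the Kafka topic.
        emitter.send(articleId);
    }
}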

While running Kafka locally with Docker Compose is a great starting point, microservices developers often need to access Apache Kafka in a Kubernetes environment or as a hosted Kafka service. This is where things can get tricky.

Option 1: Accessing Apache Kafka in Kubernetes

The open-source Strimzi project offers a solution, providing container images and operators for deploying Apache Kafka on Kubernetes and Red Hat OpenShift. A series of informative blog posts on Red Hat Developer, titled "Accessing Apache Kafka in Strimzi," outlines the process of utilizing Strimzi. To access Kafka from applications, developers can choose from several options, including NodePorts, OpenShift routes, load balancers, and Ingress.

However, these options can be overwhelming, especially when all you need is a simple development environment to create reactive applications. In my case, I wanted to set up a basic Kafka server within my Minikube cluster.

A quick start guide is available for deploying Strimzi to Minikube, but it lacks clear instructions on how to access it from applications.

To fill this gap, I created a simple script that deploys Kafka to Minikube in under 5 minutes. The script is part of the cloud-native-starter project. To give it a try, simply run the following commands:

$ git clone https://github.com/IBM/cloud-native-starter.git
$ cd cloud-native-starter/reactive
$ sh scripts/start-minikube.sh
$ sh scripts/deploy-kafka.sh
$ sh scripts/show-urls.sh

The last command prints the address of the Kafka bootstrap server, which you’ll need in the next step. All Kafka resources are created in the ‘kafka’ namespace.

To access Kafka from Quarkus, the Kafka connector has to be configured. When running the Quarkus application in the same Kubernetes cluster as Kafka, use the following configuration in ‘application.properties’, where ‘my-cluster-kafka-external-bootstrap’ is the service name, ‘kafka’ the namespace, and ‘9094’ the port.

kafka.bootstrap.servers=my-cluster-kafka-external-bootstrap.kafka:9094
mp.messaging.incoming.new-article-created.connector=smallrye-kafka
mp.messaging.incoming.new-article-created.value.deserializer=org.apache.kafka.common.serialization.StringDeserializer
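For reference, a channel configured like this can be consumed with a plain annotated method. Here is a minimal sketch, assuming the payload is a String (matching the deserializer above); the class and method names are hypothetical:

import jakarta.enterprise.context.ApplicationScoped;
import org.eclipse.microprofile.reactive.messaging.Incoming;

@ApplicationScoped
public class ArticleConsumer {

    // The channel name matches the mp.messaging.incoming.new-article-created.*
    // entries above; each Kafka record's value is delivered as a String.
    @Incoming("new-article-created")
    public void onNewArticle(String message) {
        System.out.println("New article created: " + message);
    }
}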

When developing the Quarkus application locally, Kafka in Minikube is accessed via a NodePort. In this case, replace the kafka.bootstrap.servers value with the address printed by the following commands:

$ minikubeip=$(minikube ip)
$ nodeport=$(kubectl get svc my-cluster-kafka-external-bootstrap -n kafka --ignore-not-found --output 'jsonpath={.spec.ports[*].nodePort}')
$ echo ${minikubeip}:${nodeport}

Option 2: Using Kafka as a Managed Cloud Service

Leading cloud providers offer fully managed Kafka services, so you don't have to operate Kafka yourself. For example, IBM Cloud's managed Kafka service, Event Streams, offers a free Lite plan that gives you a single partition on a multi-tenant Event Streams cluster. All you need is a free IBM ID; no credit card is required.

Like most production-ready Kafka services, Event Streams requires a secure connection, which means some additional configuration in the 'application.properties' file:

kafka.bootstrap.servers=broker-0-YOUR-ID.kafka.svc01.us-south.eventstreams.cloud.ibm.com:9093,broker-4-YOUR-ID.kafka.svc01.us-south.eventstreams.cloud.ibm.com:9093,...MORE-SERVERS
mp.messaging.incoming.new-article-created.connector=smallrye-kafka
mp.messaging.incoming.new-article-created.value.deserializer=org.apache.kafka.common.serialization.StringDeserializer
mp.messaging.incoming.new-article-created.sasl.mechanism=PLAIN
mp.messaging.incoming.new-article-created.security.protocol=SASL_SSL
mp.messaging.incoming.new-article-created.ssl.protocol=TLSv1.2
mp.messaging.incoming.new-article-created.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="token" password="YOUR-PASSWORD";

For this configuration you need two pieces of information: the list of Kafka bootstrap servers and your Event Streams service password. You can obtain both from the Event Streams web interface or via the IBM Cloud CLI.

My colleague Harald Uebele has written a script that sets up the service and retrieves these two values programmatically.

Next Steps

The scripts mentioned in this article are part of the cloud-native-starter project, which shows how to develop reactive applications with Quarkus. For a more detailed introduction to the project, I recommend reading my previous article.

Take a look at the code and try it out yourself.
