Building a Scalable Microservices Application with Java Spring Boot and Angular

Nathan Rajkumar - Oct 30 - Dev Community

PART 2: Playing With Lego

Make sure to check out Part 1 if you're following along on our little adventure; otherwise, tally ho!


Okay, it's day one of sprint 1, and we are sitting in the boardroom with a whiteboard and a laptop. Where and how do we start?

We need to keep in mind a few guiding principles when designing a new system to ensure that we follow design patterns and best practices.


Why? Each choice can make or break future decisions and processes with regard to maintainability, scalability, and fault tolerance. Plus, these are tried, tested, and true software design concepts, so you know, there's that.

The Big Question

Will this cost money to build? Locally, no; we will be using as many free-tier options as we can. However, I should mention that if we want this application to scale to more traffic, then yeah, there will be costs involved. Leverage at your own discretion.

Okay, let's start putting it together…

Our first component should be our front end…

Angular front end: Provides the user interface for shipping order creation, assignment, and notifications. It also sends requests to the API Gateway.

Which then routes to our API Gateway…

API Gateway (Zuul): Manages routing, authentication, and load balancing. It uses our service discovery to dynamically locate our backend services.

The new order shipping system, according to the CEO, should create orders, assign those orders to authorized users, and deliver notifications on any status updates in real time. In other words…

Order Service: Manages orders, including creation and assignment, and publishes OrderCreated events to Kafka.

User Service: Manages authentication, authorization, and user profiles.

Notification Service: Listens to Kafka topics like order-created to trigger notifications.

Each of these services registers itself with our Service Discovery so other services can discover and connect to it without hardcoded endpoints. Why? If a service cannot be reached at its URL, or if that URL ever changes, it is a pain to update it manually everywhere, especially in a large application. That ultimately hurts scalability and causes headaches. So, we let our Service Discovery handle our service endpoints dynamically and never play the game of “what's the URL again?”

We also need to add some services to help with our microservice architecture…

Service Discovery (Eureka): A central directory (think of a phone book) where each microservice registers itself on startup. It also periodically checks service health and de-registers any failed instances to prevent sending requests to down services. Popular choices include Eureka, Consul, and Spring Cloud Service Registry. We are using Eureka for this build. Why? Eureka is integrated into Spring Cloud and provides self-registration, health monitoring, and easy API-based lookup.
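
To make that concrete, here's a minimal sketch of the Eureka server side using Spring Cloud Netflix (the class name is just a placeholder, not this project's final code):

```java
// Minimal Eureka server sketch using Spring Cloud Netflix
// (requires the spring-cloud-starter-netflix-eureka-server dependency).
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.server.EnableEurekaServer;

@SpringBootApplication
@EnableEurekaServer // turns this Spring Boot app into the service registry
public class ServiceDiscoveryApplication {
    public static void main(String[] args) {
        SpringApplication.run(ServiceDiscoveryApplication.class, args);
    }
}
```

Each microservice then pulls in the Eureka client starter and registers itself on startup; the services only need to know the registry's address in their configuration, not each other's URLs.
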
Kafka Broker: Manages topics for event-driven communication among microservices. What are Kafka topics? A topic is a named stream of events that producers publish to and consumers subscribe to. Topics support asynchronous, event-driven interactions that reduce direct dependencies between services; in other words, it's a non-blocking way to send data back and forth without waiting on any synchronous processes. The broker durably queues up the messages published to a topic, and every consumer listening on that topic processes each message as it arrives.
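
As a rough sketch (the topic name, partition count, and replication factor below are illustrative assumptions), declaring the topic with Spring Kafka could look like this:

```java
// Illustrative sketch: declaring the order-created topic with Spring Kafka
// (requires the spring-kafka dependency; values here are placeholders).
import org.apache.kafka.clients.admin.NewTopic;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.TopicBuilder;

@Configuration
public class KafkaTopicConfig {

    @Bean
    public NewTopic orderCreatedTopic() {
        return TopicBuilder.name("order-created") // topic the Notification Service listens to
                .partitions(3)   // more partitions allow more parallel consumers
                .replicas(1)     // fine for local development; raise for production
                .build();
    }
}
```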

Centralized Logging and Monitoring (ELK Stack and Prometheus): We are using the ELK (Elasticsearch, Logstash, and Kibana) stack to consolidate all the logs from our services for easier debugging and troubleshooting. For monitoring, Prometheus (commonly paired with Grafana dashboards) lets us track real-time system metrics so we can make sure our application stays performant and, most importantly, available.

So far, it should look like this…

[Architecture diagram drawn in lucid.app]

Breakdown

Let's take a look at some core principles and patterns:

Single Responsibility Principle (SRP)

Each microservice in this system has a focused responsibility:

Order Service is only concerned with managing orders
Notification Service exclusively handles notifications
User Service is responsible for managing user data

Why? Here we want to make sure we separate our concerns. Doing so reduces interdependencies, ensuring that changes in one service have minimal impact on others. SRP aligns with modularity, which simplifies testing and allows each service to scale independently if needed.
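
As a small illustration of SRP (endpoints, class names, and the in-memory store below are placeholders, not this project's final API), the Order Service only ever exposes order operations; anything user- or notification-related is someone else's job:

```java
// Illustrative only: the Order Service deals with orders and nothing else.
// Users, auth, and notifications belong to the other services.
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import org.springframework.web.bind.annotation.*;

@RestController
@RequestMapping("/orders")
public class OrderController {

    public record Order(String id, String description, String assignedUserId) {}

    // Placeholder in-memory store; the real service would use its own database.
    private final List<Order> orders = new CopyOnWriteArrayList<>();

    @PostMapping
    public Order create(@RequestBody Order order) {
        orders.add(order);
        return order;
    }

    @GetMapping
    public List<Order> findAll() {
        return orders;
    }
}
```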

Database per Service Pattern

Every service in this architecture has its own database:

  • The User Service uses a MySQL relational database to store structured user information

  • The Order Service leverages a NoSQL database like MongoDB for fast, flexible storage of order data

  • The Notification Service uses our message queue system in our published topics to send and receive notifications when triggered.

Why? This pattern provides data isolation, which means we can enhance data resilience by eliminating shared database dependencies between services. Each service can choose the database type that best fits its needs, which is particularly helpful when thinking about scaling our application.
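
A rough sketch of what that looks like in code (entity names and fields are illustrative, and depending on your Spring Boot version the JPA annotations come from javax.persistence instead of jakarta.persistence): the User Service maps a relational entity to MySQL while the Order Service maps a document to MongoDB, each behind its own repository and its own datasource.

```java
// Illustrative sketch of database-per-service (these classes live in *different* services).
import jakarta.persistence.Entity;
import jakarta.persistence.Id;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.mongodb.core.mapping.Document;
import org.springframework.data.mongodb.repository.MongoRepository;

// --- User Service: structured user data in MySQL via Spring Data JPA ---
@Entity
class UserAccount {
    @Id private Long id;        // relational primary key
    private String username;
    private String email;
}

interface UserAccountRepository extends JpaRepository<UserAccount, Long> {}

// --- Order Service: flexible order documents in MongoDB via Spring Data MongoDB ---
@Document("orders")
class OrderDocument {
    @org.springframework.data.annotation.Id private String id; // Mongo document id
    private String description;
    private String assignedUserId;
}

interface OrderRepository extends MongoRepository<OrderDocument, String> {}
```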

Event-Driven Architecture

An event-driven approach enables asynchronous communication between services. For example:

  • When a new order is created, the Order Service publishes an OrderCreated event to a Kafka topic.

  • The Notification Service listens to this topic and responds by sending notifications to users

Why? The whole point of microservices is decoupling: each component can work independently. With Kafka, we gain a high-throughput, scalable messaging platform that helps keep data consistent across services while allowing non-blocking communication between them.
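
Here is a minimal sketch of how the Order Service could publish that event with Spring Kafka (the topic name, payload shape, and class name are assumptions for illustration, not the final implementation):

```java
// Illustrative sketch: Order Service publishes an OrderCreated event after saving the order.
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;

@Service
public class OrderEventPublisher {

    // Spring Kafka auto-configures a KafkaTemplate when spring-kafka is on the classpath.
    private final KafkaTemplate<String, String> kafkaTemplate;

    public OrderEventPublisher(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    public void publishOrderCreated(String orderId) {
        // Keyed by order id so all events for one order land on the same partition.
        String payload = "{\"type\":\"OrderCreated\",\"orderId\":\"" + orderId + "\"}";
        kafkaTemplate.send("order-created", orderId, payload);
    }
}
```

Because the send is asynchronous, the Order Service returns to its caller right away; the Notification Service picks the event up on its own schedule.
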

Saga Pattern for Distributed Transactions

The Saga pattern manages distributed transactions by breaking them into smaller, compensable transactions. For instance:

  • When a new order is assigned, the Order Service first creates the order and publishes an OrderAssigned event

  • The Notification Service picks up this event and sends notifications to the relevant users

  • If the Notification Service encounters an error (e.g., a network issue), the Order Service can compensate by marking the order as “Incomplete” or retrying notifications

Why? Handling distributed transactions with compensating actions keeps data consistent without locking resources, which is ideal for a distributed architecture. What does compensating action mean? If a service ever returns a failure or something undesirable or unexpected, the transaction is aborted and the previous steps are reverted.
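
As a rough sketch of a compensating step (the notification-failed topic and the “Incomplete” status are hypothetical, just to show the shape of it):

```java
// Illustrative sketch: if notifications fail, the Order Service compensates by
// marking the order incomplete (or it could schedule a retry instead).
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class OrderCompensationListener {

    @KafkaListener(topics = "notification-failed", groupId = "order-service")
    public void onNotificationFailed(String orderId) {
        // Compensating action: undo/adjust the earlier step instead of holding a lock
        // across services. Here we simply record that the order needs attention.
        System.out.println("Marking order " + orderId + " as Incomplete (notification failed)");
        // In the real service this would update the order's status in MongoDB
        // and possibly re-publish an event so the notification can be retried.
    }
}
```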

API Gateway Pattern

The API Gateway acts as a single entry point for client requests, reducing the complexity of managing multiple endpoints.

Centralized Authentication: Authentication checks happen at the gateway level, so each service doesn’t need to implement its own.
Routing and Load Balancing: Requests for orders, users, or notifications are routed to the correct microservice, balancing load and providing a single unified entry point for clients

Why? The API Gateway not only simplifies the client’s experience but also provides a layer of control over security, traffic management, and monitoring.
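
A minimal sketch of the gateway with Spring Cloud Netflix Zuul might look like this (the route mapping mentioned in the comments is a typical setup, not this project's final configuration):

```java
// Illustrative sketch of the API Gateway using Spring Cloud Netflix Zuul.
// Requires spring-cloud-starter-netflix-zuul; routes are normally declared in
// application.yml, e.g. mapping /api/orders/** to the order-service registered in Eureka.
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.zuul.EnableZuulProxy;

@SpringBootApplication
@EnableZuulProxy // enables routing, filtering, and load-balanced forwarding to services
public class ApiGatewayApplication {
    public static void main(String[] args) {
        SpringApplication.run(ApiGatewayApplication.class, args);
    }
}
```

Authentication checks would typically live in a Zuul pre-filter at this layer, so the downstream services can trust requests the gateway forwards.
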

Circuit Breaker Pattern

For resilience, we will use the circuit breaker pattern to handle failures in dependencies. For example, if the Notification Service is down:

  • The Order Service can use a circuit breaker to prevent further calls to the Notification Service until it is back up

  • This isolates the failure to the Notification Service, preventing a ripple effect across other services

Why? Using tools like Resilience4j, we can implement circuit breakers to handle dependency failures gracefully and prevent cascading issues across the system.
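
Here's a hedged sketch of that with Resilience4j's Spring Boot integration (the breaker name, endpoint, and fallback are illustrative assumptions):

```java
// Illustrative sketch: the Order Service wraps its call to the Notification Service
// in a circuit breaker so repeated failures trip the breaker and the fallback runs.
// Requires the resilience4j-spring-boot2 (or -boot3) starter.
import io.github.resilience4j.circuitbreaker.annotation.CircuitBreaker;
import org.springframework.stereotype.Service;
import org.springframework.web.client.RestTemplate;

@Service
public class NotificationClient {

    private final RestTemplate restTemplate = new RestTemplate();

    @CircuitBreaker(name = "notificationService", fallbackMethod = "notifyFallback")
    public String notifyAssignment(String orderId) {
        // Hypothetical endpoint, hard-coded for brevity; a @LoadBalanced RestTemplate
        // would resolve the host name through Eureka instead.
        return restTemplate.getForObject(
                "http://notification-service/notify?orderId=" + orderId, String.class);
    }

    // Called when the breaker is open or the call fails; keeps the Order Service responsive.
    private String notifyFallback(String orderId, Throwable cause) {
        return "notification-deferred:" + orderId;
    }
}
```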

Centralized Logging and Monitoring

To maintain visibility into each service’s health, we use:

  • ELK Stack (Elasticsearch, Logstash, and Kibana) for centralized logging
  • Prometheus for real-time monitoring and alerting

Why? We need centralized logging and monitoring so developers and operations teams can identify and resolve issues proactively by viewing logs, system metrics, and real-time data on service behavior.
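
On the service side, this could look something like the sketch below (the metric name and class are illustrative): each service logs through SLF4J, which Logstash can ship to Elasticsearch, and exposes Micrometer metrics that Prometheus scrapes via Spring Boot Actuator's /actuator/prometheus endpoint.

```java
// Illustrative sketch: structured logging plus a custom counter metric.
// Requires spring-boot-starter-actuator and micrometer-registry-prometheus
// so Prometheus can scrape /actuator/prometheus.
import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.MeterRegistry;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Service;

@Service
public class OrderMetrics {

    private static final Logger log = LoggerFactory.getLogger(OrderMetrics.class);
    private final Counter ordersCreated;

    public OrderMetrics(MeterRegistry registry) {
        // Custom metric Prometheus can scrape and alert on.
        this.ordersCreated = Counter.builder("orders.created")
                .description("Number of orders created")
                .register(registry);
    }

    public void recordOrderCreated(String orderId) {
        ordersCreated.increment();
        // Logstash picks this up from the log output and forwards it to Elasticsearch.
        log.info("Order created: {}", orderId);
    }
}
```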


Let's Put It All into Perspective

Let’s consider a typical scenario, like creating a new order. Here’s how the architecture and design patterns work together:

Client Request: A user creates an order through the frontend, which hits the API Gateway

Gateway and Order Service: The API Gateway authenticates the user and routes the request to the Order Service

Event Publication: The Order Service stores the order in its database and publishes an OrderCreated event to Kafka

Notification Service: The Notification Service listens for OrderCreated, sends notifications, and publishes a NotificationSent event (a minimal listener sketch follows this walkthrough)

Error Handling: If the Notification Service fails, a Saga compensating action triggers the Order Service to handle the failure by either retrying or marking the order as incomplete
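
To close the loop on step 4, here is a minimal sketch of the Notification Service consuming OrderCreated and emitting its own follow-up event (topic names and the payload handling are illustrative assumptions):

```java
// Illustrative sketch: the Notification Service consumes OrderCreated events,
// notifies users, and publishes a NotificationSent event of its own.
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;

@Component
public class OrderCreatedListener {

    private final KafkaTemplate<String, String> kafkaTemplate;

    public OrderCreatedListener(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    @KafkaListener(topics = "order-created", groupId = "notification-service")
    public void onOrderCreated(String payload) {
        // Send the actual notification (email, push, websocket, etc.) here.
        System.out.println("Notifying users about: " + payload);

        // Publish a follow-up event so other services know the notification went out.
        kafkaTemplate.send("notification-sent", payload);
    }
}
```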

Okay, day one completed. Tomorrow we can start looking at the backend infrastructure and exploring and setting up event-driven communication using Kafka.
