Building a Reply Generation Application with Angular

Connie Leung - Jun 13 - Dev Community

Introduction

I built a reply generation application four times using different large language models, APIs, frameworks, and tools, experimenting with each to find my preferred stack for building Generative AI applications.

Create a new Angular Project

ng new ng-prompt-chaining-demo

Update the app component

// app.component.ts

import { Component } from '@angular/core';
import { RouterOutlet } from '@angular/router';

@Component({
  selector: 'app-root',
  standalone: true,
  imports: [RouterOutlet],
  template: '<router-outlet />',
})
export class AppComponent {}

The app component has a router outlet to lazy load the shell component, allowing users to input feedback and generate replies in the same language.

Define routes to load the reply component

// app.constant.ts

import { InjectionToken } from '@angular/core';

export const BACKEND_URL = new InjectionToken<string>('BACKEND_URL');
// feedback.routes.ts

import { Route } from "@angular/router";
import { BACKEND_URL } from '~app/app.constant';
import { ReplyComponent } from "./reply/reply.component";
import { FeedbackShellComponent } from './feedback-shell/feedback-shell.component';

export const CUSTOMER_ROUTES: Route[] = [
  {
    path: '',
    component: FeedbackShellComponent,
    children: [
      {
        path: 'gemini',
        title: 'Gemini',
        component: ReplyComponent,
        data: {
          generativeAiStack: 'Google Gemini API and gemini-1.5-pro-latest model'
        },
        providers: [
          {
            provide: BACKEND_URL,
            useValue: 'http://localhost:3000'
          }
        ]
      },
      {
        path: 'groq',
        title: 'Groq',
        component: ReplyComponent,
        data: {
          generativeAiStack: 'Groq Cloud and gemma-7b-it model'
        },
        providers: [
          {
            provide: BACKEND_URL,
            useValue: 'http://localhost:3001'
          }
        ]
      },
      {
        path: 'huggingface',
        title: 'Huggingface',
        component: ReplyComponent,
        data: {
          generativeAiStack: 'huggingface.js and Mistral-7B-Instruct-v0.2 model'
        },
        providers: [
          {
            provide: BACKEND_URL,
            useValue: 'http://localhost:3003'
          }
        ]
      },
      {
        path: 'langchain',
        title: 'Langchain',
        component: ReplyComponent,
        data: {
          generativeAiStack: 'Langchain.js and gemini-1.5-pro-latest model'
        },
        providers: [
          {
            provide: BACKEND_URL,
            useValue: 'http://localhost:3002'
          }
        ]
      },
    ]
  }
];

In this demo, I have four backend applications and a frontend application. The child paths load the same ReplyComponent, but each calls a different endpoint to generate a reply.

When the path is /gemini, the component requests http://localhost:3000; when the path is /groq, it requests http://localhost:3001. I solved this with dependency injection: I created an injection token, BACKEND_URL, and provided a different endpoint for each child route.
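The generativeAiStack value in each route's data object is read through a signal input in ReplyComponent; for route data to bind to component inputs, the router must be configured with withComponentInputBinding(). The post does not show the bootstrap configuration, so the sketch below is an assumption of what it might look like:

```typescript
// app.config.ts (assumed; not shown in the post)
import { ApplicationConfig } from '@angular/core';
import { provideHttpClient } from '@angular/common/http';
import { provideRouter, withComponentInputBinding } from '@angular/router';
import { routes } from './app.route';

export const appConfig: ApplicationConfig = {
  providers: [
    // withComponentInputBinding binds route `data` keys such as
    // generativeAiStack to matching component inputs
    provideRouter(routes, withComponentInputBinding()),
    // ReplyService injects HttpClient, so it must be provided at bootstrap
    provideHttpClient(),
  ],
};
```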

Define application routes to lazy load children routes

// app.route.ts

import { Routes } from '@angular/router';

export const routes: Routes = [
    {
        path: 'customer',
        loadChildren: () => import('./feedback/feedback.routes').then((m) => m.CUSTOMER_ROUTES)
    },
    {
        path: '',
        pathMatch: 'full',
        redirectTo: 'customer/gemini'
    },
    {
        path: '**',
        redirectTo: 'customer/gemini'
    }
];

When the path is /customer, the application lazy loads the feedback routes, which render the ReplyComponent. The default and wildcard (404) routes redirect to the first ReplyComponent, which makes requests to http://localhost:3000.

Create the feedback shell component

// feedback-shell.component.ts

// Import statements omitted for brevity

@Component({
  selector: 'app-feedback-shell',
  standalone: true,
  imports: [RouterOutlet, RouterLink],
  template: `
    <div class="grid">
      <h2>Customer Feedback</h2>
      <nav class="menu">
        <p>Menu</p>
        <ul>
          <li><a routerLink="gemini">Gemini</a></li>
          <li><a routerLink="groq">Groq + gemma 7b</a></li>
          <li><a routerLink="huggingface">Huggingface JS + Mistral</a></li>
          <li><a routerLink="langchain">Langchain.js + Gemini</a></li>
        </ul>
      </nav>
      <div class="main">
        <router-outlet />
      </div>
    </div>
  `,
  changeDetection: ChangeDetectionStrategy.OnPush,
})
export class FeedbackShellComponent {
  router = inject(Router);

  constructor() {
    this.router.navigate(['gemini']);
  }
}

This shell component displays a menu that lets users route to the different pages, each of which calls a different backend to generate a reply. In the constructor, the component navigates to the gemini path so that the user starts with the backend hosted at http://localhost:3000.

Implement the Reply Head component

// reply-head.component.ts

// Import statements omitted for brevity

@Component({
  selector: 'app-reply-head',
  standalone: true,
  template: `
    <div>
      <span>Generative AI Stack: </span> 
      <span>{{ generativeAiStack() }}</span>
    </div>
  `,
  changeDetection: ChangeDetectionStrategy.OnPush,
})
export class ReplyHeadComponent {
  generativeAiStack = input<string>('');
}

This simple component displays the Generative AI stack that I used to generate replies.

// feedback-send.component.ts

// Import statements omitted for brevity

@Component({
  selector: 'app-feedback-send',
  standalone: true,
  imports: [FormsModule],
  template: `
    <p>Feedback: </p>
    <textarea rows="10" [(ngModel)]="feedback" ></textarea>
    <div>
      <button (click)="handleClicked()" [disabled]="vm.isLoading">{{ vm.buttonText }}</button>
    </div>
    <p class="error">{{ vm.errorMessage }}</p>
  `,
  changeDetection: ChangeDetectionStrategy.OnPush,
})
export class FeedbackSendComponent {
  feedback = signal('');
  prevFeedback = signal<string | null>(null);
  errorMessage = signal('');
  isLoading = model.required<boolean>();
  clicked = output<string>();
  buttonText = computed(() => this.isLoading() ? 'Generating...' : 'Send');

  viewModel = computed(() => ({
    feedback: this.feedback(),
    prevFeedback: this.prevFeedback(),
    isLoading: this.isLoading(),
    buttonText: this.buttonText(),
    errorMessage: this.errorMessage(),
  }));

  handleClicked() {
    const previous = this.vm.prevFeedback;
    const current = this.vm.feedback;

    this.errorMessage.set('');
    if (previous !== null && previous === current) {
      this.errorMessage.set('Please try another feedback to generate a different response.');
      return;
    }

    this.prevFeedback.set(current);
    this.clicked.emit(current);
    this.isLoading.set(true);
  }

  get vm() {
    return this.viewModel();
  }
}

The FeedbackSendComponent comprises a text area and a send button that emits the feedback to the parent component. The feedback signal is two-way bound to the text area. The prevFeedback signal stores the previous feedback; when feedback and prevFeedback are the same, the component displays a message asking for different feedback. The isLoading model disables the button and toggles its text between "Send" and "Generating...". clicked is an OutputEmitterRef that emits the current feedback to the parent component.
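The duplicate-feedback guard inside handleClicked can be expressed as a plain function for clarity (the helper name and sample strings are illustrative, not part of the original component):

```typescript
// Decide whether new feedback should be emitted, mirroring the
// duplicate check in handleClicked (illustrative helper)
function shouldEmit(previous: string | null, current: string): boolean {
  // Emit on the first submission, or whenever the feedback changed
  return previous === null || previous !== current;
}

console.log(shouldEmit(null, 'Great service'));            // → true
console.log(shouldEmit('Great service', 'Great service')); // → false
console.log(shouldEmit('Great service', 'Too slow'));      // → true
```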

Implement the Reply component

// reply.component.ts

// Import statements omitted for brevity

@Component({
  selector: 'app-reply',
  standalone: true,
  imports: [ReplyHeadComponent, FeedbackSendComponent],
  providers: [ReplyService],
  template: `
    <app-reply-head class="head" [generativeAiStack]="generativeAiStack()" />
    <app-feedback-send [(isLoading)]="isLoading" />
    <p>Reply: </p>
    <p>{{ reply() }}</p>
  `,
  changeDetection: ChangeDetectionStrategy.OnPush,
})
export class ReplyComponent {
  generativeAiStack = input<string>('');
  feedbackSend = viewChild.required(FeedbackSendComponent);
  isLoading = signal(false);
  feedback = signal('');
  reply = signal('');
  replyService = inject(ReplyService);

  constructor() {
    effect((cleanUp) => { 
      const sub = outputToObservable(this.feedbackSend().clicked)
        .pipe(
            filter((feedback) => typeof feedback !== 'undefined' && feedback.trim() !== ''),
            map((feedback) => feedback.trim()),
            tap(() => this.reply.set('')),
            switchMap((feedback) => this.replyService.getReply(feedback)
               .pipe(finalize(() => this.isLoading.set(false)))
            ),
        ).subscribe((aiReply) => this.reply.set(aiReply));

      cleanUp(() => sub.unsubscribe());
    });
  }
}

ReplyComponent uses viewChild to obtain a reference to FeedbackSendComponent. this.feedbackSend().clicked is an OutputEmitterRef that must be converted to an Observable with outputToObservable (from @angular/core/rxjs-interop) so the feedback can be piped through RxJS operators that invoke the service to generate a reply. The subscription assigns the result to the reply signal, which displays it in the user interface.

Implement the ReplyService

The service injects BACKEND_URL to obtain the endpoint and issues a POST request to generate a reply from the feedback.

// reply.service.ts

@Injectable()
export class ReplyService {
  private readonly httpClient = inject(HttpClient);
  private readonly backendUrl = inject(BACKEND_URL); 

  getReply(prompt: string): Observable<string> {
    return this.httpClient.post(`${this.backendUrl}/esg-advisory-feedback`, { prompt }, {
      responseType: 'text'
    }).pipe(
      retry({ count: 3, delay: 500 }),
      catchError((err) => {
        console.error(err);
        return (err instanceof Error) ? of(err.message)
          : of('An error occurred while generating the reply');
      })
    );
  }
}

Let's create an Angular Docker image and run the Angular application in a Docker container.

Dockerize the application

// .dockerignore

.git
.gitignore
node_modules/
dist/
Dockerfile
.dockerignore
npm-debug.log

Create a .dockerignore file so that Docker ignores these files and directories when building the image.

// Dockerfile

# Use an official Node.js runtime as the base image
FROM node:20-alpine

# Set the working directory in the container
WORKDIR /usr/src/app

# Copy package.json and package-lock.json to the working directory
COPY package*.json /usr/src/app/

RUN npm install -g @angular/cli

# Install the dependencies
RUN npm install

# Copy the rest of the application code to the working directory
COPY . .

# Expose a port (if your application listens on a specific port)
EXPOSE 4200

# Define the command to run your application
CMD [ "ng", "serve", "--host", "0.0.0.0"]

I added a Dockerfile that installs the dependencies and starts the application on port 4200. CMD ["ng", "serve", "--host", "0.0.0.0"] binds the dev server to all interfaces so that it is reachable from outside the container.

//  .env.docker.example

GEMINI_PORT=3000
GOOGLE_GEMINI_API_KEY=<google gemini api key>
GOOGLE_GEMINI_MODEL=gemini-pro
GROQ_PORT=3001
GROQ_API_KEY=<groq api key>
GROQ_MODEL=gemma-7b-it
LANGCHAIN_PORT=3002
HUGGINGFACE_PORT=3003
HUGGINGFACE_API_KEY=<huggingface access token>
HUGGINGFACE_MODEL=mistralai/Mistral-7B-Instruct-v0.2
WEB_PORT=4200

.env.docker.example stores the environment variables, including WEB_PORT, the port number of the Angular application.
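Docker Compose reads variable substitutions from a .env file located next to docker-compose.yaml by default, so the example file can be copied and filled in before starting the stack (a minimal sketch; the file names match the repository layout above):

```shell
# Copy the example file, then fill in the real API keys
cp .env.docker.example .env

# Optional: render the compose file with ${GEMINI_PORT}, ${WEB_PORT}, etc.
# substituted, to verify the values were picked up
docker-compose config
```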

// docker-compose.yaml

version: '3.8'

services:
  backend:
    build:
      context: ./nestjs-customer-feedback
      dockerfile: Dockerfile
    environment:
      - PORT=${GEMINI_PORT}
      - GOOGLE_GEMINI_API_KEY=${GOOGLE_GEMINI_API_KEY}
      - GOOGLE_GEMINI_MODEL=${GOOGLE_GEMINI_MODEL}
    ports:
      - "${GEMINI_PORT}:${GEMINI_PORT}"
    networks:
      - ai
    restart: unless-stopped
  backend2:
    build:
      context: ./nestjs-groq-customer-feedback
      dockerfile: Dockerfile
    environment:
      - PORT=${GROQ_PORT}
      - GROQ_API_KEY=${GROQ_API_KEY}
      - GROQ_MODEL=${GROQ_MODEL}
    ports:
      - "${GROQ_PORT}:${GROQ_PORT}"
    networks:
      - ai
    restart: unless-stopped
  backend3:
    build:
      context: ./nestjs-huggingface-customer-feedback
      dockerfile: Dockerfile
    environment:
      - PORT=${HUGGINGFACE_PORT}
      - HUGGINGFACE_API_KEY=${HUGGINGFACE_API_KEY}
      - HUGGINGFACE_MODEL=${HUGGINGFACE_MODEL}
    ports:
      - "${HUGGINGFACE_PORT}:${HUGGINGFACE_PORT}"
    networks:
      - ai
    restart: unless-stopped
  backend4:
    build:
      context: ./nestjs-langchain-customer-feedback
      dockerfile: Dockerfile
    environment:
      - PORT=${LANGCHAIN_PORT}
      - GOOGLE_GEMINI_API_KEY=${GOOGLE_GEMINI_API_KEY}
      - GOOGLE_GEMINI_MODEL=${GOOGLE_GEMINI_MODEL}
    ports:
      - "${LANGCHAIN_PORT}:${LANGCHAIN_PORT}"
    networks:
      - ai
    restart: unless-stopped
  web:
    build:
      context: ./ng-prompt-chaining-demo
      dockerfile: Dockerfile
    depends_on:
      - backend
      - backend2
      - backend3
      - backend4
    ports:
      - "${WEB_PORT}:${WEB_PORT}"
    networks:
      - ai
    restart: unless-stopped
networks:
  ai:

In the Docker Compose file, I added a web service that depends on the backend, backend2, backend3, and backend4 services. Its Dockerfile is located in the ng-prompt-chaining-demo repository, and Docker Compose uses it to build the Angular image and launch the container.

I added the docker-compose.yaml to the root folder; it is responsible for creating the backend and Angular application containers.

docker-compose up

The above command starts the Angular and NestJS containers, and we can try the application by navigating to http://localhost:4200 in a browser.

This concludes my blog post about using Angular and Generative AI to build a reply generation application. I built the application four times to experiment with the Gemini API, the Gemini 1.5 Pro model, the Gemma 7B model, the Mistral 7B model, LangChain.js, and Huggingface Inference. I hope you like the content and continue to follow my learning journey in Angular, NestJS, Generative AI, and other technologies.
