Reverse Proxy and Load Balancing: Do we need both?

jiisanda🙆‍♂️ - Oct 10 - Dev Community

In this guide, I’ll walk you through configuring Nginx as both a reverse proxy and a load balancer, along with handling SSL termination for secure client-server communication. This setup is widely used to improve the performance and scalability of websites and APIs.

Understanding Reverse Proxy and Load Balancing:

Though the two terms sound similar, they serve different purposes. A reverse proxy acts as an intermediary between clients and servers: it forwards client requests to backend servers and relays the servers’ responses back to the clients.

A load balancer, on the other hand, distributes incoming client requests across multiple backend servers so that the load is spread evenly. It also monitors server health and routes traffic away from any server that becomes unavailable.

Do we need both ?

Yes, in some cases. Most production environments benefit from using both.

A load balancer helps eliminate a single point of failure: it allows the API to be deployed on multiple backend servers and routes requests only to healthy ones. This in turn improves the user experience by reducing the number of error responses clients see.

A load balancer also performs health checks: it sends separate health-check requests at frequent intervals and considers a server healthy if it returns a specified type of response, such as a 200 OK.

A reverse proxy hides the details of our backend servers from clients. It can also provide security against DDoS attacks, blacklist certain IP addresses, and prevent overloading.

SSL encryption and decryption are CPU-intensive; offloading these tasks to a reverse proxy reduces the load on your backend servers and improves overall performance.

Configuring Nginx as a Reverse Proxy

Nginx can proxy requests over several protocols, including HTTP, FastCGI, SCGI, and uwsgi. We will be using HTTP.

The proxy_pass directive allows Nginx to forward client requests to backend servers. For example,

location /api/app1 {
    proxy_pass http://backend_server;
}

In this configuration, a request to /api/app1/hello is forwarded to http://backend_server/api/app1/hello. You can define multiple location blocks to handle different types of requests.
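One detail worth noting from the proxy_pass documentation: if the proxy_pass value itself includes a URI part, Nginx replaces the matched location prefix with that URI rather than passing the request path through unchanged. A minimal sketch, reusing the hypothetical backend_server name:

```nginx
location /api/app1/ {
    # Because proxy_pass ends with a URI ("/"), the matched prefix is
    # replaced: a request for /api/app1/hello is forwarded as /hello.
    proxy_pass http://backend_server/;
}
```

This trailing-slash distinction is a common source of surprising 404s, so it is worth checking whenever a proxied route does not behave as expected.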

Setting up Nginx for Load Balancing

Load balancing is set up with the upstream directive. You define multiple servers in a backend pool and can apply different algorithms to distribute client requests:

upstream backend_pool {
    server host1.example.com;
    server host2.example.com;
    server host3.example.com;
}

Here, Nginx forwards each request to one of the servers in backend_pool. By default, Nginx uses the round-robin method, distributing requests evenly across servers.
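Round robin can also be weighted, so that more capable servers receive a larger share of the traffic. A minimal sketch, reusing the hypothetical host names from above:

```nginx
upstream backend_pool {
    # host1 receives roughly three requests for every one sent to host2
    server host1.example.com weight=3;
    server host2.example.com;
}
```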

Upstream Balancing Algorithms
  • Round Robin (default), as above
  • Least Connections:

Sends each request to the server with the fewest active connections, which is useful when some requests take longer to complete than others.

upstream backend_pool {
    least_conn;
    server host1.example.com;
    server host2.example.com;
}
  • IP Hash

Ensures that requests from the same client IP always go to the same server (unless that server is unavailable), which is important for session persistence. The hash value is calculated from either the first three octets of the client’s IPv4 address or the entire IPv6 address.

upstream backend_pool {
    ip_hash;
    server host1.example.com;
    server host2.example.com;
}
  • Generic Hash

The server that receives a request is determined from a user-defined key, which can be a text string, a variable, or a combination of both. For example, the key can be a paired source IP address and port, or a request URI. The optional consistent parameter enables ketama consistent hashing, which minimizes key remapping when servers are added to or removed from the pool.

upstream backend_pool {
    hash $request_uri consistent;
    server host1.example.com;
    server host2.example.com;
}

SSL Termination

Setting up HTTPS in the Nginx configuration lets Nginx handle SSL encryption and decryption, reducing the load on our backend servers:

server {
    listen 443 ssl;
    server_name www.example.com;

    ssl_certificate /etc/nginx/ssl/server.crt;
    ssl_certificate_key /etc/nginx/ssl/server.key;
    ssl_protocols       TLSv1.2 TLSv1.3;
    ssl_ciphers         HIGH:!aNULL:!MD5;

    location /api/hello {
        proxy_pass http://backend_pool;
    }
}

This configuration listens on port 443 for HTTPS traffic, handles SSL encryption/decryption, and proxies requests to backend_pool.
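In practice, an HTTPS server block is usually paired with a plain-HTTP block that redirects clients to HTTPS, so no traffic is served unencrypted. A minimal sketch, assuming the same www.example.com server name:

```nginx
server {
    listen 80;
    server_name www.example.com;

    # Permanently redirect all plain-HTTP requests to HTTPS,
    # preserving the host and request path
    return 301 https://$host$request_uri;
}
```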

Note:

While open-source Nginx does not support active health checks (those are part of NGINX Plus), it allows passive health probing using the max_fails and fail_timeout parameters.

upstream backend_pool {
    server host1.example.com max_fails=2 fail_timeout=5s;
    server host2.example.com;
}

In this configuration, if requests to host1 fail twice within a 5-second window, Nginx marks the server as unavailable and stops routing requests to it for the duration of fail_timeout, after which it is tried again.
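Passive checks pair well with retry behavior: the proxy_next_upstream directive controls which failures cause Nginx to retry the request on the next server, and a server can be marked backup so it receives traffic only when the primary servers are down. A minimal sketch, with hypothetical host names:

```nginx
upstream backend_pool {
    server host1.example.com max_fails=2 fail_timeout=5s;
    server host2.example.com backup;  # used only if host1 is unavailable
}

server {
    listen 80;

    location / {
        proxy_pass http://backend_pool;
        # Retry on the next upstream after connection errors,
        # timeouts, or 5xx responses from the current one
        proxy_next_upstream error timeout http_500 http_502 http_503;
    }
}
```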

Conclusion

Now we understand the reverse proxying and load-balancing support in Nginx. To configure both, we can edit the nginx.conf file in the /etc/nginx directory, adding the configuration below.

NGINX as Load Balancer and Reverse Proxy

events {}

http {
    # define the upstream for load balancing
    upstream backend {
        server host1.example.com:8080;
        server host2.example.com:8080 max_fails=2 fail_timeout=5s;
    }

    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;

    server {
        listen 443 ssl;
        server_name www.example.com;
        keepalive_timeout 70;

        ssl_certificate     /etc/nginx/ssl/server.crt;
        ssl_certificate_key /etc/nginx/ssl/server.key;
        ssl_protocols       TLSv1.2 TLSv1.3;
        ssl_ciphers         HIGH:!aNULL:!MD5;

        location /service/api1 {
            proxy_pass http://backend;
        }

        location /service/api2 {
            proxy_pass http://backend;
        }
    }
}