What is HTTP Error 429 Too Many Requests and How to Fix it

Scrapfly - Oct 30 - Dev Community

HTTP error 429 Too Many Requests is a commonly encountered status code in web automation, web scraping and API use.

While it might appear self-explanatory at first, this error has many nuances and different causes. In this article we'll take a look at what HTTP error 429 is, why it happens, and how to avoid it through examples in Python and JavaScript.

What is HTTP error 429 Too Many Requests?

The 429 error code stands for Too Many Requests, which means the client is performing more requests than it's allowed to.

This usually happens when APIs or websites rate limit connections, either to prevent overload or to sell premium access.

To summarize, HTTP error 429 can be caused by:

  • Making too many requests in a given time frame
  • Sending too many concurrent requests

To enforce connection limits, servers usually key the limit on some client feature such as:

  • IP Address
  • Authentication token
  • Cookies, like a session cookie or a special token
  • Headers, like User-Agent or special X- headers
  • Client fingerprint, which can be generated from several client features like headers, IP address, JavaScript fingerprint, etc.

To illustrate this better, let's take a look at an example server 429 implementation next.

Server Implementation of HTTP error 429

For this example, we'll be using the Flask web framework in Python to create a simple server that returns error code 429 when the client makes too many requests, measuring connections along a few different axes.

For this we'll be using the Flask-Limiter library which provides time based rate limiting for Flask applications:

from flask import Flask, request
from flask_limiter import Limiter
from flask_limiter.util import get_remote_address

app = Flask(__name__)
# headers_enabled=True makes Flask-Limiter emit X-RateLimit-* response headers
limiter = Limiter(get_remote_address, app=app, headers_enabled=True)

# Limit by IP address
@app.route("/ip-limit")
@limiter.limit("1 per minute", key_func=get_remote_address)
def ip_limit():
    return "Accessed /ip-limit."

# Limit by Authorization request header token
@app.route("/auth-limit")
@limiter.limit("1 per minute", key_func=lambda: request.headers.get("Authorization", "no-auth"))
def auth_limit():
    return "Accessed /auth-limit."

# Limit by User-Agent request header
@app.route("/user-agent-limit")
@limiter.limit("1 per minute", key_func=lambda: request.headers.get("User-Agent", "no-ua"))
def user_agent_limit():
    return "Accessed /user-agent-limit."

# Limit by fingerprint (IP + sorted headers)
@app.route("/fingerprint-limit")
@limiter.limit("1 per minute", key_func=lambda: (get_remote_address() + "|" + str(sorted(request.headers.items()))))
def fingerprint_limit():
    return "Accessed /fingerprint-limit."

if __name__ == "__main__":
    app.run(debug=True)

In the 429 server example above we implement four distinct rate limiting strategies, each limiting the client to 1 request per minute.

  • /ip-limit limits by the client IP address
  • /auth-limit limits by the Authorization header
  • /user-agent-limit limits by the User-Agent header
  • /fingerprint-limit limits by the client fingerprint, which in this case is just a combination of IP and headers, though in real-world scenarios it can be much more complex.

In addition, most servers also provide a Retry-After header or a special X- family of headers that inform the client of the remaining limits. In this case, Flask-Limiter adds these headers:

  • X-RateLimit-Limit - The maximum number of requests allowed in the current period.
  • X-RateLimit-Remaining - The number of requests remaining in the current period.
  • X-RateLimit-Reset - The time (in UTC epoch seconds) when the rate limit will reset.
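
To make use of these headers on the client side, here's a small illustrative sketch (the header values are hypothetical, standing in for a real 429 response) that computes how long to wait before retrying:

```python
import time

def seconds_until_reset(headers, now=None):
    """Compute seconds to wait from an X-RateLimit-Reset value (UTC epoch seconds)."""
    now = time.time() if now is None else now
    reset = headers.get("X-RateLimit-Reset")
    if reset is None:
        return 0.0  # no reset info - caller should fall back to a default backoff
    return max(0.0, float(reset) - now)

# hypothetical header values, as a Flask-Limiter style server might return them
headers = {"X-RateLimit-Limit": "1", "X-RateLimit-Remaining": "0", "X-RateLimit-Reset": "1700000060"}
print(seconds_until_reset(headers, now=1700000000))  # 60.0
```

When X-RateLimit-Remaining reaches 0, sleeping for this many seconds before the next request avoids the 429 entirely.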

This example is a great illustration of how request limiting works and now we can take a look at how to bypass it.

How to fix HTTP error 429 Too Many Requests?

HTTP 429 is caused by rate limiting based on client details, so to bypass this limitation we need to:

  • Increase our rate limit if possible.
  • Throttle our requests to match the limit.
  • Distribute our requests through multiple agents to bypass the limit.

Let's take a look at each of these strategies and how to implement them.

Increasing Rate Limit

To start, it's important to note that some limits exist not only to upsell users but also to prevent genuine abuse. In cases like that, many APIs offer a free rate limit increase if an auth token is provided.

For example, in GitHub's API, just by providing a free GitHub PAT (personal access token) we can raise the request limit from 60 to 5,000 requests per hour:

import httpx

# 60 requests per hour without authentication
response = httpx.get(
    "https://api.github.com/repos/scrapfly/scrapfly-scrapers",
)
# 5000 requests per hour with a free GitHub PAT
token = "<your GitHub PAT>"
response = httpx.get(
    "https://api.github.com/repos/scrapfly/scrapfly-scrapers",
    headers={"Authorization": f"Bearer {token}"},
)

So, the first step is to check if the API or website we're working with offers a way to increase the rate limit.

Throttling Requests

Respecting rate limits can be harder than it seems, especially when working with concurrent or parallel requests.

The most common cause of the 429 Too Many Requests error is un-throttled asynchronous requests that all get sent at once, triggering the rate limit immediately.

To avoid this, requests need to be throttled using time-based mechanisms like the Leaky Bucket or Token Bucket algorithms, or some other time-based distribution scheme.
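
To make the idea concrete, here's a minimal illustrative Token Bucket sketch (not production code; the rate and capacity values are arbitrary):

```python
import time

class TokenBucket:
    """Minimal token bucket: `capacity` tokens, refilled at `rate` tokens per second."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity  # start with a full bucket
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # refill tokens proportionally to elapsed time, capped at capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=1, capacity=2)  # 1 request/second, burst of 2
print(bucket.allow())  # True - first burst token
print(bucket.allow())  # True - second burst token
print(bucket.allow())  # False - bucket empty, must wait for refill
```

Requests that return False would be delayed until the bucket refills, which is exactly what the libraries below handle for you.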

Luckily, you don't need to implement these algorithms yourself as there are plenty of community libraries that do this for you. Here are some example throttle strategies available in popular HTTP clients:

This Python example uses the aiometer library to throttle asyncio tasks like httpx requests:

import asyncio
from time import time

import aiometer
import httpx

session = httpx.AsyncClient()

async def scrape(url):
    response = await session.get(url)
    return response

async def run():
    _start = time()
    urls = ["http://httpbin.org/get" for i in range(10)]
    results = await aiometer.run_on_each(
        scrape,
        urls,
        max_per_second=1/6,  # max rate per second; i.e. approx. 10 requests per minute
    )
    print(f"finished {len(urls)} requests in {time() - _start:.2f} seconds")
    return results

if __name__ == "__main__":
    asyncio.run(run())

# will print something like:
# finished 10 requests in ~54 seconds

This JavaScript example uses the bottleneck library with node-fetch in NodeJS (though it's almost identical for other fetch API libraries):

const Bottleneck = require('bottleneck');
const fetch = require('node-fetch'); // Make sure to install node-fetch if you haven't

// Initialize the Bottleneck limiter with a limit of 10 requests per minute
const limiter = new Bottleneck({
  maxConcurrent: 1, // Process one request at a time
  minTime: 6000 // 6000ms delay (1 request every 6 seconds for 10 requests per minute)
});

// Define the function to make an HTTP GET request
async function fetchResource(url) {
  const response = await fetch(url);
  const data = await response.json();
  console.log(data); // Display the response for each request
  return data;
}

// Wrap the fetch function with Bottleneck's limiter
const throttledFetch = limiter.wrap(fetchResource);

// Generate the list of URLs (100 requests to the same endpoint for demonstration)
const urls = Array(100).fill('https://httpbin.dev/get');

// Run the requests with throttling
(async () => {
  const promises = urls.map(url => throttledFetch(url));
  const results = await Promise.all(promises);
  console.log("All requests completed.");
})();

In the above examples, we limit the requests to 10 requests per minute, which is a good starting point for most APIs.

Dynamic Throttle

Not all APIs or websites have a static request limit, which means we can't always apply a static throttle like in our 10 req/min examples above.

Dynamic rate limiting is often implemented through the Retry-After header (or similar non-standard X- headers), which informs the client how long to wait before retrying or making the next request.

Here's an example of how to handle dynamic throttling by catching the 429 errors and waiting based on the Retry-After header:

This Python example demonstrates how to handle dynamic rate limiting using the httpx library:

import asyncio
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

import httpx

async def fetch_with_retry(url, retries=3):
    """wrap httpx.get with Retry-After retries"""
    async with httpx.AsyncClient() as client:
        for attempt in range(retries):
            response = await client.get(url)

            if response.status_code == 200:
                return response.json()

            # Check for 429 or 503 and handle the Retry-After header
            if response.status_code not in {429, 503}:
                response.raise_for_status()  # Raise if it's another error
                continue
            retry_after = response.headers.get("Retry-After")
            if not retry_after:
                # if no Retry-After header, you can try guessing with exponential backoff
                print("Rate limited but no Retry-After header. Using default backoff.")
                await asyncio.sleep(2 ** attempt)  # Exponential backoff if Retry-After isn't provided
                continue
            # Retry-After can be in seconds or HTTP-date format
            try:
                retry_after_seconds = int(retry_after)
            except ValueError:
                retry_date = parsedate_to_datetime(retry_after)
                retry_after_seconds = max(0, (retry_date - datetime.now(timezone.utc)).total_seconds())
            print(f"Rate limited. Retrying after {retry_after_seconds} seconds.")
            await asyncio.sleep(retry_after_seconds)  # Non-blocking sleep
    raise Exception("Max retries exceeded")

async def main():
    url = "https://httpbin.org/status/429"  # This URL simulates a 429 response for testing
    try:
        result = await fetch_with_retry(url)
        print("Request succeeded:", result)
    except Exception as e:
        print(f"Request failed: {e}")

# start the asyncio event loop
asyncio.run(main())

This JavaScript example demonstrates how to handle dynamic rate limiting in NodeJS using the node-fetch library:

const fetch = require('node-fetch');

async function fetchWithRetry(url, retries = 3) {
  for (let attempt = 0; attempt < retries; attempt++) {
    const response = await fetch(url);

    if (response.ok) {
      return await response.json();
    }

    // Check for 429 or 503 and handle the Retry-After header
    if (response.status === 429 || response.status === 503) {
      const retryAfter = response.headers.get("Retry-After");

      if (retryAfter) {
        let retryAfterSeconds;
        // Convert Retry-After to seconds
        if (!isNaN(retryAfter)) {
          retryAfterSeconds = parseInt(retryAfter, 10);
        } else {
          // Convert HTTP-date format to a delay
          const retryDate = new Date(retryAfter).getTime();
          retryAfterSeconds = Math.max(0, (retryDate - Date.now()) / 1000);
        }

        console.log(`Rate limited. Retrying after ${retryAfterSeconds} seconds.`);
        await new Promise(resolve => setTimeout(resolve, retryAfterSeconds * 1000));
      } else {
        // If Retry-After isn't specified, use exponential backoff
        const backoff = Math.pow(2, attempt);
        console.log(`Rate limited without Retry-After header. Retrying after ${backoff} seconds.`);
        await new Promise(resolve => setTimeout(resolve, backoff * 1000));
      }
    } else {
      throw new Error(`Request failed with status ${response.status}`);
    }
  }

  throw new Error("Max retries exceeded");
}

// Example usage:
async function main() {
  try {
    const url = "https://httpbin.org/status/429"; // Simulate 429 for testing
    const result = await fetchWithRetry(url);
    console.log("Request succeeded:", result);
  } catch (error) {
    console.error("Request failed:", error);
  }
}

main();

In the above examples we catch the 429 error and wait based on the Retry-After header. If the header is not provided we can try guessing the wait time using exponential backoff.
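
One common refinement to plain exponential backoff (not shown in the examples above) is adding random jitter, so that many clients that were rate limited at the same moment don't all retry in lockstep; a sketch:

```python
import random

def backoff_delay(attempt, base=1.0, cap=60.0):
    """Exponential backoff with full jitter: a random delay in [0, min(cap, base * 2**attempt)]."""
    return random.uniform(0, min(cap, base * (2 ** attempt)))

# the upper bound grows with each attempt but stays capped; the actual delay is randomized
for attempt in range(5):
    print(f"attempt {attempt}: waiting up to {min(60.0, 1.0 * 2 ** attempt):.0f}s")
```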

Note that dynamic request limiting is very difficult to combine with asynchronous requests, and it's usually best to throttle requests statically to avoid the 429 errors.

Distributing Requests

Unfortunately, many APIs don't provide a way to increase rate limits and in those cases we can use the following strategies:

  • Use proxies to distribute requests through multiple IP addresses.
  • Use multiple fingerprint agents (different headers, authentication tokens, etc) to distribute request through multiple fingerprints.

Depending on how the rate limiting is implemented, you might need to use either or a combination of both strategies.

For example, if the rate limiting is based on IP address, you can use a rotating proxy service or your own rotating proxy pool. Here's a quick example:

This Python example demonstrates how to distribute requests through multiple proxies and User-Agents using the httpx library:

import httpx
import asyncio
import random

# Array of proxy IPs (in http://username:password@host:port format if authentication is needed)
proxy_ips = [
    "http://proxy1.example.com:8080",
    "http://proxy2.example.com:8080",
    "http://proxy3.example.com:8080",
]

# Array of User-Agent strings
user_agents = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.131 Safari/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/14.1.1 Safari/605.1.15",
    "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:89.0) Gecko/20100101 Firefox/89.0",
]

async def scrape(url):
    """scrape with random proxy and user agent"""
    # Randomly select a proxy and User-Agent
    proxy = random.choice(proxy_ips)
    user_agent = random.choice(user_agents)

    # Set up the client with the selected proxy
    async with httpx.AsyncClient(proxies={"http://": proxy, "https://": proxy}) as client:
        headers = {"User-Agent": user_agent}
        response = await client.get(url, headers=headers)
        print(response.json()) # Display the response (you can store or process this as needed)
        return response.json()

# Main function to make multiple requests
async def main():
    url = "https://httpbin.dev/ip"
    tasks = [scrape(url) for _ in range(100)] # 100 requests
    results = await asyncio.gather(*tasks)
    print("All requests completed.")

# Run the async main function
asyncio.run(main())

This JavaScript example demonstrates how to distribute requests through multiple proxies and User-Agents in NodeJS, using the fetch API (node-fetch) with https-proxy-agent for proxy support:

const fetch = require('node-fetch');
const HttpsProxyAgent = require('https-proxy-agent');

// Array of proxy IPs
const proxyIps = [
  "http://proxy1.example.com:8080",
  "http://proxy2.example.com:8080",
  "http://proxy3.example.com:8080",
];

// Array of User-Agent strings
const userAgents = [
  "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.131 Safari/537.36",
  "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/14.1.1 Safari/605.1.15",
  "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:89.0) Gecko/20100101 Firefox/89.0",
];

// Function to perform a single request with random proxy and User-Agent
async function fetchWithRandomProxyAndUserAgent(url) {
  // Select a random proxy and User-Agent
  const proxy = proxyIps[Math.floor(Math.random() * proxyIps.length)];
  const userAgent = userAgents[Math.floor(Math.random() * userAgents.length)];

  // Set up the proxy agent
  const agent = new HttpsProxyAgent(proxy);

  // Make the request with the selected proxy and User-Agent
  const response = await fetch(url, {
    method: 'GET',
    headers: {
      'User-Agent': userAgent,
    },
    agent: agent,
  });

  const data = await response.json();
  console.log(data); // Display the response (IP address returned by httpbin)
  return data;
}

// Function to perform multiple requests
async function main() {
  const url = "https://httpbin.org/ip";
  const requests = Array.from({ length: 100 }, () => fetchWithRandomProxyAndUserAgent(url));
  const results = await Promise.all(requests);
  console.log("All requests completed.");
}

// Run the main function
main();

In the above examples we use a rotating proxy pool to distribute requests through multiple IP addresses essentially multiplying the rate limit by the number of proxies we have.
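
Proxy rotation doesn't have to be random, either; a simple round-robin rotation (the proxy URLs here are placeholders) spreads requests evenly across the pool:

```python
from itertools import cycle

# placeholder proxy URLs - replace with your own pool
proxy_pool = cycle([
    "http://proxy1.example.com:8080",
    "http://proxy2.example.com:8080",
    "http://proxy3.example.com:8080",
])

# each request takes the next proxy in order
first_three = [next(proxy_pool) for _ in range(3)]
print(first_three)
print(next(proxy_pool))  # wraps around back to proxy1
```

Round-robin keeps each proxy's request rate predictable, which makes it easier to stay under a known per-IP limit.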

We also randomize the User-Agent header to avoid detection based on connection fingerprinting though in reality there's more to fingerprinting than just the User-Agent header. For more on that see our bot detection article.

Power Up with Scrapfly

Scrapfly has millions of proxies and connection fingerprints that can be used to bypass rate limits and significantly simplify your web automation projects.

ScrapFly provides web scraping, screenshot, and extraction APIs for data collection at scale.

FAQ

Before we finish, let's take a look at some common questions about HTTP error 429 Too Many Requests:

What is the difference between 429 and 401 status codes?

The 429 status code means Too Many Requests and is used when the client is making more requests than it's allowed. The 401 status code means Unauthorized and is used when the client lacks valid authentication credentials for the resource or is simply being blocked.

How to know what's the rate limit of an API or a website?

APIs or websites generally return a 429 status code when the rate limit is exceeded. This response usually also contains headers like X-RateLimit-Limit, X-RateLimit-Remaining, and X-RateLimit-Reset that inform the client of the remaining limits, or a Retry-After header that informs the client how long to wait before retrying.

Can I get blocked for getting 429 status code?

Yes, repeatedly triggering 429 responses can lead to getting blocked by the website or API. It's important to respect the rate limits set by the service to avoid getting blocked or causing harm to the service.

Summary

To summarize, HTTP error 429 Too Many Requests is a common status code that can be caused by making too many requests in a given time frame or by sending too many concurrent requests.

To fix HTTP error 429 we can:

  • Increase our rate limit if the websites or APIs offer a way to do so
  • Throttle our requests to match the limit
  • Distribute our requests through multiple agents like proxies and client fingerprints (like the User-Agent header)

Finally, it's important to note that rate limiting often has a purpose and it's important to respect the limits set by the website or API to avoid getting blocked or causing harm to the service.
