Can Postgres replace Redis as a cache?

Raphael De Lio - Sep 10 - Dev Community

Twitter | LinkedIn | YouTube | Instagram
This article is also available on YouTube!

Today, I decided to ask people on Twitter which message queue system came to their mind first. And to my surprise, one of the responses was: Postgres.

I opened the link and was surprised not only by the possibility of using Postgres as a message broker:

“Use Postgres as a message queue with SKIP LOCKED instead of Kafka (If you only need a message queue)”. — Stephan Schmidt

But also… by the possibility of using Postgres as a cache to replace Redis:

“Use Postgres for caching instead of Redis with UNLOGGED tables and TEXT as a JSON data type. Use stored procedures or do as I do, use ChatGPT to write them for you, to add and enforce an expiry date for the data just like in Redis”. — Stephan Schmidt

And the reason I was surprised is that, during my journey to learn Redis, I have often heard people (from Redis) advocate that “Redis is a database and therefore it should be your primary database”.

And that actually might make sense. Redis is a real database that just happens to work really well as a cache. And the reason why it works so well as a cache is because it’s fast. Extremely fast. To the point that it can perform millions of operations in a single second.

And well… reading that Postgres, my favorite relational database, could now replace Redis, my favorite non-relational database, kind of turned my world upside down. After all, should I replace Redis with Postgres or Postgres with Redis?

But before I even consider this question, I wanted to understand: Is Postgres as a cache a really good idea? Can it indeed replace Redis? That’s what I want to find out today.

The article that had been shared with me, which I later found out was trending on Twitter, was written by Stephan Schmidt.

Stephan doesn’t only advocate for replacing Redis with Postgres; he actually advocates for replacing everything with Postgres. According to him, by doing so, we will be able to remove complexity and move faster.

“Just Use Postgres for Everything (How to reduce complexity and move faster)” — Stephan Schmidt

He isn’t the only one advocating for replacing Redis, though; a couple of other people have made the same case.

But first of all, why would I want to replace Redis with Postgres?

Stephan already gave two reasons: less complexity & faster changes. But is there more to it?

Using Postgres as a cache is not the most common choice, but there are certain scenarios where it might make sense. Let’s take a look at them:

Unified Technology Stack

Postgres is one of the most popular databases out there. It’s free, it’s open source, and chances are you are already using it in your application today.

Using it as a cache can simplify your technology stack by reducing the need to manage and maintain multiple database systems.

Familiar Interface

Postgres supports complex queries and indexing. This makes it easier to handle advanced data retrieval and transformation tasks directly within the cache layer. Using SQL for caching logic can be advantageous if your team is already proficient in SQL. And this is very likely to be the case.
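As a small illustration (my own, assuming a cache table with a jsonb value column like the one defined later in this article), SQL lets you query inside cached JSON payloads directly, something a plain key-value store doesn’t give you out of the box:

-- Find cached entries whose JSON payload matches a condition
SELECT key, value->>'name' AS name
FROM cache
WHERE value @> '{"status": "active"}';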

Cost

In some cases, it might be more cost-effective to use existing Postgres resources for caching rather than deploying a separate caching solution like Redis. Using Postgres for both primary storage and caching can lead to better resource utilization, especially in environments with limited infrastructure budgets.

What should I expect from a caching service?

Traditional caching services, such as Redis, come with a set of features that enhance the performance and scalability of our applications. In order to understand whether Postgres can really replace Redis, we need to understand what those features are. So here are some key aspects we should expect from a caching service:

Performance

The primary goal of caching services is to enhance the performance of applications by making data access faster.

High-performance caching solutions can handle high-throughput workloads and provide sub-millisecond response times, significantly speeding up processes that retrieve data.

Expiration

By setting expiration times for cached data, we can ensure that outdated data is automatically removed from the cache after a specified period. Making sure that outdated data is not served to our applications is another essential aspect of a caching service.

Eviction

Caching services usually keep their data in memory, which is a far more limited resource than disk. Because of that, setting an eviction policy allows us to automatically remove less frequently used data to make room for new entries.

Key-value storage

At the core of most caching services, data is stored as pairs of keys and values. This simple yet powerful model allows for quick data retrieval, making it easy to store and access frequently used data efficiently.

In a nutshell, what you want from a caching service is that it allows you to access your data faster and that it gives you back data that is as up to date as possible.

How can I turn Postgres into a cache?

According to Martin Heinz, as he wrote in his blog [You Don’t Need a Dedicated Cache Service — PostgreSQL as a Cache], you can get almost everything I mentioned in the previous section from Postgres too.

Both Stephan and Martin say that we can turn Postgres into a caching service by using UNLOGGED tables. Almost all the examples I will be showing from now on are taken from Martin's publication.

Unlogged tables and Write Ahead Log

Unlogged tables in Postgres are a way of preventing specific tables from generating WAL (Write Ahead Log).

WAL, in turn, ensures that all changes made to the database are recorded before they are actually written to the database files. This helps in maintaining data integrity, especially in the event of a crash or power failure.

Fun fact: Redis offers a similar mechanism called Append Only File (AOF), which persists data by logging every write operation executed in Redis. To use Redis as your primary database, you turn AOF on; to use Postgres as a cache, you turn WAL off (for specific tables).

Turning WAL off means improving write performance

For every data modification, Postgres must write the change to both the WAL and the data file. This doubles the number of write operations required.

Besides that, in order to ensure that every committed transaction is physically written to the disk, WAL is designed to force a disk flush (fsync). And frequent disk flush operations impact the performance as they introduce latency waiting for the disk to acknowledge that the data is safely written.

It also means giving up on persistence

The main thing to know about unlogged tables is that they’re not crash-safe: after a crash or unclean shutdown, Postgres truncates them.

This is because Postgres uses the WAL to replay any changes made since the last checkpoint. Without this logging, the table cannot be restored to a consistent state, so it is emptied instead. Then again, that’s exactly what you’d expect from a cache, right?

CREATE UNLOGGED TABLE cache (
    id serial PRIMARY KEY,
    key text UNIQUE NOT NULL,  -- UNIQUE already creates an index on "key"
    value jsonb,               -- the cached payload, stored as JSON
    inserted_at timestamp);    -- used later for expiration

CREATE INDEX idx_cache_key ON cache (key); -- redundant: the UNIQUE constraint above already indexes "key"
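For completeness, here’s a minimal sketch (my own, not from Martin’s post) of how such a table can be used as a key-value store: an upsert plays the role of SET and a lookup plays the role of GET.

-- "SET": insert or overwrite the value for a key
INSERT INTO cache (key, value, inserted_at)
VALUES ('user:42', '{"name": "Alice"}', NOW())
ON CONFLICT (key)
DO UPDATE SET value = EXCLUDED.value, inserted_at = EXCLUDED.inserted_at;

-- "GET": read the value back by key
SELECT value FROM cache WHERE key = 'user:42';

-- An existing table can also be converted in place (this rewrites the table):
-- ALTER TABLE cache SET UNLOGGED;
-- ALTER TABLE cache SET LOGGED;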

Expiration with Stored Procedures

Both Martin and Stephan say that expiration can be achieved with the use of stored procedures. And well, this is where complexity begins.

Stored procedures can be complicated, and Stephan even takes it a step further and suggests using ChatGPT to write them for us, implying that they are indeed not trivial.

CREATE OR REPLACE PROCEDURE expire_rows (retention_period INTERVAL) AS
$$
BEGIN
    DELETE FROM cache
    WHERE inserted_at < NOW() - retention_period;

    COMMIT;
END;
$$ LANGUAGE plpgsql;

CALL expire_rows('60 minutes'); -- This will remove rows older than 1 hour
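One thing worth noting (my own observation, not from either article): between runs of this procedure, expired rows are still physically present in the table. So if you want Redis-like TTL semantics, the read path has to filter them out itself, for example:

-- Read a key, ignoring entries older than the retention period
SELECT value
FROM cache
WHERE key = 'user:42'
  AND inserted_at > NOW() - INTERVAL '60 minutes';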

The truth is that most modern applications don’t rely on stored procedures anymore, and many software developers advocate against them nowadays.

Generally, the reason is that we want to avoid business logic leaking into our databases. Besides that, as the number of stored procedures grows, managing and understanding them can become cumbersome.

In addition, we will also need to call these stored procedures on a schedule. And to do that, we need to use an extension called pg_cron.
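Installing pg_cron is an extra step of its own. Roughly, a self-managed setup looks like the sketch below (managed services such as RDS or Cloud SQL have their own way of enabling the extension):

-- postgresql.conf (requires a server restart):
--   shared_preload_libraries = 'pg_cron'
--   cron.database_name = 'postgres'   -- database where the cron metadata lives

-- Then, connected to that database:
CREATE EXTENSION pg_cron;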

After installing the extension, we still have to create our schedulers:

-- Create a schedule to run the procedure every hour
SELECT cron.schedule('0 * * * *', $$CALL expire_rows('1 hour');$$);

-- List all scheduled jobs
SELECT * FROM cron.job;

Complexity is increasing, isn’t it?

Eviction with Stored Procedures

Stephan doesn’t even mention eviction in his article, while Martin says it could be considered optional, since expiration should already keep the table size down.

If you still want eviction, though, he suggests adding a column named last_read_timestamp to our table and running yet another stored procedure every once in a while to achieve a “least recently used” (LRU) eviction policy.
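Neither article shows how last_read_timestamp would be kept up to date, so here’s a hedged sketch of what that could look like: every read has to touch the row, which effectively turns each cache read into a write.

-- Add the column used to track reads
ALTER TABLE cache ADD COLUMN last_read_timestamp timestamp;

-- Every "GET" now also updates the row:
UPDATE cache
SET last_read_timestamp = NOW()
WHERE key = 'user:42'
RETURNING value;

With that column in place, the eviction procedure itself could look like the one below.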

CREATE OR REPLACE PROCEDURE lru_eviction(eviction_count INTEGER) AS
$$
BEGIN
    DELETE FROM cache
    WHERE ctid IN (
        SELECT ctid
        FROM cache
        ORDER BY last_read_timestamp ASC
        LIMIT eviction_count
    );

    COMMIT;
END;
$$ LANGUAGE plpgsql;

-- Call the procedure to evict a specified number of rows
CALL lru_eviction(10); -- This will remove the 10 least recently accessed rows

Redis offers eight types of eviction policy out of the box. Do you need another kind of eviction policy in place for your “Postgres Cache”? Just ask ChatGPT.

What about the performance?

Performance is what matters the most in this case, isn't it? After all, the reason why we generally want a caching service is because we want to access our data faster.

Greg Sabino Mullane did a great job in his article [PostgreSQL Unlogged Tables — Look Ma, No WAL!] comparing the performance of UNLOGGED and LOGGED tables in Postgres. The numbers he reports show that writing to an unlogged table is close to three times as fast as the same operation on a logged table. Specifically:

Unlogged Table:
Latency: 2.059 ms
TPS: 485.706168

Logged Table:
Latency: 5.949 ms
TPS: 168.087557

But what about reading performance?

And here’s the catch: Postgres’ performance optimization strategy for reads relies on shared buffers.

Shared buffers store frequently accessed data and indexes directly in memory, making them quickly accessible and reducing the need to read from disk. This improves query performance and data access for both logged and unlogged tables.

It’s true that unlogged tables might reside in these buffers, but they can, and will, be written to disk if they grow too large or memory is limited. Therefore, unlogged tables primarily enhance write speed, not read speed.

To prove it, I performed a quick experiment using pgbench. You can see how I did it here.
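For context, a read-only pgbench run against the cache table looks roughly like this. It’s a simplified sketch rather than the exact script from the linked setup; the file name, table size, and ID range are assumptions.

-- read_cache.sql: a custom pgbench script that reads one random row per transaction
\set row_id random(1, 100000)
SELECT value FROM cache WHERE id = :row_id;

-- Run it with, for example:
--   pgbench -n -f read_cache.sql -c 10 -T 60 mydatabase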

And the results show that the read performance of logged and unlogged tables is, in fact, quite similar. Reading from both types of tables took around 0.650 ms on average. Specifically:

Unlogged Table:
Latency: 0.679 ms
TPS: 14,724.204

Logged Table:
Latency: 0.627 ms
TPS: 15,946.025

This outcome reinforces the understanding that unlogged tables primarily enhance write performance. For read operations, the performance benefits of unlogged tables are not evident, as both logged and unlogged tables benefit similarly from Postgres’ caching and optimization strategies.

How does the performance compare to Redis?

In addition to benchmarking Postgres, I also ran an experiment with Redis. You can see the details of the experiment here. The results from Redis show a significant performance advantage in terms of reading and writing operations:

Reading:
Latency (p50): 0.095 ms
Requests per second (RPS): 892,857.12

Writing:
Latency (p50): 0.103 ms
Requests per second (RPS): 892,857.12

The performance comparison shows that Redis significantly outperforms Postgres in both writing and reading operations:

Redis achieves a latency of 0.095 ms, roughly 86% lower than the 0.679 ms observed for Postgres’ unlogged table, which makes it about seven times faster.

It also handles a much higher request rate, with 892,857.12 requests per second compared to Postgres’ 15,946.025 transactions per second.

And when it comes to writing operations, the significantly higher throughput and lower latency show that Redis provides superior performance as well.

What if I ran Postgres in RAM?

During the review of this article, a colleague from Xebia, Maksym Fedorov, said:

“What if now unlogged tables were created in tablespace that corresponds to memory mapped file? My guess we’d see completely different numbers.”

To test this, I ran benchmarks with Postgres data persisted in RAM. Surprisingly, there was no improvement in the results. The benchmark showed:

Reading:
Latency: 0.652 ms
TPS: 15,329.776954

After further research, I understood that even though data is stored in RAM, accessing it within Postgres’ shared buffers incurs additional costs. These costs arise from managing locks and other internal processes necessary for data integrity and concurrent access.

Postgres always checks if the data is in shared buffers first. If not, it copies the data from the tmpfs filesystem into the shared buffers before serving it, even when the database is persisted in RAM.
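For reference, this is roughly how a table can be placed on a RAM-backed tablespace. It’s a sketch assuming a tmpfs mount at /mnt/ramdisk, which is a hypothetical path, not necessarily the exact setup used for the benchmark above.

-- /mnt/ramdisk is assumed to be a tmpfs mount owned by the postgres user
CREATE TABLESPACE ram_cache LOCATION '/mnt/ramdisk';

CREATE UNLOGGED TABLE cache_in_ram (
    id serial PRIMARY KEY,
    key text UNIQUE NOT NULL,
    value jsonb,
    inserted_at timestamp
) TABLESPACE ram_cache;

Keep in mind that Postgres doesn’t expect a tablespace to vanish on reboot, so this is only suitable for throwaway benchmark data.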

Should I replace Redis with Postgres?

Based on this study, if you need a caching service to improve write performance, Postgres can be optimized using unlogged tables. However, while unlogged tables offer better write performance than logged tables, they still fall short compared to Redis.

The primary reason for using a caching service is to improve data retrieval time. Unlogged tables do not enhance read performance, while Redis excels with extremely fast read operations.

Additionally, Redis helps prevent a large volume of low-cost queries from hitting your database, a benefit unlogged tables cannot provide. Redis also offers built-in features like expiration, eviction policies, and more, which are complex to implement in Postgres.

Although managing Postgres may seem easier for some, turning Postgres into a cache doesn’t provide the advantages of a dedicated caching service. At the same time, Redis is easy and enjoyable to learn, deploy, and use.

For faster performance and simplicity, choosing a real caching service like Redis is the clear choice.

I hope you have enjoyed reading this story! I certainly had a lot of fun doing this research!

A special thank you to Maksym Fedorov, João Paulo Gomes, and Hernani Fernandes for reviewing this article.

Stay curious!

Contribute

Writing takes time and effort. I love writing and sharing knowledge, but I also have bills to pay. If you like my work, please consider donating through Buy Me a Coffee: https://www.buymeacoffee.com/RaphaelDeLio

Or by sending me BitCoin: 1HjG7pmghg3Z8RATH4aiUWr156BGafJ6Zw

Follow Me on Social Media

Stay connected and dive deeper into the world of Redis with me! Follow my journey across all major social platforms for exclusive content, tips, and discussions.

Twitter | LinkedIn | YouTube | Instagram
