Distributed Transaction Introduction

ChunTing Wu - Dec 24 '21 - Dev Community

This article will:

  1. explain what a distributed transaction is.
  2. introduce how to resolve the distributed transaction problem.
  3. provide a real-world example to demonstrate what a proper solution looks like.

To discuss distributed transactions, we should talk about transactions first. We will enter the world of distributed system design step by step. So, let's begin.

What is a Transaction

A traditional transaction system provides 4 guarantees by definition. They are known as ACID (a short code sketch follows this list):

  • Atomicity: Either all tasks in a transaction are done, or none of them are. Under this premise, we can ensure our database modifications are all-or-nothing.
  • Consistency: The operations in a transaction still follow the database's constraints. In addition, once the transaction is committed, the changes can be read correctly by others.
  • Isolation: Isolation is defined by 4 isolation levels: Read Uncommitted, Read Committed, Repeatable Read, and Serializable. Different isolation levels fit different user scenarios. Serializable treats all operations as if they ran in order; therefore, it never runs into race conditions.
  • Durability: Once the data changes are committed, they will not be lost even if the database crashes.
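
To make the all-or-nothing behavior concrete, here is a minimal sketch using mysql-connector-python; the connection parameters and the `accounts` table are assumptions for illustration only.

```python
# A minimal sketch of atomicity with MySQL; the connection parameters and the
# `accounts` table are placeholders, not part of the original article.
import mysql.connector

conn = mysql.connector.connect(
    host="localhost", user="app", password="secret", database="bank"
)
cur = conn.cursor()

try:
    conn.start_transaction()
    # Both updates must succeed together, or neither should take effect.
    cur.execute("UPDATE accounts SET balance = balance - 100 WHERE id = %s", (1,))
    cur.execute("UPDATE accounts SET balance = balance + 100 WHERE id = %s", (2,))
    conn.commit()          # make both changes durable
except mysql.connector.Error:
    conn.rollback()        # undo everything done so far in this transaction
    raise
finally:
    cur.close()
    conn.close()
```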

Who has Transactions

After describing ACID briefly, let's see who can support it. I pick 3 very common targets: MySQL, Mongo, and Redis.

  • MySQL: Of course, yes. As a representative of OLTP (Online Transactional Processing), MySQL supports transactions very well, and it is widely used in transactional schemes. Although the isolation level is usually set to Repeatable Read rather than Serializable, MySQL is still the most appropriate solution.
  • Mongo: Since v4.0 was released, Mongo has supported transactions. As long as Write Concern and Read Concern are used correctly, the Consistency of a transaction can be ensured (see the sketch after this list). In my opinion, Mongo is very suitable for de-normalized user scenarios.
  • Redis: Well, not really. We know Redis has atomic operations thanks to its single-threaded design. In addition, Redis has MULTI to execute multiple commands and EVAL to call a Lua script, so it can perform lots of data changes at once. Moreover, Redis can use AOF to achieve data persistence. Still, I have to say that Redis cannot support transactions. Redis may leverage those mechanisms to cover part of the transaction stories; nevertheless, it still does not fully provide ACID.
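
As an illustration of the Mongo bullet above, here is a minimal sketch with pymongo; the connection string, database, and collection names are placeholders, and a replica set is assumed since Mongo transactions require one.

```python
# A minimal sketch of a Mongo transaction with pymongo (server v4.0+, running
# as a replica set); database and collection names are illustrative only.
from pymongo import MongoClient
from pymongo.read_concern import ReadConcern
from pymongo.write_concern import WriteConcern

client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")
orders = client.shop.orders
payments = client.shop.payments

with client.start_session() as session:
    # Majority write concern plus snapshot read concern provide the
    # consistency guarantees mentioned above.
    with session.start_transaction(
        read_concern=ReadConcern("snapshot"),
        write_concern=WriteConcern("majority"),
    ):
        orders.insert_one({"order_id": 1, "status": "created"}, session=session)
        payments.insert_one({"order_id": 1, "amount": 100}, session=session)
        # Leaving the block commits; raising inside it aborts the transaction.
```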

Let me summarize in a table:

|             | MySQL | Mongo         | Redis |
|-------------|-------|---------------|-------|
| Atomicity   | V     | V             | X     |
| Consistency | V     | Write concern | V     |
| Isolation   | V     | Read concern  | V     |
| Durability  | V     | V             | X     |

How about Distributed Transactions

Alright, we have gone through the traditional transaction. Now we can talk about the distributed transaction. In order to narrow the problem scope, I will give a simple definition of the distributed transaction: providing ACID across multiple homogeneous/heterogeneous data sources. Here are a few examples:

  1. Insert a row into one MySQL database, and insert another row into another MySQL database.
  2. Insert a row into a MySQL database, and insert a document into a Mongo database.
  3. Insert a document into one Mongo shard, and insert another document into another Mongo shard.

The third example can be solved by Mongo's native transactions since Mongo v4.2 was released; therefore, we can skip it.

A Simple but Incorrect Design

Okay, let's take a quick look at a basic design concept. We perform the transaction on each data source; if one of them fails, we roll back the previous tasks.

Based on our simple design, an implementation on data sources A, B, and C looks like this:

[Figure 1: committing the transaction to data sources A, B, and C in sequence]

The problem here is: after we commit the result to A, it is hard to roll back A if B fails.
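
Here is a self-contained sketch of this naive design; `DataSource` is a hypothetical stand-in for a real database connection, just to show where the rollback problem appears.

```python
# A toy sketch of the naive sequential-commit design; DataSource is a made-up
# stand-in for a real database connection.
class DataSource:
    def __init__(self, name, fail=False):
        self.name, self.fail, self.rows = name, fail, []

    def write(self, row):
        if self.fail:
            raise RuntimeError(f"{self.name} is down")
        self.rows.append(row)          # committed immediately, as in figure 1

a, b = DataSource("A"), DataSource("B", fail=True)

a.write({"id": 1})                     # A commits successfully
try:
    b.write({"id": 1})                 # B fails...
except RuntimeError:
    # ...but A's change is already committed and visible; there is no
    # transaction left on A to roll back, which is exactly the problem above.
    print("B failed, A is left inconsistent:", a.rows)
```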

Even if we adopt a nested transaction, the problem is still not resolved, as the following diagram shows:

[Figure 2: nested transactions across data sources A, B, and C]

According to the diagram above, we can see the same problem as in figure 1. We cannot roll back C easily.

Two-phase Commit Protocol

In the field of microservices, this kind of problem is becoming common; hence, many protocols, such as XA and SAGA, have been proposed as solutions. Both XA and SAGA are based on the same theory, 2PC, a.k.a. the two-phase commit protocol. I won't dive deep into 2PC; I will just provide an introduction to explain its concept.

There is a coordinator in this story; everyone who wants to perform a distributed transaction has to register (the first commit) with it. The coordinator polls every data source participating in the transaction to ask for permission. If any participant says no, the coordinator rejects the transaction request and performs a rollback on every participant. If, fortunately, all data sources reply yes, the coordinator responds okay to the client. The client can then perform the real commit (the second commit) through the coordinator to complete the process.
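
To make the two phases concrete, here is a purely illustrative sketch; `Coordinator` and `Participant` are toy classes, not a real XA implementation.

```python
# A minimal sketch of the two-phase commit idea; these classes are made up
# for illustration and skip networking, timeouts, and recovery.
class Participant:
    def __init__(self, name, vote_yes=True):
        self.name, self.vote_yes, self.committed = name, vote_yes, False

    def prepare(self):            # phase 1: "can you commit?"
        return self.vote_yes

    def commit(self):             # phase 2: really commit
        self.committed = True

    def rollback(self):           # phase 2: abort instead
        self.committed = False

class Coordinator:
    def run(self, participants):
        # Phase 1: poll every participant for permission.
        if all(p.prepare() for p in participants):
            # Phase 2: everyone agreed, so ask all of them to commit.
            for p in participants:
                p.commit()
            return "committed"
        # Someone voted no: roll everyone back.
        for p in participants:
            p.rollback()
        return "aborted"

print(Coordinator().run([Participant("A"), Participant("B", vote_yes=False)]))
```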

[Figure 3: the two-phase commit flow between the client, the coordinator, and the data sources]

Seeing this, we can find 3 problems:

  1. It is very time-consuming. The client must wait for every data source to agree to its request, and then submit the real commit to them. Although this synchronization ensures strong consistency of the data changes, we have to invest more time.
  2. The coordinator is a SPOF (single point of failure). If the coordinator crashes, none of the distributed transactions work anymore.
  3. If the coordinator's commit fails due to network loss or other reasons, the data will be inconsistent across the participating data sources.

Eventual Consistency

It seems the multi-phase commit protocol is not feasible, so we should try to find a lightweight approach called eventual consistency. Literally, it means we give up strong consistency to reduce the implementation overhead. In other words, we hope the data becomes consistent in the end. To achieve eventual consistency, there are two keywords that should be kept in mind.

  1. Event sourcing: To keep this term simple, it means we record the state changes instead of the state itself. For instance, a bank passbook records every transaction, not only the balance.
  2. Idempotence: Idempotence is quite intuitive. We can perform a task many times, and the result is the same as if we performed it once (see the sketch after this list).
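
Here is a tiny illustration of the difference between an idempotent and a non-idempotent update; the event shape and balances are made up for the example.

```python
# Plain dicts stand in for data stores; the event carries both an absolute
# balance and a delta so we can compare the two styles of update.
idempotent_store, naive_store = {"tx-1": 1000}, {"tx-1": 1000}
event = {"id": "tx-1", "balance": 900, "delta": -100}

def apply_idempotent(store, event):
    # Writing the absolute balance carried by the event: replays are harmless.
    store[event["id"]] = event["balance"]

def apply_naive(store, event):
    # Re-applying a delta double-counts, so this version is NOT idempotent.
    store[event["id"]] += event["delta"]

for _ in range(2):               # simulate the processor retrying the event
    apply_idempotent(idempotent_store, event)
    apply_naive(naive_store, event)

print(idempotent_store)          # {'tx-1': 900}  -- same as applying it once
print(naive_store)               # {'tx-1': 800}  -- the retry corrupted it
```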

Combining these two concepts, we can design our system now. We make every transaction an event: when a client wants to perform a distributed transaction, the client emits an event to the event processor. The event processor receives an event and starts to do the transaction on those multiple data sources. After all tasks finish, the event can be marked as DONE, and the event processor can handle the next one. If the transaction cannot succeed on one of the data sources, we don't mark the event DONE, so the event processor will retry it later. Thanks to idempotence, the event can be processed more than once. Nevertheless, putting a retry limit in place is recommended.
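
Here is a minimal in-memory sketch of this event-processor loop; the data-source writes are placeholders and the retry limit is an assumed value.

```python
# A toy event processor; the write functions and event shape are placeholders
# for the real, idempotent writes to each data source.
import queue

MAX_RETRIES = 5
events = queue.Queue()

def write_to_mysql(event):   # must be idempotent: replays are expected
    pass

def write_to_mongo(event):   # must be idempotent as well
    pass

def process(event):
    try:
        write_to_mysql(event)
        write_to_mongo(event)
        event["status"] = "DONE"            # every data source succeeded
    except Exception:
        event["retries"] = event.get("retries", 0) + 1
        if event["retries"] < MAX_RETRIES:
            events.put(event)               # not DONE yet: retry later
        else:
            event["status"] = "FAILED"      # give up and alert a human

# The client emits an event instead of touching the databases directly.
events.put({"id": "tx-1", "payload": {"amount": 100}})
while not events.empty():
    process(events.get())
```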

Let's see a diagram showing the design flow:

[Figure 4: the client emits events to a queue, and the event processor applies them to each data source]

Perfect. With this design concept, we can handle the distributed transaction correctly, and data consistency will eventually be maintained.

Even though there is a queue in our design, in fact you don't need a real queue in your system. There are many alternatives, e.g., storing all events in a MySQL table with a status column. The core of this design is event sourcing and idempotence; you can accomplish them in whatever way you prefer.
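
For example, the table-based alternative could look like the following sketch; the `events` table schema and column names are assumptions for illustration.

```python
# A possible sketch of using a MySQL table instead of a real queue; schema,
# column names, and connection parameters are all illustrative assumptions.
import mysql.connector

SCHEMA = """
CREATE TABLE IF NOT EXISTS events (
    id       BIGINT AUTO_INCREMENT PRIMARY KEY,
    payload  JSON NOT NULL,
    status   ENUM('PENDING', 'DONE', 'FAILED') DEFAULT 'PENDING',
    retries  INT DEFAULT 0
)
"""

conn = mysql.connector.connect(host="localhost", user="app",
                               password="secret", database="bank")
cur = conn.cursor(dictionary=True)
cur.execute(SCHEMA)

# The event processor simply polls for pending events instead of consuming
# from a message broker.
cur.execute("SELECT id, payload FROM events WHERE status = 'PENDING' LIMIT 10")
for event in cur.fetchall():
    try:
        # ... apply the event to every data source idempotently ...
        cur.execute("UPDATE events SET status = 'DONE' WHERE id = %s",
                    (event["id"],))
    except Exception:
        cur.execute("UPDATE events SET retries = retries + 1 WHERE id = %s",
                    (event["id"],))
conn.commit()
cur.close()
conn.close()
```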
