March 5, 2025
Summary: in this tutorial, you will learn how to avoid deadlocks in PostgreSQL.
Getting ready
In any database management system, deadlocks can occur due to concurrent resource locking. It is the database engine's responsibility to detect deadlocks, and the application's responsibility to prevent them. The PostgreSQL engine can detect deadlocks, and it also provides a few features that help developers prevent deadlocks in their application code.
How to do it…
Let's produce a simple deadlock situation, and then walk through the options PostgreSQL provides to troubleshoot it:
Open two different database sessions and execute the SQL statements as follows:
Session 1 | Session 2
---|---
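A mutual-locking sequence of this kind can be reproduced as follows. This is a hedged sketch, not the original table contents: the `accounts` table, its rows, and the amounts are illustrative assumptions.

```sql
-- Setup (run once in either session):
CREATE TABLE accounts (id int PRIMARY KEY, balance numeric);
INSERT INTO accounts VALUES (1, 100), (2, 200);

-- Session 1:
BEGIN;
UPDATE accounts SET balance = balance - 10 WHERE id = 1;

-- Session 2:
BEGIN;
UPDATE accounts SET balance = balance - 10 WHERE id = 2;

-- Session 1 (blocks, waiting for session 2's lock on id = 2):
UPDATE accounts SET balance = balance + 10 WHERE id = 2;

-- Session 2 (closes the cycle; PostgreSQL aborts one transaction):
UPDATE accounts SET balance = balance + 10 WHERE id = 1;
```

Once the cycle forms, one of the two sessions (typically the one that started waiting first) fails with `ERROR: deadlock detected`, and its DETAIL line names the two blocking process IDs; the other session then proceeds normally.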
From the preceding example, session 1 received the deadlock error due to mutual locking between sessions 1 and 2. The DETAIL message clearly says that process 8913 is waiting for a transaction held by process 10417, while process 10417 is waiting for a transaction held by process 8913.
As you can see from the preceding example, deadlocks do not cause any data loss; they only cause a transaction failure. Deadlocks usually occur when multiple transactions wait for each other to release locks, forming a circular dependency. So, to avoid deadlocks, the key is to break this dependency cycle.
Access data in a fixed order
First, let's consider the design of the transactions. If transactions access tables or rows in different orders, circular dependencies can form.
If all transactions operate on the data in the same order, different sessions will request locks in the same order, thus avoiding circular dependencies. A database usually contains multiple tables, and if each transaction updates those tables in the same order, such as Table A → Table B, the possibility of deadlocks is reduced. Similarly, a table contains many rows, and the possibility of deadlocks is also reduced if each transaction updates rows in the same order, for example by primary key.
Session 1 | Session 2
---|---
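With a fixed ordering, the transfer logic from the earlier deadlock example never forms a cycle. A minimal sketch, reusing the hypothetical `accounts` table and always locking the smaller primary key first:

```sql
-- Both sessions update rows in ascending primary-key order.

-- Session 1:
BEGIN;
UPDATE accounts SET balance = balance - 10 WHERE id = 1;  -- lock id 1 first
UPDATE accounts SET balance = balance + 10 WHERE id = 2;  -- then id 2
COMMIT;

-- Session 2 (same order, so it simply waits on id 1 instead of deadlocking):
BEGIN;
UPDATE accounts SET balance = balance - 10 WHERE id = 1;
UPDATE accounts SET balance = balance + 10 WHERE id = 2;
COMMIT;
```

Because both sessions queue on the same first row, the second session blocks briefly and then completes; no circular wait can arise.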
However, in practice, it can be difficult to ensure that all transactions follow the same order, especially when the business logic is too complex.
Using FOR UPDATE
This approach tries to avoid deadlocks by pre-locking, in each session, all the records that the session is going to update. To pre-lock the required tuples, we use the FOR UPDATE clause in the SELECT statement. Let's see how to fix the preceding problem using this approach:
Session 1 | Session 2
---|---
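A sketch of the pattern, again using the hypothetical `accounts` table: each session first locks every row it will touch with a single `SELECT ... FOR UPDATE` before issuing any updates.

```sql
-- Session 1:
BEGIN;
-- Pre-lock all rows this transaction will update, in one statement:
SELECT * FROM accounts WHERE id IN (1, 2) ORDER BY id FOR UPDATE;
UPDATE accounts SET balance = balance - 10 WHERE id = 1;
UPDATE accounts SET balance = balance + 10 WHERE id = 2;
COMMIT;

-- Session 2 runs the same pattern; its SELECT ... FOR UPDATE blocks
-- until session 1 commits, so the updates never interleave.
```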
From the preceding example, the session 2 transaction waits until the session 1 transaction completes. Here, we simply made the two transactions run in a serialized fashion, so they cannot conflict with each other. Note that this approach cannot be used in SQL queries that perform set operations; that is, usage of FOR UPDATE is restricted in queries that use UNION/INTERSECT/EXCEPT.
Advisory locks
PostgreSQL provides advisory locks, an application-level locking mechanism that we can enforce from the application to coordinate concurrent data access. Let's see how to avoid deadlocks using advisory locks:
Session 1 | Session 2
---|---
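A sketch with session-level advisory locks, using the hypothetical `accounts` table; the lock key `1000` is an arbitrary number the application chooses to represent this piece of work:

```sql
-- Session 1:
SELECT pg_advisory_lock(1000);    -- session-level lock on an agreed key
BEGIN;
UPDATE accounts SET balance = balance - 10 WHERE id = 1;
UPDATE accounts SET balance = balance + 10 WHERE id = 2;
COMMIT;
SELECT pg_advisory_unlock(1000);  -- must be released explicitly

-- Session 2 blocks on the same key until session 1 unlocks:
SELECT pg_advisory_lock(1000);
```

If you prefer locks that are released automatically at commit or rollback, PostgreSQL also offers the transaction-level variant, `pg_advisory_xact_lock`.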
In the preceding example, we serialize the transactions by using advisory locks. The main disadvantage of advisory locks is that the application must enforce the lock and unlock behavior. Also, session-level advisory locks are not released even if the transaction fails or is rolled back. However, when a session is closed, all advisory locks associated with that session are released automatically.
Note: Refer to the PostgreSQL documentation for more information about advisory locks: Advisory Locks.
Best practices for administrators
In PostgreSQL, many administrative operations can also cause deadlocks, and administrators can reduce the probability of deadlocks by using the following strategies:
- ALTER TABLE: Run `ALTER TABLE` during off-peak hours; this command requires an `ACCESS EXCLUSIVE` lock, which blocks virtually every other operation on that table.
- CREATE INDEX: Use `CREATE INDEX CONCURRENTLY` whenever possible.
- Manual vacuum: `VACUUM FULL` takes an `ACCESS EXCLUSIVE` lock on the table, so it should be used only in rare cases.
- Reduce the duration a transaction holds locks: Long-running transactions hold locks for long periods, increasing the probability of conflicts. Split transactions into smaller parts and commit (or roll back) as soon as possible, and avoid time-consuming operations inside transactions (e.g., external API calls, complex calculations).
- Set the lock timeout: Let statements be cancelled automatically after waiting too long for a lock, to avoid indefinite waits, and introduce a retry mechanism in your application (the application layer can catch the error and retry):

  `SET lock_timeout = '2s';`

- Logging and monitoring: Enable deadlock logging, and configure and monitor deadlock metrics.
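A possible starting point for the logging and monitoring item; the parameter values here are illustrative, not prescriptive:

```sql
-- Log any process that waits longer than deadlock_timeout for a lock
-- (deadlock reports themselves are always logged):
ALTER SYSTEM SET log_lock_waits = on;
ALTER SYSTEM SET deadlock_timeout = '1s';
SELECT pg_reload_conf();

-- Monitor the cumulative deadlock count per database:
SELECT datname, deadlocks FROM pg_stat_database;
```

The `deadlocks` counter in `pg_stat_database` grows monotonically, so a monitoring system can alert on its rate of change rather than its absolute value.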