Comparative Analysis of PostgreSQL Insert Performance

By John Doe, November 5, 2025

Summary: In this article, we explore PostgreSQL’s insertion performance advantages and the principles behind them.

Use Case: Table Data Insertion Scenario

Below, we create a table in a PostgreSQL database, insert ten million rows, run a cleanup (VACUUM) on the table, and observe the performance of each operation.

CREATE TABLE test_table(id int, info text);

\timing

INSERT INTO test_table
  SELECT n, md5(random()::text)
    FROM generate_series(1, 10000000) as n;
Time: 11973.773 ms (00:11.974)

VACUUM (verbose) test_table;
INFO:  vacuuming "postgres.public.test_table"
INFO:  finished vacuuming "postgres.public.test_table": index scans: 0
pages: 0 removed, 83334 remain, 83334 scanned (100.00% of total)
tuples: 0 removed, 10000000 remain, 0 are dead but not yet removable
...
Time: 870.670 ms

It is worth noting that PostgreSQL’s insert operations incur additional deferred cleanup costs. These costs do not show up during a single batch insert, but the background cleanup and maintenance load during normal business operation indirectly reduces the database’s transaction processing performance.
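One way to make this deferred work visible is to query PostgreSQL’s cumulative statistics. A sketch using the standard pg_stat_user_tables view (the figures you see will depend on your autovacuum settings):

```sql
-- Inspect accumulated maintenance work for the test table.
-- n_dead_tup and the vacuum timestamps show how much cleanup
-- has been deferred to background processes.
SELECT relname,
       n_tup_ins,        -- rows inserted
       n_dead_tup,       -- dead tuples awaiting cleanup
       last_vacuum,      -- last manual VACUUM
       last_autovacuum   -- last background VACUUM
  FROM pg_stat_user_tables
 WHERE relname = 'test_table';
```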

Next, we will create the same table as above in a Redrock Postgres database, insert the same data, and observe the performance of the insertion operation.

CREATE TABLE test_table(id int, info text);

\timing

INSERT INTO test_table
  SELECT n, md5(random()::text)
    FROM generate_series(1, 10000000) as n;
Time: 15654.149 ms (00:15.654)

Because Redrock Postgres introduces a rollback log, an insert must not only add tuples to the heap table but also record them in the rollback log. This results in a performance overhead of about 30.7%: (15.654 - 11.974) / 11.974 ≈ 0.307.

For a performance analysis of PostgreSQL’s update operations, see Write amplification caused by updates in PostgreSQL. In short, the time consumed by a PostgreSQL update is proportional to the number of indexes on the table, whereas the update performance of Redrock Postgres remains stable.
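The relationship between update cost and index count can be sketched as follows (the index name is illustrative; actual timings depend on hardware and configuration):

```sql
-- Baseline: update a slice of rows while the table has no indexes.
\timing
UPDATE test_table SET info = md5(random()::text) WHERE id <= 100000;

-- Add an index on the updated column. HOT updates are no longer
-- possible for these rows, so each UPDATE must also insert a new
-- index entry, and the measured time grows accordingly.
CREATE INDEX idx_test_info ON test_table (info);
UPDATE test_table SET info = md5(random()::text) WHERE id <= 100000;
```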

Cleanup Costs After INSERT

In fact, after an INSERT completes in PostgreSQL, a deferred second round of page modifications follows. However, this process often goes unnoticed.

When PostgreSQL inserts data into a table, it creates a new tuple. The tuple header records the ID of the transaction that created it, known as the tuple’s xmin. PostgreSQL stores the current state of each transaction in the Commit Log (CLOG), and checking the status of a large number of transactions in the CLOG is expensive. Therefore, PostgreSQL caches transaction state directly in the tuple header: for example, if a later SELECT finds that the xmin transaction has completed, it saves this information in the tuple’s so-called “hint bits”. Setting hint bits dirties the table pages, which must then be written back to disk, and when data checksums or wal_log_hints are enabled it can also generate additional WAL.
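The effect of hint bits can be observed directly with the pageinspect contrib extension. A sketch, assuming the test_table created above: after a freshly inserted page has been read once, the HEAP_XMIN_COMMITTED bit (0x0100) appears in each tuple’s t_infomask.

```sql
CREATE EXTENSION IF NOT EXISTS pageinspect;

-- Examine tuple headers on the first page of the table.
-- t_infomask & 256 (HEAP_XMIN_COMMITTED, 0x0100) is set once a
-- later read has cached "xmin committed" in the tuple header.
SELECT lp,
       t_xmin,
       (t_infomask & 256) <> 0 AS xmin_committed_hint
  FROM heap_page_items(get_raw_page('test_table', 0));
```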

In addition, for the pages filled by the insert, the VACUUM process must still update the visibility map and the free space map.
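Both maps can be inspected with contrib extensions; a sketch, using block 0 only as an example:

```sql
CREATE EXTENSION IF NOT EXISTS pg_visibility;
CREATE EXTENSION IF NOT EXISTS pg_freespacemap;

-- Visibility map flags for the first page: both are false right
-- after the INSERT and become true only once VACUUM processes it.
SELECT all_visible, all_frozen
  FROM pg_visibility_map('test_table', 0);

-- Free space map entry for the first page, maintained by VACUUM.
SELECT pg_freespace('test_table', 0);
```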

In other words, a PostgreSQL INSERT defers part of its cost to subsequent SELECT and VACUUM operations.

Test Conclusions

These test results confirm the design philosophies of the two storage engines: PostgreSQL performs excellently under append-only workloads, reflecting the advantages of its storage design, while Redrock Postgres has clear advantages in complex transaction processing and the maturity of an enterprise-grade kernel.

For write-intensive business scenarios, PostgreSQL is a highly competitive choice. Examples include log recording, event collection, and time-series data ingestion, as well as other systems with extremely frequent writes.

For core transaction business scenarios that require complex transaction processing, Redrock Postgres performs more strongly. Examples include critical systems such as financial transactions and inventory management, as well as other core systems with extremely frequent updates.