September 4, 2025
Summary: In this tutorial, we will give some example queries you can use with pg_stat_statements as a starting point for different challenges.
Reduce your workload
If you’re looking to reduce load on your system as a whole, a great starting point is looking at your statements by total time, for example:
select
(total_exec_time + total_plan_time)::int as total_time,
total_exec_time::int,
total_plan_time::int,
mean_exec_time::int,
calls,
query
from
pg_stat_statements
order by
total_time desc
limit 50;
This will return the 50 queries that take the most time across all calls, meaning that fast queries that are executed a lot can rank ahead of slow queries that are executed infrequently. This is likely to be a good proxy for which queries are responsible for using the most system resources!
Here, we cast the timing statistics to integers, as I like to view them to the nearest millisecond, but you can remove the casts if you prefer, or even get creative with displaying them in different ways.
If you’re on Postgres version 12 (or older), you won’t have access to the planning time statistics (so drop those columns), and you’ll also need to substitute total_exec_time and mean_exec_time with total_time and mean_time respectively.
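For reference, a sketch of the equivalent query for version 12 and older, using the old column names:

select
total_time::int,
mean_time::int,
calls,
query
from
pg_stat_statements
order by
total_time desc
limit 50;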
If you’re on Postgres version 13 (or newer) and notice that your total_plan_time column is all zeros, you may want to look into pg_stat_statements.track_planning (which is off by default).
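If you have superuser access, you can turn it on without a restart, for example:

alter system set pg_stat_statements.track_planning = on;
select pg_reload_conf();

Bear in mind that tracking planning adds a little overhead, which is part of why it’s off by default.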
Speed up your slowest queries
If your customers are complaining about performance, and your main aim is to speed up your slowest queries, you can look at your top queries by mean time, for example:
select
(mean_exec_time + mean_plan_time)::int as mean_time,
mean_exec_time::int,
mean_plan_time::int,
calls,
query
from
pg_stat_statements
--where
-- calls > 1
-- and userid = 99999
order by
mean_time desc
limit 50;
This is very similar to the example above, with the same version 13 caveats about column names and planning statistics!
In the commented-out where clause, you can see a couple of options for reducing the noise in your results. Filtering on userid can help exclude slow queries from users whose performance matters less to you. Similarly, limiting to queries executed more than a minimum number of times can be handy if you have folks executing one-off slow queries that you wish to exclude.
Reduce your I/O
Another way to think about system resource usage is in terms of buffers. Buffer statistics are very useful for optimizing individual queries, but you may also wish to look at your overall workload through them.
You may wish to sum and order these differently, but here we are summing all of the buffer statistics together to give a very crude proxy for “work done”. It’s far from perfect, but I’ve found it to be a surprisingly practical starting point:
select
shared_blks_hit + shared_blks_read + shared_blks_dirtied + shared_blks_written
+ local_blks_hit + local_blks_read + local_blks_dirtied + local_blks_written
+ temp_blks_read + temp_blks_written as total_buffers,
(total_exec_time + total_plan_time)::int as total_time,
calls,
shared_blks_hit as sbh,
shared_blks_read as sbr,
shared_blks_dirtied as sbd,
shared_blks_written as sbw,
local_blks_hit as lbh,
local_blks_read as lbr,
local_blks_dirtied as lbd,
local_blks_written as lbw,
temp_blks_read as tbr,
temp_blks_written as tbw,
query
from
pg_stat_statements
order by
total_buffers desc
limit 50;
You may wish to mix and match some of these buffer statistics with the queries above, for example to see total_buffers for each of the queries you’re ranking by total time.
In this context we are looking at the columns in terms of blocks, but if you prefer to view any of them in bytes, you might like the function pg_size_pretty(). If you do, remember to multiply by your block size (8192 bytes by default).
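As a sketch, reading the block size from the server settings rather than hard-coding it, you could view shared reads in bytes like so:

select
pg_size_pretty(shared_blks_read * current_setting('block_size')::bigint) as shared_read,
query
from
pg_stat_statements
order by
shared_blks_read desc
limit 10;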
Tune your JIT settings
As of Postgres 15, pg_stat_statements includes JIT compilation statistics (and I/O timings), perhaps because lots of folks were bumping into issues with JIT compilation kicking in too early and being a net detriment to their queries and/or workload.
Here is an example query that we can use to see the queries that had the highest JIT compilation time (as a percentage of their time):
select
(100 * (jit_generation_time + jit_inlining_time + jit_optimization_time + jit_emission_time) / (total_exec_time + total_plan_time)) as jit_total_time_percent,
calls,
jit_functions,
jit_generation_time,
jit_inlining_count,
jit_inlining_time,
jit_optimization_count,
jit_optimization_time,
jit_emission_count,
jit_emission_time,
query
from
pg_stat_statements
order by
jit_total_time_percent desc
limit 50;
This will order your queries by how high a percentage of their total time was spent doing just-in-time compilation. The idea is that higher percentages are likely to correlate with JIT not being beneficial, but this at least gives you a starting point for checking things system-wide.
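If the results suggest JIT is kicking in too early for your workload, the usual levers are the cost thresholds (or disabling JIT entirely). As a sketch, the specific values here are illustrative, not recommendations:

alter system set jit_above_cost = 500000;           -- default 100000
alter system set jit_inline_above_cost = 2500000;   -- default 500000
alter system set jit_optimize_above_cost = 2500000; -- default 500000
select pg_reload_conf();
-- or, to disable JIT entirely:
-- alter system set jit = off;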
Conclusion
We can use pg_stat_statements both for system-wide improvements and to help with one-off issues. Hopefully you got an idea or two from this.