Vacuum & autovacuum

Routinely running VACUUM is an essential table and index maintenance task. In modern versions of Postgres and on your Crunchy Bridge instance, vacuuming will be done periodically by the autovacuum daemon. In general this eliminates the need for manual VACUUM commands.

In this guide, we'll go into:

  • the duties performed by vacuum operations
  • the trigger thresholds autovacuum uses to determine when a given table needs vacuuming
  • how to configure and monitor autovacuum

Vacuum's duties

Reclaiming disk space for re-use and bloat prevention

PostgreSQL's Multiversion Concurrency Control (MVCC) does not use UNDO logs to handle rollbacks of deleted or updated tuples. Instead, when a transaction deletes a tuple, it simply marks the tuple as deleted with its transaction ID (aka XID) in a hidden field. Tuple updates work by marking the old tuple as deleted and inserting a new tuple with the new values.

The upside of this tuple handling is instant rollbacks for transactions, no matter their size. The downside is that deleted tuples accumulate in the table and index data files. An excessive amount of these dead tuples in a table is called "bloat". Vacuuming is needed to find deleted tuples that are no longer visible to any open transactions and make their space available for re-use, thereby managing bloat.

At the end of each vacuum process the table's Free Space Map is updated. The free space map is an on-disk file that tracks which pages have free space in them. Postgres uses it to find pages with room for new insert and update data before resorting to allocating new pages. The contents of a table's free space map can be examined using the functions provided by the pg_freespacemap extension.
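For example, you can get a rough picture of how much reusable space the free space map currently records for a table (a sketch using the very_large_table example name from later in this guide; the pg_freespacemap extension must be installed in the database first):

CREATE EXTENSION IF NOT EXISTS pg_freespacemap;

-- pages with recorded free space and the total free space tracked for very_large_table
SELECT count(*) AS pages_with_free_space,
       pg_size_pretty(sum(avail)) AS total_free_space
FROM pg_freespace('very_large_table')
WHERE avail > 0;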

Read more about what bloat is in our Insights and Metrics section.

Avoiding transaction ID wraparound

Postgres' transaction IDs (XIDs) are generated from a 32-bit counter. This means that there are only 4 billion distinct XID values available to use. Once the counter reaches its maximum value it starts over at 1, or "wraps". Thus, the transaction counter is a circular space that does not have any real "beginning" or "end". When an XID value is assigned to a transaction, the 2 billion XIDs before it going around the circle are considered to be in the past and the 2 billion values after it are considered to be in the future. As XIDs are continually allocated for new transactions, the past and future are actually adjacent, constantly moving windows rotating around the circular XID space.

When tuples are inserted they have the inserting transaction's XID included in a hidden field. Once 2B transactions have elapsed after that tuple was written, it will then be in the future window with respect to the current XID on the system, meaning the tuple's data would suddenly become invisible to new transactions. If this is allowed to actually happen, it is called a "transaction ID wraparound failure". This type of failure is catastrophic.

To prevent XID wraparound failures from happening, vacuum takes tuples out of the XID window calculations entirely by marking them as "frozen". This is done for any tuples that vacuum encounters that have XID values older than the current vacuum_freeze_min_age setting. Once a tuple is frozen, it is known to be in the past to all future transactions, and the XID value it had been assigned can safely be re-used.

You can learn more about XID wraparound failure and vacuum in the Postgres docs.

Vacuum's other duties

Vacuum also:

  • Updates table planner statistics stored in pg_statistic when VACUUM ANALYZE tablename is run. Collection of planner statistics without vacuuming can also be done with ANALYZE tablename.

  • Maintains each table's visibility map. The visibility map is another on-disk file, this one tracks which pages contain only tuples that are visible to all open transactions and future transactions, i.e. they have no dead tuples. This is then used by subsequent vacuums to know that those pages can be skipped and by index tuple visibility lookups for index scans and index-only scans. The visibility map is also used to track pages that contain all-frozen tuples. The contents of a table's visibility map can be examined using the functions provided by the pg_visibility extension.

  • Frees pages back to the file system if they are the last pages in the table and contain no tuples visible to any open transactions. This is the only case where a normal vacuum (i.e. not VACUUM FULL) operation will actually free up disk space and it only happens if the vacuum operation can quickly acquire a short-lived exclusive lock on the table.

  • Updates table pg_class.reltuples values with the number of visible rows in the table. This is then used as part of the threshold equations detailed below, which determine when to run new autovacuum and autoanalyze operations. While this number will quickly be out of date for tables actively receiving writes, it can be used for a quick SELECT COUNT(*) FROM tablename estimate when accuracy is not important (see the example after this list).

  • Updates hint bits. Hint bits allow for faster commit confirmation lookups for queries and vacuums.
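As an example of that reltuples point, a rough row count can be read straight from pg_class instead of running a full COUNT(*) scan (note that on newer Postgres versions reltuples is -1 for a table that has never been vacuumed or analyzed):

SELECT reltuples::bigint AS approximate_row_count
FROM pg_class
WHERE relname = 'very_large_table';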

Autovacuum configuration and tuning

During the course of operation, Postgres constantly updates the system statistics views. Of particular relevance here is the pg_stat_user_tables view, which shows when each table's last vacuum and analyze jobs ran, along with the counters the autovacuum daemon uses to determine when new autovacuum and autoanalyze jobs should be run.
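For example, you can pull the vacuum-related columns for a single table directly from that view (the table name is illustrative):

SELECT relname,
       last_vacuum, last_autovacuum, last_analyze, last_autoanalyze,
       n_live_tup, n_dead_tup, n_ins_since_vacuum, n_mod_since_analyze,
       vacuum_count, autovacuum_count, analyze_count, autoanalyze_count
FROM pg_stat_user_tables
WHERE relname = 'very_large_table';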

The next sections detail the threshold functions that the autovacuum daemon uses to determine when it should run new table autovacuum and autoanalyze jobs. The values shown in parentheses in the calculations and configuration settings below are their defaults.

Delete/update traffic vacuum threshold for finding dead tuples for re-use

pg_stat_user_tables.n_dead_tup > autovacuum_vacuum_threshold(50) + autovacuum_vacuum_scale_factor(0.2) * pg_class.reltuples

Each table's pg_stat_user_tables.n_dead_tup value is incremented for each committed DELETE and UPDATE by the number of tuples the query deleted or updated. This threshold formula compares that counter against the table's tuple count at the time its last vacuum or analyze completed (when pg_class.reltuples is updated), scaled by autovacuum_vacuum_scale_factor and offset by autovacuum_vacuum_threshold.

Put another way: with the default values, a new autovacuum job will be triggered by this threshold after 50 plus 20% of the table's tuples have been deleted or updated.

For example, if a table has 1000 live tuples in it when a vacuum completes, that value is set in its pg_class.reltuples entry. The next autovacuum due to delete/update traffic will happen once more than 50 + 0.2 * 1000 -> 250 tuples have been deleted or updated.

These values are fine for small tables. For large, growing tables, that 20% scale factor means ever more tuples need to be deleted or updated before a vacuum is triggered. Fortunately, you can configure autovacuum to run more frequently on a per-table basis.

Tuning autovacuum

A good rule of thumb for large tables with lots of turnover is to set the autovacuum_vacuum_scale_factor so that automatic vacuums are triggered when there are about 2 million tuples that have been updated or deleted. You can use the current row count for a given table to find an appropriate scale factor:

autovacuum_vacuum_scale_factor = 2M / current row count
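You can let Postgres do that arithmetic for you using the reltuples estimate from pg_class (a sketch; the result is only as current as the table's last vacuum or analyze):

SELECT round((2000000 / reltuples)::numeric, 4) AS suggested_scale_factor
FROM pg_class
WHERE relname = 'very_large_table'
  AND reltuples > 0;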

Then you can use ALTER TABLE to set it to your computed scale factor. We'll use 0.015 here, which means that autovacuum will be triggered when the number of updated or deleted tuples in this table equals autovacuum_vacuum_threshold + 1.5% of the total size of our very_large_table:

ALTER TABLE very_large_table SET (autovacuum_vacuum_scale_factor = 0.015);

Alternatively, if you really want to ensure that this vacuum threshold is triggered at exactly 2 million rows updated or deleted, you could set autovacuum_vacuum_scale_factor to 0 and autovacuum_vacuum_threshold to 2 million:

ALTER TABLE very_large_table SET (autovacuum_vacuum_scale_factor = 0, autovacuum_vacuum_threshold = 2000000);

Insert traffic vacuum threshold

pg_stat_user_tables.n_ins_since_vacuum > autovacuum_vacuum_insert_threshold(1000) + autovacuum_vacuum_insert_scale_factor(0.2) * pg_class.reltuples

This threshold works the same way as the delete/update threshold but for insert traffic, which is tracked in each table's pg_stat_user_tables.n_ins_since_vacuum field. Without this, automatic vacuums would only be triggered for insert-only tables by the aggressive autovacuum freeze trigger. This would be especially painful for a database built from the restore of a database dump as all of the insert-only tables in it would hit that aggressive autovacuum freeze trigger at the same time.

The 2 million rows threshold is again a good rule of thumb for inserts to insert-only tables:

ALTER TABLE very_large_insert_only_table SET (autovacuum_vacuum_insert_scale_factor = 0, autovacuum_vacuum_insert_threshold = 2000000);

Freeze threshold for preventing XID wraparound failures

age(pg_class.relfrozenxid) > autovacuum_freeze_max_age(200M)

If a table's pg_class.relfrozenxid value grows older than autovacuum_freeze_max_age then a vacuum will be forced to prevent XID wraparound. Normal vacuums use the visibility map to skip pages marked as having all-visible tuples even if they are not marked as all-frozen. This aggressive vacuum visits all pages not marked as all-frozen. Thus, this will be a heavier vacuum operation IO-wise since it may read pages with all-visible tuples that have not yet been marked as frozen.
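To see which tables are closest to this trigger, you can compare each table's relfrozenxid age against the global setting (a sketch; per-table autovacuum_freeze_max_age overrides, discussed below, are not taken into account here):

SELECT c.oid::regclass AS table_name,
       age(c.relfrozenxid) AS frozenxid_age,
       current_setting('autovacuum_freeze_max_age')::int - age(c.relfrozenxid) AS txns_until_freeze_vacuum
FROM pg_class c
WHERE c.relkind IN ('r', 'm', 't')
ORDER BY age(c.relfrozenxid) DESC
LIMIT 10;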

Considering that wraparound happens when the age of the oldest unfrozen tuple in the server grows larger than 2 billion, the autovacuum_freeze_max_age default of 200 million is actually very conservative. It is common to increase this to 400 or 500 million for busy tables to allow more time for them to be vacuumed by the delete/insert traffic triggers. This can also be done on a per-table basis:

ALTER TABLE very_large_table SET (autovacuum_freeze_max_age = 400000000);

There is also a setting called vacuum_freeze_table_age (150M), which determines the pg_class.relfrozenxid age at which a vacuum triggered or run for any reason will use the aggressive vacuuming strategy. While this can be set as high as 2 billion, the server will limit its effective value to 0.95 * autovacuum_freeze_max_age so that a periodic or manual vacuum run before the autovacuum_freeze_max_age threshold is reached will use this aggressive strategy.

Note that empty tables and those that receive no write traffic will still be vacuumed every time autovacuum_freeze_max_age transactions have been run on the server since their last vacuum. Once they have been fully frozen these vacuums won't do much more than read their visibility maps and update their pg_class.relfrozenxid values, but this becomes important with massive numbers of tables (hundreds of thousands), because simply visiting each one of them can take a lot of time and these automatic vacuums take priority over the delete/insert traffic-based automatic vacuums.

Another important consideration is that tuples can only be frozen once they are older than the "xmin horizon", aka "oldest xmin", which is the oldest of the current values of:

  • All pg_stat_activity.backend_xmin values
  • All pg_stat_replication.backend_xmin values for connected replicas running with hot_standby_feedback enabled (which is the default for all Crunchy Bridge HA and read replica servers)
  • All pg_replication_slots.xmin and pg_replication_slots.catalog_xmin values
  • All pg_prepared_xacts.transaction values

If you see anti-wraparound vacuums repeatedly running on a table or set of tables then it is likely that something from the above list is holding back the xmin horizon, such as a long running transaction or inactive replication slot. A situation like that should be remedied so that vacuum can do its job freezing old tuples.
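A quick way to find the culprit is to check each of those sources and see which one is holding the oldest xmin (a sketch; the xmin_age column shows how many transactions old each value is):

SELECT 'active backend' AS source, pid::text AS identifier, backend_xmin AS xmin, age(backend_xmin) AS xmin_age
FROM pg_stat_activity WHERE backend_xmin IS NOT NULL
UNION ALL
SELECT 'replica with hot_standby_feedback', pid::text, backend_xmin, age(backend_xmin)
FROM pg_stat_replication WHERE backend_xmin IS NOT NULL
UNION ALL
SELECT 'replication slot xmin', slot_name::text, xmin, age(xmin)
FROM pg_replication_slots WHERE xmin IS NOT NULL
UNION ALL
SELECT 'replication slot catalog_xmin', slot_name::text, catalog_xmin, age(catalog_xmin)
FROM pg_replication_slots WHERE catalog_xmin IS NOT NULL
UNION ALL
SELECT 'prepared transaction', gid, transaction, age(transaction)
FROM pg_prepared_xacts
ORDER BY xmin_age DESC;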

Analyze threshold

The autovacuum daemon will also run automatic analyze, aka "autoanalyze", jobs using a similar threshold equation:

pg_stat_user_tables.n_mod_since_analyze > autovacuum_analyze_threshold(50) + autovacuum_analyze_scale_factor(0.1) * pg_class.reltuples

If the planner is choosing bad plans for your queries and you are able to fix them by running manual ANALYZE commands then consider adjusting this in the same way for those tables:

ALTER TABLE very_large_table SET (autovacuum_analyze_scale_factor = 0.015);

Throttling vacuum jobs: cost-based vacuum delay

Because vacuum and analyze work can be very IO-intensive, Postgres provides a cost-based vacuum delay mechanism to reduce IO impact. When this is in effect, each page read cache hit, page read cache miss, and buffer dirty (write) increments a cost counter. Once the counter's value reaches the configured cost limit the vacuum or analyze operation pauses for the configured delay time.

Both manual vacuums and autovacuums use the same cost increment values. They have separate parameters for the delay and limit values. Autovacuum values of -1 will use the manual vacuum setting and a delay value of 0 disables cost-based vacuum delay for those operations:

  • vacuum_cost_delay (0)/autovacuum_vacuum_cost_delay (2ms)
  • vacuum_cost_limit (200)/autovacuum_vacuum_cost_limit (-1)
  • vacuum_cost_page_hit (1)
  • vacuum_cost_page_miss (2)
  • vacuum_cost_page_dirty (20)

With these settings manual vacuums do not use cost-based vacuum delay by default (vacuum_cost_delay = 0). Autovacuum and autoanalyze runs will pause for 2ms when they accumulate their cost limit. Note that when the autovacuum daemon has multiple jobs running concurrently, the total cost limit is split between them to ensure that concurrent autovacuum work collectively respects the configured cost limit.
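To put the defaults in perspective, here is some rough arithmetic (approximate upper bounds that ignore the time the page reads and writes themselves take, and assume 8kB pages with the effective autovacuum cost limit of 200):

delay cycles per second = 1000ms / autovacuum_vacuum_cost_delay(2ms) = 500
shared buffer page reads per cycle = vacuum_cost_limit(200) / vacuum_cost_page_hit(1) = 200 -> roughly 780 MB/s
disk page reads per cycle = vacuum_cost_limit(200) / vacuum_cost_page_miss(2) = 100 -> roughly 390 MB/s
pages dirtied per cycle = vacuum_cost_limit(200) / vacuum_cost_page_dirty(20) = 10 -> roughly 39 MB/s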

Slowing down vacuum work in exchange for reduced vacuum IO at any given time can therefore be done by either or both of:

  • Increasing vacuum_cost_delay/autovacuum_vacuum_cost_delay so that the delays when the cost limit is reached are longer
  • Decreasing vacuum_cost_limit/autovacuum_vacuum_cost_limit so that the cost limit is reached more frequently, resulting in more delays

The same slowdown can be had with either option, since doubling the delay time will double the total time spent in delay periods for a given job, as will halving the cost limit so that twice as many delays happen.

The autovacuum_vacuum_cost_delay and autovacuum_vacuum_cost_limit settings can be set on a per-table basis. By setting the delay to 5ms on our very_large_table we increase the amount of time it spends in cost-based delays by a factor of 2.5x over the default of 2ms:

ALTER TABLE very_large_table SET (autovacuum_vacuum_cost_delay = '5ms');

Delay time increases will not necessarily map to the same proportional changes to the total vacuum times since these only affect how much time is spent in delays and not how much time it takes for vacuum to do its work between them.

When to run manual vacuums

Disabling autovacuum for very large, high-traffic tables

In some cases, you may want to ensure that the heavy IO and CPU required to vacuum large tables does not interfere with concurrent application traffic. One common way to do this is to disable autovacuum for those tables and schedule manual vacuum jobs to run during non-business hours. However, autovacuums to prevent XID wraparound per autovacuum_freeze_max_age will still run, so it is not possible to completely stop autovacuum from running on a table.

ALTER TABLE very_large_table SET (autovacuum_enabled = false);

When scheduling manual vacuum jobs, it is a good idea to use VACUUM VERBOSE and save each run's output to a log file so that you have a record of the work done. Here's an example cron job that will run a nightly verbose vacuum of our very_large_table at midnight with the vacuumdb utility:

00 00 * * * (date; /usr/pgsql-17/bin/vacuumdb --verbose -t 'very_large_table' appdb; echo) >> /path/to/vacuum.log 2>&1

Info

You should never disable autovacuum globally.

Minimize bloat by batching large updates with manual vacuums

Consider a case where you have a large table in which a significant portion of the rows needs to be updated. If you update them all at once with a single UPDATE statement you will effectively double the space used by those rows. This causes bloat that may only be recoverable with a VACUUM FULL operation.

Instead, you can update the target rows in batches and run a manual vacuum between each batch. This way, each successive batch's updates will be able to use the space deleted and reclaimed from the previous batch, keeping bloat to a minimum and avoiding the need for a subsequent VACUUM FULL.

Here's pseudo-code demonstrating the idea:

first, last = first_id, last_id
batch_size = 10000

while first <= last
	db.execute('UPDATE very_large_table SET column_name = new_value WHERE id BETWEEN first AND least(first + batch_size - 1, last);')
	db.execute('VACUUM VERBOSE very_large_table;')
	first += batch_size

Run VACUUM FREEZE after importing static data

Running VACUUM FREEZE tablename will run the vacuum with effective values of vacuum_freeze_min_age = 0 and vacuum_freeze_table_age = 0. All live tuples visible to all current and future transactions will be frozen. Running this on tables after large imports of static data can reduce the work needed by later autovacuums.
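For example, after a bulk load into a hypothetical imported_reference_table finishes, you could run the following (VERBOSE is optional but records what was frozen):

VACUUM (FREEZE, VERBOSE) imported_reference_table;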

Use VACUUM FULL to remove excessive table bloat

VACUUM FULL is a special case table compaction/defragmentation operation that:

  1. Rewrites the table with only its live tuples, marking them all as frozen in the newly written table
  2. Acquires and holds an AccessExclusive lock on the table for the duration of the VACUUM FULL operation
  3. Drops the old table once the newly written copy is ready

Thus:

  • No other queries may access the table while the VACUUM FULL is running since it holds an AccessExclusive lock while it works.
  • The operation requires enough disk space for the new copy to be written before then reducing the disk space usage when dropping the old copy.
  • VACUUM FULL should only be used to recover from cases of extreme bloat.
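Because of the second point above, it is worth checking the table's current on-disk footprint before starting. A rough sketch using the built-in size functions (pg_total_relation_size includes indexes and TOAST data, which are also rewritten):

SELECT pg_size_pretty(pg_relation_size('very_large_table')) AS heap_size,
       pg_size_pretty(pg_total_relation_size('very_large_table')) AS total_size_with_indexes_and_toast;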

pg_squeeze

Because VACUUM FULL operations can take a long time for large tables, and make them completely unavailable during that time, we recommend using a utility like the pg_squeeze extension to do the same work with minimal table locking.

Monitoring autovacuum

Autovacuum logging

Autovacuum will log detailed information about what was done for each vacuum that runs, similar to what is output by manual VACUUM VERBOSE commands. Here is an example of an autovacuum log message for servers running PG15+:

[3506673][autovacuum worker][501/2614][0] LOG:  automatic vacuum of table "testdb.public.pgbench_accounts": index scans: 1
        pages: 0 removed, 327869 remain, 81969 scanned (25.00% of total)
        tuples: 0 removed, 14769015 remain, 2000000 are dead but not yet removable
        removable cutoff: 929, which was 3 XIDs old when operation ended
        new relfrozenxid: 929, which is 11 XIDs ahead of previous value
        frozen: 0 pages from table (0.00% of total) had 0 tuples frozen
        index scan needed: 49181 pages from table (15.00% of total) had 2999999 dead item identifiers removed
        index "pgbench_accounts_pkey": pages: 54840 in total, 8224 newly deleted, 8224 currently deleted, 0 reusable
        I/O timings: read: 174.219 ms, write: 0.000 ms
        avg read rate: 26.491 MB/s, avg write rate: 22.489 MB/s
        buffer usage: 276192 hits, 41175 misses, 34955 dirtied
        WAL usage: 123002 records, 57432 full page images, 75538789 bytes
        system usage: CPU: user: 0.64 s, system: 0.27 s, elapsed: 12.14 s

And here is what that looks like for PG14 and below:

[17656][autovacuum worker][5/463][0] LOG:  automatic vacuum of table "testdb.public.pgbench_accounts": index scans: 1
        pages: 0 removed, 327869 remain, 0 skipped due to pins, 0 skipped frozen
        tuples: 0 removed, 14740860 remain, 2000000 are dead but not yet removable, oldest xmin: 760
        index scan needed: 49181 pages from table (15.00% of total) had 2999999 dead item identifiers removed
        index "pgbench_accounts_pkey": pages: 54840 in total, 8224 newly deleted, 8224 currently deleted, 0 reusable
        I/O timings: read: 488.030 ms, write: 238.542 ms
        avg read rate: 55.609 MB/s, avg write rate: 21.009 MB/s
        buffer usage: 192958 hits, 124428 misses, 47008 dirtied
        WAL usage: 122981 records, 0 full page images, 19019531 bytes
        system usage: CPU: user: 1.14 s, system: 0.80 s, elapsed: 17.48 s

These log entries can be extremely valuable for diagnosing a variety of issues. For example:

  • If something is holding back the xmin horizon (discussed above in the "Freeze threshold for preventing XID wraparound failures" section) you will see the removable cutoff/oldest xmin values staying the same across autovacuum log entries for long periods of time.
  • A sudden increase in WAL data written without corresponding writes from an application can often be traced back to autovacuum having run on a large table.
  • While a table's pg_stat_user_tables.last_autovacuum and autovacuum_count entries will tell you when it was last autovacuumed and how many times in total, a full log history will give you a clearer picture of how frequently it is being vacuumed.

Given all of this, it can be a good idea to set the global log_autovacuum_min_duration setting via config parameters to a relatively low value such as 30s or 2s, or even to 0, which will log all autovacuum work.
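log_autovacuum_min_duration can also be set as a per-table storage parameter if you only want detailed logging for specific large tables. For example, to log every autovacuum of our very_large_table:

ALTER TABLE very_large_table SET (log_autovacuum_min_duration = 0);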

Tracking when autovacuums will next run for tables

You can quickly review a cluster's vacuum statistics in the Insights tab in the dashboard. This table will show you the last time your tables have been vacuumed, along with related information.

You can also view per-table information about the times of the last completed vacuums and currently running vacuums, along with vacuums expected according to the currently configured thresholds (including any tuning/overrides) using this query:

WITH tbl_settings AS (
	select c.oid,
	  (max(coalesce(relopts.option_value, current_setting(name))) FILTER (WHERE s.name = 'autovacuum_vacuum_scale_factor'))::real AS autovacuum_vacuum_scale_factor,
	  (max(coalesce(relopts.option_value, current_setting(name))) FILTER (WHERE s.name = 'autovacuum_vacuum_threshold'))::bigint AS autovacuum_vacuum_threshold,
	  (max(coalesce(relopts.option_value, current_setting(name))) FILTER (WHERE s.name = 'autovacuum_vacuum_insert_scale_factor'))::real AS autovacuum_vacuum_insert_scale_factor,
	  (max(coalesce(relopts.option_value, current_setting(name))) FILTER (WHERE s.name = 'autovacuum_vacuum_insert_threshold'))::bigint AS autovacuum_vacuum_insert_threshold,
	  (max(coalesce(relopts.option_value, current_setting(name))) FILTER (WHERE s.name = 'autovacuum_analyze_scale_factor'))::real AS autovacuum_analyze_scale_factor,
	  (max(coalesce(relopts.option_value, current_setting(name))) FILTER (WHERE s.name = 'autovacuum_analyze_threshold'))::bigint AS autovacuum_analyze_threshold,
	  (max(coalesce(relopts.option_value, current_setting(name))) FILTER (WHERE s.name = 'autovacuum_freeze_max_age'))::int AS autovacuum_freeze_max_age,
	  (max(coalesce(relopts.option_value, current_setting('autovacuum'))) FILTER (WHERE s.name = 'autovacuum_enabled'))::boolean AS autovacuum_enabled
	from pg_class c,
	  pg_options_to_table(c.reloptions) relopts
	  right join unnest(ARRAY['autovacuum_vacuum_scale_factor', 'autovacuum_vacuum_threshold', 'autovacuum_vacuum_insert_scale_factor',
	                     'autovacuum_vacuum_insert_threshold', 'autovacuum_analyze_scale_factor', 'autovacuum_analyze_threshold',
	                     'autovacuum_freeze_max_age', 'autovacuum_enabled']) s(name) ON relopts.option_name = s.name
	group by c.oid
),
thresholds_progress AS ( -- calculate per-table progress toward autovacuum trigger thresholds
  SELECT ts.oid,
  ts.autovacuum_vacuum_threshold + ts.autovacuum_vacuum_scale_factor * c.reltuples - psut.n_dead_tup + 1 AS dels_until_autovacuum,
  ts.autovacuum_vacuum_insert_threshold + ts.autovacuum_vacuum_insert_scale_factor * c.reltuples - psut.n_ins_since_vacuum + 1 AS ins_until_autovacuum,
  ts.autovacuum_freeze_max_age - age(c.relfrozenxid) + 1 AS txns_until_freeze_autovacuum,
	ts.autovacuum_analyze_threshold + ts.autovacuum_analyze_scale_factor * c.reltuples - psut.n_mod_since_analyze + 1 AS mods_until_autoanalyze
  FROM pg_class c
    JOIN tbl_settings ts ON (c.oid = ts.oid)
    JOIN pg_stat_user_tables psut ON (c.oid = psut.relid)
)
SELECT n.nspname AS schema_name,
  c.relname AS table_name,
  to_char(psut.last_vacuum, 'YYYY-MM-DD HH24:MI:SS') AS last_vacuum,
  to_char(psut.last_autovacuum, 'YYYY-MM-DD HH24:MI:SS') AS last_autovacuum,
  to_char(psut.last_analyze, 'YYYY-MM-DD HH24:MI:SS') AS last_analyze,
  to_char(psut.last_autoanalyze, 'YYYY-MM-DD HH24:MI:SS') AS last_autoanalyze,
  age(c.relfrozenxid) as frozenxid_age, -- oldest tuple xid frozen by last vacuum
  c.reltuples AS n_tup, -- count of live tuples at time of last vacuum
  psut.n_dead_tup AS n_dead_tup,
  -- if any of the next three columns are <=0 the thresholds have been passed and the expect_autovacuum column will be true
  tp.dels_until_autovacuum::bigint,
  tp.ins_until_autovacuum::bigint,
  tp.txns_until_freeze_autovacuum::int,
  tp.mods_until_autoanalyze::bigint,
  ((tp.dels_until_autovacuum <= 0 OR tp.ins_until_autovacuum <= 0) and ts.autovacuum_enabled) OR tp.txns_until_freeze_autovacuum <= 0 AS expect_autovacuum,
  psa.query as current_vacuum_query,
  to_char(now() - psa.query_start, 'HH24:MI:SS') as current_vacuum_runtime
  --, ts.*
FROM pg_class c
  JOIN pg_namespace n ON c.relnamespace = n.oid
  JOIN tbl_settings ts ON c.oid = ts.oid
  JOIN thresholds_progress tp ON c.oid = tp.oid
  JOIN pg_stat_user_tables psut ON psut.relid = c.oid
	LEFT JOIN pg_stat_progress_vacuum pspv ON (c.oid = pspv.relid AND pspv.datname = current_database())
	LEFT JOIN pg_stat_activity psa ON (pspv.pid = psa.pid)
ORDER BY current_vacuum_runtime DESC NULLS LAST, expect_autovacuum DESC, tp.dels_until_autovacuum ASC;

Here's a breakdown of the above query's sections:

  • The tbl_settings CTE determines each table's effective autovacuum settings, combining the global GUC settings with any per-table overrides

  • The thresholds_progress CTE calculates how many more of each operation the autovacuum trigger thresholds track (deletes/updates, inserts, transactions, or modifications) remain until an autovacuum or autoanalyze will be run

  • The main query then brings all of that together to display for each table:

    • When their last vacuums and analyzes were run
    • The pg_class.reltuples value as of the last vacuum
    • The number of tuples deleted since the last vacuum in the n_dead_tup column
    • When to expect new traffic-based autovacuum and autoanalyze operations and if one is expected soon
    • The query text and run time of any currently running vacuums
  • The final ORDER BY sorts first by tables with vacuums currently running, then by tables with expected autovacuums, then by tables nearing their deleted tuple autovacuum threshold

  • Uncommenting the --, ts.* line will include the effective per-table autovacuum_* settings