PostgreSQL Blog

Monitor All Databases in One View

If you are managing a PostgreSQL instance with dozens of databases, you know the pain: Which table has the most dead tuples? or When was the last vacuum run on that specific customer DB?

By default, PostgreSQL statistics are database-specific. To see everything, you usually have to connect to each database one by one. Not anymore.

In this post, I’ll show you how to create a Global Catalog View using dblink and a handy Bash script to automate the setup.

1. The Automation: Preparing Your Databases

Strictly speaking, the dblink extension only needs to be installed in the database you query from (your main admin or postgres database); the target databases are reached over ordinary connections. Still, installing it everywhere means any database can act as the monitoring hub, and you want a quick way to make sure extensions and permissions are in place.

Instead of doing this manually, use this Bash one-liner to loop through your databases and prepare them:

for db in $(psql -At -c "SELECT datname FROM pg_database WHERE datallowconn AND datistemplate = false"); do
    echo "Processing database: $db"
    psql -d "$db" -c "CREATE EXTENSION IF NOT EXISTS dblink;"
done

This script fetches every connectable database and ensures the bridge is ready.
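Once the loop has run, you can spot-check any database to confirm the extension landed; this is standard catalog SQL, nothing specific to this setup:

```sql
-- Confirm dblink is installed in the current database
SELECT extname, extversion
FROM pg_extension
WHERE extname = 'dblink';
```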

2. The SQL Magic: vw_all_tables_list

Once the extensions are ready, run the following script in your primary management database (e.g., postgres). The view lives in an admin schema, so create that first if it does not exist (CREATE SCHEMA IF NOT EXISTS admin;). The view uses a LATERAL call to dblink to iterate over every database and fetch live/dead tuple stats.

CREATE OR REPLACE VIEW admin.vw_all_tables_list AS
SELECT
    db.datname::name AS db_name,
    sub.schema_name,
    sub.table_name,
    sub.n_live_tup,
    sub.n_dead_tup,
    sub.last_vacuum,
    sub.last_analyze
FROM pg_database db,
     LATERAL dblink(
         'dbname=' || db.datname,
         'SELECT schemaname,
                 relname,
                 n_live_tup,
                 n_dead_tup,
                 last_vacuum,
                 last_analyze
          FROM pg_stat_user_tables
          WHERE schemaname NOT IN (''pg_catalog'', ''information_schema'', ''topology'')
            AND NOT (relname ~* ''^[ADI][0-9]+$'')
            AND relname NOT LIKE ''%\_EVW''
            AND relname NOT IN (''spatial_ref_sys'', ''geometry_columns'', ''geography_columns'', ''sde_spatial_references'')'
     ) AS sub(schema_name name, table_name name, n_live_tup bigint,
              n_dead_tup bigint, last_vacuum timestamptz, last_analyze timestamptz)
WHERE db.datallowconn
  AND NOT db.datistemplate;
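With the view in place, a quick sanity check is to count how many tables each database reports. This query is just an illustration against the view defined above:

```sql
-- How many user tables does each database expose through the view?
SELECT db_name, count(*) AS table_count
FROM admin.vw_all_tables_list
GROUP BY db_name
ORDER BY table_count DESC;
```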

Why This Approach is a Game Changer

1. Centralized Housekeeping

You can now find fragmented tables across the entire server with a simple query:

SELECT *
FROM admin.vw_all_tables_list
WHERE n_dead_tup > 10000
ORDER BY n_dead_tup DESC;

2. Smart Filtering

The query automatically ignores:

  • System Schemas: No clutter from pg_catalog.
  • GIS Metadata: No noise from PostGIS tables like spatial_ref_sys.
  • Internal Wrappers: It filters out specialized views (like _EVW) and temporary system tables.

3. Real-time Monitoring

Since it uses pg_stat_user_tables, the data is as fresh as PostgreSQL's internal stats collector. It's perfect for building a Grafana dashboard or a quick health-check report.
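As one example of such a health check, you could flag tables with no recent manual vacuum. (Note that last_vacuum in pg_stat_user_tables tracks manual VACUUM only; autovacuum activity is recorded in last_autovacuum, which this view does not expose.) The seven-day threshold below is an arbitrary choice for illustration:

```sql
-- Tables with no manual VACUUM in the last week (sketch; threshold is arbitrary)
SELECT db_name, schema_name, table_name, last_vacuum, n_dead_tup
FROM admin.vw_all_tables_list
WHERE last_vacuum IS NULL
   OR last_vacuum < now() - interval '7 days'
ORDER BY n_dead_tup DESC;
```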

Performance Tip

dblink opens a new connection for each database it queries. If you have hundreds of databases, running this view frequently might add some overhead. For massive environments, consider caching this data into a materialized view once an hour.
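A minimal sketch of that caching idea, assuming the view above already exists (the materialized view name and the refresh mechanism here are my own choices, not part of the original setup):

```sql
-- Cache the global stats so readers don't open dblink connections each time
CREATE MATERIALIZED VIEW admin.mv_all_tables_list AS
SELECT * FROM admin.vw_all_tables_list;

-- Refresh periodically, e.g. from cron or the pg_cron extension:
REFRESH MATERIALIZED VIEW admin.mv_all_tables_list;
```

If you have pg_cron available, the refresh can be scheduled in-database, e.g. `SELECT cron.schedule('0 * * * *', 'REFRESH MATERIALIZED VIEW admin.mv_all_tables_list');` for an hourly refresh.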

Conclusion

Stop jumping between databases. Use the power of LATERAL and dblink to get a bird's-eye view of your entire PostgreSQL instance.