
PostgreSQL Cheatsheet: 50 Tips

You are using PostgreSQL every day, but are you using its full potential?


I have compiled the definitive list of 50 tricks used by Senior Database Engineers.

Bookmark this page. You will need it.

Table of Contents

  1. Terminal Mastery (psql) — Speed up your workflow.
  2. SQL Productivity — Write smarter queries.
  3. JSON & NoSQL Features — Ditch MongoDB.
  4. Date & Time Hacks — Stop struggling with timestamps.
  5. Admin & Maintenance — Keep production alive.
  6. Advanced Engineering — The “Dark Arts” of SQL.

1. Terminal Mastery (psql)

Stop fighting the command line. Make it work for you.

1. \x auto (The Readable View) Never squint at broken, wrapped-around tables again. This command automatically switches to "Card View" (vertical) if the output is too wide.

\x auto
-- Now your wide tables will look like a vertical list. Clean.

2. \timing (The Stopwatch) Wondering why your query feels slow? Measure it instantly.

\timing
SELECT * FROM huge_table;
-- Output: Time: 142.503 ms

3. \watch (Real-time Monitoring) Don't spam Up Arrow + Enter. Watch a query refresh automatically.

-- Re-runs the query every 2 seconds
SELECT count(*) FROM orders WHERE status = 'pending'; 
\watch 2

4. \e (External Editor) Need to write a complex query? Don't cram it onto a single line. This opens your default editor (Vim/Nano).

\e 
-- Opens your default text editor. Save and quit to run the query.

5. \copy (Fast Export) Dump data to a CSV on your own machine, with no superuser rights or server filesystem access needed.

\copy (SELECT * FROM users) TO 'users.csv' WITH CSV HEADER

6. \d+ (The X-Ray Vision) Standard \d shows columns, but \d+ reveals the secrets. It shows the on-disk size, column descriptions (comments), and extended statistics.

\d+ users
-- Output includes: "Size: 15 MB", "Description: Main user table"

7. \dn (The Map) In enterprise databases, tables are often hidden inside specific schemas (namespaces). Use this to navigate the layout.

\dn
-- Lists all schemas (e.g., public, analytics, legacy_data)

8. \du (The Security Guard) Who has access to this database? Lists all roles, and critically, shows who has Superuser or Replication privileges.

\du
-- Shows: Role name, Attributes (Superuser, Create DB), Member of

9. \l (The Big Picture) Lists every database on the current PostgreSQL instance, along with its owner and encoding (e.g., UTF8).

\l
-- Useful to check if you are connected to the Dev or Prod instance.
-- \l+ also shows the size of each database

10. \! (The Multitasker) Need to clear the screen, check disk space, or list a file? Run Linux shell commands without ever leaving the psql interface.

\! clear
-- Or check directory:
\! ls -la

11. \h (The Built-in Manual) Stop Googling "Postgres create index syntax". The documentation is inside your terminal.

\h CREATE INDEX
-- Shows the exact syntax diagram and options.

12. \q (The Clean Exit) Don't rage-quit by killing the terminal. Use the proper command to close the connection gracefully.

\q
-- Disconnects and returns to shell.

2. SQL Productivity

Why write 10 lines of code when 1 line does the job?

13. RETURNING (Get Data Back) Stop doing an INSERT followed by a SELECT to get the generated ID.

INSERT INTO users (name, email) 
VALUES ('Neo', 'neo@matrix.com') 
RETURNING id, created_at;
-- Returns the new ID immediately.

14. DISTINCT ON (The First Row Trick) Get the "most recent order" for every user without complex GROUP BY logic.

SELECT DISTINCT ON (user_id) user_id, total, created_at
FROM orders
ORDER BY user_id, created_at DESC;

15. COALESCE (Handle NULLs) Clean up your data for the UI. Return the first non-null value.

SELECT name, COALESCE(phone, 'No Phone Provided') as contact 
FROM users;

16. generate_series (Fake Data Generator) Need 10,000 rows to test performance?

INSERT INTO test_logs (log_date)
SELECT NOW() - (random() * interval '30 days')
FROM generate_series(1, 10000);
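generate_series also works directly on timestamps, which is handy for building a gap-free date axis in reports (so days with zero orders still show up):

```sql
-- One row per day for the last 30 days
SELECT day::date
FROM generate_series(NOW() - interval '29 days', NOW(), interval '1 day') AS day;
```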

17. ILIKE (Case-Insensitive Search) Stop using LOWER(col) = LOWER(val). Wrapping a column in a function defeats a plain index, and ILIKE reads much better.

SELECT * FROM products WHERE name ILIKE 'iphone%';
-- Matches: iPhone, iphone, IPHONE
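One caveat: ILIKE on its own can't use a plain B-tree index either, especially with a leading wildcard. A common fix, sketched here against the same assumed products(name) column, is a trigram index via the pg_trgm extension (shipped in PostgreSQL's contrib modules):

```sql
-- Make ILIKE itself index-friendly with a trigram GIN index
CREATE EXTENSION IF NOT EXISTS pg_trgm;

CREATE INDEX idx_products_name_trgm
    ON products USING GIN (name gin_trgm_ops);

-- Now even '%iphone%' searches can use the index instead of a sequential scan
SELECT * FROM products WHERE name ILIKE '%iphone%';
```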

18. CASE WHEN (Logic Inside SQL) Stop fetching data into your backend just to run a simple if/else. Handle the logic directly in the database for faster reports.

SELECT name, 
       CASE 
           WHEN total_spent > 1000 THEN 'VIP'
           WHEN total_spent > 100 THEN 'Regular'
           ELSE 'New'
       END as customer_type
FROM users;

19. NULLIF (The Division Saver) The "Division by Zero" error is a classic query killer. NULLIF(a, b) returns NULL when its two arguments are equal; pass 0 as the second argument, and a zero divisor becomes NULL instead of crashing the query.

-- If total_orders is 0, the result becomes NULL (safe) instead of Error.
SELECT total_sales / NULLIF(total_orders, 0) as average_ticket
FROM sales_stats;

20. ::type (The Casting Shortcut) The ANSI standard CAST(column AS TYPE) is too verbose. Use the double-colon syntax for cleaner, faster type conversion.

SELECT '2025-01-01'::date, 
       '123.45'::numeric,
       'true'::boolean;

21. FILTER (Cleaner Aggregations) Before Postgres 9.4, we used messy CASE WHEN statements inside COUNT. Now, use the modern FILTER clause to pivot data in a single line.

SELECT 
    count(*) FILTER (WHERE status = 'active') as active_users,
    count(*) FILTER (WHERE status = 'banned') as banned_users
FROM users;

22. ORDER BY ... NULLS LAST (Better UX) By default, PostgreSQL considers NULL values as "larger" than non-nulls. This means in a descending sort, NULLs appear at the top. Fix this UI bug instantly.

-- Shows completed tasks first, pending (NULL) tasks at the bottom
SELECT * FROM tasks 
ORDER BY completed_at DESC NULLS LAST;

3. JSON & NoSQL Features

Postgres does everything MongoDB can do, but with ACID compliance.

23. ->> (Get Text) Extract a JSON field as a simple string.

-- data column: {"name": "John", "age": 30}
SELECT data->>'name' FROM users; 
-- Result: 'John'

24. jsonb_pretty (Beautify) Make your JSON output readable in the terminal.

SELECT jsonb_pretty(settings) FROM user_configs;

25. @> (Contains) Check if a JSON document contains a specific key-value pair. Very fast with GIN indexes.

SELECT * FROM books WHERE attributes @> '{"genre": "sci-fi"}';
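The "very fast" part depends on actually creating that GIN index. A minimal sketch, assuming the books(attributes jsonb) column from the example above:

```sql
-- The GIN index is what lets @> skip the sequential scan
CREATE INDEX idx_books_attributes ON books USING GIN (attributes);

SELECT * FROM books WHERE attributes @> '{"genre": "sci-fi"}';
```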

26. to_jsonb (Instant API Response) Why map data in your backend code? Convert an entire SQL row (or result set) into a JSON document directly in the database. Perfect for sending data straight to a frontend API.

SELECT to_jsonb(u) FROM users u;
-- Result: {"id": 1, "name": "Neo", "email": "neo@matrix.com"}

27. jsonb_agg (The N+1 Killer) This is the most powerful aggregation tool. Instead of running a separate query to fetch "Comments" for a "Post", aggregate them into a JSON array inside the main query.

SELECT post_id, 
       jsonb_agg(comment_body) as comments 
FROM comments 
GROUP BY post_id;
-- Result: 1, ["Great post!", "First!", "Nice tutorial"]
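To return each post together with its comments in one round trip, the aggregate combines naturally with a join. A sketch, assuming hypothetical posts and comments tables:

```sql
SELECT p.id,
       p.title,
       jsonb_agg(c.comment_body) AS comments
FROM posts p
JOIN comments c ON c.post_id = p.id
GROUP BY p.id, p.title;
```

If you also want posts with zero comments, switch to a LEFT JOIN and use COALESCE(jsonb_agg(c.comment_body) FILTER (WHERE c.id IS NOT NULL), '[]') to get an empty array instead of [null].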

28. jsonb_set (Surgical Update) Need to update just one key inside a massive JSON object? Don't overwrite the whole column. Use this to modify a specific path.

-- Changes only the 'theme' key inside the 'settings' JSON column
UPDATE users 
SET settings = jsonb_set(settings, '{theme}', '"dark"');

29. unnest (Explode Arrays) The reverse of aggregation. It takes an array (e.g., ['a', 'b', 'c']) and transforms it into 3 separate rows. Essential for joining arrays with other tables.

SELECT unnest(tags) as tag FROM posts;
-- If a post has 3 tags, this query returns 3 rows.
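For example, to count how often each tag appears across all posts (assuming the same posts(tags) array column):

```sql
-- unnest in the FROM clause acts as an implicit lateral join
SELECT tag, count(*) AS uses
FROM posts, unnest(tags) AS tag
GROUP BY tag
ORDER BY uses DESC;
```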

30. array_agg (Simple Lists) Similar to jsonb_agg, but creates a native PostgreSQL Array (text[] or int[]). Much cleaner than using string_agg with commas.

SELECT user_id, array_agg(ip_address) 
FROM logins 
GROUP BY user_id;
-- Result: {192.168.1.1, 10.0.0.1}

31. string_to_array (Legacy Data Fixer) Dealing with messy CSV imports or legacy data where tags are stored as "tag1,tag2,tag3"? Convert them into proper arrays instantly.

SELECT string_to_array('apple,banana,orange', ',');
-- Result: {apple, banana, orange}

32. array_length (Count Elements) Want to know how many tags a post has? Don't fetch the array to your application to count it. Count it in the DB.

-- The '1' argument specifies the 1st dimension of the array
SELECT array_length(tags, 1) FROM posts;

4. Date & Time Hacks

Timezones are hard. Postgres makes them easier.

33. age() (Human Readable) Calculate duration instantly.

SELECT name, age(birth_date) FROM employees;
-- Result: "25 years 4 mons 12 days"

34. date_trunc (Reporting) Group data by month, day, or hour easily.

SELECT date_trunc('month', created_at), count(*) 
FROM orders 
GROUP BY 1;

35. Interval Math Add or subtract time using plain English.

SELECT NOW() + interval '1 week 2 days';

36. EXTRACT (Precision Analytics) Need to know which "Hour of the Day" has the highest traffic, or filter data for weekends only? Don't parse string dates. Extract the numeric part directly.

SELECT EXTRACT(HOUR FROM created_at) as hour_of_day,
       EXTRACT(DOW FROM created_at) as day_of_week -- 0=Sunday, 6=Saturday
FROM page_visits;
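The "weekends only" filter mentioned above looks like this, reusing the assumed page_visits table:

```sql
-- DOW: 0 = Sunday, 6 = Saturday
SELECT count(*) AS weekend_visits
FROM page_visits
WHERE EXTRACT(DOW FROM created_at) IN (0, 6);
```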

37. TO_CHAR (Report Formatting) Your manager wants the report date as "19 Dec 2025", but the database gives you ISO-8601 (2025-12-19...). Format it directly in SQL for CSV exports or frontend displays.

SELECT TO_CHAR(NOW(), 'DD Mon YYYY HH24:MI');
-- Result: "19 Dec 2025 14:30"

5. Admin & Maintenance

Commands for the “Adults in the Room”.

38. pg_stat_activity (The Control Tower) See who is connected and what queries are running right now.

SELECT pid, usename, state, query 
FROM pg_stat_activity 
WHERE state = 'active';

39. pg_size_pretty Get database sizes in GB/MB instead of bytes.

SELECT pg_size_pretty(pg_database_size('my_db'));

40. pg_cancel_backend (The Polite Kill) Stop a stuck query without killing the connection.

SELECT pg_cancel_backend(1234); -- 1234 is the PID

41. pg_terminate_backend (The Hard Kill) Sometimes pg_cancel_backend isn't enough. If a process is zombie-locked or the application refuses to disconnect, use this. It destroys the entire connection instantly.

-- Kills the connection for the specific Process ID (PID)
SELECT pg_terminate_backend(pid) 
FROM pg_stat_activity 
WHERE pid = 1234;

42. pg_stat_statements (The Slow Query Detector) This is the single most important extension for performance tuning. It tells you exactly which queries are eating up your CPU and time.

-- Find top 5 queries by total execution time
SELECT query, calls, total_exec_time, mean_exec_time 
FROM pg_stat_statements 
ORDER BY total_exec_time DESC 
LIMIT 5;
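Because it is an extension, it has to be enabled before that query works, and the module must be preloaded at server start. A sketch of the setup (config file location varies by installation):

```sql
-- 1. In postgresql.conf (requires a server restart):
--    shared_preload_libraries = 'pg_stat_statements'

-- 2. Then, once per database:
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- 3. To start fresh measurements (e.g., after a deploy):
SELECT pg_stat_statements_reset();
```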

43. VACUUM (VERBOSE, ANALYZE) (The Bloat Cleaner) PostgreSQL doesn't immediately remove deleted rows; it marks them as "dead". This command reclaims that space and updates the statistics so the Query Planner makes smart decisions.

-- Run this manually if a table is acting weird or slow
VACUUM (VERBOSE, ANALYZE) users;

44. CREATE INDEX CONCURRENTLY (Production-Safe Indexing) Standard CREATE INDEX locks the table, meaning no one can write to it until the index is finished. In production, this causes downtime. Always use CONCURRENTLY to build the index in the background without locking.

-- Slower to build, but keeps your app online
CREATE INDEX CONCURRENTLY idx_users_email ON users(email);
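One caveat: if a CONCURRENTLY build fails mid-way (a deadlock, a duplicate key), it leaves behind an INVALID index that consumes space and slows writes but is never used for reads. You can spot and clean these up:

```sql
-- List indexes left invalid by a failed concurrent build
SELECT indexrelid::regclass AS index_name
FROM pg_index
WHERE NOT indisvalid;

-- Drop and retry (DROP INDEX CONCURRENTLY avoids the lock, too)
DROP INDEX CONCURRENTLY idx_users_email;
```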

45. pg_is_in_recovery() (Primary vs Replica) Writing a script that needs to know if it's running on the Main server (Primary) or a Read-Only copy (Replica)? This function is the standard check.

SELECT pg_is_in_recovery();
-- Returns 'false' if it is Primary (Read/Write)
-- Returns 'true' if it is a Replica (Read-Only)

6. Advanced Engineering

The “Dark Arts” of Database Engineering.

46. WITH (CTEs) Make complex queries readable by breaking them into steps.

WITH regional_sales AS (
    SELECT region, SUM(amount) as total_sales
    FROM orders
    GROUP BY region
)
SELECT * FROM regional_sales WHERE total_sales > 10000;

47. LATERAL JOIN (For-Loop in SQL) Run a subquery for every row of the main table. Powerful for analytics.

SELECT u.name, l.last_login
FROM users u
LEFT JOIN LATERAL (
    SELECT login_time FROM logins WHERE user_id = u.id ORDER BY login_time DESC LIMIT 1
) l ON true;

48. EXPLAIN (ANALYZE, BUFFERS) (The Truth Serum) Standard EXPLAIN only guesses. EXPLAIN ANALYZE actually runs the query and tells you exactly how long each step took. Adding BUFFERS reveals how much data was read from disk vs RAM (Cache). This is the only way to truly debug a slow query.

EXPLAIN (ANALYZE, BUFFERS) 
SELECT * FROM orders WHERE total > 1000;
-- Look for "Seq Scan" (Bad) vs "Index Scan" (Good).

49. COPY ... TO STDOUT (The Firehose) If you need to move millions of rows to your application (Python/Go/Node) for processing, don't use a plain SELECT; its row-by-row protocol adds real overhead. COPY ... TO STDOUT streams the raw data in bulk through the same socket connection, and it is usually the fastest way to get data out.

-- Streams raw CSV data directly to the client
COPY (SELECT * FROM massive_logs LIMIT 100000) TO STDOUT WITH CSV;

50. LISTEN / NOTIFY (Real-Time Pub/Sub) You don't always need Redis or Kafka for real-time events. PostgreSQL has a built-in pub/sub mechanism, so your backend can push updates the moment a new row is inserted.

-- Session 1 (Listener):
LISTEN task_completed;

-- Session 2 (Publisher):
-- You can send a payload text along with the event
NOTIFY task_completed, 'Task ID 55 is done';
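To fire that event automatically when a row is inserted, wrap NOTIFY in a trigger using pg_notify, its function form. A sketch, assuming a hypothetical tasks table with an id column:

```sql
CREATE OR REPLACE FUNCTION notify_task_completed() RETURNS trigger AS $$
BEGIN
    -- pg_notify(channel, payload) is the function form of NOTIFY
    PERFORM pg_notify('task_completed', 'Task ID ' || NEW.id || ' is done');
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_task_completed
AFTER INSERT ON tasks
FOR EACH ROW EXECUTE FUNCTION notify_task_completed();
```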

Final Thoughts

PostgreSQL is more than just a place to store data; it’s a powerful engine. You don’t need to memorize all 50. Save this article and use it as your cheat sheet when you get stuck.

Which trick is your favorite? Let me know in the comments.