PostgreSQL Stored Procedures: 3 Real-World Scenarios
Writing a basic stored procedure in PostgreSQL is easy. Surviving production traffic without deadlocks, race conditions, or downtime is the real challenge.
Here are 3 real-world case studies from the trenches, covering the exact architectures and solutions used to keep high-concurrency systems alive.

Table of Contents
- Case Study 1: How a Simple Money Transfer Locks the System (Deadlock Resolution & Row-Level Locking)
- Case Study 2: Is Partial Success Possible? Savepoints and Sub-Transactions in Batch Processing
- Case Study 3: Race Conditions in Ticket Sales (Preventing Double-Booking)
Case Study 1: How a Simple Money Transfer Locks the System (Deadlock Resolution & Row-Level Locking)
The Illusion of Simplicity
Let’s start with the most deceptive task in software engineering: transferring money between two accounts. The business requirement is trivial: Deduct $100 from Account A and add it to Account B.
When developers first discover PostgreSQL Stored Procedures, they are thrilled that they can manage transactions directly inside the database using COMMIT and ROLLBACK. They quickly write something like this:
The Naive Approach (Do Not Use in Production):
CREATE OR REPLACE PROCEDURE transfer_funds_naive(
p_sender_id INT,
p_receiver_id INT,
p_amount DECIMAL
)
LANGUAGE plpgsql
AS $$
BEGIN
-- Step 1: Deduct from sender
UPDATE accounts
SET balance = balance - p_amount
WHERE id = p_sender_id;
-- Step 2: Add to receiver
UPDATE accounts
SET balance = balance + p_amount
WHERE id = p_receiver_id;
COMMIT;
END;
$$;
In a staging environment with a single tester, this works perfectly. But in production, during a high-traffic event, the database starts throwing 40P01 (deadlock_detected) exceptions, and API calls begin dropping like flies.
What just happened?
The Anatomy of a Deadlock
To understand the crash, we have to look at the exact timeline of two concurrent transactions. Imagine Alice (ID: 10) is sending money to Bob (ID: 20), and at the exact same millisecond, Bob is sending money to Alice.
Here is how PostgreSQL’s Lock Manager processes this:
| Timeline | Transaction 1 (Alice to Bob) | Transaction 2 (Bob to Alice) | System State |
|----------|------------------------------------|------------------------------------|------------------------------------|
| t0 | UPDATE accounts... WHERE id = 10; | UPDATE accounts... WHERE id = 20; | T1 locks Row 10. T2 locks Row 20. |
| t1 | UPDATE accounts... WHERE id = 20; | UPDATE accounts... WHERE id = 10; | T1 waits for 20. T2 waits for 10. |
| t2 | (Waiting...) | (Waiting...) | DEADLOCK |
At t2, both transactions are waiting for a resource held by the other. They will wait indefinitely until PostgreSQL’s deadlock detector wakes up (defined by the deadlock_timeout parameter). The database will forcefully kill one of the transactions (the victim) and roll it back to unblock the other.
Your application receives an ugly 500 Internal Server Error.
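The detector's wake-up interval can be inspected and tuned per session. A minimal sketch (1s is the stock default; verify on your own installation, and treat the 200ms value as an illustrative assumption, not a recommendation):

```sql
-- Inspect the deadlock detector's wake-up interval (default: 1s)
SHOW deadlock_timeout;

-- Lower it for this session so the victim is chosen sooner. This trades
-- extra detection overhead under contention for faster failure.
SET deadlock_timeout = '200ms';
```

Lowering the timeout does not prevent deadlocks; it only shortens how long both transactions sit frozen before one is killed.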
The Solution: Deterministic Lock Ordering
The golden rule of high-concurrency database design is: Always acquire locks in a deterministic order. If every transaction in your system locks rows in the exact same sequence (e.g., lowest ID to highest ID), a deadlock is mathematically impossible. One transaction will simply queue behind the other, wait its turn, and succeed.
To implement this in our procedure, we cannot just blindly execute UPDATE statements. We need to explicitly lock the rows up front using SELECT ... FOR UPDATE, and we must do it in order.
The Production-Ready Approach:
CREATE OR REPLACE PROCEDURE transfer_funds_robust(
p_sender_id INT,
p_receiver_id INT,
p_amount DECIMAL
)
LANGUAGE plpgsql
AS $$
DECLARE
v_locked_id INT;
v_sender_balance DECIMAL;
BEGIN
-- 1. Deterministic Locking Phase
-- We sort the IDs to ensure we always lock the smaller ID first.
-- FOR UPDATE takes a row-level lock on each returned row
-- (plus a ROW SHARE table-level lock).
FOR v_locked_id IN
SELECT id
FROM accounts
WHERE id IN (p_sender_id, p_receiver_id)
ORDER BY id ASC
FOR UPDATE
LOOP
-- We are just iterating to acquire locks safely.
-- No action needed inside this loop yet.
END LOOP;
-- 2. Business Logic & Validation Phase
-- Now that we own the locks, we can safely read the balance.
SELECT balance INTO v_sender_balance
FROM accounts
WHERE id = p_sender_id;
IF v_sender_balance < p_amount THEN
-- Locks are automatically released upon exception/rollback
RAISE EXCEPTION 'Insufficient funds. Account: %, Available: %, Requested: %',
p_sender_id, v_sender_balance, p_amount
USING ERRCODE = 'P0001'; -- Custom error code
END IF;
-- 3. Execution Phase
UPDATE accounts
SET balance = balance - p_amount
WHERE id = p_sender_id;
UPDATE accounts
SET balance = balance + p_amount
WHERE id = p_receiver_id;
-- 4. Commit and Release Locks
-- Note: PL/pgSQL forbids COMMIT/ROLLBACK inside a block that has an
-- EXCEPTION clause, so we deliberately avoid a catch-all handler here.
-- If any statement above raises (including our RAISE EXCEPTION), the
-- transaction is rolled back automatically, all locks are released,
-- and the error propagates to the caller.
COMMIT;
END;
$$;
Why This Architecture Works
- Elimination of Deadlocks: By using ORDER BY id ASC, if Alice (10) sends to Bob (20) and Bob (20) sends to Alice (10), both transactions will try to lock ID 10 first. The loser of the race simply waits in the queue.
- Protection Against Race Conditions: The FOR UPDATE lock ensures that the balance we read during the validation phase (v_sender_balance) cannot be changed by another transaction before we execute the UPDATE.
- Graceful Error Handling: We use explicit error codes (ERRCODE). This allows the backend application to catch the specific P0001 error and show a clean Insufficient Funds message to the user, rather than a generic database crash log.
In high-stakes environments, a stored procedure isn’t just about moving logic to the database; it’s about controlling the flow of traffic at the bare-metal level.
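For completeness, here is a hypothetical minimal schema and invocation for the procedure above (the table definition and seed values are assumptions for illustration, not taken from a real system):

```sql
-- Hypothetical minimal schema assumed by the transfer examples
CREATE TABLE accounts (
    id      INT PRIMARY KEY,
    balance DECIMAL NOT NULL CHECK (balance >= 0)
);
INSERT INTO accounts VALUES (10, 500.00), (20, 300.00);

-- Procedures are invoked with CALL, not SELECT
CALL transfer_funds_robust(10, 20, 100.00);
```

The CHECK constraint is a useful second line of defense: even if a future code path forgets the balance validation, the database itself refuses to let an account go negative.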
Case Study 2: Is Partial Success Possible? Savepoints and Sub-Transactions in Batch Processing
The All-or-Nothing Trap
Batch processing is a reality of enterprise systems. Imagine you have a nightly cron job that processes 50,000 pending subscription renewals or bulk payouts.
Historically, executing this via a single massive query or a traditional PostgreSQL Function (CREATE FUNCTION) meant operating under the rigid rule of atomic transactions: All or nothing. If the 49,999th record throws a unique_violation or a check_violation, the entire transaction rolls back. Hours of database I/O are wasted, and 49,998 successful renewals are reverted.
Before PostgreSQL 11 introduced Stored Procedures, developers had to pull all 50,000 records into the application layer (Node.js, Python, Java), iterate through them, and execute 50,000 individual transactions over the network. This caused immense network latency, connection pool exhaustion, and application memory bloat.
With Stored Procedures, we can finally process the batch natively inside the database, retaining partial success while gracefully handling individual failures.
The Power of Sub-Transactions and Batch Commits
In a PostgreSQL Stored Procedure, you can issue COMMIT and ROLLBACK commands natively. More importantly, when you place an EXCEPTION block inside a BEGIN ... END block, PostgreSQL automatically creates a sub-transaction (effectively an implicit SAVEPOINT).
If a statement inside that block fails, PostgreSQL only rolls back that specific sub-transaction. The main loop continues executing.
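The mechanism is easy to see in isolation with a DO block; this sketch uses a throwaway temp table:

```sql
-- Sketch: an inner BEGIN ... EXCEPTION block acts as an implicit savepoint
DO $$
BEGIN
    CREATE TEMP TABLE t (n INT);
    BEGIN
        INSERT INTO t VALUES (1);
        RAISE EXCEPTION 'boom';   -- aborts only the inner sub-transaction
    EXCEPTION
        WHEN OTHERS THEN
            RAISE NOTICE 'inner block rolled back: %', SQLERRM;
    END;
    -- The INSERT above was undone, but the outer transaction (and the
    -- temp table created in it) is still alive here.
END $$;
```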
However, there is a catch: accumulating thousands of locks and sub-transactions in memory will eventually degrade performance or crash the database. The true senior approach is to combine sub-transactions with Batch Commits — committing the data in chunks.
The Production-Ready Batch Processor:
CREATE OR REPLACE PROCEDURE process_bulk_payouts(p_batch_size INT DEFAULT 1000)
LANGUAGE plpgsql
AS $$
DECLARE
v_record RECORD;
v_processed_count INT := 0;
BEGIN
-- Loop through all pending payouts
FOR v_record IN
SELECT id, user_id, amount
FROM payouts
WHERE status = 'PENDING'
-- Note: Do NOT use FOR UPDATE here if you are committing inside the loop,
-- as COMMIT releases all locks and closes standard cursors.
LOOP
BEGIN -- Start of the implicit Sub-Transaction
-- 1. Business Logic
UPDATE accounts
SET balance = balance + v_record.amount
WHERE id = v_record.user_id;
UPDATE payouts
SET status = 'COMPLETED', processed_at = NOW()
WHERE id = v_record.id;
v_processed_count := v_processed_count + 1;
EXCEPTION
WHEN OTHERS THEN
-- 2. Failure Handling
-- The sub-transaction for this specific v_record is rolled back automatically.
-- We log the failure and let the loop continue. The error row is safe:
-- a later failure only rolls back its own sub-transaction, and the next
-- COMMIT below persists our error tracking.
INSERT INTO payout_errors (payout_id, error_message, created_at)
VALUES (v_record.id, SQLERRM, NOW());
UPDATE payouts
SET status = 'FAILED'
WHERE id = v_record.id;
END;
-- 3. Batch Commit Strategy
-- PL/pgSQL forbids COMMIT inside a block that has an EXCEPTION clause,
-- so we commit here, after the sub-transaction block has ended.
-- Committing every N records releases locks and prevents memory bloat.
IF v_processed_count % p_batch_size = 0 THEN
COMMIT;
RAISE NOTICE 'Committed batch of % records', p_batch_size;
END IF;
END LOOP;
-- Final commit for any remaining records that didn't hit the batch size limit
COMMIT;
RAISE NOTICE 'Bulk payout processing completed.';
END;
$$;
Why This Architecture Works
- Network Efficiency: By moving the loop inside the database, you eliminate 50,000 network round-trips between your backend application and the database server.
- Resilience: The EXCEPTION block acts as a shield. If User A's account has a constraint violation (e.g., account frozen), it fails gracefully, gets logged into payout_errors, and User B's payout is processed without interruption.
- Memory & Lock Management: Calling COMMIT every 1,000 records (the p_batch_size) is crucial. It flushes the WAL (Write-Ahead Log), releases row locks, and frees up memory. Without this, a long-running procedure would lock thousands of rows for hours, blocking other parts of the application.
A Crucial Warning on Cursors: When you execute a COMMIT inside a FOR loop, PostgreSQL keeps the cursor open implicitly in modern versions, but it releases any FOR UPDATE locks you might have acquired in the SELECT statement. Therefore, design your batch queries to process data incrementally (like filtering by status) rather than relying on locks held across commits.
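One incremental pattern that sidesteps the cursor-and-lock issue entirely is to re-select a fresh, bounded chunk after every COMMIT instead of holding a single long-lived cursor. A sketch of the claiming query (the loop scaffolding is assumed to live in a procedure like the one above):

```sql
-- Sketch: claim a fresh chunk per iteration. FOR UPDATE SKIP LOCKED is
-- safe here because these locks only need to live until the COMMIT that
-- ends this iteration's work, and it lets parallel workers cooperate.
SELECT id, user_id, amount
FROM payouts
WHERE status = 'PENDING'
ORDER BY id
LIMIT 1000
FOR UPDATE SKIP LOCKED;
```

Because each chunk's locks die at its own COMMIT, no lock ever has to survive across a transaction boundary.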
Case Study 3: Race Conditions in Ticket Sales (Preventing Double-Booking)
The Check-Then-Act Anti-Pattern
Selling a ticket, booking a hotel room, or reserving a specific seat on a flight introduces one of the most notorious concurrency bugs in computer science: the Race Condition. Specifically, the Time-of-Check to Time-of-Use (TOCTOU) flaw.
Imagine an API endpoint that allows a user to book a specific concert seat. A junior developer will typically write the logic like this:
The Naive Approach (The Double-Booking Disaster):
CREATE OR REPLACE PROCEDURE book_seat_naive(
p_user_id INT,
p_seat_id INT
)
LANGUAGE plpgsql
AS $$
DECLARE
v_status VARCHAR;
BEGIN
-- Step 1: Check availability (The Check)
SELECT status INTO v_status
FROM seats
WHERE id = p_seat_id;
-- Step 2: Make the booking (The Act)
IF v_status = 'AVAILABLE' THEN
-- Simulate some business logic delay (e.g., checking user balance)
PERFORM pg_sleep(0.1);
UPDATE seats
SET status = 'BOOKED', user_id = p_user_id
WHERE id = p_seat_id;
INSERT INTO tickets (user_id, seat_id, created_at)
VALUES (p_user_id, p_seat_id, NOW());
COMMIT;
ELSE
RAISE EXCEPTION 'Seat is already taken!';
END IF;
END;
$$;
If you test this locally, it works flawlessly. But when the tickets for a Taylor Swift concert drop and thousands of fans hit the Buy button at the exact same millisecond, you end up with two people holding a valid ticket for Seat 12A.
The Anatomy of a Race Condition
Because standard SELECT statements in PostgreSQL do not block each other, multiple transactions can read the exact same state before any of them has a chance to update it.
| Timeline | Transaction 1 (Alice) | Transaction 2 (Bob) | System State |
|----------|------------------------------------|------------------------------------|------------------------------------|
| t0 | SELECT status FROM seats (ID:12) | SELECT status FROM seats (ID:12) | Both read 'AVAILABLE'. |
| t1 | IF v_status = 'AVAILABLE' (True) | IF v_status = 'AVAILABLE' (True) | Both pass the validation check. |
| t2 | UPDATE seats SET status='BOOKED' | (Waiting for T1 to commit...) | Alice gets the seat. |
| t3 | COMMIT; | UPDATE seats SET status='BOOKED' | Bob OVERWRITES Alice's booking. |
In the end, the database records the seat as booked by Bob, but both Alice and Bob received a Success response from the API, and both were charged.
The Solution: Pessimistic Locking with NOWAIT
To fix this, we must bind the Check and the Act phases together using Pessimistic Locking. When Alice checks the seat, she must place an exclusive lock on that row so that Bob cannot even read it until she is done.
In PostgreSQL, we achieve this using SELECT ... FOR UPDATE. However, if we just use FOR UPDATE, Bob's transaction will hang and wait for Alice to finish. In a high-traffic ticketing system, hanging connections will instantly exhaust your database connection pool.
We need Bob’s request to fail instantly so the UI can tell him Someone else is currently looking at this seat. We do this by appending NOWAIT.
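A middle ground between blocking forever and failing instantly is a bounded wait via lock_timeout. A timed-out lock raises the same 55P03 (lock_not_available) error class that NOWAIT does, so the same exception handler applies (sketch; the 250ms budget and seat id are illustrative assumptions):

```sql
-- Wait up to 250ms for the row lock, then fail with SQLSTATE 55P03.
-- SET LOCAL scopes the setting to the current transaction only.
SET LOCAL lock_timeout = '250ms';

SELECT status
FROM seats
WHERE id = 42        -- hypothetical seat id
FOR UPDATE;          -- no NOWAIT: we wait, but only briefly
```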
The Production-Ready Booking Procedure:
CREATE OR REPLACE PROCEDURE book_seat_robust(
p_user_id INT,
p_seat_id INT
)
LANGUAGE plpgsql
AS $$
DECLARE
v_status VARCHAR;
BEGIN
-- 1. Atomic Check-and-Lock Phase
-- NOWAIT ensures that if another transaction holds the lock, this query
-- immediately throws error 55P03 (lock_not_available). We catch it in an
-- inner block: PL/pgSQL forbids COMMIT inside a block that has an
-- EXCEPTION clause, so the handler must not wrap the COMMIT below.
BEGIN
SELECT status INTO v_status
FROM seats
WHERE id = p_seat_id
FOR UPDATE NOWAIT;
EXCEPTION
WHEN lock_not_available THEN
RAISE EXCEPTION 'Seat is currently being booked by another user. Please try again.'
USING ERRCODE = 'P0003';
END;
-- 2. Validation Phase
-- IS DISTINCT FROM also rejects NULL, i.e. a seat id that does not exist
IF v_status IS DISTINCT FROM 'AVAILABLE' THEN
-- The seat is locked by us, but it was already booked in the past
RAISE EXCEPTION 'Seat % is no longer available.', p_seat_id
USING ERRCODE = 'P0002';
END IF;
-- 3. Execution Phase
UPDATE seats
SET status = 'BOOKED', user_id = p_user_id
WHERE id = p_seat_id;
INSERT INTO tickets (user_id, seat_id, created_at)
VALUES (p_user_id, p_seat_id, NOW());
-- 4. Commit and Release the Row Lock
-- Any unhandled error above rolls the transaction back automatically
-- and propagates to the caller.
COMMIT;
END;
$$;
Why This Architecture Works
- Absolute Consistency: By acquiring a FOR UPDATE lock during the SELECT phase, we guarantee that the state of the row cannot change between the moment we check it and the moment we update it.
- Fail-Fast Mechanism: The NOWAIT clause is a lifesaver for system stability. Instead of creating a massive queue of blocked transactions that chew through CPU and connection pools, secondary requests immediately hit the lock_not_available exception.
- Clear API Responses: Because we catch the specific lock exception and translate it into a custom error code (P0003), the backend application can gracefully handle the rejection and inform the frontend to show a Seat locked, choose another UI prompt.
Pro Tip: If you are building a queueing system where workers are picking up tasks rather than users booking specific seats, look into SELECT ... FOR UPDATE SKIP LOCKED instead of NOWAIT. It skips locked rows entirely and grabs the next available one, which is perfect for high-throughput job queues.
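A minimal sketch of that job-queue pattern (the jobs table and its columns are hypothetical):

```sql
-- Each worker claims the next unclaimed job without blocking on rows
-- that another worker has already locked.
UPDATE jobs
SET status = 'RUNNING'
WHERE id = (
    SELECT id
    FROM jobs
    WHERE status = 'PENDING'
    ORDER BY id
    LIMIT 1
    FOR UPDATE SKIP LOCKED
)
RETURNING id;
```

If every PENDING row is currently locked by other workers, the subquery returns no row and the UPDATE simply affects nothing, so idle workers spin cheaply instead of queuing.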