What Happens When You Create 1600 Columns in PostgreSQL?
PostgreSQL documentation says the limit is 1600 columns. Today, we are going to push the database to its breaking point to see if that’s true.
If you read the deep corners of the PostgreSQL documentation, you will find a sentence that sounds like a challenge:
“A table cannot have more than 1600 columns. (In practice, the effective limit is lower…)”

Most developers read this and move on. “Who in their right mind designs a table with 1600 columns?” we ask. But theoretical limits are always a playground for engineers.
Today, we are putting on our lab coats to make the PostgreSQL engine a little angry. What really happens when we hit that 1600-column wall? And more importantly, why is getting anywhere near this limit an act of architectural suicide?
Let’s use PL/pgSQL to force the database to its limits.
The Lab: Testing the Limit
Writing 1600 columns by hand (col1, col2, col3...) would be tedious. So, I wrote a dynamic script using PostgreSQL’s own procedural language (PL/pgSQL) to attempt this crazy operation.
The code below tries to create a table with 1 ID (Primary Key) and 1599 INT columns. This brings us exactly to the 1600 limit.
Warning: Do not run this on your production database! Use a local or test environment.
DO $$
DECLARE
    -- Variable to hold the dynamic SQL string
    v_sql TEXT := 'CREATE TABLE limit_test (id SERIAL PRIMARY KEY, ';
    i INT;
BEGIN
    -- Loop from 1 to 1599 (plus id = 1600 columns total)
    FOR i IN 1..1599 LOOP
        v_sql := v_sql || 'col_' || i || ' INT, ';
    END LOOP;
    -- Trim the trailing comma and close the parenthesis
    v_sql := rtrim(v_sql, ', ') || ');';
    RAISE NOTICE 'Creating table... Target column count: 1600';
    -- AND BOOM! Execute the query:
    EXECUTE v_sql;
    RAISE NOTICE 'SUCCESS! Table created.';
EXCEPTION WHEN OTHERS THEN
    -- Catch and print any error
    RAISE NOTICE 'ERROR CAUGHT: %', SQLERRM;
END $$;
Result 1: Walking on the Edge (1600 Columns)
When you run this code (assuming standard block size and INT data types), PostgreSQL will likely say: “SUCCESS”.
Congratulations! You now own a table that sits on the absolute edge of what is technically allowed: a table that will be a nightmare to manage.
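If you want to see the evidence with your own eyes, a quick sanity check against the system catalog does the job (this assumes the limit_test table created by the script above):

```sql
-- Count the user-visible columns of limit_test.
-- Expect this to report 1600 (id + col_1 .. col_1599).
SELECT count(*) AS column_count
FROM information_schema.columns
WHERE table_name = 'limit_test';
```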
Result 2: Crossing the Line (1601 Columns)
Now, change the loop in the code from 1..1599 to 1..1600 (Totaling 1601 columns). Run it again.
PostgreSQL will show no mercy:

ERROR: tables can have at most 1600 columns
The database engine rejects the statement before the table is ever created. This is a hard limit, compiled into the server itself (the MaxHeapAttributeNumber constant). No matter how much RAM or CPU you throw at it, you cannot change this rule without recompiling PostgreSQL.
The Hidden Trap: “Tuple Size” Limits
Let’s go back to that parenthesis in the documentation: “(In practice, the effective limit is lower…)”
In our experiment, we used INT. Integers take up very little space. But what if those 1600 columns were TEXT or VARCHAR?
PostgreSQL stores data in 8KB pages. A single row (tuple) must fit inside one page; variable-length values can be compressed or moved out-of-line by the TOAST mechanism, but fixed-width columns and short values cannot. If your columns are full of data, you will hit a wall long before you reach the 1600th column; you might crash at, say, column 400 with this error:
ERROR: row is too big: size 8472, maximum size 8160
This means: "You didn't hit the column count limit, but your row is physically too fat to fit on a page." This is one of the worst errors to encounter in a production environment at 3 AM.
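You can trigger this trap well below 1600 columns using fixed-width types that TOAST cannot rescue. The sketch below (a hypothetical variant of the earlier DO block, assuming PostgreSQL 13+ for gen_random_uuid()) builds a table of 600 uuid columns; at 16 bytes each, a fully populated row needs roughly 9.6 KB, so the CREATE TABLE succeeds but filling every column does not:

```sql
-- Sketch: 600 fixed-width uuid columns (16 bytes each, not TOAST-able).
-- Table creation succeeds; inserting a fully populated row exceeds the page.
DO $$
DECLARE
    v_sql TEXT := 'CREATE TABLE wide_test (id SERIAL PRIMARY KEY';
BEGIN
    FOR i IN 1..600 LOOP
        v_sql := v_sql || ', col_' || i || ' uuid DEFAULT gen_random_uuid()';
    END LOOP;
    EXECUTE v_sql || ');';
END $$;

-- Every DEFAULT fires, so every column gets 16 bytes of data:
INSERT INTO wide_test DEFAULT VALUES;
-- ERROR:  row is too big: ...
```

Note that NULL columns cost almost nothing, which is why the table can exist happily until someone actually fills it.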
The Solution: Fix the Architecture
If your client asks for a form with 2000 fields, or if your application “needs” 1600 columns, the problem isn’t PostgreSQL. The problem is your data model. Here is the prescription:
- Use JSONB: Create real columns for the fixed fields (ID, Date, Status). For the hundreds of dynamic attributes, use a single attributes JSONB column. You bypass the 1600 limit and gain schema-less flexibility.
- Vertical Partitioning: Split your table into logical parts. Create 1-to-1 related tables like User_Settings, User_Profile, and User_Logs. This improves performance by keeping frequently accessed data in smaller, leaner tables.
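As a minimal sketch of the JSONB approach (the products table and idx_products_attrs index are hypothetical names, not from the experiment above):

```sql
-- Fixed, frequently queried fields stay as real columns;
-- the open-ended attributes live in a single JSONB column.
CREATE TABLE products (
    id     SERIAL PRIMARY KEY,
    name   TEXT NOT NULL,
    status TEXT NOT NULL DEFAULT 'active',
    attrs  JSONB NOT NULL DEFAULT '{}'
);

INSERT INTO products (name, attrs)
VALUES ('Widget', '{"color": "red", "weight_g": 150}');

-- A GIN index makes containment queries on the dynamic attributes fast:
CREATE INDEX idx_products_attrs ON products USING GIN (attrs);

SELECT name FROM products WHERE attrs @> '{"color": "red"}';
```

The trade-off: JSONB values are not type-checked by the schema, so validation moves into your application (or CHECK constraints).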
Conclusion
PostgreSQL gives us the right to 1600 columns, just like your car's speedometer might show 260 km/h. Being able to go that fast doesn't mean you should.
In database design, simplicity and normalization save you from these headaches.
Copy the code, try it on your local machine, and see the limits with your own eyes. But in production? Please, just use JSONB.