One of the surprises that comes with developing applications and operating the database cluster behind them is the discrepancy between theory and practice, between the development environment and production. A perfect example of such a mismatch is changing a column type.

The conventional wisdom on how to change a column type in PostgreSQL (and other systems compliant with the SQL standard) is to run:

ALTER TABLE table_name
ALTER COLUMN column_name
[SET DATA] TYPE new_data_type

which is the semantically correct way, but given the right circumstances, you might be in for a rather unpleasant surprise.

The Problem

Let's create a sample table and demonstrate, in isolation, the behaviour you might observe. We'll start with 7 million rows (which really is just a drop in the whole world of data).

-- create very simple table
CREATE TABLE sample_table (
    id INT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    label TEXT,
    created_at TIMESTAMP WITHOUT TIME ZONE DEFAULT NOW(),
    updated_at TIMESTAMP WITHOUT TIME ZONE DEFAULT NOW()
);

-- populate with 7 million records
INSERT INTO sample_table (label)
SELECT 
  'hash: ' || md5(random()::text)
FROM generate_series(1, 7000000);

and let's change the id type from INT to BIGINT.

alter_type_demo=# ALTER TABLE sample_table ALTER COLUMN id TYPE bigint;
ALTER TABLE
Time: 21592.190 ms (00:21.592)

And ... 21 seconds later, you have your change. Mind you, this is a small table with roughly 600 MB of data in it. What if you were facing 100x that amount? Let's have a look at what went on behind the scenes.
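
If you want to verify that "roughly 600 MB" figure on your own instance, a quick sanity check (not part of the migration itself) might look like this; pg_table_size excludes indexes, pg_total_relation_size includes them:

-- rough on-disk footprint of the demo table
SELECT pg_size_pretty(pg_table_size('sample_table'))          AS table_size,
       pg_size_pretty(pg_total_relation_size('sample_table')) AS total_size;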

The things PostgreSQL must do

Changing a data type (like many other schema operations) is no simple task; behind the scenes, the PostgreSQL engine has to take care of several things:

  • Rewriting the table is the most obvious culprit. Changing a column from INT to BIGINT requires 4 additional bytes to be allocated for every single tuple (ahem, think row). Because the original tuples were laid out for the narrower type, every single row, in our example 7 million of them, must be read and re-written using the new tuple size.
  • Locks might not be a problem in our synthetic example, but should you execute the ALTER command in production with hundreds or thousands of concurrent queries, it has to wait for all of them to release their locks, while every new query against the table queues up behind it in the meantime.
  • Indexes and constraints - if the column being altered is indexed or has constraints, they need to be rebuilt/revalidated. This is additional overhead.
  • Transactions & Write-Ahead Log are another big part of the problem. To guarantee durability (the 'D' in ACID), PostgreSQL has to record each change in the WAL. This way, if the database crashes, the system can replay the WAL files to reconstruct the modifications since the last checkpoint. For a full table rewrite, that means roughly the volume of the rewritten table (plus its indexes) in extra WAL; the snippet after this list shows one way to observe both the rewrite and the WAL volume.
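
To see the rewrite and the WAL volume for yourself on a test instance, one rough, hedged approach is to compare the table's relfilenode and the WAL position before and after the ALTER; the LSN placeholders below stand for whatever the two pg_current_wal_lsn() calls returned, and the difference is only approximate because other sessions also write WAL:

-- the file backing the table changes when PostgreSQL rewrites it; run before and after
SELECT relfilenode FROM pg_class WHERE relname = 'sample_table';

-- note the WAL position, run the ALTER, then note it again
SELECT pg_current_wal_lsn();
ALTER TABLE sample_table ALTER COLUMN id TYPE bigint;
SELECT pg_current_wal_lsn();

-- difference between the two positions ~ WAL generated by the rewrite
SELECT pg_size_pretty(pg_wal_lsn_diff('<lsn_after>', '<lsn_before>'));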

As you can see, there's quite a bit involved in performing something that might be understood as routine table maintenance. The size of the data being modified, disk I/O and capacity, and general system congestion come into play.

But the real problem does not end here. If we are talking about any sort of serious production deployment, you must consider more things:

  • Real-time replication, both physical and logical, adds an additional layer of complexity. If your read replicas run with synchronous replication, a transaction is only finalised once the standbys have confirmed receipt of the changes. This introduces new challenges, as performance now also depends on network throughput (including potential congestion) and on the latency and I/O performance of the standby nodes.
  • Recovery and backups are another important area to consider. While the size of regular backups might not be affected that much, you have to consider everything that happens between the last backup before the change and the next one, and ensure point-in-time consistency.
  • Less common, but not unheard of, are asynchronous replicas or reserved slots for logical replication. Generating a large volume of changes (and therefore WAL files) can leave the less performant (or infrequent) replication consumers behind for a considerable amount of time. While this might be acceptable, you need to make sure the source system has enough disk space to retain the WAL files for long enough (the monitoring queries after this list are one way to keep an eye on that).
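
To keep an eye on replicas and slots while the heavy writes are happening, queries along these lines (a sketch against the standard pg_stat_replication and pg_replication_slots views) can help:

-- how far each streaming replica lags behind the primary
SELECT application_name, state,
       pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn)) AS replay_lag
FROM pg_stat_replication;

-- how much WAL each replication slot is forcing the primary to retain
SELECT slot_name, active,
       pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) AS retained_wal
FROM pg_replication_slots;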

As you can see, altering the column data type is not as straightforward as it might seem. Current CI/CD practices often make it very easy for software developers to commit and roll out database migrations to a production environment, only to find themselves in the middle of a production incident minutes later. While a staging deployment might help, it's not guaranteed to share the same characteristics as production (either due to the level of load or monetary constraints).

The problem is, therefore (and I will repeat myself), the scale of the data being modified, the overall congestion of the system, the available I/O capacity, and the target table's importance in the application design.

At the end of the day, it translates to the total time needed for the migration to finish, and the constraints unique to your business: what downtime it can, or cannot, afford. The simplest solution to the problem is to schedule planned maintenance during a low-traffic period and get it done.

How to Safely Change a PostgreSQL Column Type

What if you need to rewrite hundreds of gigabytes or even terabytes of data and can't afford anything more than minimal downtime? Let's explore how to change a column type properly.

Let's start with The Bad News - you cannot avoid rewriting the entire table, which will generate a significant amount of WAL files in the process. This is a given, and you must plan how to manage it.

The Good News: You can spread the potential downtime over a much longer period than it might take to process the data. The specific requirements and constraints will vary based on individual business needs, so careful planning is essential.

The full migration can be summarised as a series of the following steps:

  • Add a new column to the target table with the correct type. Ensure the column is NULLable and does not have a default value, to avoid forcing a full table rewrite [1]. For example, should you need to widen order_id, you would end up with a new column such as new_order_id.
  • Set up a trigger to populate the new column as new data comes in. This ensures that every row inserted or updated during the migration has the new column populated (a sketch of these first steps follows the list).
  • Implement a function or logic to batch-migrate the values from the old column to the new one over time. The batch size and timing should align with the operational constraints of your business/environment.
  • Migrate Old Values: Depending on your constraints, data size, and I/O capabilities, this process could take anywhere from hours to several weeks, or possibly longer. While a SQL or PL/pgSQL function running in a terminal session (consider using tmux) might suffice for shorter migrations, more extended migrations may require a more sophisticated approach. This topic alone could be a good subject for a separate blog post or guide.
  • Once the migration is complete, create constraints and indexes that reflect the new column. Be aware of potential locking issues, especially if the field is part of any foreign keys.
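
Using the sample_table from the beginning as a stand-in, a minimal sketch of the first three steps could look like the following; the new_id column, function and trigger names are made up for illustration:

-- 1. new, wider column: NULLable, no default, so no full table rewrite
ALTER TABLE sample_table ADD COLUMN new_id BIGINT;

-- 2. keep the new column in sync for rows inserted or updated from now on
--    (EXECUTE FUNCTION needs PostgreSQL 11+; older versions use EXECUTE PROCEDURE)
CREATE OR REPLACE FUNCTION sample_table_sync_new_id()
RETURNS trigger AS $$
BEGIN
    NEW.new_id := NEW.id;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_sample_table_sync_new_id
BEFORE INSERT OR UPDATE ON sample_table
FOR EACH ROW
EXECUTE FUNCTION sample_table_sync_new_id();

-- 3. backfill existing rows in small batches; re-run until 0 rows are updated
WITH batch AS (
    SELECT id
    FROM sample_table
    WHERE new_id IS NULL
    LIMIT 10000
)
UPDATE sample_table s
SET new_id = s.id
FROM batch
WHERE s.id = batch.id;

The batch UPDATE is meant to be driven from a script or scheduler so that each batch commits separately, keeping lock times short and spreading the WAL volume out over time.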

At this point, you are ready to perform the switch itself. Once you have verified that every row has the new column correctly populated, it's time to embrace the most difficult part, ideally in a single transaction and/or during a short scheduled downtime (a sketch follows the list):

  • Drop the legacy column. This operation usually only locks the table for a short duration.
  • After dropping the old column, rename the new column. This step completes most of the migration process.
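
Sticking with the same sketch, the pre-switch verification, index preparation and the switch itself might look roughly like this. The constraint and index names are assumptions (sample_table_pkey is only the default primary key name), the sketch assumes no foreign keys reference the column, and it glosses over re-attaching the identity/sequence, which a real migration of an identity column would also have to handle:

-- sanity check before the switch: this should return 0
SELECT count(*) FROM sample_table WHERE new_id IS NULL;

-- build the replacement unique index up front, without blocking writes
-- (CREATE INDEX CONCURRENTLY cannot run inside a transaction block)
CREATE UNIQUE INDEX CONCURRENTLY sample_table_new_id_key ON sample_table (new_id);

-- the switch itself, kept as short as possible
BEGIN;
DROP TRIGGER trg_sample_table_sync_new_id ON sample_table;
ALTER TABLE sample_table DROP CONSTRAINT sample_table_pkey;
ALTER TABLE sample_table DROP COLUMN id;
ALTER TABLE sample_table RENAME COLUMN new_id TO id;
-- reuses the prepared index; also sets NOT NULL, which scans (but does not rewrite) the table
ALTER TABLE sample_table
    ADD CONSTRAINT sample_table_pkey PRIMARY KEY USING INDEX sample_table_new_id_key;
COMMIT;

DROP FUNCTION sample_table_sync_new_id();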

It's good practice to consider restarting all applications that rely on the changed table, as some tools (ORMs... I'm looking at you) might cache the OIDs and not handle the change gracefully.

And that would be it - except not really. Dropping the column only removes the reference in the catalog; the data itself remains physically on disk. This is the scenario where you might need to perform VACUUM FULL, which locks the table and rewrites it completely, potentially defeating the purpose of a concurrent migration. This brings us back to the original article which motivated me to write this guide: [[The Bloat Busters: pg_repack vs pg_squeeze]] is the way to go. Preparation and familiarity with these tools in advance are highly recommended.
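
A quick way to convince yourself that the dropped column is hidden rather than gone is to peek into the catalog; the dropped column still shows up as a placeholder attribute with attisdropped set:

-- the dropped column still occupies a slot in every existing tuple
SELECT attname, attisdropped
FROM pg_attribute
WHERE attrelid = 'sample_table'::regclass
  AND attnum > 0;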

Conclusion

While changing the column type in PostgreSQL can be as simple as issuing an ALTER TABLE command, it is important for everyone involved to understand the complexities attached to it. Whether you are the software developer requesting the change, someone reviewing it, or the individual tasked with resolving incidents when such changes are deployed to the production environment without careful planning, a deep understanding of this process is crucial. Moreover, grasping this particular change enables you to easily project insights onto other potentially costly operations.

[1] Correction: since PostgreSQL 11, adding a column with a (non-volatile) default is actually done without a full table rewrite (Fast Column Creation with Defaults).