To honor the name of the site (boringSQL), let's take a deep dive into a topic which might sound obvious, but which can be a never-ending source of surprises and misunderstanding.
Simple things
Let's start with a simple statement like
SELECT '2025-03-30 00:30' as t;
Result:
t
------------------
2025-03-30 00:30
which gives you (and I do hope that's not a big surprise) a simple text literal, rather than anything to do with date and time. As soon as you use the string literal in a query similar to
SELECT * FROM events WHERE start_at > '2025-03-30 00:30';
PostgreSQL will implicitly convert the string into a timestamp. The reason this works is that start_at is most likely a timestamp-based column, allowing an automatic cast to match the column's data type. This is unlike the query
SELECT '2025-03-30 01:00' - interval '15 minutes';
Result:
ERROR: invalid input syntax for type interval: "2025-03-30 01:00"
which will simply not work, as PostgreSQL can't perform an automatic cast here (which, as we will cover later, might be surprising behavior, but that's how things are). The correct way to get this example query to work is to use one of the following two notations (functionally equivalent, just different syntax).
-- PostgreSQL cast notation
SELECT '2025-03-30 01:00'::timestamp - interval '15 minutes';
-- SQL standard explicit type notation
SELECT timestamp '2025-03-30 01:00' - interval '15 minutes';
PostgreSQL timestamp vs timestamptz
The next possible source of confusion when working with time in PostgreSQL is the presence of two distinct data types:
- timestamp (or timestamp without time zone)
- timestamptz (or timestamp with time zone)

Despite what the names suggest, the key difference isn't whether they store timezone information, but rather how they handle it during storage and retrieval.
IMPORTANT: before we cover the details, remember to always use timestamptz. As the official Don't Do This page shows, using timestamp is like storing a picture of a clock instead of a point in time. And while this article might use timestamp, it's purely for demonstration purposes.
All you need to remember is that timestamp stores the exact datetime value as entered, without any timezone context or adjustments. It's essentially a snapshot of a calendar date and wall clock time, disconnected from any particular geographic location.
-- this stores what you provided
SELECT '2025-03-30 01:30'::timestamp;
Result:
timestamp
---------------------
2025-03-30 01:30:00
On the other hand, timestamptz normalizes all input to UTC internally and then converts values to the session's timezone for display. This provides geographic context to your datetime values.
-- this converts your input to UTC based on your session timezone
SELECT '2025-03-30 01:30'::timestamptz;
Result:
timestamptz
-----------------------
2025-03-30 01:30:00+01
And this is where the potential confusion might start. Let's take this example:
-- With UTC timezone
SET timezone = 'UTC';
SELECT '2025-03-30 01:30'::timestamptz;
Result:
timestamptz
-----------------------
2025-03-30 01:30:00+00
-- With Tokyo timezone
SET timezone = 'Asia/Tokyo';
SELECT '2025-03-30 01:30'::timestamptz;
Result:
timestamptz
-----------------------
2025-03-30 01:30:00+09
which gives you the correct representation based on the session timezone. This comes in handy once we add actual storage into the picture.
-- Create table and view with Berlin timezone
SET timezone = 'Europe/Berlin';
CREATE TABLE time_flies(moment timestamp with time zone);
INSERT INTO time_flies VALUES ('2025-03-30 00:30');
SELECT moment FROM time_flies;
Result:
moment
------------------------
2025-03-30 00:30:00+01
-- View the same data with New York timezone
SET timezone = 'America/New_York';
SELECT moment FROM time_flies;
Result:
moment
------------------------
2025-03-29 19:30:00-04
This demonstrates the advantage of automatic time zone conversion, allowing you to avoid not only time-zone bugs, but also DST-related issues.
While most of us might be fast asleep during our local DST changes, it's no fun to account for the time difference the wrong way. Using timestamptz is a sure way to avoid it. Let's take the example of a recent (at the time of writing this article) DST change.
-- Set Berlin timezone (during DST change)
SET timezone = 'Europe/Berlin';
-- Using timestamp (without timezone awareness)
SELECT '2025-03-30 01:30'::timestamp + interval '45 minutes' as end_time;
Result:
end_time
---------------------
2025-03-30 02:15:00
-- Using timestamptz (with timezone awareness)
SELECT '2025-03-30 01:30'::timestamptz + interval '45 minutes' as end_time;
Result:
end_time
------------------------
2025-03-30 03:15:00+02
AT TIME ZONE
Another powerful but potentially confusing operator in PostgreSQL is AT TIME ZONE
which allows you to convert timestamps between different time zones, but its behavior differs depending on the input data type.
When applied to timestamp (without time zone)
When you apply AT TIME ZONE to a timestamp, PostgreSQL interprets the input as being in the specified time zone and converts it to a timestamptz:
-- First, set UTC timezone
SET timezone='UTC';
-- Interpret '2025-03-30 01:30' as if it were in the 'Europe/Paris' time zone
SELECT '2025-03-30 01:30'::timestamp AT TIME ZONE 'Europe/Paris';
Result:
timezone
------------------------
2025-03-30 00:30:00+00
What it effectively does is: "Take this wall clock time, consider it as being in the specified time zone, and give me the corresponding moment in time (as a timestamptz)." The representation here is based on the session time zone setting.
When applied to timestamptz (with time zone)
Conversely, when you apply AT TIME ZONE to a timestamptz, PostgreSQL converts the timestamp to the specified time zone and returns a timestamp without time zone:
-- Set timezone
SET timezone='America/Port_of_Spain';
-- Convert the timestamptz to how it would appear on wall clocks in Tokyo
SELECT '2025-03-30 01:30+01:00'::timestamptz AT TIME ZONE 'Asia/Tokyo';
Result:
timezone
---------------------
2025-03-30 09:30:00
This operation says: "Take this moment in time and show me what wall clock time it would be in the specified time zone."
Practical usage
Probably the only legitimate use of AT TIME ZONE, assuming you already store timestamptz correctly, is to provide a "wall clock" representation of the same moment across several time zones at once, for (view) presentation purposes.
-- What time is the company-wide meeting in various offices?
SELECT
'2025-03-30 15:00'::timestamptz AS meeting_time_utc,
'2025-03-30 15:00'::timestamptz AT TIME ZONE 'Europe/London' AS london_office_time,
'2025-03-30 15:00'::timestamptz AT TIME ZONE 'Asia/Tokyo' AS tokyo_office_time;
Result:
meeting_time_utc | london_office_time | tokyo_office_time
------------------------+---------------------+---------------------
2025-03-30 15:00:00+02 | 2025-03-30 14:00:00 | 2025-03-30 22:00:00
It's important to note that for API-based clients it's always better to return the full time notation and leave the presentation to the client. Similarly, unless you need to work with multiple time zones and depend on correct local time representation, it's better to rely on the session timezone setting instead of the AT TIME ZONE operator. This applies to both time input and output.
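For instance, here's a minimal sketch showing that the session timezone alone handles both input interpretation and output rendering (assuming a Berlin session, as set below):
-- Rely on the session timezone for both input and output
SET timezone = 'Europe/Berlin';
-- the literal is interpreted as Berlin local time, and displayed the same way
SELECT '2025-03-30 15:00'::timestamptz AS stored_moment;
Result:
stored_moment
------------------------
2025-03-30 15:00:00+02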
Common mistake
Let's consider an example where you might be tempted to "magically cast" already stored time using AT TIME ZONE.
-- Setup example
SET timezone='Europe/Berlin';
CREATE TABLE different_moments(moment1 timestamptz, moment2 timestamptz);
INSERT INTO different_moments (moment1) values ('2025-03-30 01:30');
-- Apply double time zone conversion
UPDATE different_moments SET moment2 = moment1 AT TIME ZONE 'Europe/Berlin' AT TIME ZONE 'America/New_York';
-- View the results
SELECT * FROM different_moments;
Result:
moment1 | moment2
------------------------+------------------------
2025-03-30 01:30:00+01 | 2025-03-30 07:30:00+02
Unless you are aware of the implicit conversion between timestamptz and timestamp (and vice versa), you might be in for a lot of surprises. In this particular case the first conversion was redundant and stripped the time zone offset, the second one "forced" it to New York time, and the final SELECT rendered the result in the local session timezone.
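For completeness, here's a sketch of what you more likely want (reusing the different_moments table from above): the stored timestamptz already identifies the moment, so a single conversion - or simply the session timezone setting - is enough to display it as New York wall clock time.
-- Display the stored moment as New York wall clock time
SELECT moment1 AT TIME ZONE 'America/New_York' AS ny_wall_clock FROM different_moments;
Result:
ny_wall_clock
---------------------
2025-03-29 20:30:00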
Timestamps and storage
While you will use timestamp with time zone from now on, PostgreSQL still comes with several nuances related to the way it stores the data. The first lesser-known feature is timestamp precision. Let's consider the following table.
-- Table with various timestamp precision specifications
CREATE TABLE precision (
t1 timestamp with time zone, -- default precision (6)
t2 timestamp(0) with time zone, -- seconds precision
t3 timestamp(2) with time zone, -- centiseconds precision
t4 timestamp(4) with time zone -- 10 microseconds precision
);
Effectively, timestamp comes with a default precision of 6 digits (microseconds), but you can specify anywhere from 0 to 6 for both timestamp and timestamptz types. At the same time, the precision has no impact on storage size (8 bytes), regardless of the precision specified.
-- Insert the same timestamp into all columns
INSERT INTO precision VALUES (current_timestamp, current_timestamp, current_timestamp, current_timestamp);
SELECT * FROM precision;
Result:
-[ RECORD 1 ]---------------------
t1 | 2025-04-03 21:19:20.952354+02
t2 | 2025-04-03 21:19:21+02
t3 | 2025-04-03 21:19:20.95+02
t4 | 2025-04-03 21:19:20.9524+02
Other than precision, there are some other interesting aspects of timestamp storage in PostgreSQL:
- It has a finite range, from timestamptz '4713-01-01 BC' to timestamptz '294276-12-31'.
- At the same time it supports positive and negative infinity with 'infinity'::timestamptz and '-infinity'::timestamptz, which can provide an alternative to NULL for open-ended intervals. If you choose to use infinity instead of NULL values, you can test whether a provided date/timestamp is a finite value using isfinite(timestamp), as shown below.
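A minimal sketch of the infinity/isfinite combination (using timestamp literals, since isfinite is defined for date, timestamp and interval):
-- Testing finite vs. open-ended values
SELECT
    isfinite('2025-03-30 00:30'::timestamp) AS regular_value,
    isfinite('infinity'::timestamp) AS open_ended;
Result:
 regular_value | open_ended
---------------+------------
 t             | f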
Getting the current time
Not many developers are aware that there are different ways to get the current datetime in SQL. These methods not only differ in syntax but also in their underlying logic.
CURRENT_TIMESTAMP vs NOW()
CURRENT_TIMESTAMP is part of the SQL Standard and interestingly, it's defined as both a "time-varying variable" and a "datetime value function". In PostgreSQL, you can use it with or without parentheses, making both these forms valid:
SELECT CURRENT_TIMESTAMP;
SELECT CURRENT_TIMESTAMP(2); -- (2) specifies precision
Along with CURRENT_TIMESTAMP, the SQL Standard also defines CURRENT_TIME
and CURRENT_DATE
for working with individual time and date components respectively.
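A quick illustrative sketch (note that CURRENT_TIME returns a time with time zone value, which is rarely what you want to store; the output shown assumes a UTC session):
SELECT CURRENT_DATE AS today, CURRENT_TIME(0) AS time_of_day;
Result:
   today    | time_of_day
------------+-------------
 2025-04-03 | 19:28:25+00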
NOW()
, while not part of the SQL Standard, is a widely used alternative across many database systems. Unlike CURRENT_TIMESTAMP, it's strictly a function and therefore always requires parentheses when called. Its straightforward syntax - NOW()
- makes it a popular choice among developers.
Transaction Behavior
Both NOW()
and CURRENT_TIMESTAMP
share the same transaction behavior: they return the timestamp from the start of the current transaction, not the moment of execution:
BEGIN;
SELECT NOW(); -- Returns transaction start time
SELECT pg_sleep(5); -- Wait 5 seconds
SELECT NOW(); -- Still returns the same time
COMMIT;
This behavior ensures data consistency within transactions but might be unexpected when measuring elapsed time.
Alternative Time Functions
When you need the actual current time regardless of transaction status, CLOCK_TIMESTAMP()
is your best choice. It returns the precise moment of execution, making it particularly useful for measuring real elapsed time. Here's how it differs from NOW()
:
BEGIN;
SELECT pg_sleep(1);
SELECT NOW() AS transaction_time,
CLOCK_TIMESTAMP() AS actual_time;
COMMIT;
Result:
transaction_time | actual_time
------------------------------+-------------------------------
2025-04-03 19:28:25.310254+00 | 2025-04-03 19:28:26.311917+00
Another useful function is STATEMENT_TIMESTAMP()
, which captures the time when the current statement began executing. This differs subtly from CLOCK_TIMESTAMP()
, which gives you the exact moment of function execution. The difference becomes clear in this example:
SELECT
pg_sleep(2),
STATEMENT_TIMESTAMP() AS statement,
CLOCK_TIMESTAMP() AS wall_time;
Result:
pg_sleep | statement | wall_time
----------+-------------------------------+------------------------------
| 2025-04-03 19:32:15.984459+00 | 2025-04-03 19:32:17.98661+00
These timestamp functions serve different purposes and are invaluable when you need to measure transaction timing or analyze statement-level performance. Each provides a different perspective on time within your database operations.
Intervals
Chances are, if you have worked with time in PostgreSQL, you have come across intervals. You probably just entered them using something like interval '1 year 2 months 3 days 4 hours'. Technically, that representation is driven by the intervalstyle setting, which (as in this case) is set to postgres_verbose. But you have many more options.
-- PostgreSQL default style
SET intervalstyle = 'postgres';
SELECT interval '1 year 2 months 3 days 4 hours';
Result:
interval
-------------------------------
1 year 2 mons 3 days 04:00:00
-- SQL standard style
SET intervalstyle = 'sql_standard';
SELECT interval '1 year 2 months 3 days 4 hours';
Result:
interval
------------------
+1-2 +3 +4:00:00
-- ISO 8601 style
SET intervalstyle = 'iso_8601';
SELECT interval '1 year 2 months 3 days 4 hours';
Result:
interval
------------
P1Y2M3DT4H
With the session intervalstyle you only change the interval's display format, not the actual value.
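A small sanity check, using nothing beyond the styles already shown: the same interval entered in ISO 8601 form and in the verbose form compares equal, whatever the current intervalstyle is.
-- Interval values are independent of the output style
SET intervalstyle = 'iso_8601';
SELECT interval 'P1Y2M3DT4H' = interval '1 year 2 months 3 days 4 hours' AS same_value;
Result:
 same_value
------------
 t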
Regardless of the interval representation, you might run into an interval definition that no longer makes it obvious (in either input or output) how to interpret it. That's where the justify_interval function comes into play, helping you normalize time expressions into conventional formats.
-- Normalize complex interval
SELECT justify_interval(interval '15 months 40 days 30 hours');
Result:
justify_interval
---------------------------------
@ 1 year 4 mons 11 days 6 hours
-- Practical example: normalize project durations
SELECT
project_name,
total_hours || ' hours' AS raw_duration,
justify_interval(interval '1 hour' * total_hours) AS normalized_duration
FROM
(VALUES ('Database Migration', 1850),
('API Development', 720),
('UI Redesign', 340))
AS projects(project_name, total_hours);
Result:
project_name | raw_duration | normalized_duration
--------------------+--------------+--------------------------
Database Migration | 1850 hours | @ 2 mons 17 days 2 hours
API Development | 720 hours | @ 1 mon
UI Redesign | 340 hours | @ 14 days 4 hours
One important specific of interval normalization is the fact that it uses 30-day periods as the definition of a month.
-- Month definition in normalized intervals
SELECT justify_interval(interval '30 days');
Result:
justify_interval
------------------
@ 1 mon
This might be understandable, but you need to beware of the edge cases when mixing day and month intervals.
-- Edge case comparison: month vs 30 days
SELECT
date '2025-01-31' + interval '1 month' as example1,
date '2025-01-31' + interval '30 days' as example2;
Result:
-[ RECORD 1 ]-----------------
example1 | 2025-02-28 00:00:00
example2 | 2025-03-02 00:00:00
And one last thing before wrapping up intervals: you can (same as with dates/timestamps) get specific parts using the extract function.
-- Extract specific parts from intervals
SELECT
extract(days from interval '1 year 3 months 21 days') AS days,
extract(hours from interval '25:30:45') AS hours;
Result:
days | hours
------+-------
21 | 25
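One more small addition on top of the example above: if you need the total length of an interval rather than an individual part, extract(epoch from ...) gives you its length in seconds (on recent PostgreSQL versions the result is numeric, hence the decimal places).
-- Total number of seconds in an interval
SELECT extract(epoch from interval '25:30:45') AS total_seconds;
Result:
 total_seconds
---------------
  91845.000000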
Time ranges
While working with timestamps comes naturally, it's time ranges which often feel like the ugly ducklings. And quite unfairly so - timestamp ranges in PostgreSQL are a powerful yet underutilized feature that elegantly solves common time-based challenges. PostgreSQL offers dedicated range types that are perfect for modeling time periods, reservations, schedules, and any situation where you need to track intervals with defined start and end points:
- tsrange - range of timestamp without time zone
- tstzrange - range of timestamp with time zone

And to follow the earlier advice - just as you should default to timestamptz, always use tstzrange for time ranges unless you have a VERY specific reason not to. Timestamp ranges aren't merely convenient syntax - they provide a complete set of operations for determining relationships between time periods and enable powerful constraints that handle complex business rules without reinventing the wheel (we'll sketch such a constraint at the end of this section). Timestamp ranges can be created using the range constructor syntax or string syntax.
-- Range constructor syntax
SELECT tstzrange(
'2025-04-01 09:00:00+02'::timestamptz,
'2025-04-01 17:30:00+02'::timestamptz,
'[)'
) AS workday;
-- String syntax
SELECT '[2025-04-01 09:00:00+02, 2025-04-01 17:30:00+02)'::tstzrange AS workday;
Result:
workday
-----------------------------------------------------
["2025-04-01 07:00:00+00","2025-04-01 15:30:00+00")
The default boundary notation for timestamp ranges is [)
- lower bound inclusive (includes the start time) and upper bound exclusive (excludes the end time).
For time-based applications, this convention is particularly valuable. Consider scheduling consecutive one-hour meetings from 9:00 to 10:00 and 10:00 to 11:00. With [)
notation, these are represented as [09:00, 10:00)
and [10:00, 11:00)
, creating perfect adjacency without overlap or gaps. If you used inclusive bounds [09:00, 10:00]
and [10:00, 11:00]
, the moment 10:00 would technically belong to both ranges - a logical impossibility for scheduling. This natural alignment with how we conceptualize time slots makes [)
notation the default and recommended choice for most application use cases.
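A small sketch of this adjacency property, using the range operators -|- (is adjacent to) and && (overlaps):
-- Consecutive meetings: adjacent, not overlapping
SELECT
    '[2025-04-01 09:00, 2025-04-01 10:00)'::tstzrange -|-
    '[2025-04-01 10:00, 2025-04-01 11:00)'::tstzrange AS adjacent,
    '[2025-04-01 09:00, 2025-04-01 10:00)'::tstzrange &&
    '[2025-04-01 10:00, 2025-04-01 11:00)'::tstzrange AS overlapping;
Result:
 adjacent | overlapping
----------+-------------
 t        | f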
Timestamp ranges support numerous operations for checking relationships:
-- 1. Check if a timestamp is within a range
SELECT
'[2025-04-01 09:00, 2025-04-01 17:00)'::tstzrange @> '2025-04-01 12:30'::timestamptz AS is_during_workday;
Result:
is_during_workday
------------------
t
-- 2. Check if two ranges overlap
SELECT
'[2025-04-01 09:00, 2025-04-01 12:00)'::tstzrange &&
'[2025-04-01 11:00, 2025-04-01 14:00)'::tstzrange AS meetings_overlap;
Result:
meetings_overlap
-----------------
t
-- 3. Extract time range bounds
SELECT
lower('[2025-04-01 09:00, 2025-04-01 17:00)'::tstzrange) AS start_time,
upper('[2025-04-01 09:00, 2025-04-01 17:00)'::tstzrange) AS end_time;
Result:
start_time | end_time
--------------------------+--------------------------
2025-04-01 09:00:00+00 | 2025-04-01 17:00:00+00
As well as the obvious manipulations:
-- 1. Merge adjacent ranges
SELECT
'[2025-04-01 09:00, 2025-04-01 12:00)'::tstzrange +
'[2025-04-01 12:00, 2025-04-01 17:00)'::tstzrange AS full_day;
Result:
full_day
-------------------------------------------------------------
["2025-04-01 09:00:00+00","2025-04-01 17:00:00+00")
-- 2. Find intersection between overlapping ranges
SELECT
'[2025-04-01 09:00, 2025-04-01 14:00)'::tstzrange *
'[2025-04-01 12:00, 2025-04-01 17:00)'::tstzrange AS overlap_period;
Result:
overlap_period
-------------------------------------------------------------
["2025-04-01 12:00:00+00","2025-04-01 14:00:00+00")
-- 3. Find gap between non-overlapping ranges (the - operator would simply return
-- the first range here, so we construct the gap from the bounds instead)
SELECT
    tstzrange(
        upper('[2025-04-01 09:00, 2025-04-01 11:00)'::tstzrange),
        lower('[2025-04-01 14:00, 2025-04-01 17:00)'::tstzrange)
    ) AS gap;
Result:
gap
-------------------------------------------------------------
["2025-04-01 11:00:00+00","2025-04-01 14:00:00+00")
And if you followed the previously mentioned logic, you wouldn't second-guess how tstzrange handles DST transitions.
-- Testing DST transition handling
SET timezone = 'Europe/Berlin';
SELECT
'[2025-03-30 01:00:00, 2025-03-30 03:00:00)'::tstzrange AS dst_transition_range,
upper('[2025-03-30 01:00:00, 2025-03-30 03:00:00)'::tstzrange) -
lower('[2025-03-30 01:00:00, 2025-03-30 03:00:00)'::tstzrange) AS actual_duration;
Result:
dst_transition_range | actual_duration
-----------------------------------------------------+-----------------
["2025-03-30 01:00:00+01","2025-03-30 03:00:00+02") | @ 1 hour
The area where time ranges win in almost all scenarios is indexing. If you consider a manual implementation of a range using start_at and end_at columns, you typically rely on B-tree indexes. While they might work for simple queries on just one of the values, they fall short for time ranges. Consider this example:
-- Traditional approach with two columns and B-tree indexes
SELECT * FROM events_columns
WHERE start_at <= '2025-04-01 17:00'
AND end_at >= '2025-04-01 09:00';
For this query, PostgreSQL might use one of the indexes if it's selective enough, but the fundamental problem remains: each B-tree index can only efficiently filter on its own column. The planner might use the start_at index to find events starting before 17:00, or the end_at index to find events ending after 09:00, but it can't use both indexes simultaneously to find the intersection. With range types, we can use GiST (Generalized Search Tree) indexes that are specifically designed for range operations:
-- Create specialized GiST index for time range operations
CREATE INDEX idx_events_range ON events_range USING GIST(time_period);
This allows you to transform the sample query into:
-- Range-based approach using the && (overlap) operator
SELECT * FROM events_range
WHERE time_period && tstzrange('2025-04-01 09:00', '2025-04-01 17:00');
The performance gap between these approaches is substantial – range queries with GiST indexes typically outperform the traditional approach by 2-10x on large datasets. This difference becomes even more pronounced as your data grows.
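And to close the loop on the "powerful constraints" mentioned earlier, here is a sketch (the table and column names are made up for illustration) of an exclusion constraint that rejects overlapping bookings for the same room. It builds on the same GiST machinery, with the btree_gist extension added so that equality on a scalar column can be combined with the && operator on the range:
-- Prevent double-booking a room (illustrative schema)
CREATE EXTENSION IF NOT EXISTS btree_gist;
CREATE TABLE room_bookings (
    room   text NOT NULL,
    during tstzrange NOT NULL,
    EXCLUDE USING gist (room WITH =, during WITH &&)
);
INSERT INTO room_bookings VALUES ('A', '[2025-04-01 09:00, 2025-04-01 10:00)');
-- The next insert fails: the range overlaps an existing booking for room 'A'
INSERT INTO room_bookings VALUES ('A', '[2025-04-01 09:30, 2025-04-01 10:30)');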
Time to wrap up
Working with time in PostgreSQL might seem straightforward on the surface, but as we've explored, it's filled with nuances that can make or break your application's reliability. The key takeaways from this deep dive should be:
- Always use timestamptz instead of timestamp
- Be intentional about time zone handling
- Leverage PostgreSQL's specialized time tools
- Mind the edge cases
Time handling remains one of the most deceptively complex aspects of software engineering, but PostgreSQL offers a robust toolkit for managing it effectively. By understanding these "boring" but essential concepts, you'll avoid common pitfalls and build applications that handle time with confidence and precision.