Interestingly, "aggregate first, join later" has been the standard way of joining fact tables in BI tools for a long time. Since fact tables are typically big and also share common dimensions, multi-fact joins for drill-across are best done by first aggregating on those common dimensions, then joining on them.
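A rough sketch of that drill-across pattern, with hypothetical fact tables sales and inventory sharing a product_id dimension key: aggregate each fact table to the common grain first, then join the much smaller aggregates.

    -- Hypothetical drill-across: aggregate each fact table to the shared
    -- dimension grain first, then join the (much smaller) aggregates.
    SELECT s.product_id, s.total_sold, i.total_on_hand
    FROM (
        SELECT product_id, sum(quantity) AS total_sold
        FROM sales
        GROUP BY product_id
    ) s
    JOIN (
        SELECT product_id, sum(quantity) AS total_on_hand
        FROM inventory
        GROUP BY product_id
    ) i ON i.product_id = s.product_id;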
Makes you wonder how many cases there are out there of optimizations that feel almost second nature in one domain, but have never been applied to other domains because no one thought of it.
Something I really appreciate about PostgreSQL is that features don't land in a release until they are rock solid.
I don't think it's that nobody thought of it for PostgreSQL - I think it's that making sure it worked completely reliably across the entire scope of existing PostgreSQL features to their level of required quality took a bunch of effort.
It's not that nobody thought of it. Group pushdown has been a thing in papers for ~10 years at least, but it's hard to plan; your search space (which was already large) explodes, and it's always hard to know exactly how many rows come out of a given grouping. I have no idea how Postgres deals with these. Hopefully, they're doing something good (enough) :-)
Next up would hopefully be groupjoin, where you combine grouping and hash join into one operation if they are on the same or compatible keys (which is surprisingly often).
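A sketch of the kind of query where that fusion would apply (hypothetical tables): the hash-join key and the grouping key are the same column, so a groupjoin operator could build one hash table and aggregate into it while joining, instead of joining first and grouping afterwards.

    -- Grouping key and join key coincide (customer_id), so grouping
    -- and the hash join could in principle share one hash table.
    SELECT c.customer_id, c.name, count(o.order_id) AS order_count
    FROM customers c
    JOIN orders o ON o.customer_id = c.customer_id
    GROUP BY c.customer_id, c.name;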
I wonder if PG will ever implement plan caching like MSSQL so that the speed of the optimizer is less of a concern and it can take more time finding better plans rather than replanning on every execution of the same statement.
Postgres used to have plan caching inside the same session, and that was so disastrous that it was limited severely by default.
Plan caching is very much a double-edged sword; cache too aggressively, and the data or parameters will have changed between runs so the cached plan no longer fits. Cache too little, and your hit rate is too low to be useful.
Not sure how that makes sense; if the stats change significantly, then cached plans would be evicted when statistics are gathered.
I believe popular connection poolers and clients attempt to do plan caching through prepared statements and keeping the connection open.
My understanding is that it's not easy to do in PG, since connections are process-based instead of thread-based and query plans are not serializable between processes, so they cannot be shared between connections.
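For what session-level caching looks like today (table name hypothetical): a prepared statement is planned per execution at first, and the server may later switch to a cached generic plan; since PostgreSQL 12 the plan_cache_mode setting can force that choice. Either way, the cached plan lives only inside that one backend process.

    -- Per-session plan caching via a prepared statement.
    PREPARE orders_by_customer (int) AS
        SELECT * FROM orders WHERE customer_id = $1;

    EXECUTE orders_by_customer(42);  -- planned with the actual parameter at first
    EXECUTE orders_by_customer(42);  -- may switch to a cached generic plan after a few runs

    -- Force the behavior one way or the other (PostgreSQL 12+):
    SET plan_cache_mode = force_generic_plan;  -- or force_custom_plan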
MSSQL has been doing statement plan caching for at least 20 years and it did stored procedure plan caching before that.
It's not about not knowing about an optimization. The challenge is to know when to apply it, so that it does not cause regressions for cases that can't benefit from it. It may be less risky in specialized systems; BI systems, for example, typically don't need to worry about regressing OLTP workloads. Postgres absolutely needs to be careful of that.
I believe that's one of the reasons why it took roughly 8 years (the original patch was proposed in 2017).
The key idea here seems to be that if you're grouping on a column of a related table, you can do your main aggregation by grouping on the foreign key in the primary table, using it as a proxy for the column in the related table that you're actually grouping by.
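If I'm reading the post right (table and column names assumed from its example), the manual version of that rewrite would be: group on the foreign key first, then join the small lookup table just to translate ids into names.

    -- Assumed schema: t_product(color_id, ...), t_color(color_id, color_name).
    -- Group on the foreign key, then translate ids to names afterwards,
    -- instead of joining first and grouping on c.color_name.
    SELECT c.color_name, g.cnt
    FROM (
        SELECT color_id, count(*) AS cnt
        FROM t_product
        GROUP BY color_id
    ) g
    JOIN t_color c ON c.color_id = g.color_id;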
In the examples given, it’s much faster, but is that mostly due to the missing indexes? I’d have thought that an optimal approach in the colour example would be to look at the product.color_id index, get the counts directly from there and you’re pretty much done.
I have a feeling that Postgres doesn’t make that optimisation (I’ve looked before, but it was older Postgres). And I guess depending on the aggregation maybe it’s not useful in the general case. Maybe in this new world it _can_ make that optimisation?
Anyway, as ever, pg just getting faster is always good.
> In the examples given, it’s much faster, but is that mostly due to the missing indexes? I’d have thought that an optimal approach in the colour example would be to look at the product.color_id index, get the counts directly from there and you’re pretty much done.
So I tried to test this (my intuition being that indexes wouldn't change much, at best you could just do an index scan instead of a seq scan), and I couldn't understand the plans I was getting, until I realized that the query in the blog post has a small error:
> AND c1.category_id = c1.category_id
should really be
> AND p.category_id = c1.category_id
otherwise we're doing a cross product on the category. It probably doesn't change much, but it's still a bit of an oopsie. Anyway, even with the right join condition, adding an index only reduces execution time by about 20% in my tests, via an index scan.
That is part of the key idea, yes. It's more elaborate, because it can split the aggregate - it can do part of it before the join, and finalize it after the join. Similarly to what we do for parallel queries.
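As a rough illustration of that split (using the blog's product/color tables, names assumed; this is not actual planner output): the partial aggregate runs below the join, and a finalize step above the join re-combines groups, which matters when several ids can map to the same output group.

    -- "Partial" count per color_id below the join, "finalize" per
    -- color_name above it, in case multiple ids share a name.
    SELECT c.color_name, sum(g.cnt) AS cnt
    FROM (
        SELECT color_id, count(*) AS cnt   -- partial aggregate
        FROM t_product
        GROUP BY color_id
    ) g
    JOIN t_color c ON c.color_id = g.color_id
    GROUP BY c.color_name;                 -- finalize step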
As for indexes, it can help, but not in this particular example - the "code" tables are tiny, and the planner adds Memoize nodes anyway, so it acts like an ad hoc index.
Indexes are more of a complementary improvement, not an alternative to this optimization (i.e. neither makes the other unnecessary). FWIW in this case the indexes won't help very much - if you use more data in the code tables, it'll use a hash join, not nested loop / merge join.
That doesn't mean we couldn't do better with indexes, there probably are smart execution strategies for certain types of queries. But indexes also come with quite a bit of overhead (even in read-only workloads).
Do you mean something like this?
1. Index-only scans on t_product.{category,color} indices, summing each value
2. Lookup the names of those values in their parent tables, generate output rows
If so, I suspect there are two reasons why it might not do that:
1. Given the relatively small size of the t_product table (23 bytes overhead + 1 byte padding + int4 + int4 + 16 bytes text + [I think] 1 byte varlena = 49 bytes/row), it will be fairly well bin-packed into pages on the heap, consuming roughly 170 pages (assuming the default 8 KiB page size and default fillfactor of 100%). That trivially fits into a single segment file on disk and is a very easy sequential scan.
2. If it does a sequential scan on the heap, it doesn't have to check the Visibility Map, because it already has that information in the heap itself, which avoids a second (albeit small) lookup.
Happy for someone who knows more about Postgres to correct me if I’m wrong, though!
> In the examples given, it’s much faster, but is that mostly due to the missing indexes?
You're saying “the missing indexes” as if you could add indexes for every join you're ever doing and that this would be faster than a hash join. For many systems, that's neither feasible nor very performant; and depending on selectivity, a hash join would often be better than an index lookup anyway.
The biggest win from early aggregation is that you can reduce the number of rows significantly before you go join in other things (which would be a win even in nested-loop index lookup joins; smaller joins are nearly always better along every axis).
toyed with pg_lake against our absolute dump of iceberg files (the nerds in my field call it a data lakehouse but engineering already has too many abstractions). it's pretty insane having postgres & the power of duckdb for mega aggregation, i threw a lot of wild windowed queries and aggregations at it and it seemed to really intuitively switch to using the duckdb jujutsu very well.
looking at migrating the rest of our catalog to iceberg now just to have the pg_lake option in our back pocket for future application development. it's so damn cool, as far as dbs go i haven't personally been involved in anything that needed more power than what postgres could deliver with writes. to be able to tack on bigboi analytics on top of it really consolidates a lot for us. im generally pretty cynical of these big saas players acquiring cool stuff but snowflake nabbing crunchydata here (crunchydata = the guys who work on some pretty interesting postgres extensions) and helping them push this one to the proverbial finish line and then open sourcing it was really great to see. i was worried when the acquisition went down because this was the major postgres thing i was really hoping someone would deliver, and crunchydata imo seemed to have the best plan outlined that understood the need.
AFAIK these two joins are exactly the same once you get past the parsing. It's just a different way to write an inner join. It's translated into the same AST and so there's no difference in planning/execution.
Perhaps in this very basic case they are exactly the same, but is that still true if you add secondary WHERE conditions that apply to just one table, or if one "table" is actually a view with a complex query definition, or in any of the many other ways a very simple example can quickly get complicated?
In general, you split up the WHERE condition at every top-level AND. Then you do either pullup or pushdown or both (it depends a bit on how your planner looks on the inside). In the end, you end up with the two cases being exactly the same before you start planning.
For outer joins (left/right/full), it's different, and there you absolutely need the explicit join syntax (for correctness). And semijoins are not expressed the same way at all, partially for weird historical reasons.
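For the inner-join case, a small sketch (hypothetical tables) of the two spellings that end up identical once the WHERE clause has been split at the top-level ANDs:

    -- Implicit (comma) join: the join condition lives in WHERE.
    SELECT *
    FROM orders o, customers c
    WHERE o.customer_id = c.customer_id
      AND c.country = 'DE';

    -- Explicit inner join: same conditions, different spelling.
    SELECT *
    FROM orders o
    JOIN customers c ON o.customer_id = c.customer_id
    WHERE c.country = 'DE';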
Neat, I see how this can prevent a lot of frustration when it comes to making certain queries stay quick as complexity grows. I wonder if this means I can forget the trick of using LATERAL queries all over the place for performance reasons.
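The trick in question is presumably something like this (hypothetical tables): a LATERAL subquery that forces the aggregate to run per outer row, so the big join-then-group never materializes.

    -- Assumed LATERAL trick: aggregate per outer row instead of
    -- grouping after a large join.
    SELECT c.customer_id, c.name, o.order_count
    FROM customers c
    CROSS JOIN LATERAL (
        SELECT count(*) AS order_count
        FROM orders
        WHERE orders.customer_id = c.customer_id
    ) o;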
you didn't even have the decency to strip the inlined citations out of the drivel that your stochastic parrot produced.
stop it. seriously. i viscerally hate this comment and everything it represents.
what would hn look like if we'd all start slinging low effort ai output into the comment fields? i come here to read humans talking to humans. llm output has ZERO place in this comment section, especially verbatim like this.
if i wanted an llm to summarize this, i would have done so myself.