Stack Overflow started failing when Jeff left. While he had a big hand in the whole mantra of being more of a wiki than just Q&A, I feel his sense of community wouldn't have let it get so bad.
Then all of the OG engineers left (I hope Nick Craver is doing well, his blog posts were incredible), an investment company took over, and whatever goodwill and vibe were left melted away.
I'll still occasionally find a good answer there. But it has no future as a place where new good questions get answered.
Astral folks who are around - there seems to be a bit of confusion on the product page that the blog post makes a little clearer.
> The next step in Python packaging
The headline is the confusing bit, I think - "oh no, another tool already?"
IMO you should lean into stating this is going to be a paid product (answering how you plan to make money and become sustainable), and highlight that this will help solve private packaging problems.
I'm excited by this announcement, by the way. Setting up scalable private Python registries is a huge pain. Looking forward to it!
I would also put the list of issues this fixes higher up. It makes the point more obvious. (Also, a setuptools update literally broke our company CI last week, so I was like "omg yes" at that point.)
I've been wondering where the commercial service would come in, and this sounds like just the right product: it aligns with what you're already doing and serves a real need. Setting up scalable private registries for Python is awful.
Cloud Tasks is excellent and I've been wanting something similar for years.
I've been occasionally hacking away at a proof of concept built on riverqueue but have eased off for a while, due to the obvious performance concerns with non-partitioned tables and just general laziness.
Developer of River here (https://riverqueue.com). I'm curious if you ran into actual performance limitations based on specific testing and use cases, or if it's more of a hypothetical concern. Modern Postgres running on modern hardware and with well-written software can handle many thousands or tens of thousands of jobs per second (even without partitioning), though that depends on your workload, your tuning / autovacuum settings, and your job retention time.
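For anyone curious what that looks like in practice, here's a minimal sketch of a River worker and client following the documented setup (the connection string and the job type are placeholders):

```go
package main

import (
	"context"
	"log"

	"github.com/jackc/pgx/v5/pgxpool"
	"github.com/riverqueue/river"
	"github.com/riverqueue/river/riverdriver/riverpgxv5"
)

// EmailArgs is a placeholder job payload; Kind() names the job type.
type EmailArgs struct {
	To string `json:"to"`
}

func (EmailArgs) Kind() string { return "email" }

// EmailWorker processes EmailArgs jobs pulled from Postgres.
type EmailWorker struct {
	river.WorkerDefaults[EmailArgs]
}

func (w *EmailWorker) Work(ctx context.Context, job *river.Job[EmailArgs]) error {
	log.Printf("sending email to %s", job.Args.To)
	return nil
}

func main() {
	ctx := context.Background()

	dbPool, err := pgxpool.New(ctx, "postgres://localhost:5432/app") // placeholder DSN
	if err != nil {
		log.Fatal(err)
	}

	workers := river.NewWorkers()
	river.AddWorker(workers, &EmailWorker{})

	riverClient, err := river.NewClient(riverpgxv5.New(dbPool), &river.Config{
		Queues:  map[string]river.QueueConfig{river.QueueDefault: {MaxWorkers: 100}},
		Workers: workers,
	})
	if err != nil {
		log.Fatal(err)
	}
	if err := riverClient.Start(ctx); err != nil {
		log.Fatal(err)
	}

	// Jobs are just rows in Postgres; insert one and a worker picks it up.
	if _, err := riverClient.Insert(ctx, EmailArgs{To: "user@example.com"}, nil); err != nil {
		log.Fatal(err)
	}
}
```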
Perceived only at this stage, though the kind of volume we’re looking at is 10s to 100s of millions of jobs per day. https://github.com/riverqueue/river/issues/746 talks about some of the same things you mention.
To be clear, I really like the model of riverqueue and will keep going at a leisurely pace since this is a personal-time interest at the moment. I'm sick of Celery and believe a service is a better model for background tasks than a language-specific tool.
If you guys were to build HTTP ingestion and HTTP targets, I'd try to deploy it right away.
Ah, so that issue is specifically related to a statistics/count query used by the UI and not by River itself. I think it's something we'll build a more efficient solution for in the future because counting large quantities of records in Postgres tends to be slow no matter what, but hopefully it won't get in the way of regular usage.
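A common workaround for the slow-count problem, separate from anything River itself does, is to read the planner's row estimate instead of running a real COUNT(*). A rough sketch using pgx, assuming the jobs table is named river_job:

```go
package main

import (
	"context"
	"log"

	"github.com/jackc/pgx/v5/pgxpool"
)

// approxJobCount reads the planner's row estimate for the jobs table instead
// of running a full COUNT(*), which has to scan the table. "river_job" is an
// assumed table name; accuracy depends on how recently autovacuum/ANALYZE
// refreshed the statistics.
func approxJobCount(ctx context.Context, pool *pgxpool.Pool) (int64, error) {
	var estimate int64
	err := pool.QueryRow(ctx,
		`SELECT reltuples::bigint FROM pg_class WHERE relname = 'river_job'`,
	).Scan(&estimate)
	return estimate, err
}

func main() {
	ctx := context.Background()
	pool, err := pgxpool.New(ctx, "postgres://localhost:5432/app") // placeholder DSN
	if err != nil {
		log.Fatal(err)
	}
	n, err := approxJobCount(ctx, pool)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("approximately %d jobs", n)
}
```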
> Perceived only at this stage, though the kind of volume we’re looking at is 10s to 100s of millions of jobs per day.
Yeah, 10 million jobs a day is a little over 100 jobs/sec sustained, and 100 million is a little over 1,000 :) Shouldn't be much of an issue on appropriate hardware and with a little tuning, in particular keeping your jobs table from growing past a few million rows and vacuuming frequently. Definitely hit us up if you try it and start having any trouble!
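For the "vacuum frequently" part, one common approach is per-table autovacuum settings so vacuum triggers on an absolute row count rather than a fraction of a large table. A sketch with illustrative numbers (the river_job table name and the thresholds are assumptions, not recommendations from the River docs):

```go
package main

import (
	"context"
	"log"

	"github.com/jackc/pgx/v5/pgxpool"
)

func main() {
	ctx := context.Background()
	pool, err := pgxpool.New(ctx, "postgres://localhost:5432/app") // placeholder DSN
	if err != nil {
		log.Fatal(err)
	}

	// Per-table autovacuum settings: a scale factor of 0 plus an absolute
	// threshold means the table is vacuumed/analyzed every ~50k changed rows,
	// regardless of how large the table gets. Numbers are starting points only.
	_, err = pool.Exec(ctx, `
		ALTER TABLE river_job SET (
			autovacuum_vacuum_scale_factor  = 0,
			autovacuum_vacuum_threshold     = 50000,
			autovacuum_analyze_scale_factor = 0,
			autovacuum_analyze_threshold    = 50000
		)`)
	if err != nil {
		log.Fatal(err)
	}
	log.Println("autovacuum tuned for river_job")
}
```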
My daughter's amazement that I can translate the French bits of dialogue from Jean-Luc was a joy, too. :-) But that made me appreciate the scriptwriting more -- because almost everything Jean-Luc says, Bluey _also_ says, independently, so it works if -- like my wife and kid -- you don't know a word of French.
> Then all of the OG engineers left (I hope Nick Craver is doing well, his blog posts were incredible), an investment company took over, and whatever goodwill and vibe were left melted away.
> I'll still occasionally find a good answer there. But it has no future as a place where new good questions get answered.
They took the fun away.