Impressive recap! The work on RISC-V images, Gentoo for WSL, and EAPI 9 really shows how adaptable Gentoo is.
I’m curious about the trend of fewer commits and bug reports—do you think it’s just natural stabilization, or are contributors slowing down? Also, the move from GitHub to Codeberg is bold; how is the community reacting to that change so far?
Would love to hear more about how new contributors are finding the transition and onboarding with these updates.
If AI can rewrite and formalize proofs this way, do we risk losing the human intuition behind the arguments? Or is it just a tool to explore math faster?
Treating agents like full computers instead of ephemeral sandboxes makes a lot of sense—durable state and checkpoints solve real pain points that stateless containers force you to work around. Curious how this approach scales when you need dozens or hundreds of Sprites at once.
Nice approach! Merging metadata from multiple sources is tricky, especially handling conflicts like titles and covers. Curious how you plan to handle scalability as your database grows—caching helps, but will the naive field strategies hold with thousands of books?
Right now the merging happens on the fly and is then cached. In the future I imagine the finished merge will be saved as JSON to the database, depending on which turns out to be more expensive, the merging or a database call.
Merging on the fly kinda works in the future's favor too, for when the data changes or when the merging process itself changes.
No idea what the future will hold. The idea is to pre-warm the database after the schema has been refactored, and once we have thousands of books from that, I’ll know for sure what to do next.
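For what it's worth, the on-the-fly merge is roughly this shape (a simplified sketch, the field names, strategies, and cache are illustrative here, not the real code):

```python
# Hypothetical sketch of "merge on the fly, then cache":
# per-field strategies pick a winner across sources, and the
# finished merge is memoized per ISBN.
from dataclasses import dataclass
from functools import lru_cache

@dataclass(frozen=True)
class BookRecord:
    isbn: str
    title: str | None = None
    cover_url: str | None = None
    page_count: int | None = None

def first_non_empty(values):
    """Naive strategy: take the first source that has a value."""
    return next((v for v in values if v), None)

def longest(values):
    """Another naive strategy: prefer the most detailed value (e.g. for titles)."""
    candidates = [v for v in values if v]
    return max(candidates, key=len, default=None)

FIELD_STRATEGIES = {
    "title": longest,
    "cover_url": first_non_empty,
    "page_count": first_non_empty,
}

def fetch_from_sources(isbn: str) -> list[BookRecord]:
    """Placeholder for the real per-source lookups (APIs, scrapers, etc.)."""
    raise NotImplementedError

@lru_cache(maxsize=4096)  # cache the finished merge per ISBN
def merged_record(isbn: str) -> BookRecord:
    records = fetch_from_sources(isbn)
    merged = {
        field: strategy([getattr(r, field) for r in records])
        for field, strategy in FIELD_STRATEGIES.items()
    }
    return BookRecord(isbn=isbn, **merged)
```

Swapping the cache for a JSON column in the database later would mostly mean replacing that memoization layer, the strategies themselves wouldn't need to change.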
TLDR, there is a lot of “think and learn” as I go here, haha.