Yes, and building correctness into an LLM-enabled system should be part of the architecture. I’ve been working on this for the past 18 months, and it’s clear we’re still far from understanding the industry-wide patterns that may emerge or apply.
"You say our project is a buggy ill-defined mess of slapdash features and spaghetti code interdependencies... but I prefer to think of as almost human!"
I get the joke, but if "annoying and complicated" were enough to make something human, then AI happened a long time ago. :P