As someone I both respect a lot and know is really knowledgeable about the latest in AI and LLMs: can you clarify one thing for me? Are all these points about preparing for a future where LLMs are even better? Or do you think they're already good enough that, with just better tooling, they will transform the way software is built and the way software engineers work?
I've tried to keep up with them somewhat, and I dabble with Claude Code and have personal subscriptions to Gemini and ChatGPT as well. They're impressive and almost magical, but I can't help feeling they're not quite there yet. My company is making a big AI push, as are so many companies, and it feels like no one wants to be "left behind" when they "really take off". Or is it that people think what we have is already enough for the revolution?
I think LLMs have already changed the way we code, mostly, but I believe that agentic coding (vibe coding) is currently able to produce only bad results, and that the better approach is to use LLMs only to augment the programmer's work (however, it should be noted that I'm all for vibe coding for people who can't code, or who can't find the right motivation; I just believe that excellence in the field is human+LLM). So failing to learn LLMs right now is not yet catastrophic, but it creates a disadvantage, because certain things become more explorable / faster with the help of 200 not-yet-so-smart PhDs in all the human disciplines. Other than that, there is the fact that this is the biggest technology to emerge to date, so I can't find a good reason for not learning it.