I think using LLMs as a replacement for Google, Stack Overflow, etc. is a no-brainer, as long as you can get to the source documents when you need them and train yourself to sniff out hallucinations.
(We already do this constantly when sorting human-generated information into bullshit and useful. So learning to do something similar with LLM output is not necessarily worse, just different.)
What's silly at this point is replacing a human entirely with an LLM. LLMs are still fundamentally unsuited for that, although they may get there in the future with some significant breakthroughs.