The problem is that LLMs are just convincing enough that people DO trust them, which is a real problem now that AI slop is creeping into everything.
What can be done to mitigate it, while not perfect, is pretty powerful: you can force-feed them the facts (RAG) and then verify the result. That's way better than trusting LLMs while doing neither of those things (which is what a lot of people do today anyway). See the recent 5 cases of lawyers getting in trouble after ChatGPT hallucinated citations of case law.
LLMs write better than most college students, so if you do those two things (RAG + check) you can get college-graduate-level writing with accurate facts... and that unlocks a bit of value out in the world.
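Concretely, the RAG + check loop can be sketched like this. Everything here is a stand-in: `llm` is whatever model API you use, the two-entry case-law "corpus" is invented for illustration, and real retrieval would use a vector store rather than keyword overlap. The point is the shape of the loop: retrieve, generate against the retrieved context only, then mechanically verify that every citation in the draft points at something that was actually retrieved.

```python
import re

# Hypothetical two-document corpus standing in for a case-law database.
DOCS = {
    "smith-v-jones": "Smith v. Jones (2019) held that notice must be written.",
    "doe-v-acme": "Doe v. Acme (2021) limited liability to direct damages.",
}

def retrieve(query, k=2):
    """Rank documents by naive keyword overlap with the query
    (a real system would use embeddings + a vector store)."""
    q = set(query.lower().split())
    scored = sorted(DOCS.items(),
                    key=lambda kv: len(q & set(kv[1].lower().split())),
                    reverse=True)
    return dict(scored[:k])

def verify(draft, sources):
    """The 'check' step: every [doc-id] citation in the draft
    must refer to a document that was actually retrieved."""
    cited = re.findall(r"\[([\w-]+)\]", draft)
    return all(c in sources for c in cited)

def answer_with_rag(query, llm):
    sources = retrieve(query)
    context = "\n".join(f"[{k}] {v}" for k, v in sources.items())
    draft = llm(f"Answer using only these sources:\n{context}\n\nQ: {query}")
    if not verify(draft, sources):
        raise ValueError("draft cites a source that was never retrieved")
    return draft

# A well-behaved (stubbed) model cites only what it was given:
good = lambda prompt: "Written notice is required [smith-v-jones]."
print(answer_with_rag("what notice is required", good))

# A hallucinating model invents a citation and gets caught:
bad = lambda prompt: "See [made-up-case]."
try:
    answer_with_rag("what notice is required", bad)
except ValueError as e:
    print("rejected:", e)
```

This catches the lawyer-style failure mode (a fabricated citation) but not a subtler one: a real citation paired with a wrong summary. Checking that takes a second pass comparing the draft's claims against the cited text.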
Don't take my word for it: look at the proposed valuations of AI companies. Clearly investors think there's something there. The good news is that it hasn't been solved yet, so if someone wants to solve it there might be money on the table.
> and that unlocks a bit of value out in the world.
> Don't take my word for it: look at the proposed valuations of AI companies. Clearly investors think there's something there.
Investors back whatever they think will make them money. They couldn't give less of a crap whether something is valuable to the world, or works well, or is in any way positive to others. All they care about is whether they can profit from it, and they'll chase every idea in that pursuit.
> Investors back whatever they think will make them money.
A not-flagrantly-illegal example of this might be casinos, where IMO it is basically impossible to argue the fleeting entertainment they offer offsets the financial ruin inflicted on certain vulnerable types of patron.
> All they care is if they can profit from it
Notably that isn't the same as the business itself being profitable: Some investors may be hoping they can dump their stake at a higher price onto a Greater Fool [0] and exit before the collapse.
> They couldn’t give less of a crap if something is valuable to the world
"The world" is an abstraction: concretely, every bit of value that is generated within that abstraction accrues to someone in particular -- investors in AI projects, for example.
Take the example of case law. Would you need to formalize the entirety of case law? Would the AI then need to produce a formal proof of its argument, so that you can ascertain that its citations are valid? How do you know that the formal proof corresponds to whatever longform writing you ask the AI to generate? Is this really something that LLMs are suited for? That the law is suited for?
Sure, using RAG is great, but it limits the LLM to functioning as a natural-language search engine. That's a pretty useful thing in its own right, and will revolutionize a lot of activities, but it still falls far short of the expectations people have for generative AI.
Of course. Because enterprise companies take a long time to evaluate new technologies. And so there is plenty of money to be made selling them tools over the next few years. As well as selling tools to those who are making tools.
But from my experience in rolling out these technologies only a handful of these companies will exist in 5-10 years. Because LLMs are "garbage in, garbage out" and we've never figured out how to keep the "garbage in" to a minimum.