People’s basic needs are physiological: food and water, followed by safety, belonging, esteem, and finally self-actualization. On top of all of these sits a preference for convenience over effort.
With LLMs comes a promise of safety: they will solve our problems, potentially curing disease and ending world hunger. They can sound like people who care, making us feel respected and useful, like we belong, and boosting our esteem. They do things we can’t do and help us actualize our dreams, in code and elsewhere. And we no longer have to work hard to use them; they’re readily available.
We had been using models, and accurate ones at that, for drug discovery, nature simulation, weather forecasting, ecosystem monitoring, and more, well before LLMs arrived with their nondescript chat boxes. The AI was there; the hardware was not. Now that we have the hardware, the AI world is much richer than generative image models and stochastic parrots, but we live in a world that assumes the noisiest thing is the best and the most worthy of our attention. It’s not.
The catch is that LLMs look like they can converse while returning worse results than the models we already have, or could develop, without conversational capabilities.
LLMs are just shiny parrots that taunt us from a distance and look charitable while doing it. What they provide is not correct information, but biased Markov chains.
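To make that analogy concrete, here is a minimal sketch of a first-order Markov chain text generator (the toy corpus and word-level tokenization are illustrative assumptions, not anyone’s real pipeline). An LLM conditions on a far longer context with learned weights, but the generate-the-statistically-likely-next-token loop has the same shape, and the output inherits whatever biases the source text carries:

```python
import random
from collections import defaultdict

def build_chain(corpus: str) -> dict:
    """Map each word to the list of words observed to follow it."""
    words = corpus.split()
    chain = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def generate(chain: dict, start: str, length: int = 12) -> str:
    """Sample a sequence word by word from the observed followers."""
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:  # dead end: no observed continuation
            break
        # random.choice over raw occurrences: frequent continuations win,
        # so the chain is "biased" toward whatever the corpus repeats.
        out.append(random.choice(followers))
    return " ".join(out)

# Hypothetical toy corpus, purely for illustration.
corpus = "the model sounds right the model sounds helpful the model is wrong"
print(generate(build_chain(corpus), "the"))
```

The sampled output reads fluently for a few words at a time, then drifts; scale changes the fluency, not the mechanism.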
All this proves is that humans are as gullible as other animals. We are drawn to shiny things, even when they harm us.
So, people are going to focus on that promise.