You’re right and wrong at the same time. A quantum superposition of validity.
The word thinking is doing too much work in your argument, but arguably “assume it’s thinking” is not doing enough work.
The models do compute and can reduce entropy; however, they don’t do it the way we presume things do, because we tend to assume every intelligence is human, or more precisely, that it works like our own mind.
To see the algorithm for what it is: you can make it work through a logical set of steps from input to output, but it takes multiple passes. The models use a heuristic, pattern-matching approach to reasoning rather than a computational one like symbolic logic.
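Here’s a minimal sketch of what I mean by “multiple passes”, in rough Python. `call_model` is a hypothetical placeholder for whatever LLM API you use; the point is only the loop structure, forcing the model to externalize one step at a time and feed its own output back in, rather than asking for an answer in one shot.

```python
def call_model(prompt: str) -> str:
    """Placeholder for whatever LLM API you actually call (remote or local)."""
    raise NotImplementedError

def multi_pass(task: str, max_passes: int = 4) -> str:
    # First pass: ask for the logical steps, not the answer.
    state = f"Task: {task}\nList the logical steps needed, but do not solve yet."
    for i in range(max_passes):
        step = call_model(state)
        # Each pass sees the accumulated reasoning so far and is only asked
        # to advance one step, not to leap straight to the final answer.
        state += (
            f"\n\nPass {i + 1} output:\n{step}"
            "\n\nContinue from here, one step at a time."
        )
    # Final pass: collapse the accumulated steps into an answer.
    return call_model(state + "\n\nNow state the final answer only.")
```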
While the algorithms themselves are computed, the virtual space in which the input is transformed into the output is not computational.
The models remain remarkable, but they are incomplete.
Further, there is a huge garbage-in, garbage-out problem: the input to the model often lacks enough information to decide on the next transformation to the codebase. That’s part of the illusion of conversationality that tricks us into thinking the algorithm is like a human.
AI has always provoked human reactions like this. ELIZA was surprisingly effective, right?
It may be that average humans are not capable of interacting with an AI reliably because the illusion is overwhelming for instinctive reasons.
As engineers, we should try to accurately assess and measure what is actually happening, so we can predict and reason about how the models fit into systems.