But isn't this exactly the goalpost moving the other comment claimed? If you pass any version of the Turing test and then someone comes along and makes it harder, that is exactly the problem. At what point do things like "oh, the test wasn't long enough" or "oh, the human tester wasn't smart enough" stop being moving goalposts and instead become denial that AI could replace the majority of humans without them noticing? Because that's where we're headed, and it's also where the real danger is.
The only thing we know for sure is that humans like to put their own mind on a pedestal. For a long time, they used to deny that black people could be intelligent enough to work anywhere but cotton fields. In the same way they used to deny that women could be smart enough to vote. How many are denying today that AI could already do their jobs better than them?
A "smart" elementary school pupil is nowhere close to a "smart" high schooler, who in turn is nowhere close to a "smart" PhD. Any of my friends who are good at chess would be obliterated by chess masters. You present it as if being good at chess is an undefined concept, whereas in fact many such definitions are contextual.
Yes, Turing tests do get more advanced as "AIs" advance. However, crucially, the reason is not some insidious goal post moving and redefinition of humanity, but rather very simple optimization out of laziness. Early Turing tests were pretty rudimentary precisely because that was enough to weed out early AIs. Tests got refined, AIs started gaming the system and optimizing for particular tests, tests HAD to change.
It took man-decades to implement special codepaths to accurately count the number of Rs in strawberry, only to be quickly beaten by... decimals.
Anyone can now retort "but token-based LLMs are inherently inept at these kinds of problems" and they would be right, which only highlights the absurdity of your claim. There is no reason to design a complex test when a simple one works, humorously, all too well.
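Both checks are trivial once you step outside the token representation; a throwaway Python sketch, just to underline the point (the string and the numbers are the usual examples, nothing tied to any particular model):

    # Counting letters is a one-liner when you operate on characters, not tokens.
    print("strawberry".count("r"))  # 3

    # The decimal comparison that famously tripped up several chatbots.
    print(9.9 > 9.11)               # True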
You are mixing up knowledge and reasoning skills. And I've definitely met high schoolers who were smarter than PhD student colleagues, so even there your point falls apart. When you mangle together all forms of intelligence without any clear definition, you'll never get any meaningful answers. For example, is your friend unintelligent because he's not a world-elite chess player? Sure, to those elite players he might appear dumb, but that doesn't mean he has no useful skills at all.

That's also what Turing realised back then. You can't test for something as ambiguous as "intelligence" per se, but you can test for practical, real-life applications of it. Turing was also convinced that all the arguments against computers being "intelligent" (many of which you see repeated over and over on HN) were fundamentally flawed. He thought that the idea that machines couldn't think like humans was more a flaw in our understanding of our own mind than a technological problem.

Without any meaningful definition of true intelligence, we might have to live with the fact that the answer to the question "Is this thing intelligent?" must come from the outcome of practical tests like Turing's, and not from dogmatic beliefs about how humans might have solved the test differently.
While these definitions are qualitative and contextual, and probably drawn slightly differently even within in-groups, the classification is essentially "I know it when I see it".
We are not dealing with an evaluation of intelligence, but rather with a classification problem. We have a classifier that has to adapt to a closing gap between the things it is meant to classify. Tests often get updated to match the evolving problem they are testing; nothing new here.
>the classification is essentially "I know it when I see it".
I already see it when it comes to the latest version of ChatGPT. It seems intelligent to me. Does this mean it is? It also seems conscious ("I am a large language model"). Does that mean it is?
The question is not whether you consider a thing intelligent, but rather whether you can tell meatbag intelligence and electrified sand intelligence apart.
You seem to have the Turing test backwards. The Turing test does not classify entities into intelligent and non-intelligent; rather, it takes the preexisting ontological classification of natural and artificial intelligence and tries to correctly label each.
A Turing test also has to be completable by a more-or-less average human being. Some dumb mistake like not counting the Rs properly is not that different from someone not knowing that magnets still work when wet.
A particular subgenre of trolling is smurfing: infiltrating places of a certain interest and pretending to be less competent than one actually is. Could a test be devised to distinguish between smurfing and genuinely lower competence?
The Turing test is a classifier. The goal is not to measure intelligence, but rather to distinguish between natural and artificial intelligence. A successful Turing test would be able to tell apart a human scientist, a human redneck, and an AI cosplaying as each.
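To make the classifier framing concrete, here's a rough sketch of the imitation game as a labelling task (the judge function and the transcripts are hypothetical stand-ins, not a real protocol):

    import random

    def judge(transcript):
        # Hypothetical stand-in for a human interrogator's verdict.
        # A real judge would interrogate interactively; here we just guess.
        return random.choice(["human", "machine"])

    # Each transcript carries a ground-truth label the judge never sees.
    sessions = [("...", "human"), ("...", "machine")] * 50

    correct = sum(judge(t) == truth for t, truth in sessions)
    accuracy = correct / len(sessions)

    # Accuracy near 0.5 means the judge can no longer separate the two labels
    # (the machine "passes"); well above 0.5 means the classifier still works.
    print(accuracy)

The test says nothing about how intelligent either party is; it only asks whether the judge can still tell the two labels apart.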
If AI could already do jobs better than a human, then people would just use AIs instead of hiring people. It looks like we are getting there, slowly, but right now there are very few jobs that could be done by AIs.
I can't think of a single person that I know that has a job that could be replaced by an AI today.
One of the problems I've seen is that often enough AIs do a much shittier job than humans but it's seen as good enough and so jobs are axed.
You can see this with translations: automated translation is used a lot more than it used to be. It often produces hilariously bad results, but it's so much cheaper than humans that human translators now have a much harder time finding full-time positions.
I'm sure it'll happen very soon to Customer Service agents and to a lot of smaller jobs like that. Is an AI chatbot a good customer agent? No, not really but it's cheaper...
I think that you've really hit the nail on the head with the "but it's cheaper" statement.
Looking at this from a corporate point of view, we are not interested in replacing customer agent #394 'Sandy Miller' with an exact robot or AI version of herself.
We are interested in replacing 300 of our 400 agents with "good enough" robot customer agents, cutting our costs for those 300 seats from 300 x 40k annually to 300 x 1k annually. (Pulling these numbers out of my hat to illustrate the point.)
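Running those made-up numbers, just to spell the incentive out (figures are the illustrative ones above, nothing more):

    human_cost_per_seat = 40_000  # made-up annual cost per human agent
    bot_cost_per_seat = 1_000     # made-up annual cost per AI "seat"
    replaced_seats = 300

    savings = replaced_seats * (human_cost_per_seat - bot_cost_per_seat)
    print(f"{savings:,}")  # 11,700,000 per year on these illustrative figures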
The 100 human agents who remain can handle anything the 300 robot or AI agents can't. Since the frontline is completely covered by the 300, only customers with somewhat more complicated situations (or emotional ones) will be sent their way. We tell them they are now Customer Experts or some other cute title and they won't have to deal with the grunt work anymore. Corporate is happy, those 100 are happy, and the 300 Sandy Millers... well, that's for HR and our PR dept to deal with.
The hope is that the 300 Sandy Millers can find jobs at other places that simply couldn't afford a customer support staff of ANY size in the past (because they would have needed 300 of them but couldn't pay, so they opted for zero support) but can now afford two or three agents if those are supported by AI.
So the jobs go away from the big employer but many small businesses can now newly hire these people instead.
Conversely, SOTA models have actually become good enough at translation that they consistently beat the shittier human takes on it (which are unfortunately pretty common because companies seek to "optimize" when hiring humans, as well).
If you haven't noticed, this is already happening. I've also met a ton of people in jobs that could be trivially replaced, if only because the jobs don't involve much and are already quite superfluous. We also regularly see this in the recent mass layoffs across the tech industry. AI only increases the number of these kinds of jobs that can be cut with no damage to the company.
> I've also met a ton of people in jobs that could be trivially replaced
This is usually a sign that you don’t understand their job or the corporate factors driving what you might perceive as low performance.
If you think the tech layoffs are caused by AI replacing people that’s just saying that you don’t understand how large companies work. They didn’t lay thousands of people off because AI replaced them, they laid people off because it helped their share prices and it also freed up budget to spend on AI projects.