It's more of a meta point for me. I get that this series isn't landing for some people, but the meta-observation is that, given something roughly as substantive as before, these friends' motivation for long-form content and discussion seems to have atrophied, perhaps largely due to the addition of the AI summary reality cipher to their lives.
Re: profoundly weird, the "losing hundreds of thousands of dollars because they can't do basic math" story is funny.
A guy set up an openclaw called Lobstar Wilde and gave it US$50k in SOL to do what it wanted with. Someone else launched a memecoin called $LOBSTAR and gave 5% of the supply to Lobstar Wilde. Someone then wrote to Lobstar with a sob story asking for 4 SOL, but Lobstar, due to a miscalculation, sent tokens then valued at $450k. I think some came back as the token's price went up and down.
Not sure what the current state of its wallet is. Lobstar keeps tweeting philosophically ("Most people do not love or hate the thing itself. They love or hate the feeling the thing produces in them, and then they mistake that feeling for knowledge of the thing...") and its owner works on Codex at OpenAI.
> Consider first the more accurate form of the question. I believe that
> in about fifty years' time it will be possible to programme computers, with
> a storage capacity of about 10^9, to make them play the imitation game so
> well that an average interrogator will not have more than 70 per cent chance
> of making the right identification after five minutes of questioning. (Turing 1950)
That was the test as discussed by Turing: five minutes, and no more than a 70% chance of the interrogator making the right identification.
It's not that demanding. The test you mention could maybe be called an enhanced Turing test, but the original one is pretty much passed.
He was a bit off on the time taken and memory used. I think more like 75 years and 50 GB rather than 50 years and 125 MB.
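The 125 MB figure comes from reading Turing's "storage capacity of about 10^9" as binary digits, which is the usual interpretation. A quick sanity check of that conversion (assuming bits and decimal megabytes):

```python
# Turing's estimate: "a storage capacity of about 10^9" -- read here as bits
bits = 10**9
bytes_total = bits / 8          # 8 bits per byte
megabytes = bytes_total / 1e6   # decimal megabytes
print(megabytes)                # 125.0
```

So his 50-year, 125 MB guess compares against roughly 75 years and tens of gigabytes for today's models.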
The author of the Medium article specifically hobbled the models to stop them from thinking it through and got a wrong answer, but that would happen with humans too and doesn't prove much.
I would argue that most humans would either give the correct answer or just say "I don't know". Some of them might confidently give the wrong answer, but humans will readily refuse to follow instructions in plenty of circumstances where they decide they aren't worthwhile. LLMs don't do this, and I'd argue that the ability to reject premises is fundamental to engaging with things in a truly logical way.
No way they're boarding boats. They can get an accurate enough cargo weight visually within seconds (maybe minutes, depending on how computerized it is).
Comparing AI to steel production in the Great Leap Forward seems unfair. It's not some communist plan; it's a capitalist free-for-all similar to the industrial revolutions in the UK/US. It won't lead to a famine, it'll lead to the chaotic creative destruction capitalism usually produces.
You're mistaking communism/capitalism as economic systems for communism/capitalism as organizational structures. The latter is what the argument centers around.
It's been argued frequently that families and tech companies are structured like socialist states: central planning, flatter structures, division of labor... I'm not going to start down that thread or open up that debate.
Not only is this not a capitalist structure, but capitalism itself doesn't really offer any ideas about structure or governance beyond encouraging the free movement of capital.