
The Silicon Valley hype around GPT-3 has gone too far. It's just a probabilistic language model that has sampled a lot of the internet. It cannot think; it's just echoing our own thoughts back at us.


It can think, just kinda badly. It's an animal in the ecosystem of language.


GPT-3 can produce text that's probabilistically similar to the text it was trained on and the text it has observed in its prompts. If there were no huge corpus of human language to train it on, GPT-3 couldn't even begin to give the illusion of thinking, and certainly couldn't tell you (for example) that 3 is greater than 2, or even know that 3 and 2 are concepts it should perhaps have opinions about.
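To make "probabilistically similar" concrete, here's a toy sketch (my own illustration, nothing like GPT-3's actual transformer architecture): a bigram model that can only ever emit words in contexts it has already seen. With an empty corpus it is mute.

    # Toy bigram "language model": sample the next word based on
    # which words followed it in the training corpus.
    import random
    from collections import defaultdict

    corpus = "3 is greater than 2 and 2 is less than 3".split()

    following = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev].append(nxt)   # duplicates encode frequency

    def babble(word, length=6):
        out = [word]
        for _ in range(length):
            options = following.get(word)
            if not options:           # unseen word: nothing to say
                break
            word = random.choice(options)
            out.append(word)
        return " ".join(out)

    print(babble("3"))  # e.g. "3 is less than 2 and 2"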

The really interesting question (at least to me) is to what extent that observation is also true for humans.

I mean, we humans presumably built up our big corpus of language and knowledge on our own over the lifetime of the species, which GPT-3 currently cannot do. But to what extent does human thinking just consist of probabilistic re-mixing of words, phrases, and sentences we've seen before, acquired through our continual training on whatever segments of that big 'dataset' of human knowledge we happen to be exposed to, the best of which then get contributed back into the dataset? How much more than that is actually going on, for us?

If what GPT-3 is doing shouldn't really count as 'thinking' (as seems intuitive to me, personally, though others may certainly disagree), then to what extent can we say that humans do anything qualitatively different?


Introspection. Some humans seem to do it, others do not, at least in my observation. Introspection seems to be the post-processing of inputs to build a coherent conceptual model of the universe. Drawing inferences between seemingly unrelated things, and building meta-objects on top of those inferences, seems to be something humans do quite well. Questions form because there are disparities or disconnects that must be explained in order to reach a greater holistic coherence. Could we build introspection into GPT-3? Could that be a goal: in addition to 'training', adding the capability to pull concepts together and classify them? That alone would cause GPT-3 to start asking questions, beginning with "Is 42 really a valid answer, or is it a joke?"


We can do introspection, as the other commenter said, but the more basic point is that we can follow an iterated strategy, which GPT-3 is incapable of: it must predict the next token in a single, strictly bounded operation. That's why it makes sense to think of it as a "babbler"; it is incapable of not saying the first thing that comes to mind.

However, when you give GPT-3 the opportunity to iterate on a strategy, by asking it to follow each step of the strategy sequentially, you can see its behavior become much more similar to basic human thought.
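Concretely, something like this (a minimal sketch assuming the openai Python client's Completion endpoint and a davinci-class engine; the problem and step wording are invented):

    # Iterating on a strategy: feed each completed step back in as
    # context for the next, instead of asking for the answer in one shot.
    import openai

    openai.api_key = "sk-..."  # your API key

    steps = [
        "Step 1: Restate the problem in your own words.",
        "Step 2: List the facts you need.",
        "Step 3: Combine the facts into an answer.",
    ]

    context = "Problem: Is 3 greater than 2?\n"
    for step in steps:
        prompt = context + step + "\n"
        resp = openai.Completion.create(
            engine="davinci",
            prompt=prompt,
            max_tokens=64,
            temperature=0,
        )
        context = prompt + resp.choices[0].text.strip() + "\n"

    print(context)  # the whole "train of thought", one step at a time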


And looking at ourselves as unique, wonderful creatures is so embedded in our culture that approaching these questions from the other side¹ may count as blasphemy. I hope to see general, rather than niche, training of these AIs in my lifetime. And maybe new species, free of our evolutionary ugliness.

¹ As in: "it is not true, so let's prove that first, and then brag about it"


It appears as though it can think, which for most purposes is good enough, but appearing to think is clearly not sufficient to count as thinking.


Depends how you define thinking. The only systems we know for sure have consciousness (a prerequisite for thinking) are biological neural systems, and to stipulate that the ability to manipulate text is sufficient for thinking is wrong. Cats and dogs cannot manipulate text, but I am quite certain they can think; GPT is the opposite.


But isn't it that GPT cannot think in the same way a frozen cat cannot? Artificial networks differ in a genuinely new way: they can be frozen (turned off and stored temporarily) without being dead. If you define their "lifetime" to be only training time and the moments they are reflecting on input, wouldn't they look exactly as if they were alive and constantly thinking? A cat differs from GPT in that a cat cannot stop time; it lives in a continuous, unstoppable realm.
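And the freezing really is that literal. Here's a sketch in PyTorch (the tiny network is arbitrary, just for illustration):

    # Freeze a network to disk, thaw it later: same "mind", intact.
    import torch
    import torch.nn as nn

    net = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))
    torch.save(net.state_dict(), "frozen_cat.pt")            # freeze

    thawed = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))
    thawed.load_state_dict(torch.load("frozen_cat.pt"))      # thaw

    x = torch.randn(1, 4)
    assert torch.equal(net(x), thawed(x))  # outputs are identical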


> consciousness (prerequisite for thinking)

I don't know if this is correct, and I doubt it.


This is exactly what I said. Unless you define what thinking is, the conversation is meaningless. I personally cannot even imagine thinking without consciousness, but your definition might be different.


"Thought" would indicate it knows what the words mean - I highly doubt it, even with the massive size of the network. It just knows which word is coming next, given the previous one.


We have strong reason to suspect that GPT-3 has at least some model of what words mean, from experiments people have done where they asked it to use words in novel contexts and ascribe properties to them, which GPT-3 is surprisingly good at.


That's body/mind dualism, which went out of fashion with Freud.

Your brain also just "creates the illusion" of understanding. It's slightly better at selling that illusion to the outside world, and a lot better at selling it to itself. But it's really just a matter of degree.


I think you underestimate the intelligence of animals.


How much time have you spent with GPT-3?



