
Some prompts' results are only explainable if ChatGPT has the ability to produce some kind of reasoning.

As for your analogy, I'm not sure we know enough about the core mechanisms of human intelligence to dismiss NNs as being fundamentally incapable of it.



The reasoning occurred when people wrote the text it was trained on in the first place; its training data is full of the symptoms of imagination, reason, intelligence, etc.

Of course, if you statistically sample from that in convincing ways, it will convince you it has the properties of the systems (i.e., people) which created its training data.
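As a toy illustration of what "statistically sample" means here, a minimal sketch (the vocabulary and the numbers are made up, not taken from any real model):

    import numpy as np

    # Hypothetical next-token distribution a model might assign after "The sky is";
    # the logits below are illustrative only.
    vocab = ["blue", "falling", "a", "clear"]
    logits = np.array([2.1, 0.3, -1.0, 1.4])

    probs = np.exp(logits) / np.exp(logits).sum()  # softmax over the vocabulary
    rng = np.random.default_rng(0)
    print(rng.choice(vocab, p=probs))  # a plausible continuation, sampled rather than reasoned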

But on careful inspection, it seems obvious it doesn't.

Bugs Bunny is funny because the writing staff were funny; Bugs himself doesn't exist.


> Bugs Bunny is funny because the writing staff were funny; Bugs himself doesn't exist.

Excellent analogy, and I appreciate analogies (perhaps even a bit too much). Will be using this one. Thank you!


If you “sample” this well enough to be reasoning in a general manner, what exactly is the problem here?

Magic “reasoning fairy dust” missing from the formula? I get the argument and I think I agree. See Dreyfus and things like “the world is the model”.

Thing is, the world could contain all intelligent patterns and we are just picking up on them. Composing them instead of creating them. This makes us automatons like AI, but who cares if the end result is the same?


The distribution to sample from mostly doesn't exist.

Data is produced by intelligent agents; it isn't just "out there to be sampled from". That would mean all future questions already have their answers in some training data: they do not.

See, for example, this exact tweet: performance on pre-2021 coding challenges is excellent, while on post-2021 ones it is poor. Why? Because the post-2021 challenges didn't exist to sample from when the system was built.


At a minimum, ChatGPT displays a remarkable ability to maintain consistent speech throughout a long and complex conversation with a user, taking into account all the internal implicit references.

To me, this is proof that it is able to correctly infer meaning, and is clearly a sign of intelligence (something a drunk human has trouble doing, for example).


"I have seen the output and it matches what I consider to be conversation"

Well yeah, it's been trained to produce output that would look like conversation.


That's not what I meant: you can have a full conversation and then at some point use "it" or "him", and based on the rest of the sentence, it will understand which previous element of the conversation you were mentioning.

This requires at least "some" conceptualisation of the things you're talking about. It's not just statistics.


It does not require conceptualization; pretty sure the "understanding" of previous references comes from this: https://arxiv.org/abs/1706.03762
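For concreteness, a minimal sketch of the scaled dot-product attention from that paper (the toy vectors are made up, and real models learn separate query/key/value projections), showing how the query for a later token like "it" can end up weighting one earlier position in the context most heavily:

    import numpy as np

    def attention(Q, K, V):
        # softmax(Q K^T / sqrt(d_k)) V, as in arXiv:1706.03762
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)        # similarity of the query to every context position
        w = np.exp(scores - scores.max(axis=-1, keepdims=True))
        w /= w.sum(axis=-1, keepdims=True)     # attention weights over the context
        return w @ V, w

    rng = np.random.default_rng(0)
    context = rng.normal(size=(5, 8))                      # 5 earlier tokens, dimension 8 (toy values)
    query = context[2:3] + 0.1 * rng.normal(size=(1, 8))   # a query resembling token 2's key
    _, weights = attention(query, context, context)
    print(weights.round(2))   # the largest weight typically falls on position 2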


This is exactly statistics.


> As for your analogy, I'm not sure we know enough about the core mechanisms of human intelligence to dismiss NNs as being fundamentally incapable of it.

If there's one field of expertise I trust programmers not to have a clue about, it's how human intelligence works.



