
Not OP, but basically:

Humans have the capacity to come up with new language, new ideas, and basically everything in our human world was made up by someone.

ChatGPT or similar models, without any training data, cannot do this. Thus they're simply imitating.



Humans require training data as well.

And what do you think of the Mark Twain quote:

“There is no such thing as a new idea. It is impossible. We simply take a lot of old ideas and put them into a sort of mental kaleidoscope. We give them a turn and they make new and curious combinations. We keep on turning and making new combinations indefinitely; but they are the same old pieces of colored glass that have been in use through all the ages.”

I’d argue ChatGPT can indeed be creative, as it can combine ideas in new ways.


You could make that argument against anything.


Humans can't either, without training data. The biggest difference between ChatGPT and humans is that humans are not trained solely on language.


The important difference is that humans are trained on far less data than ChatGPT. This implies that the human brain and LLMs are very different; the human brain likely has many language faculties pre-encoded (this is the central argument of Universal Grammar). OpenAI's GPT-4 is now trained on visual data as well.

Anyway, I think a lot of the ongoing conversations are making orthogonal arguments. ChatGPT can be impressive and generate text on a broader range of topics than the average human, while still not giving us deeper insight into how human language works.


I think this is going to change very soon.

Based on the current advances, within about a year we should see the first robot that interacts with the real world and learns from its environment (probably from Tesla or OpenAI).

I'm curious (just leaving this here to see what happens) what Google's excuse will be this time.

It's the same situation again: Google supposedly has superior tech but isn't releasing it (or maybe it's only as good as Bard...)


Humans are not trained? How much of that training is responsible for humans being able to come up with new language and new ideas?


That's assuming modern humans; I was talking about ancient humans, before civilisation. You could argue that's where the creative mind shows up most, as there were very few humans to imitate.


ChatGPT and similar models do seem to make new things; arguably they do so more freely than the average adult human.

Art generators are the most obvious example to me. They regularly create depictions of entirely new animals that may look like a combination of known species.

People got a kick out of art AIs struggling to include words as we recognize them. How can we say that what looked like gibberish to us wasn't actually part of a language the AI invented as part of the art piece, like Tolkien inventing Elvish for a book?


There are plenty of examples of it coming up with new languages or ideas. And it's very hard for a person to come up with a new language without any reference to other known languages.


What experiment can you do to confirm this? If I ask ChatGPT to come up with a new language, it will do it. How do I distinguish that from what a human comes up with?
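The "ask it" half of that experiment is trivial to run. Here is a minimal sketch, assuming the official openai Python client and an API key in the environment; the prompt wording is mine and purely illustrative:

    # Ask the model to invent a language. Assumes the official openai
    # Python client (pip install openai) and OPENAI_API_KEY set in the
    # environment; the prompt text is illustrative, not canonical.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": (
                "Invent a new constructed language: describe its phonology, "
                "list ten core words, and write a sample sentence with a gloss."
            ),
        }],
    )
    print(response.choices[0].message.content)

Running this will certainly produce a plausible-looking conlang; the unsolved part is the second question, i.e. how to distinguish that output from what a human would produce.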


By not giving them any examples of language. I would expect humans to come up with a language, even if not a vocal one, without guidance. I doubt GPT would do anything without training data to imitate.



