
Most of us are much worse on coding problems not in our training set!

(Looks down at dynamic programming problem involving stdin/stdout and combining two data structures).



The reason we're still being kept around is that you can solve a problem without it ever having appeared in your training set; and once you have, it has.


Hints of The Nine Billion Names of God for sure.

https://en.wikipedia.org/wiki/The_Nine_Billion_Names_of_God


Oh wow - Unsong [1] must have taken some inspiration from that. Into the queue it goes!

[1] https://unsongbook.com/


Allow me to offer you this Twitter thread:

https://twitter.com/chaosprime/status/1607895175799373830


What is that exactly? The site doesn't say.


A work of online serial fiction by blogger Scott Alexander, formerly of Slate Star Codex, now of Astral Codex Ten.

Goodreads has a decent intro blurb: https://www.goodreads.com/fr/book/show/28589297-unsong

If you're unsure whether his writing style is your thing, feel free to sample his shorter fiction from his blog:

https://slatestarcodex.com/2015/06/02/and-i-show-you-how-dee...

https://slatestarcodex.com/2015/04/21/universal-love-said-th...

https://astralcodexten.substack.com/p/idol-words


Thanks. The Goodreads blurb makes me think it's something like a Salman Rushdie or a Kevin Smith (with less toilet humor) take on things.


A short film adaptation was released a year ago.

https://www.youtube.com/watch?v=UtvS9UXTsPI


A better question is, given a significant corpus of complex functionality, can it implement complex code in a language that it knows, but in which it has only seen lower complexity code?

Can it transfer knowledge across languages with shared underlying domains?


I think given that it's been trained on everything ever written, we should suppose the answer is no.

It has been possible for much of the last century to build an NN-like system: at bottom it's a trivial optimization problem.

What was lacking was the 1 exabyte of human-produced training data necessary to bypass actual mechanisms of intelligence (procedural-knowledge generation via exploration of one's environment, etc.).
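To make "trivial optimization problem" concrete, here is a minimal sketch (my own illustration, not anything from the thread): a single linear neuron fit by plain gradient descent on mean squared error, with no libraries at all. The mechanism is simple; what it lacks without massive data is anything resembling intelligence.

```python
# Fit a single linear "neuron" y = w*x + b by gradient descent on MSE.
# Everything here is hand-rolled to show how little machinery is needed.

def train(data, lr=0.01, epochs=2000):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        dw = db = 0.0
        for x, y in data:
            err = (w * x + b) - y   # prediction error for this sample
            dw += 2 * err * x       # d(err^2)/dw
            db += 2 * err           # d(err^2)/db
        w -= lr * dw / len(data)    # average-gradient step
        b -= lr * db / len(data)
    return w, b

# Synthetic data from the target function y = 2x + 1.
data = [(x, 2 * x + 1) for x in range(-5, 6)]
w, b = train(data)
```

After training, `w` and `b` land close to the true values 2 and 1: the optimization is easy, and always was.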


The implication here is that GPT is just brute-force memorizing, and that it can't actually work from first principles to solve new problems that are mere extensions or variations of concepts it should know from training data it has already seen.

On the other hand, even if that's true, GPT is still extremely useful, because 90%+ of coding and other tasks are grunt work that it can handle. GPT is fantastic for data processing, interacting with APIs, etc.
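For a sense of the grunt work being described, here is a hedged example (mine, not the commenter's): the kind of boilerplate data-munging snippet that is tedious to type but trivial to verify, which is exactly where LLM codegen tends to shine.

```python
# Group CSV records by a key column and sum a numeric column --
# classic data-processing grunt work.
import csv
import io
from collections import defaultdict

def totals_by_key(csv_text, key_field, value_field):
    totals = defaultdict(float)
    for row in csv.DictReader(io.StringIO(csv_text)):
        totals[row[key_field]] += float(row[value_field])
    return dict(totals)

sample = "dept,cost\na,1.5\nb,2.0\na,0.5\n"
print(totals_by_key(sample, "dept", "cost"))  # {'a': 2.0, 'b': 2.0}
```

The point isn't that this code is hard; it's that nothing about it requires reasoning from first principles, which is why it's a safe delegation target.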


No, the implication is that most of us fake it until we make it. And The Peter Principle says we're all always faking something. My comment was just about humanity. ChatGPT isn't worth writing about.


We aren't state machines. We are capable of conscious reasoning; GPT, or any computer, is not.

We can understand our own limitations, know what to research and how, and how to follow a process to write new code to solve new problems we have never encountered.

Our training set trains our learning and problem-solving abilities, not a random forest.



