Hacker News
visarga on Nov 11, 2022 | on: Overfitting and the strong version of Goodhart’s l...
Training a neural net is a dynamic feedback loop too. Back-propagation is the feedback phase.
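A minimal sketch of that loop (my illustration, not from the thread): a one-weight linear model where each step's prediction error is fed back to adjust the weight, which is back-propagation reduced to its simplest form.

```python
def train(data, steps=1000, lr=0.01):
    """Fit y = w * x by feeding the prediction error back into w."""
    w = 0.0
    for _ in range(steps):
        for x, y in data:
            y_hat = w * x        # forward pass: prediction
            error = y_hat - y    # feedback signal
            w -= lr * error * x  # gradient step: dL/dw for L = 0.5 * error**2
    return w

# Fit y = 2x; note the dataset itself never changes during training.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w = train(data)  # converges toward 2.0
```

The loop is "feedback" in the sense that the output error drives the next update, but, as the reply below notes, the data being learned from stays fixed.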
epgui on Nov 11, 2022
Not in the same sense whatsoever. Training a neural net, backpropagation or not, doesn't affect the data. It's basically just a variation on, or remix of, linear regression.
visarga on Nov 11, 2022
Yes, for that you need RL. An environment beats a fixed, even large, training set.
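A toy sketch of the distinction (environment and names are my own invention, not from the thread): in an RL setup the agent's actions determine which states it observes next, so the stream of "training data" depends on the agent rather than being a fixed set.

```python
class LineWorld:
    """Agent walks on positions 0..4; reward 1.0 for reaching position 4."""
    def __init__(self):
        self.pos = 0

    def step(self, action):
        # action is -1 (left) or +1 (right); position is clamped to [0, 4]
        self.pos = max(0, min(4, self.pos + action))
        reward = 1.0 if self.pos == 4 else 0.0
        done = self.pos == 4
        return self.pos, reward, done

def run_episode(policy, env, max_steps=50):
    trajectory = []
    state, done = env.pos, False
    while not done and len(trajectory) < max_steps:
        action = policy(state)                   # the agent chooses...
        state, reward, done = env.step(action)   # ...and that choice decides
        trajectory.append(state)                 # which state it sees next

    return trajectory

# Two policies observe entirely different data from the same world:
print(run_episode(lambda s: +1, LineWorld()))  # visits [1, 2, 3, 4]
print(run_episode(lambda s: -1, LineWorld()))  # stuck at [0, 0, 0, ...]
```

This is the sense in which an environment differs from even a very large fixed dataset: the distribution of observations shifts with the learner's own behavior.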
epgui on Nov 12, 2022
I think we're probably using different words to make the same distinction, and in any case the underlying mechanism is very different.