Hacker News

Training a neural net is a dynamic feedback loop too. Back-propagation is the feedback phase.


Not in the same sense at all. Training a neural net, with backpropagation or otherwise, doesn't affect the data: the model is fit against a fixed dataset. It's essentially a variation on, or remix of, linear regression.
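A minimal sketch of that point (a hypothetical toy, plain SGD fitting one weight): the gradient "feedback" only ever adjusts the parameter, while the dataset is read unchanged every epoch.

```python
# Supervised training as an open-loop fit to fixed data:
# nothing the model predicts ever alters xs or ys.

xs = [1.0, 2.0, 3.0, 4.0]          # fixed inputs
ys = [2.0, 4.0, 6.0, 8.0]          # fixed targets (y = 2x)

w = 0.0                            # single weight
lr = 0.01                          # learning rate

for epoch in range(200):
    for x, y in zip(xs, ys):       # same data every epoch
        grad = 2 * (w * x - y) * x # d/dw of squared error
        w -= lr * grad             # feedback adjusts w, never the data

print(round(w, 3))                 # converges near 2.0
```

The update loop is structurally a feedback loop on the parameters, which is the sense the parent comment means; it just never feeds back into the data distribution.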


Yes, for that you need RL. An environment beats a fixed training set, however large.
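To make the contrast concrete, here is a hypothetical toy example of the closed loop RL adds: off-policy Q-learning on a 6-state chain, where each action determines the next state the agent observes, so the agent generates its own "data" rather than reading a fixed set.

```python
# Closed-loop learning: the action chosen at each step changes
# the state seen next, so experience depends on behavior.

import random
random.seed(0)

N = 5                                  # states 0..N, reward only at N
Q = {(s, a): 0.0 for s in range(N + 1) for a in (-1, 1)}
lr, gamma = 0.5, 0.9

for episode in range(500):
    s = 0
    while s != N:
        a = random.choice((-1, 1))     # random exploratory behavior policy
        s2 = min(max(s + a, 0), N)     # the action picks the next state
        r = 1.0 if s2 == N else 0.0    # reward only on reaching the goal
        best_next = max(Q[(s2, -1)], Q[(s2, 1)])
        Q[(s, a)] += lr * (r + gamma * best_next - Q[(s, a)])
        s = s2

# The learned greedy policy should prefer moving right at every state.
print(all(Q[(s, 1)] > Q[(s, -1)] for s in range(N)))
```

The supervised loop above could be run on this chain's transitions only if someone else had already collected them; here the collection itself is part of training, which is the distinction being drawn.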


I think we're probably using different words to make the same distinction, and in any case the underlying mechanism is very different.



