Hacker News

So instead of generating each next token from its own previous predictions (which is what it would have to do in real life), the code they used for the evaluation actually conditions on the ground-truth tokens?


Which would essentially turn the model into an ordinary LLM, with no need to use the brainwave inputs at all, right?
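For readers unfamiliar with the distinction being discussed, here is a minimal sketch of teacher-forced evaluation versus free-running (autoregressive) generation. The deterministic `toy_model` step function is entirely made up for illustration; it stands in for whatever decoder the paper uses, and none of this is the paper's actual code:

```python
def toy_model(prev_token: int) -> int:
    """Hypothetical stand-in for one decoding step: maps the previous
    token to a 'predicted' next token."""
    return (prev_token * 2 + 1) % 10

def free_running(first_token: int, n: int) -> list[int]:
    """Real-life inference: each step conditions on the model's OWN
    previous output, so an early error propagates forward."""
    out = [first_token]
    for _ in range(n - 1):
        out.append(toy_model(out[-1]))
    return out

def teacher_forced(ground_truth: list[int]) -> list[int]:
    """Teacher-forced evaluation: each step conditions on the
    ground-truth prefix instead, so mistakes never compound."""
    return [ground_truth[0]] + [toy_model(t) for t in ground_truth[:-1]]

# With a ground truth that diverges from the model's own trajectory,
# the two decoding modes produce different sequences:
print(free_running(1, 5))                 # [1, 3, 7, 5, 1]
print(teacher_forced([1, 3, 0, 9, 4]))    # [1, 3, 7, 1, 9]
```

The point of the objection above: under teacher forcing, every step is handed the correct history for free, so per-token accuracy can look far better than what the same model would achieve generating on its own.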




