
Lots of people here have access to GitHub Copilot or GPT-3.

Anyone with access to these models could demonstrate how they perform on code THAT WASN'T IN THE TRAINING SET by trying a few of these puzzles.

The reality is that all (?) the amazing demonstrations involve code very similar or identical to what appeared in the training set.



"It only works well on inputs that are similar to what appeared in its training set" seems like a strange criticism to make about an ML project, no?


There are people who believe this is real AI, not just aggregation and interpolation. They genuinely believe the software understands code in general.


I don't think many people here think this is true AI.


Who cares, really?

There are people who believe in god, too.



