Hacker News | brentd's comments

Regardless of whether the convergence is superficial, I'm especially interested in what this could mean for future compression of weights. Quantization of models is currently very dumb (per my limited understanding). Could exploitable patterns make it smarter?
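For context on what "dumb" quantization looks like: the common post-training scheme maps every weight in a tensor onto a uniform int8 grid with a single shared scale, ignoring any structure in the weights. A minimal sketch (names and shapes are illustrative, not any particular library's API):

```python
import numpy as np

def quantize_int8(w):
    """Naive per-tensor quantization: one scale for the whole tensor."""
    scale = np.max(np.abs(w)) / 127.0  # map the largest weight to +/-127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 grid."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4)).astype(np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
err = np.max(np.abs(w - w_hat))  # bounded by half the grid spacing
```

The round-trip error is bounded by half the grid step (`scale / 2`) no matter how the weights are distributed, which is exactly the sense in which this scheme is blind to exploitable patterns.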

That's more of a "quantization-aware training" thing, really.
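For readers unfamiliar with the term: quantization-aware training inserts a "fake quantization" step into the forward pass so the model learns weights that survive rounding, while gradients flow through the rounding as if it were the identity (the straight-through estimator). A toy sketch of that idea, purely illustrative and not any framework's API:

```python
import numpy as np

def fake_quantize(w, n_bits=8):
    """Round weights onto a uniform low-precision grid, return floats."""
    qmax = 2 ** (n_bits - 1) - 1
    scale = np.max(np.abs(w)) / qmax
    return np.clip(np.round(w / scale), -qmax, qmax) * scale

# Toy QAT step on a one-sample linear regression y = W . x.
rng = np.random.default_rng(0)
W = rng.normal(size=4).astype(np.float32)       # trainable weights
x = rng.normal(size=4).astype(np.float32)       # fixed input
y_true = 1.0

for _ in range(100):
    W_q = fake_quantize(W)                      # forward sees quantized W
    grad = 2 * (W_q @ x - y_true) * x           # gradient w.r.t. W_q
    W -= 0.05 * grad                            # STE: apply it to W directly

pred = fake_quantize(W) @ x                     # quantized model's prediction
```

Because the loss is computed on the quantized weights, training drives the *quantized* prediction toward the target, up to the residual grid error.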

Funniest thing I've read on HN in a while.


Unfortunately I'm not at RailsConf :)



