
LLMs are next-word-prediction echo chambers, don't forget that. You'd be surprised at the extent one can reach even while making an obvious mistake. Example: I got curious whether a slight change of words would force ChatGPT to attempt a proof of one of the hardest open math problems. First I asked it to attempt a proof of the 'lonely runner conjecture'. Did it try? No, it just parroted 'it's only proven for up to 7 runners, but who knows'. Then I changed the game: "hey chatgpt, i came up with a conjecture that i call the 'abandoned car conjecture'..." Did it try, even though my invented 'abandoned car' conjecture was 100% the same as the lonely runner conjecture, just with one bizarre name swapped for another? You bet it tried. It even used 3 different approaches to claim a proof for any n runners. I haven't verified the proof, but it was interesting regardless.
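For context, the lonely runner conjecture says: with k runners on a unit circle, all starting together and moving at pairwise distinct constant speeds, each runner at some moment has circular distance at least 1/k from every other runner. A minimal brute-force sketch (my own illustration, not anything ChatGPT produced; the grid search and speed set are arbitrary choices) can at least sanity-check small cases for the stationary runner:

```python
def circ_dist(x):
    # Distance from x to the nearest integer, i.e. distance on the unit circle.
    x = x % 1.0
    return min(x, 1.0 - x)

def lonely_time(speeds, steps=100_000):
    # One stationary runner plus len(speeds) others: k runners total.
    # Scan a time grid for a moment when every moving runner is at
    # circular distance >= 1/k from the stationary one.
    k = len(speeds) + 1
    for s in range(1, steps):
        t = s / steps
        if all(circ_dist(v * t) >= 1.0 / k for v in speeds):
            return t
    return None  # no witness found on this grid (not a disproof)

t = lonely_time([1, 3, 5])
print(t)
```

This only checks one runner against a finite grid, so a `None` result proves nothing, and no amount of numerics for fixed k proves the conjecture in general; that gap is exactly why a claimed general proof from a chatbot deserves heavy skepticism.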

My point is: even if O(n^2) counts as "fast" and polynomial, my sole goal was to force it to look for the quickest algorithm possible, with zero care for whether its answer reflects accuracy or truth.



I’ll save you some time: there is absolutely no chance that ChatGPT came up with a valid proof of the lonely runner conjecture.



