
It seems crazy to me not to filter the order through a "reasonableness check" and, if it fails that check, bring a human into the transaction.
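As a rough illustration, here is a minimal sketch of what such a gate might look like. The thresholds, the `LineItem` shape, and the `needs_human_review` helper are illustrative assumptions, not anyone's actual implementation:

```python
# Minimal sketch of a "reasonableness check" gate: any order that trips a
# rule is routed to a human instead of being auto-confirmed. The rules,
# thresholds, and data shapes here are assumptions for illustration.

from dataclasses import dataclass

@dataclass
class LineItem:
    name: str
    quantity: int

MAX_QTY_PER_ITEM = 50      # assumed sanity cap for a single line item
MAX_TOTAL_ITEMS = 200      # assumed sanity cap for the whole order

def needs_human_review(items: list[LineItem]) -> bool:
    """Return True if the order looks absurd and should go to a person."""
    total = sum(item.quantity for item in items)
    if total > MAX_TOTAL_ITEMS:
        return True
    return any(item.quantity > MAX_QTY_PER_ITEM for item in items)

# Example: an order of 18,000 water cups would trip both rules.
order = [LineItem("water cup", 18000)]
print(needs_human_review(order))  # True -> escalate to a human
```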

When I was at Caltech, institute policy was that if you solved an exam problem, and came up with not just a wrong answer but an absurd answer, you would get negative credit rather than a zero.

The way to get just a zero is to annotate with "I know the answer is absurd, but I cannot find the mistake".



That is what happened in the 18,000 water cups video. It was presented as a way to get around the AI and reach a human on the other end.


All you need is to check the new order against that location's order history and flag outliers.
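A minimal sketch of that idea, assuming an in-memory list of historical order sizes and a simple z-score cutoff (both assumptions for illustration, not a real system):

```python
# Sketch of the "compare against historical orders" idea: flag a new order
# whose total quantity sits far outside what this location has seen before.

from statistics import mean, stdev

def is_outlier(new_total: int, historical_totals: list[int], z: float = 3.0) -> bool:
    """Flag the order if its size is more than `z` standard deviations
    above the historical average for this store."""
    if len(historical_totals) < 2:
        return False  # not enough history to judge
    mu = mean(historical_totals)
    sigma = stdev(historical_totals)
    if sigma == 0:
        return new_total != mu
    return (new_total - mu) / sigma > z

# Example: typical orders of a few items vs. a sudden 18,000-cup order.
history = [3, 5, 2, 7, 4, 6, 3, 5]
print(is_outlier(18000, history))  # True -> hold for a human
print(is_outlier(6, history))      # False -> proceed normally
```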



