Hacker News

Yeah, we really shouldn't be using these models for anything of meaningful consequence because they're black boxes by their nature. But we already have neural nets in production everywhere.


I believe this talk [0] by James Mickens is very applicable. He touches on trusting neural nets with decisions that have real-world consequences. It is insightful and hilarious but also terrifying.

[0] https://youtu.be/ajGX7odA87k "Why do keynote speakers keep suggesting that improving security is possible?"


Every decision maker in the world is an undebuggable black box neural net - with the exception of some computer systems.


You can fire people, arrest them, fine them, coerce them, convince them, train them, etc. Moreover, we have millennia of experience in dealing with humans and their problems. Humans aren't perfect, but dealing face-to-face with a human who's empowered to actually do things is far more pleasant than dealing with a black-box AI model.


You can forgive (or not) a human when they fuck up. This is a real, meaningful, valuable part of the experience of dealing with injustice, negligence, etc. It's why witness statements are given due weight in courts.

We already know how frustrating, depressing and dehumanising it can be to experience corporate negligence, where responsibility is diffused to such an extent that it becomes meaningless.

AI will magnify this frustration a thousand-fold unless we acknowledge this problem and put the brakes on AI deployment until we work out how to fix it. And it may be that the problem is insoluble.


A computer system is not a decision maker. It does not have agency; it is a tool. Calling it one is the IT equivalent of the exonerative tense, e.g. "The suspect died of bullet-caused wounds."


But I can ask the decision maker to explain their decision-making process, or the arguments and beliefs that led to their conclusion. So, kinda debuggable?


Their answer to your question is just the output of another black-box neural net! Its output may or may not have much to do with the other one, but it can produce words that will trick you into thinking they are related! Scary stuff. I'll take the computer any day of the week.


No. In most cases (if the thumbing of the scale was small and not blatant) they can lie, generating a plausible argument that doesn't involve the factor that actually determined their decision. And small, specific details don't need to match how other cases were handled, since nobody expects perfect recall or perfect consistency from humans.

If anything, the neural network is more debuggable: you can verify that the decision process you're analyzing (however complex and hard to understand) was the one actually used for this decision, and the same one used for all the other decisions.
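That verifiability claim can be sketched concretely: if you hash the exact model parameters and inputs used for a decision, anyone can later confirm that the same process was applied. This is a minimal illustration using a hypothetical toy linear scorer standing in for a real neural net; the names (`score_applicant`, `WEIGHTS`, `THRESHOLD`) are illustrative, not from the thread.

```python
import hashlib
import json

# Hypothetical toy "model": a fixed linear scorer standing in for a
# neural net. All names and numbers here are illustrative.
WEIGHTS = {"income": 0.4, "debt": -0.7, "years_employed": 0.2}
THRESHOLD = 0.3

def score_applicant(features):
    # Deterministic: same weights + same inputs => same score, always.
    return sum(WEIGHTS[k] * features[k] for k in WEIGHTS)

def decision_fingerprint(features):
    # Hash the exact weights and inputs used, so a past decision can be
    # re-verified later: same fingerprint => same decision process.
    payload = json.dumps({"weights": WEIGHTS, "input": features},
                         sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

applicant = {"income": 1.2, "debt": 0.3, "years_employed": 0.5}
approved = score_applicant(applicant) > THRESHOLD
fp = decision_fingerprint(applicant)

# Re-running with the same weights and inputs reproduces both the
# decision and the fingerprint -- a guarantee no human reviewer offers.
assert (score_applicant(applicant) > THRESHOLD) == approved
assert decision_fingerprint(applicant) == fp
```

The point is not the toy model but the audit property: the recorded fingerprint binds the decision to the exact parameters and inputs, which is what makes the process checkable after the fact.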


Debuggable and explainable AI is necessary but not sufficient. The societal implications and questions are profound and may be even harder to solve (see other comments in this thread).
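For concreteness, one common explainability technique is feature occlusion: replace each input with a neutral baseline and measure how the output moves. The sketch below applies it to a hypothetical linear scorer (all names and values are illustrative, not from the thread); real nets need approximation tools like SHAP or integrated gradients.

```python
# Minimal sketch of post-hoc explanation via feature occlusion, on a
# hypothetical linear scorer. All names here are illustrative.
WEIGHTS = {"income": 0.4, "debt": -0.7, "years_employed": 0.2}

def score(features):
    return sum(WEIGHTS[k] * features[k] for k in WEIGHTS)

def occlusion_attribution(features, baseline=0.0):
    # For each feature, how much does the score change when that
    # feature is replaced with a neutral baseline value?
    full = score(features)
    attributions = {}
    for k in features:
        occluded = dict(features, **{k: baseline})
        attributions[k] = full - score(occluded)
    return attributions

applicant = {"income": 1.2, "debt": 0.3, "years_employed": 0.5}
attr = occlusion_attribution(applicant)
# For a linear model this recovers each term's exact contribution
# (e.g. attr["debt"] is -0.7 * 0.3, up to float rounding); for a deep
# net the attributions are only an approximation of what drove the
# output -- which is why explainability alone isn't sufficient.
```

Even a faithful attribution only says which inputs moved the score; it doesn't settle the societal questions the thread raises about responsibility and recourse.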



