In one of his fastai videos Jeremy Howard makes the point that wrong labels can act as regularization and you shouldn't worry too much about them. I'm a bit skeptical as to how far you can push this, but you certainly don't need perfect labelling.
That is true up to a certain point (for instance, in my experience, having bounding boxes that are not pixel-perfect acts as a regularizer), but there is also a good chance that you are mislabelling edge cases, situations that happen rarely, and that definitely hurts the neural network's ability to make correct predictions on those difficult / uncommon scenarios.
We did some interesting experiments with Go where we inverted the label of who won and measured what impact that had on the final model. This is a binary label, so flipping it is probably more impactful than noise in other settings (it's the only signal we are measuring).
From memory it had only a small impact (2% strength) with ~7% of results flipped; at 4% flipped it was hard to measure the impact (<1%).
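A minimal sketch of the kind of label corruption described above. The `flip_labels` helper is illustrative, not the actual experiment code; the 7% figure and the binary win/loss encoding are taken from the comment:

```python
import random

def flip_labels(labels, fraction, seed=0):
    """Return a copy of binary win/loss labels with `fraction` of them inverted."""
    rng = random.Random(seed)
    flipped = list(labels)
    n_flip = round(len(flipped) * fraction)
    # Pick n_flip distinct positions and invert the 0/1 label at each one.
    for i in rng.sample(range(len(flipped)), n_flip):
        flipped[i] = 1 - flipped[i]
    return flipped

labels = [1, 0] * 500                  # 1000 game outcomes, hypothetical data
noisy = flip_labels(labels, 0.07)      # ~7% of results flipped, as in the experiment
disagreements = sum(a != b for a, b in zip(labels, noisy))
print(disagreements)                   # 70 labels inverted
```

Training one model on `labels` and another on `noisy`, then comparing playing strength, is the shape of the comparison the comment describes.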
A lot of things. One is the "AI", which isn't so much "I", is quite error-prone, and is hard or impossible to analyze in detail and/or debug. The idea that bad people (be it trolls, criminals or spooks) could force deliberate malfunctioning of, or misclassifications in, AIs and thus cause crashes is off-putting, on top of the general "normal" errors you can expect.
Then the business/political aspects of it, like Tesla demanding somebody who bought a used car pay again for Autopilot.
We already saw crashes by Autopilot users not paying any attention whatsoever (granted AP isn't fully "self-driving", but still).
On top of that, just like with better car safety and even with the introduction of safety belt laws, we saw a stark uptick in accidents that usually affected people outside the car the most, such as pedestrians and bikers. Since I'm a pedestrian quite often, I dread in particular the semi-self-driving / assisted-driving car tech like Autopilot, and I remain healthily skeptical when people tell me that (almost) perfect fully self-driving cars are just around the corner. If my skepticism turns out to be unwarranted, great.
And this tech will keep many consumer cars around longer, to the disfavor of public transportation. The one good-ish thing that came out of SARS-CoV-2 is the reduction in air pollution (I am not saying it is a net positive because of that, far from it). The air smells noticeably nicer around here and the noise is down too.
> The idea that bad people (be it trolls, criminals or spooks) could force deliberate malfunctioning of/misclassifications in AIs and thus cause crashes
I wish people would stop trotting this one out. Bad actors can deliberately cause human drivers to crash just as easily, if not more so. If they don't, it's only because such behavior is punishable.
Ah, yes, the ethical murderer who only wants to mess up that one car but who sincerely worries about the other drivers on the road. That's the demographic you're concerned about? And how does indiscriminately trying to trick generally available systems specifically target only one person without risking other drivers?
If you're interested in replying in a condescending manner and attacking strawmen arguments I never made, be my guest, but I have no desire to further discuss this with you.