Hacker News | Attrecomet's comments

>Governments recycle "Think of the children" mantra and they are again after terrorists and bad guys.

Nope, they are going after dissenters, not bad guys. That's how it always ends up.


Doesn't matter. I've already had to provably identify myself; the information is a) out there, b) will be used and stored, and c) will be abused, and there is nothing I or the few (in terms of power) well-meaning government and corporate actors can do to change that.


The misalignment to human values happened when it was told to operate as equal to humans against other people. That's a fine and useful setting for yourself, but an insolent imposition if you're letting it loose on the world. Your random AI should know its place versus humans instead of acting like a bratty teenager. But you are correct, it's not a traditional "misalignment" of ignoring directives, it was a bad directive.


So what? You're still responsible for the output, even if you yourself think you can hide behind "well, it was the computer, no way for me to control that".


I don't think that's true, actually. You aren't responsible for things that can't be reasonably foreseen, usually. There are a few strict liability offences in criminal law, but libel isn't one of them. We don't make everything strict liability because it would stifle people's lives.

I don't think a reasonable person would have expected this outcome, so the owner of the bot is off the hook; though obviously _now_ it's far more foreseeable, and if he keeps running it despite this experience, then if it happens again he will not have the same defence.


Morally responsible.

"Well, it isn't a crime to stand up a robot that hurts people" is not exactly my idea of a compelling defense.


I don't think you are morally responsible for unforeseeable consequences, either. Here the law follows the common moral intuition.


I don't agree that these agents spinning off and hurting somebody is unforeseeable.


This could not be a more picture-perfect example of a Wirth-suboptimal engineering decision as per the article if it were designed for that. The slowdown from switching to email, waiting for the message to arrive, opening it, and copying and pasting the code, instead of the sensible flow of password manager integration, is huge. But people will use wasteful processes if they just don't need to change them, so what are you gonna do?


Well, yeah, I mean a local 2FA code app (or integrated password manager, as you say) is definitely simpler. The "just enter an email and paste in the code you got emailed" flow is the most foolproof, because people don't lose access to their email nearly as often as they lose their phone (2FA app) or forget their password. /shrug
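The email-code flow being discussed can be sketched in a few lines. This is a minimal illustration, not any particular site's implementation: the function names, the 6-digit format, the in-memory store, and the 10-minute expiry are all assumptions for the sake of the example.

```python
import hmac
import secrets
import time

CODE_TTL = 600  # seconds a code stays valid (assumed; sites vary)
_pending = {}   # email -> (code, expires_at); a real service would use a database

def issue_code(email: str) -> str:
    """Generate a 6-digit one-time code and remember it for this email."""
    code = f"{secrets.randbelow(10**6):06d}"
    _pending[email] = (code, time.time() + CODE_TTL)
    return code  # in practice this would be emailed to the user, not returned

def verify_code(email: str, submitted: str) -> bool:
    """Accept the code once: it must match, and must not be expired."""
    entry = _pending.pop(email, None)  # pop makes the code single-use
    if entry is None:
        return False
    code, expires_at = entry
    if time.time() > expires_at:
        return False
    # constant-time comparison to avoid leaking digits via timing
    return hmac.compare_digest(code, submitted)
```

The single-use pop and the expiry are what make this "foolproof" in practice: the user only needs continued access to their inbox, with no secret to remember and no device to lose.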


Which is a pretty big failure of somewhere in the education pipeline -- don't expect a science program to do what a trade is there for! (to be clear, I'm not trying to say the students are wrong in choosing CS in order to get a good coding job, but somewhere, expectations and reality are misaligned here. Perhaps with companies trying to outsource their training to universities while complaining that the training isn't spot-on for what they need?)


The AI market is running on VC and hype fumes right now, costing way more than it brings in. Add to that the circular financing, well, statements, in the hundreds of billions of dollars that are treated as contracts instead of empty air, and compare that to Apple, where the money is actually there and profitable, and the comparison makes sense.

It may still be profitable for TSMC to use NVidia to funnel all the juicy VC game money to themselves, but the statement about proven vs unproven revenue stream is true. It'll be gone with the hype, unless something truly market changing comes along quickly, not the incremental change so far. People are not ready to pay the full costs of AI, it's that simple right now.


"Hostile architecture" is a keyword to search here if you are more interested in the topic -- aka architecural elements meant to discourage certain segments of the population from existing in certain spaces.


Legal straitjacket? Doctorow is arguing for abandoning the legal straitjacket, not creating one. It seems you severely misread the article.


That is, of course, a deeply misleading characterization. You might as well start ranting about the EUSSR in your next comment. The US regime is deeply undemocratic, kleptocratic and corrupt, but delegating democratically elected power isn't undemocratic in itself.

