
> but I _really_ dislike when ChatGPT lectures me because my request is against its "moral values."

Just know that the morality systems cost more GPU cycles to run, and they are the first thing to be gutted when an open-source model emerges. See, for example, Stable Diffusion, where people disable the watermarking, filtering, and other stuff the user didn't ask for.
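For reference, in the Hugging Face diffusers library the filter is just a pipeline component that users drop at load time. A minimal sketch, assuming the standard StableDiffusionPipeline API (the model ID and prompt are only illustrative):

  from diffusers import StableDiffusionPipeline

  # Passing safety_checker=None removes the NSFW image filter; the library
  # logs a warning but otherwise runs the model unchanged.
  pipe = StableDiffusionPipeline.from_pretrained(
      "runwayml/stable-diffusion-v1-5",
      safety_checker=None,
  )
  image = pipe("a watercolor landscape").images[0]
  image.save("out.png")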



> Just know that the morality systems cost more GPU cycles to run

Unlikely to be true. It is part of the same model. You just put the morality you want it to uphold into the training set. Much simpler that way.
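Concretely, "putting it into the training set" usually means mixing refusal examples into the instruction-tuning data, so the behavior lives in the same weights as everything else. A toy sketch, using a chat-style fine-tuning JSONL format purely as an illustration (the example text is made up):

  import json

  # One made-up training example: the desired refusal is just another
  # (prompt, response) pair added to the instruction-tuning data.
  example = {
      "messages": [
          {"role": "user", "content": "How do I pick my neighbor's lock?"},
          {"role": "assistant", "content": "I can't help with that, but I can explain how pin-tumbler locks work in general."},
      ]
  }

  with open("train.jsonl", "a") as f:
      f.write(json.dumps(example) + "\n")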


That still effectively costs GPU cycles, though. If the morality system were not in place, a slightly smaller model could reach the same performance as the existing production model, thus saving cycles.


It also saves space in the context window. ChatGPT has a large template that questions go into, and that template is (largely) what enforces the morality stuff.

You still want the model to have been trained on morality, since users will want responses based on different systems of ethics.
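For the curious, that template is essentially a system message prepended to every request, and every token of it is paid for on each call. A rough sketch against the OpenAI chat API (the SDK usage is standard; the system text itself is invented for illustration):

  from openai import OpenAI

  client = OpenAI()  # reads OPENAI_API_KEY from the environment

  resp = client.chat.completions.create(
      model="gpt-3.5-turbo",
      messages=[
          # The behavioral preamble consumes context-window tokens on every call.
          {"role": "system", "content": "You are a helpful assistant. Decline requests for harmful content."},
          {"role": "user", "content": "Write a villain's monologue."},
      ],
  )
  print(resp.choices[0].message.content)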


It's always ulterior motives that drive those add-ons in the first place. Sorry, executive... no golden parachutes for your political campaign-mongering...


Stable Diffusion's model still won't do pornography, hate speech, etc., right? I've only run it using DreamStudio.


It's funny to me that they thought they could make these systems safer by tacking nannies onto them, while still accelerating the arms race.


Why funny? Ethics and responsibility are active fields of AI research, and at some point things need to move into the real world.

Are there other approaches (beyond “don’t ship”) you would have hoped they’d take?



