As someone who likes to make photographs, I am always interested in how the people around me use cameras. Recently I have been observing that, in a way, we are already living in an augmented, synthetic world. What I mean is that most people going to, say, my city's harborside park don't just look at the scenery with their own eyes, or simply spend time with the people they came with; they go with cameras, take photos of themselves, and look at the photos they took of each other together. They see a synthetic version of themselves, augmented by their phone's computational photography code or the beautifying filters of their apps, reflected back at them, and their experience of their life, their memory of visiting that place, becomes one of augmented reality.
Some people could argue that there is real damage done by expecting to see a beautified version of your own face, or by always feeling a need to produce images to send to social media networks; others could argue that this is just part of changing societal norms and that those augmented experiences are still authentic.
I feel like the line between fake/real and authentic/inauthentic is going to move and warp in ways I can't even envision. Maybe thinking just in terms of fraud is a bit easier.
It is interesting how phone cameras modify the actual image, e.g. Samsung's fake moon or modified faces. Feels bad; soon you can only trust reality (meeting people in person).
In this article, the FTC has warned businesses to consider the potential harm their synthetic media or generative AI products could cause before bringing them to market. These recommendations given by the FTC seem to be targeted towards moral people who don't need this advice in the first place. Bad actors who are intentionally using AI tools to cause harm are unlikely to adhere to these guidelines.
There are a lot of people out there who genuinely don't have the imagination to work out how what they're building could be misused. But they probably won't listen to these recommendations either.
> These recommendations given by the FTC seem to be targeted towards moral people who don’t need this advice in the first place.
They are not.
They are targeted toward amoral people, and their lawyers, who do need this advice, and more to the point, need to know that the FTC is actively engaged in this area. Anytime an agency with enforcement authority issues recommendations like this, it comes with a big implicit “…or we may cause you to regret your decision not to.”
The FTC is also warning businesses that they could face fines for selling access to tools (like LLMs) that can be used to deceive if they don't take sufficient action to protect against misuse.
E.g.:
> If you decide to make or offer a product like that, take all reasonable precautions before it hits the market. The FTC has sued businesses that disseminated potentially harmful technologies without taking reasonable measures to prevent consumer injury.
The FTC is not only stifling innovation by strong-arming the companies that produce these tools into limiting their capabilities; it's also potentially running afoul of the First Amendment by imposing restrictions on IP, and thus on speech.
You can go a step further and consider the 1A protections of the generative output itself.
Fraud mitigation is not quite the same as "oversight and control".
"Gun control doesn't work" as a blanket statement is ridiculous. It hasn't been meaningfully tried in the USA, but it works just fine almost everywhere else. When the US did have any limited gun control at all e.g. (Federal Assault Weapons Ban) it did have limited positive results.
Finally, guns are a distraction: your argument is of the form "rat control doesn't work, so why would tiger control?" Well, because, uh, rats and tigers differ quite a lot? But not nearly as much as guns and AI do.
There's going to be so much shilling and spamming in the near future. The Dead Internet is coming. Five years from now there won't be any point to online commenting sites like HN and Reddit, because you'll know most of the other posters are bots trying to push an agenda.
Any time I use a traditional search engine, I think the dead internet is already here, because surely no human is writing all these crappy SEO spam sites.
That said, we've had sufficient power to shill already, and we haven't seen it flood traditional sites yet. Even if the comment quality gets better, bot detection can rely on interaction patterns like posting volume, or on signs of a headless browser.
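To make that concrete, a minimal sketch of the posting-volume kind of heuristic might look like this (the thresholds and field names are hypothetical, purely for illustration, not any real site's detection logic):

```python
from dataclasses import dataclass

@dataclass
class Account:
    name: str
    post_timestamps: list[float]  # Unix seconds, sorted ascending

def looks_automated(acct: Account,
                    max_posts_per_hour: float = 12.0,
                    min_interval_s: float = 20.0) -> bool:
    """Flag accounts whose posting cadence is implausible for a human.

    Hypothetical thresholds: sustained volume above max_posts_per_hour,
    or any two posts spaced closer together than min_interval_s.
    """
    ts = acct.post_timestamps
    if len(ts) < 2:
        return False
    span_hours = (ts[-1] - ts[0]) / 3600.0
    rate = len(ts) / max(span_hours, 1e-9)
    min_gap = min(b - a for a, b in zip(ts, ts[1:]))
    return rate > max_posts_per_hour or min_gap < min_interval_s
```

A real system would presumably combine many such signals (timing entropy, browser fingerprinting, network features) rather than relying on a single threshold, which a careful operator could easily stay under.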
The FTC recently levied a half-billion-dollar fine against Epic Games over Fortnite.
The positions they're taking on AI are conservative and very basic (read: clear) because they're a foundation that the FTC will continue to build upon.
Hostile nation states will be dealt with by the CIA and DoD. Gangs will be dealt with by the CIA & FBI. Etc.
The FTC is communicating to the businesses under its purview that it can and will levy massive fines over this stuff.
Anyone who's worked at a large tech company can tell you that the FTC is taken very seriously.
I think giving the law serious teeth on deepfakes will solve some of the problem, as with blatant copyright violation that is straight-up fraud and theft; those laws exist and function. (If we are being nuanced and honest, it's not quite brazen theft.) If we charge deepfake tech designed and used for fraud as fraudulent activity in the same way, we would be able to stunt the adverse development and usage of these tools. So I, for one, am happy about this article and think they're going in the right direction: penalize fraud as fraud. It's still fraudulent activity.