I'll simply observe that it is easy to tell a fake face when presented with an either/or choice and when specifically asked to look. Most of the time we aren't looking that closely, so while I see some commenters being very happy about their accomplishments, I don't personally see a reason to rejoice.
Regardless, the AP news article[1] linked under the "methods" page provides some useful reading on how to detect these faces, for anyone interested.
My personal observation is that these generators fail miserably on low-detail regions and hair. In many of these pictures you don't have to look at the face at all: look at the background instead, and the one with heavy artifacting will be the fake. In "enterprise"-style pictures you can look at the hair and find heavy artifacting there.
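As a toy illustration of that heuristic (my own sketch, nothing from the linked article): "heavy artifacting" can be loosely measured as unusually high local high-frequency energy, e.g. the per-block variance of a Laplacian filter. The filename below is hypothetical.

    # Toy sketch (mine, not from the linked article): loosely quantify
    # "heavy artifacting" as unusually high local high-frequency energy.
    # "suspect.jpg" is a hypothetical filename.
    import numpy as np
    from PIL import Image

    def artifact_map(path, block=32):
        """Per-block high-frequency energy of a grayscale image."""
        img = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
        # Discrete Laplacian: 4x each pixel minus its 4 neighbours.
        lap = (4 * img[1:-1, 1:-1]
               - img[:-2, 1:-1] - img[2:, 1:-1]
               - img[1:-1, :-2] - img[1:-1, 2:])
        h, w = lap.shape
        h, w = h - h % block, w - w % block
        tiles = lap[:h, :w].reshape(h // block, block, w // block, block)
        # High variance = heavily textured (or artifacted) region.
        return tiles.var(axis=(1, 3))

    scores = artifact_map("suspect.jpg")
    print("busiest block energy:", scores.max())

This only flags "busy" texture, which real photos can have too, so it's a hint for where to look rather than a detector.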
Sure, I completely agree on this one - AI-generated faces have been pretty decent for years now. The quality is currently in a weird space: with careful preselection they can fool an unsuspecting reader passing by, yet at the same time a reader looking for fakes will detect them (given high enough size/quality) with high confidence.
For me the eyes looked really off in all the fakes: the pupils seemed the wrong size for the level of light, and the shape of the eyes didn't seem to fit the bone structure of the face.
This is a very important distinction. With deliberate attention, you can indeed catch many fakes in this kind of scenario (to me it seems the background is often a giveaway, but you do need to focus on it).
But in passing, accompanying a news article, a tweet, or an Instagram post, are you paying as much attention? Those are the scenarios where the potential for harm is much bigger.
Yeah, I had exactly the same reaction. When I take the time to scan for artifacts, I get close to 100%, but when I try to do it quickly, I get close to 50%.
That 100% will gradually come down as the tech improves. And I'd guess the tech is already good enough that most people won't be able to improve on 50% success at first glance -- I don't think my instinct would noticeably improve with practice.
I think that's true to an extent, but serious errors, such as the woman with what looked like a horn growing out of her cheek to match a partially occluded earring, seem to happen often enough that they would give the fakes away.
Apart from happiness (smiling), all of the deepfakes showed a blunted affect. Genuine humans tend to have quite expressive faces, and many of the fakes looked like NPCs from an Elder Scrolls game.
This leads me to believe that in a situation where deepfakes might matter, e.g. security video presented as evidence in court, it would be possible to start picking up the deepfake artifacts/signatures, even for a human expert.
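On the automated side, one well-known forensics idea (a hedged sketch of my own below, not something from this thread) is that GAN upsampling tends to leave periodic traces in an image's Fourier spectrum. The snippet computes an azimuthally averaged power spectrum; an actual detector would feed features like this into a trained classifier, and the filename is hypothetical.

    # Hedged sketch of one published forensics idea (not from this thread):
    # GAN upsampling tends to leave periodic traces in an image's Fourier
    # spectrum. This computes the azimuthally averaged power spectrum;
    # a real detector would feed such features to a trained classifier.
    # "evidence_frame.png" is a hypothetical filename.
    import numpy as np
    from PIL import Image

    def radial_power_spectrum(path, nbins=64):
        img = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
        power = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
        h, w = power.shape
        yy, xx = np.indices((h, w))
        r = np.hypot(yy - h / 2, xx - w / 2)  # distance from spectrum centre
        bins = np.linspace(0, r.max(), nbins + 1)
        idx = np.clip(np.digitize(r.ravel(), bins) - 1, 0, nbins - 1)
        totals = np.bincount(idx, weights=power.ravel(), minlength=nbins)
        counts = np.bincount(idx, minlength=nbins)
        return totals / np.maximum(counts, 1)  # mean power per radius bin

    spec = radial_power_spectrum("evidence_frame.png")
    print(spec[-8:])  # odd bumps in the high-frequency tail are suspicious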
Exactly my thought - I scored 5/6 by very quickly picking whichever one seemed more imperfect at a glance, but I'm no Reddit-expert photoshop-identifier or whatever. They all looked real; I'd have assumed any of them were without the 'one of these is fake' context.
I got 5/5 correct just by looking for weird artefacts in the hair and background. Just looking at the eyes alone was much harder (I sometimes couldn't tell).
I didn't even look at the face. The obvious giveaway that let me spot all of them was, first, the unnatural bokeh you see in all AI images, which doesn't look like anything a camera would produce, and second, clothing that folds in strange ways.
[1] https://apnews.com/article/ap-top-news-artificial-intelligen...