New research suggests folks are relying on 'outdated visual cues' to identify AI-generated faces—but I did get 14/20 on the test
I like to think I'm pretty good at spotting the tell-tale signs of AI-generated images: weirdly soft lighting, blurry background details, accessories or smaller details that simply disappear into nothing. Unfortunately, every 'real or fake' test I've taken thus far has been immediately humbling.
The latest 'real or AI-generated' test I've stumbled across comes from UNSW Sydney and focuses on human faces. The research team's demo presents 20 faces that participants must sort as either 'Human' or 'Computer-generated (AI).' I scored 14/20, which is above the average of 11/20, though I'm sure I don't need to point out that an average of 11/20 is only barely better than the 10/20 you'd expect from guessing at random.
The research team presented the test to a small sample group of 125 participants, and has since published the findings in a paper in the British Journal of Psychology. Of the 125 participants, 36 were identified as 'super-recognizers', that is, folks who "excel at a range of face processing tasks including recognizing faces they have only seen briefly before and perceiving differences between very similar faces." The team found these folks were also better than average at discerning an AI face from an actual human one.
The team shared, "AI discrimination ability was also associated with individuals' sensitivity to the ‘hyper-average’ appearance of AI faces." Given that many generative AI models are at their core probability machines, outputting only what is most statistically likely to follow an input, it makes sense that an 'average' appearance would be a dead giveaway for an AI-generated face.
A follow-up post out of the UNSW Sydney newsroom suggests that folks who aren't 'super-recognizers' may be relying on "outdated visual cues" to identify AI faces. If you're still looking for "distorted teeth, glasses that [merge] into faces, ears that [don't] quite attach properly, or strange backgrounds that [bleed] into hair and skin," I've got bad news for you. Simply put, the latest generative models have moved beyond those old tell-tale signs.
"What we saw was that people with average face-recognition ability performed only slightly better than chance," Lead researcher Dr. James D. Dunn shares, "And while super-recognisers performed better than other participants, it was only by a slim margin. What was consistent was people’s confidence in their ability to spot an AI-generated face – even when that confidence wasn’t matched by their actual performance."
Last year, a much larger Microsoft study focusing on a broader range of 'real or fake' images found that participants could only correctly tell the difference 62% of the time. None of this means that even a hardened AI sceptic like me should just give up and embrace the plagiarism slop machines, though.
It's important to note that both studies test images largely in isolation; most context clues are restricted to the image itself. In real life, there's often a wealth of context you can seek out. Say someone messages you out of the blue on your social media platform of choice: if you look at their profile, does it look like they're posting from a fresh account?
Are they posting the same thing over and over again? If they're posting something particularly eye-grabbing, does a quick search turn up identical posts from different accounts? Are they sending you lots of weird links within their first two messages? If so, you definitely shouldn't be clicking them.
Looking for further context clues like this doesn't mean you'll never be fooled by an AI-generated front, but perhaps it's premature to say we're completely cooked. Don't get me wrong: generative AI models have already come a long way, and they'll no doubt continue to improve. But will they ever escape their tell-tale average-ness? I'm not so sure.
