
Fake or the real thing? How AI can make it harder to trust the pictures we see

Top row features genuine pictures of people with AI generated versions underneath. Credit: Swansea University

A new study has revealed that artificial intelligence can now generate images of real people that are virtually impossible to tell apart from genuine photographs.

Using AI models ChatGPT and DALL·E, a team of researchers from Swansea University, the University of Lincoln and Ariel University in Israel, created highly realistic images of both fictional and famous faces, including celebrities.

They found that participants were unable to reliably distinguish them from authentic photos—even when they were familiar with the person’s appearance.

Across four experiments, the researchers noted that adding comparison photos or the participants’ prior familiarity with the faces provided only limited help.

The research has just been published in the journal Cognitive Research: Principles and Implications and the team say their findings highlight a new level of “deepfake realism,” showing that AI can now produce convincing fake images of real people which could erode trust in visual media.

Professor Jeremy Tree, from the School of Psychology, said, “Studies have shown that face images of fictional people generated using AI are indistinguishable from real photographs. But for this research we went further by generating synthetic images of real people.

“The fact that everyday AI tools can do this not only raises urgent concerns about misinformation and trust in visual media but also highlights the pressing need for reliable detection methods.”

In one of the experiments, participants from the US, Canada, the UK, Australia and New Zealand were shown a series of facial images, both real and artificially generated, and asked to identify which was which. The team say the fact that participants mistook the AI-generated novel faces for real photos indicates just how plausible they were.

Another experiment saw participants asked if they could tell genuine pictures of Hollywood stars such as Paul Rudd and Olivia Wilde from computer-generated versions. Again, the study’s results showed just how difficult individuals can find it to spot the authentic version.

The researchers say AI’s ability to produce novel synthetic images of real people opens up a number of avenues for use and abuse. For instance, creators might generate images of a celebrity endorsing a certain product or political stance, which could influence perceptions of both the person depicted and the brand or organization they are portrayed as supporting.

Professor Tree added, “This study shows that AI can create synthetic images of both new and known faces that most people can’t tell apart from real photos. Familiarity with a face or having reference images didn’t help much in spotting the fakes. That is why we urgently need to find new ways to detect them.

“While automated systems may eventually outperform humans at this task, for now, it’s up to viewers to judge what’s real.”

More information:
Robin S. S. Kramer et al, AI-generated images of familiar faces are indistinguishable from real photographs, Cognitive Research: Principles and Implications (2025). DOI: 10.1186/s41235-025-00683-w

Provided by
Swansea University


Citation:
Fake or the real thing? How AI can make it harder to trust the pictures we see (2025, November 6)
retrieved 6 November 2025
from https://techxplore.com/news/2025-11-fake-real-ai-harder-pictures.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.




