Andrew Lewis (2018, DPhil Politics) has co-written an article on deepfake technology and content moderation with his supervisor Ray Duch (Nuffield) and two coauthors. The project was funded by the Royal Society.
The study shows that people are poor at distinguishing high-quality deepfakes from genuine videos. Even when warned that at least one of the five videos they will see has been altered, the vast majority of participants (78%) still cannot spot the deepfake.
The paper summarises: “These experiments show that human discernment is largely inadequate in detecting deepfakes, even when participants are directly warned that the content they view may have been altered. A practical interpretation [of Experiment 2] is that — unlike how accuracy prompts and other interventions can help individuals better spot textual misinformation — warning labels do not enable individuals to simply look closer and see the irregularities on their own. As such, successful content warnings on deepfakes will rely on trust in moderators’ judgments, raising concerns that any such warnings may be written off as politically motivated or biased.”
The findings were presented as part of the Royal Society’s newly launched report, The Online Information Environment.
You can read the paper online.
Published: 1 February 2022