From Nature
How to stop AI deepfakes from sinking society — and science
By Nicola Jones
This June, in the political battle leading up to the 2024 US presidential primaries, a series of images was released showing Donald Trump embracing one of his former medical advisers, Anthony Fauci. In a few of the shots, Trump is captured awkwardly kissing the face of Fauci, a health official reviled by some US conservatives for promoting masking and vaccines during the COVID-19 pandemic.
“It was obvious” that they were fakes, says Hany Farid, a computer scientist at the University of California, Berkeley, and one of many specialists who examined the pictures. On close inspection of three of the photos, Trump’s hair is strangely blurred, the text in the background is nonsensical, the arms and hands are unnaturally placed and the details of Trump’s visible ear are not right. All are hallmarks — for now — of generative artificial intelligence (AI), also called synthetic AI...
Systems that track image provenance should become the workhorse for cutting down the sheer number of dubious files, says Farid, who sits on the steering committee of the Coalition for Content Provenance and Authenticity (C2PA) and is a paid consultant for Truepic, a company in San Diego, California, that sells software for tracking authentic photos and videos. But this approach relies on ‘good actors’ signing up to a scheme such as C2PA, and “things will slip through the cracks”, he says. That makes detectors a good complementary tool...
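To make the provenance idea concrete: schemes such as C2PA attach cryptographically signed metadata to a file when it is created, so anyone downstream can check whether the content still matches what a known good actor signed. The sketch below is a minimal, hypothetical illustration of that sign-then-verify pattern, not real C2PA tooling. It assumes the third-party Python `cryptography` package, and the names `sign_image` and `verify_image` are invented for this example; real C2PA manifests embed X.509 certificate chains and structured assertions in the file itself rather than a bare hash and signature.

```python
# Hypothetical sketch of the sign-then-verify pattern behind provenance
# schemes such as C2PA. Not real C2PA tooling: actual manifests carry
# X.509 certificate chains and structured metadata inside the file.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def sign_image(image_bytes: bytes, key: Ed25519PrivateKey) -> bytes:
    """A 'good actor' (e.g. a camera or editing tool) signs the image hash."""
    digest = hashlib.sha256(image_bytes).digest()
    return key.sign(digest)


def verify_image(image_bytes: bytes, signature: bytes,
                 trusted_key: Ed25519PublicKey) -> bool:
    """A downstream viewer checks the file against a trusted signer's key."""
    digest = hashlib.sha256(image_bytes).digest()
    try:
        trusted_key.verify(signature, digest)
        return True   # file is byte-identical to what the signer produced
    except InvalidSignature:
        return False  # edited, re-encoded, or never signed by this actor


# Demo: the signed file verifies; any tampering breaks the signature.
key = Ed25519PrivateKey.generate()
image = b"...raw image bytes..."
sig = sign_image(image, key)
assert verify_image(image, sig, key.public_key())
assert not verify_image(image + b"one flipped byte", sig, key.public_key())
```

The limitation Farid points to falls directly out of this design: a file with no valid signature is merely unverified, not proven fake, which is why unsigned content slips through the cracks and detectors remain a useful complement.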
Hany Farid is a professor in the Department of Electrical Engineering & Computer Sciences and the School of Information at UC Berkeley. He specializes in digital forensics.