By Public Affairs
Every day someone asks Hany Farid, a UC Berkeley professor in the Department of Electrical Engineering and Computer Sciences and the School of Information, to review images, audio and videos to determine if they are real or fake. As one of the world’s leading experts on digital manipulation and misinformation, his views and verification skills are in high demand. With elections being held around the globe this year, including the presidential election in the United States, he’s been especially busy using digital forensic tools to verify or debunk political misinformation as it spreads in real time.
The rapid advances of generative artificial intelligence have only complicated his job. As he points out in this Academic Review video, the ability to make anyone appear to say words they never uttered, to place them in photos they were never present for, or to create a politically charged fiction out of whole cloth with a few keystrokes has led to a proliferation of deepfakes, some of which have already wreaked havoc and tilted elections.
“Nine months ago I was pretty good at it. I could just look at stuff and I’d know almost immediately,” says Farid. “I would say today that’s gotten a lot harder.”
We asked Professor Farid to sit down with a handful of recent examples of political misinformation to explain how he analyzes questionable memes, social media posts and images. Was that photo of the Harris-Walz crowd greeting Air Force Two manipulated to show a bigger turnout? How about those Swifties for Trump — are they real? And was an image of Donald Trump moments after his attempted assassination a strange echo of a remarkably similar image of Adolf Hitler?
Watch to learn how Farid scrutinizes these images, using both sophisticated digital tools and common sense, and what all of us can do to stop the spread of misinformation online.
Originally published as “Watch a UC Berkeley digital forensics expert break down political deepfakes” on September 26, 2024, by UC Berkeley Public Affairs.