Aug 15, 2024

Professor Hany Farid Speaks to the Washington Post About Why AI Detection Tools Can Fail to Catch Election Deepfakes

From The Washington Post (Paywall)

See Why AI Detection Tools Can Fail to Catch Election Deepfakes

By Kevin Schaul, Pranshu Verma, and Cat Zakrzewski

AI-generated content is flooding the web, making it less clear than ever what's real this election. From former president Donald Trump falsely claiming that images from a Vice President Kamala Harris rally were AI-generated to a spoofed robocall of President Joe Biden telling voters not to cast their ballots, the rise of AI is fueling rampant misinformation.

Deepfake detectors have been marketed as a silver bullet for identifying AI fakes, or “deepfakes.” Social media giants use them to label fake content on their platforms. Government officials are pressuring the private sector to pour millions into building the software, fearing deepfakes could disrupt elections or allow foreign adversaries to incite domestic turmoil...

Hany Farid, a computer science professor at the University of California at Berkeley, said that the algorithms that power deepfake detectors are only as good as the data they are trained on. The data sets are largely composed of deepfakes created in a lab environment and don’t accurately mimic the characteristics of deepfakes that show up on social media. Detectors are also poor at spotting abnormal patterns in the physics of lighting or body movement, Farid said...

Read more...

Hany Farid is a professor in the Department of Electrical Engineering & Computer Sciences and the School of Information at UC Berkeley.

Last updated: August 22, 2024