Feb 6, 2024

Hany Farid Is Worried AI Crackdowns May Not Be Enough

From WIRED

Meta Will Crack Down on AI-Generated Fakes—but Leave Plenty Undetected

By Vittoria Elliott

Meta, like other leading tech companies, has spent the past year promising to speed up deployment of generative artificial intelligence. Today it acknowledged it must also respond to the technology’s hazards, announcing an expanded policy of tagging AI-generated images posted to Facebook, Instagram, and Threads with warning labels to inform people of their artificial origins...

Hany Farid, a professor at the UC Berkeley School of Information who has advised the C2PA initiative, says that anyone interested in using generative AI maliciously will likely turn to tools that don’t watermark their output or otherwise betray its nature. For example, the creators of the fake robocall that used President Joe Biden’s voice to target New Hampshire voters last month didn’t add any disclosure of its origins.

And he thinks companies should be prepared for bad actors to target whatever method they try to use to identify content provenance. Farid suspects that multiple forms of identification might need to be used in concert to robustly identify AI-generated images, for example by combining watermarking with hash-based technology used to create watch lists for child sex abuse material. And watermarking is a less developed concept for AI-generated media other than images, such as audio and video...
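Farid's point about using "multiple forms of identification in concert" can be sketched in a few lines of Python. Everything below is a toy invented for illustration: the marker string, the watch-list contents, and the use of SHA-256 (real watch-list systems, such as those for child sex abuse material, rely on perceptual hashes that survive re-encoding, which a cryptographic hash does not).

```python
import hashlib

# Toy stand-in for an embedded provenance watermark (invented for this sketch)
WATERMARK_MARKER = b"AI-GEN"

AI_IMAGE = b"example fake image bytes"
STRIPPED_COPY = AI_IMAGE  # same content, watermark removed by a bad actor

# Watch list of SHA-256 digests of previously catalogued AI-generated files.
# A cryptographic hash is used here only for brevity; it breaks under any
# modification, which is why real systems use perceptual hashing.
KNOWN_AI_HASHES = {hashlib.sha256(STRIPPED_COPY).hexdigest()}

def has_watermark(data: bytes) -> bool:
    """Signal 1: look for the embedded watermark marker."""
    return WATERMARK_MARKER in data

def on_watch_list(data: bytes) -> bool:
    """Signal 2: check the file's digest against the watch list."""
    return hashlib.sha256(data).hexdigest() in KNOWN_AI_HASHES

def likely_ai_generated(data: bytes) -> bool:
    # Flag content if either independent signal fires: the watermark
    # catches output from compliant tools, while the watch list catches
    # copies whose watermark has been stripped but that were catalogued.
    return has_watermark(data) or on_watch_list(data)

watermarked = AI_IMAGE + WATERMARK_MARKER
print(likely_ai_generated(watermarked))    # True (watermark signal)
print(likely_ai_generated(STRIPPED_COPY))  # True (watch-list signal)
print(likely_ai_generated(b"real photo"))  # False
```

The point of the layering is that each check covers the other's failure mode: stripping the watermark defeats the first signal but not the second, and a never-before-seen fake evades the watch list but may still carry a watermark.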

Read more...

Hany Farid is a professor in the Department of Electrical Engineering & Computer Sciences and the School of Information at UC Berkeley and a senior advisor to the Counter Extremism Project.

Last updated: February 7, 2024