This isn’t the Nature Podcast — how deepfakes are distorting reality
By Nick Petrić Howe & Benjamin Thompson
In this episode:
00:45 How to tackle AI deepfakes
It has long been possible to create deceptive images, videos or audio to entertain or mislead audiences. Now, with the rise of AI technologies, such manipulations have become easier than ever. These deepfakes can spread misinformation, defraud people, and damage economies. To tackle this, researchers and companies are developing tools to find and label deepfakes, in an attempt to rob them of their potential to wreak havoc...
Nick Petrić Howe
We are going to explore both of these technological solutions, but we’re going to start with the second one: algorithms trained to detect deepfakes, which can quickly pick them out of a line-up. In essence, AIs may be part of the solution to exposing AI deepfakes. But we also need to stop deepfakes spreading; after all, it is hard to undo misinformation once it has gone viral. And that’s where the first solution Nicola mentioned comes in: tagging, adding some kind of marker that makes it clear that something is AI generated. I reached out to Hany Farid, an AI researcher who advises companies and governments on how to handle deepfakes.
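To make the detection idea concrete, here is a minimal, hypothetical sketch of "AI to detect AI": fine-tuning a small pretrained image classifier to label images as real or fake. The framework (PyTorch), the model choice and the data/{real,fake} folder layout are all assumptions for illustration, not anything described in the episode.

```python
# Hypothetical sketch: fine-tune a pretrained CNN as a binary
# real-vs-fake image classifier. Assumes images are sorted into
# data/real/ and data/fake/ (an illustrative layout, not a real dataset).
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

dataset = datasets.ImageFolder("data", transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

model = models.resnet18(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: real, fake

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # one pass over the data, for brevity
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

Real detectors are far more elaborate, but the core loop is the same: show a model known examples of genuine and generated media and let it learn the tell-tale artefacts.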
Hany Farid
And so you insert a signal into the very content that you are now about to unleash into the wild. And then your browser or the social media companies are aware of those watermarks and will simply read them and notify you, when you view the image, that a watermark has been detected and that this has been generated by OpenAI on this date. And it’s important to understand, we’re not saying what you should or shouldn’t do with the content, we’re simply saying: label it, please. So it’s a very low bar...
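A toy illustration of the mechanism Hany Farid describes follows; this is not any vendor's actual scheme. It embeds a short provenance tag in an image's least-significant bits, then reads it back before display. The tag string and function names are hypothetical, and production watermarks are cryptographically signed and built to survive compression and editing, which this sketch is not.

```python
# Toy watermarking sketch (illustrative only, not a real provenance scheme).
from PIL import Image
import numpy as np

TAG = "AI-GENERATED:2024-01-01"  # hypothetical label; real schemes sign this

def embed(path_in, path_out, tag=TAG):
    """Hide the tag's bits in the least-significant bits of the pixels."""
    pixels = np.array(Image.open(path_in).convert("RGB"))
    bits = np.unpackbits(np.frombuffer(tag.encode(), dtype=np.uint8))
    flat = pixels.reshape(-1)
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits  # overwrite LSBs
    Image.fromarray(flat.reshape(pixels.shape)).save(path_out, "PNG")

def detect(path, length=len(TAG)):
    """Read the least-significant bits back into a string."""
    flat = np.array(Image.open(path).convert("RGB")).reshape(-1)
    bits = flat[: length * 8] & 1
    return np.packbits(bits).tobytes().decode(errors="replace")

# A browser or platform could run detect() on each image and surface a
# "watermark found" notice, labelling the content without restricting it.
```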
Hany Farid is a professor in the Department of Electrical Engineering & Computer Sciences and the School of Information at UC Berkeley.