Aug 8, 2023

Scientific American and Hany Farid Call For More Accurate AI Detection Tools

From Scientific American

Tech Companies’ New Favorite Solution for the AI Content Crisis Isn’t Enough

By Lauren Leffer

Thanks to a bevy of easily accessible online tools, just about anyone with a computer can now pump out, with the click of a button, artificial-intelligence-generated images, text, audio and videos that convincingly resemble those created by humans. One big result is an online content crisis, an enormous and growing glut of unchecked, machine-made material riddled with potentially dangerous errors, misinformation and criminal scams. This situation leaves security specialists, regulators and everyday people scrambling for a way to tell AI-generated products apart from human work. Current AI-detection tools are deeply unreliable. Even OpenAI, the company behind ChatGPT, recently took its AI text identifier offline because the tool was so inaccurate.

Now, another potential defense is gaining traction: digital watermarking, or the insertion of an indelible, covert digital signature into every piece of AI-produced content so the source is traceable. Late last month the Biden administration announced that seven U.S. AI companies had voluntarily signed a list of eight risk-management commitments, including a pledge to develop “robust technical mechanisms to ensure that users know when content is AI generated, such as a watermarking system.” Recently passed European Union regulations require tech companies to make efforts to differentiate their AI output from human work. Watermarking aims to rein in the Wild West of the ongoing machine-learning boom, but it is only a first step, and a small one at that, against the full range of generative AI's risks...

Text poses the biggest challenge because it’s the least data-dense form of generated content, according to Hany Farid, a computer scientist specializing in digital forensics at the University of California, Berkeley. Even text can be watermarked, however. One proposed protocol, outlined in a study published earlier this year in Proceedings of Machine Learning Research, takes all the vocabulary available to a text-generating large language model and sorts it into two boxes at random. Under the study’s method, developers program their AI generator to slightly favor one set of words and word fragments over the other. The resulting watermarked text contains notably more vocabulary from one box, so sentences and paragraphs can be scanned and flagged as machine-generated...
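To make the two-box idea concrete, here is a minimal Python sketch, not the study’s actual protocol: published schemes typically repartition the vocabulary at every generation step and operate on the model’s logits, whereas this toy version uses a single fixed split and a uniform stand-in “model.” The names (split_vocabulary, bias_next_token, detection_z_score) and the bias strength delta are illustrative assumptions.

```python
import math
import random

def split_vocabulary(vocab, seed):
    """Sort the model's vocabulary into two 'boxes' at random.
    A fixed seed lets the detector rebuild the exact same split."""
    rng = random.Random(seed)
    shuffled = sorted(vocab)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return set(shuffled[:half]), set(shuffled[half:])

def bias_next_token(weights, favored, delta=2.0):
    """At generation time, slightly favor one box: scale the sampling
    weight of favored tokens by exp(delta), then renormalize."""
    boosted = {t: w * (math.exp(delta) if t in favored else 1.0)
               for t, w in weights.items()}
    total = sum(boosted.values())
    return {t: w / total for t, w in boosted.items()}

def detection_z_score(tokens, favored):
    """Detector: count how many tokens landed in the favored box and
    compare with the 50 percent expected from unwatermarked text.
    Large positive z-scores indicate a watermark."""
    n = len(tokens)
    hits = sum(1 for t in tokens if t in favored)
    return (hits - 0.5 * n) / math.sqrt(0.25 * n)

if __name__ == "__main__":
    vocab = [f"tok{i}" for i in range(1000)]          # stand-in vocabulary
    favored, _ = split_vocabulary(vocab, seed=7)
    rng = random.Random(0)

    # Toy "generation": sample 200 tokens from a uniform model whose
    # weights have been nudged toward the favored box.
    uniform = {t: 1.0 for t in vocab}
    biased = bias_next_token(uniform, favored)
    marked = rng.choices(list(biased), weights=list(biased.values()), k=200)

    plain = rng.choices(vocab, k=200)                 # unwatermarked baseline
    print("watermarked z:", round(detection_z_score(marked, favored), 1))
    print("plain z:", round(detection_z_score(plain, favored), 1))
```

On this toy setup the biased sample scores a z-value far above the plain one, which is exactly the statistical skew a detector scans for when it checks whether a passage draws too heavily from one box.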

Read more...

Hany Farid is a professor in the Department of Electrical Engineering & Computer Sciences and the School of Information at UC Berkeley.
