Catching bad content in the age of AI
By Tate Ryan-Mosley
In the last 10 years, Big Tech has become remarkably good at some things: language, prediction, personalization, archiving, text parsing, and data crunching. But it’s still surprisingly bad at catching, labeling, and removing harmful content. One need only recall the spread of conspiracy theories about elections and vaccines in the United States over the past two years to understand the real-world damage this failure causes.
And the discrepancy raises some questions. Why haven’t tech companies improved at content moderation? Can they be forced to? And will new advances in AI improve our ability to catch bad information?...
Hany Farid, a professor at the UC Berkeley School of Information and EECS who specializes in digital forensics, has a more obvious explanation. “Content moderation has not kept up with the threats because it is not in the financial interest of the tech companies,” says Farid. “This is all about greed. Let’s stop pretending this is about anything other than money...”