By Kara Manke
Despite our society’s collective addiction to scrolling through social media, many of us can’t help but feel a twinge of dread when seeing notifications we’ve missed. For every clever meme, scintillating fact or adorable animal that crosses our feeds, we’re just as likely to encounter a snarky attack, racial slur or hate-filled comment.
But the potential dangers go far beyond anxiety. A 2021 Pew Research Center poll found that a quarter of Americans have experienced severe forms of harassment online, including physical threats, stalking, sexual harassment and sustained harassment, often tied to their political beliefs. And this does not include the additional harms caused by hate speech, misinformation and disinformation, which are even harder to measure.
While many social media companies are developing community guidelines and investing in both human and algorithmic content moderators to help uphold them, these efforts hardly feel like enough to stem the tide of toxicity. And even when platforms succeed in removing problematic content and banning perpetrators, people who are the target of online hate or harassment remain extremely vulnerable. After all, it only takes a few clicks for a banned user to create a new account or repost removed content.
At UC Berkeley, researchers are reimagining how to support freedom of expression online while minimizing the potential for harm. By holding social media companies accountable, building tools to combat online hate speech, and uplifting the survivors of online abuse, they are working to create safer and more welcoming online spaces for everyone.
“There are a lot of exciting things about social media that, I think, make it irresistible. We can have real-time collaboration, we can share our accomplishments, we can share our dreams, our stories and our ideas. This type of crowdsourced expertise has real value,” said Claudia von Vacano, executive director of Berkeley’s D-Lab and leader of the Measuring Hate Speech Project. “But I think that we need to use tools to ensure that the climate is one where no one is silenced or marginalized or, even worse, put in actual physical danger...”
Supporting survivors
The goal of online content moderation is usually to remove any harmful content and, if appropriate, to suspend or ban the user who posted it.
But this approach ignores a key player in the interaction: the person who was harmed.
Inspired by the framework of restorative justice, Sijia Xiao wants to redirect our focus to the survivors of online harm. As a graduate student at Berkeley, she worked with Professors Niloufar Salehi and Coye Cheshire to study the needs of survivors and build tools to help address them.
“An important aspect of my research is about changing the way people think about online harm and the ways we can address it,” said Xiao, who is now a postdoctoral researcher at Carnegie Mellon University after graduating with a Ph.D. from the I School in May 2024. “It’s not just about getting an apology from the perpetrator or banning the perpetrator. There can be different forms of restorative justice and different ways that we can help survivors...”
Originally published as “How UC Berkeley researchers are making online spaces safer for all” by UC Berkeley Press on September 30, 2024.
Sijia Xiao graduated from the I School in 2024 with a doctorate in information science. Her research centers on human-computer interaction and social computing.