On July 1, 2024, Diag Davenport began his position as an assistant professor at the School of Information and the Goldman School of Public Policy. We spoke to Professor Davenport about tech policy, his interests outside of teaching, and his current research.
You previously worked at Princeton’s School of Public and International Affairs as a presidential postdoctoral research fellow. What drew you to UC Berkeley?
I spend a lot of time thinking about how people interact with technology. I try to express those ideas through my research and teaching. The joint appointment offered me an opportunity to bring those ideas to two overlapping communities that I think have a lot to teach me. First, there are the Goldman folks who are focused on all sorts of policy problems around the world. Then there are the I School folks who are thinking, writing, and building to improve the social and technical aspects of algorithms.
Beyond that, there’s an incredibly rich history in the East Bay [Area] that I’ve found inspiring for a while now: civil rights, free speech, the labor movement. That history is intrinsically important to me, but it also serves as a reminder to think aspirationally and be proactive in defining the future. We’re at such a remarkable inflection point in our relationship with technology. I hope to add to the rich legacy at Berkeley by contributing to a movement of thoughtful people thinking carefully about responsibly developing and deploying AI. The weather also helps.
You have a joint appointment at the I School and Goldman School of Public Policy. What will you be teaching?
I teach a class on Behavioral Science and Public Policy (BSPP) and one on Decisions and Algorithms (DnA). The protagonist in BSPP is the everyday person who doesn’t have enough time, money, or resources to make every decision perfectly. We try to understand the scope of their decision-making errors and identify potential policy solutions. Then in DnA, the protagonist isn’t a person but a relationship: how do machine predictions interact with human decision-making? We try to arrive at a set of principles to characterize whether that relationship is healthy and what can be done to improve it.
You have recently published research about how human choice can affect algorithmic bias. Tell us more about your current research.
Algorithms are often watching our online behaviors in order to curate future experiences for us. To identify the patterns in our behaviors, they watch us closely—too closely. Too closely, because the algorithms don’t distinguish between our more intentional, deliberate, thoughtful choices and our more off-the-cuff, automatic, gut reactions. Often this distinction doesn’t matter. But it can matter a lot if these algorithms (often called recommendation systems) come to show us options based on our subconscious biases instead of our conscious preferences.
This is what we demonstrate in an audit study of Facebook. We find that posts from our friends of a similar identity are much more likely to be shown to us. That is, the Facebook newsfeed shows ingroup favoritism. We show that this preferential treatment is not the result of us truly preferring our ingroup friends’ content more, but simply because we are more likely to reveal our implicit biases when scrolling through our feeds.
What interests you most about tech policy?
If you get tech policy right, it translates to innovation and social justice. So we grow the pie, and we get everyone closer to a fair slice. Good tech policy is about (1) empowering the most forward-thinking ideas about what can be built and (2) making sure what’s built does more social good than harm. Tech policy gives me an opportunity to look across many policy areas for things that might stifle talent — like housing and environmental issues — and settings where we may be particularly sensitive to the potential for harm, like criminal justice, health care, and labor markets.
What’s a fun fact people may not know about you?
I love doing improv.
What are you reading or watching right now?
I’m always rewatching The Sopranos.