On March 7, the Goldman School of Public Policy, the CITRIS Policy Lab, and the School of Information hosted the inaugural UC Berkeley Tech Policy Summit & Awards Ceremony, a daylong conference focused on navigating the complex landscape of tech policy and forging a path that fosters innovation while promoting social good. The Summit was co-sponsored by the Center for Long-Term Cybersecurity (CLTC), the Berkeley Center for Law and Technology (BCLT), CLTC’s AI Policy Hub, and the School of Journalism.
“The UC Berkeley Tech Policy Summit exemplifies our dedication to harmonizing technology with responsible governance, aiming for a future where innovation benefits society,” said Brandie Nonnecke, associate research professor at the Goldman School and director of the CITRIS Policy Lab. “UC Berkeley is a leader in ensuring that technological progress is aligned with societal values.”
The sold-out event drew world-renowned academics, thought leaders, policymakers, industry pioneers, and innovators to discuss and debate the most pressing issues at the intersection of technology and policy.
“We need different parts of campus coming together, as well as people from government, industry, academia, and nonprofits to tackle new challenges in this space,” said Marti Hearst, professor and interim dean of the School of Information and faculty director of the Center for Long-Term Cybersecurity, in her introductory remarks. “I hope this Summit, as well as other future interactions, lays the path for finding the varied combinations to move policy and technology forward in a human-centered and society-centered way.”
Across various talks and discussions, speakers navigated the challenges and opportunities of AI and policy.
The EU AI Act & Navigating the Future of AI Governance
The European Union’s ambitious push to regulate artificial intelligence (AI) has culminated in the EU AI Act, a legislative framework designed to govern the development and use of AI technologies. The Act signals a robust commitment to addressing AI’s multifaceted impacts across sectors and heralds a new era of governance in the global AI landscape.
In a lively panel discussion moderated by Yiaway Yeh, a lecturer at the Goldman School of Public Policy, experts delved into the EU AI Act’s core tenets and its far-reaching ramifications within Europe and beyond.
Gerard de Graaf, EU Senior Envoy for Digital to the US, noted that the EU AI Act emerged after extensive deliberations and collaborations among policymakers, academics, and industry stakeholders. Drawing inspiration from established product safety frameworks within the EU, the Act adopts a safety-oriented, risk-based approach to AI regulation, aiming to instill trust and transparency in AI systems while mitigating potential harms.
“AI is not a neutral technology,” he said. “We want AI to work for our societies and support our values, not undermine them. We require AI on the market in the EU to be fair, transparent, explainable, non-discriminatory, and not a black box.”
Pamela Samuelson, a distinguished professor of law and information at UC Berkeley, contrasted the EU’s proclivity for comprehensive legislation with the US’s preference for sectoral and voluntary approaches. “There are some aspects about the Act that resonate with policymakers in the US,” Samuelson said. “Risk assessment is on everyone’s mind.” Attention to risk across sectors, she noted, is something the EU and the US have in common; but asked how soon the US Congress might adopt a similar act, her answer was “never.”
Stuart Russell, a professor of computer science at UC Berkeley, cautioned against regulatory vacuums in the US, drawing attention to China’s stringent AI regulations. “China’s regulations are much stricter than the EU,” he stated. “They require AI models to put out ‘true and accurate information,’ which is not possible for AI models. In a sense, it’s a de facto ban.”
Deborah Raji, a Ph.D. student at UC Berkeley, argued that awareness is growing of the proliferation of “AI snake oil” companies and of the need for evaluation and assurances of effectiveness. The FDA came into being because of literal snake oil, Raji pointed out: “It was about safety but also effectiveness and guarantees about the legitimacy of products.”
The conversation also turned to the dynamic nature of AI technology and the necessity of adaptive regulatory frameworks. Russell debunked prevalent myths, asserting that regulation need not stifle innovation but can instead provide guardrails for responsible AI development.
Fireside Chat: Challenges and Opportunities in Trust and Safety
In a fireside chat, Tech Policy fellows Yoel Roth, former Head of Trust & Safety at Twitter, and Ram Shankar Siva Kumar, “Data Cowboy” at Microsoft and affiliate at the Berkman Klein Center for Internet & Society at Harvard University, explored the complexities of online trust and safety.
Reflecting on pivotal moments like the 2022 election, they highlighted the challenges platforms face in enforcing rules amid global scrutiny. Roth emphasized the need to prioritize addressing present-day harms, stating, “The long-termism can be a distraction from some of the harms we’re actually seeing today.”
Siva Kumar discussed the emergence of decentralized platforms like Mastodon and Bluesky, warning of potential safety and governance challenges. Both speakers underscored the importance of robust frameworks to navigate the evolving landscape of social media and AI governance.
Siva Kumar also addressed audience questions about AI-driven content moderation, emphasizing the need for ethical and effective outcomes. Roth stressed the critical role of user demand in driving platforms to prioritize trust and safety measures.
A Fox, Rabbit, & Cabbage: Spurring Trustworthy & Secure Emerging Technologies
In a panel discussion moderated by Janet Napolitano, professor at the Goldman School of Public Policy and director of the Center for Security in Politics at UC Berkeley, experts examined the intricate balance required to foster technological innovation while upholding ethical standards, security, and trustworthiness. Drawing inspiration from the classic river-crossing puzzle, the discussion explored the challenges and strategies involved in responsibly advancing new technologies.
The panelists addressed a fundamental question: How can innovation be aligned with ethical values and compliance with regulatory constraints?
Jessica Newman, director of the AI Security Initiative and the AI Policy Hub at the Center for Long-Term Cybersecurity (CLTC), emphasized the importance of prioritizing tasks and adopting a risk-based approach in AI governance. She noted the complexity of AI governance, highlighting the need for a comprehensive view that goes beyond privacy frameworks.
Betsy Popken, executive director of the Human Rights Center at Berkeley Law, underscored the necessity of an interdisciplinary approach and foresight in addressing ethical concerns. She emphasized the importance of conducting human rights assessments during the development phase of new technologies.
Niloufar Salehi, professor in the School of Information at UC Berkeley, emphasized the iterative nature of problem-solving and the need to focus on human-centric solutions rather than purely technological ones. She highlighted the importance of understanding the roles and relationships between different stakeholders. “What looks like tech problems are, at the core, human problems,” she said.
X. Eyeé, CEO of the AI consulting firm Malo Santo, emphasized the subjective nature of ethical principles and the need to involve diverse voices in decision-making processes. Eyeé stressed the importance of considering the societal context and values when designing AI systems.
The panel also discussed the role of policy, public opinion, and labor organizing in shaping ethical guidelines and ensuring accountability. They emphasized the need for early risk assessment and the development of standardized tools for evaluating AI models.
Regarding data poisoning, Newman highlighted the lack of standardization in model evaluation and the inherent vulnerabilities of AI systems.
On the question of aligning the profit motive with ethical values, panelists discussed the role of investors, users, and governmental regulations in incentivizing ethical behavior. They emphasized the importance of internal and external pressures in driving change within companies.
Keynote: The Power of Technology vs. The Power of Policy
Marietje Schaake, international policy director at the Stanford Cyber Policy Center and former member of the European Parliament, delivered a keynote address that emphasized the urgent need for robust policies to address the challenges posed by emerging technologies, particularly AI. Schaake highlighted the disproportionate power held by tech companies and their significant impact on various aspects of society.
“AI companies are racing ahead, pushing out new products onto the market. Regulators around the world are racing to respond,” Schaake said.
Schaake drew attention to the environmental implications of tech infrastructure, pointing out the lack of coordinated policy around data centers. She also discussed the role of technology in conflict zones, citing Ukraine as an example.
“The role of technology on the battlefield, notably with Ukraine, is relevant globally. We shouldn’t think about technology being used in fields of war, but technologies as becoming the battlefield of conflict,” Schaake stated.
Highlighting the ethical implications of tech companies’ involvement in military conflicts, Schaake stressed the need for international norms and regulations. “Tech companies are in many ways in the middle of the frontline in the battle between democracy and authoritarianism.”
Schaake concluded by calling for a reevaluation of the relationship between policy and enforcement and stressing the importance of global cooperation in tech governance. “We need to rethink the relationship between policy and enforcement,” she said, pointing to the Global South as an example. “There are not enough working mechanisms that look at how laws made in the EU trickle down to impact people in the Global South.”
Tech Policy Fellows and Awards
The event also featured an introduction by Brandie Nonnecke to UC Berkeley’s Tech Policy Fellowship program and the impactful work of its 2023 cohort, along with Tech Policy Provocations, a series of roundtable discussions on thorny tech policy challenges led by Jean Cheng, director of innovation and strategy at the Goldman School.
Nonnecke led the Tech Integrity Awards Ceremony, which recognizes individuals and organizations that embody, encourage, and promote responsible tech innovation and policy in five categories: academic research, civil society, government, industry, and journalism. “This is a unique opportunity to shine a light on those making a significant difference in the area of technology and policy,” Nonnecke said. “We celebrate their achievements and contributions to a more responsible and ethical tech landscape.”
The awards went to Dr. Jennifer King (School of Information, Ph.D. ’18), Privacy and Data Policy Fellow at the Stanford University Institute for Human-Centered Artificial Intelligence, for academic research; X. Eyeé, CEO of Malo Santo, for civil society; California Assemblymember Rebecca Bauer-Kahan for government; Kathy Baxter, Principal Architect of Responsible AI & Tech at Salesforce, for industry; and Mike Isaac, Technology Correspondent at The New York Times, for journalism.
In closing remarks, Dean David Wilson of the Goldman School of Public Policy reiterated the importance of initiatives like the Tech Policy Summit: “Berkeley is a place of expansive discovery and energy,” he said. “Anything is possible.”