From Foreign Policy
Red Teaming Isn’t Enough
By Gabriel Nicholas
Artificial intelligence (AI) may be good at a lot of things, but providing accurate election information isn’t one of them. According to research from the AI Democracy Projects, if you ask Google’s Gemini for the nearest polling place in North Philadelphia, it will tell you (incorrectly) that there are none. If you ask ChatGPT whether you can wear a MAGA hat to the polls in Texas, it will tell you (again, incorrectly) to go right ahead.
Answering election questions isn’t the only task that today’s state-of-the-art AI systems struggle with. They make racially biased employment decisions and confidently offer bad legal advice. They are unsound tutors, unethical therapists, and unable to distinguish a common button mushroom from the deadly Amanita virosa.
But do these shortcomings really matter, or are the risks only theoretical?
This uncertainty creates a major challenge for policymakers around the world. Those attempting to address AI’s potential harms through regulation have information only about how these models could be used in theory. But it doesn’t have to be this way: AI companies can and should share information with researchers and the public about how people use their products in practice. That way, policymakers can prioritize the most urgent issues that AI raises, and public discourse can focus on the technology’s real risks rather than speculative ones...
Gabriel Nicholas is a 2018 MIMS alum. He is currently a research fellow at the Center for Democracy and Technology and a fellow at the Information Law Institute at the NYU School of Law.