Egaleco: Advancing Fairness in Machine Learning (Product and Policy)
Machine Learning (ML) algorithms increasingly dictate opportunities and outcomes for individuals and groups across economic, social, political, and medical contexts. Recently, numerous commercial and open-source “AI Fairness Toolkits” have emerged to help ML practitioners evaluate their models and proactively reduce algorithmic harms. However, existing ML fairness toolkits are either too broad or lack the context needed to support comprehensive bias identification, stakeholder education, and bias mitigation. Our project, Egaleco (meaning equality in Esperanto), seeks to fill these gaps by identifying undue bias and teaching users the legal and ethical reasons why resolving it matters.
The Egaleco Product and Policy team, comprising data scientists, engineers, and policy experts, is focused on building a context-aware fairness assessment toolkit that incorporates policy-informed educational content. We will work alongside the Egaleco User Experience team.
Goals and Deliverables
Egaleco will deepen practitioners’ ability to understand and actualize fairness in ML by pairing quantitative fairness metrics with visualizations and with narratives explaining industry and AI policy precedents. These narratives will equip data scientists to articulate why investment in algorithmic fairness matters to both technical and non-technical stakeholders. We chose to focus on healthcare because ML serves myriad areas within healthcare, from coverage and diagnostics to delivery and patient outcomes. Across these applications, the implications of unfairness in models are matters of life and death. Deliverables and features to meet this challenge are:
- The creation of an intuitive web-based tool that acts as “training wheels” for ML practitioners to help them identify and act on biases in their datasets and algorithms.
- Educational and explanatory narrative sections embedded in the tool, as well as a white paper that supports deeper thinking about the meaning of fairness in healthcare and about building a culture of responsible AI across teams. These will help practitioners build models that advance fairness and endure even as new AI laws and compliance mandates come into effect.
- A Legal and Ethical Frameworks spreadsheet: a curated, non-exhaustive collection of current and proposed policies and ethics frameworks for US healthcare. Users can learn about these different forms of guidance at a glance through the spreadsheet’s notes.
- A white paper titled Looking Beyond Quantitative Fairness to Build Responsible AI Systems. It provides a framework for thinking critically about ML fairness and promoting organizational buy-in for work that is commonly deprioritized.
- A Best Practices for Responsible ML guide that gives concrete steps ML practitioners can take to level up their efforts to mitigate algorithmic harms.
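To make the quantitative fairness metrics mentioned above concrete, the sketch below shows two widely used group-fairness measures a toolkit like this might report: demographic parity difference (the gap in positive-prediction rates between groups) and equal opportunity difference (the gap in true-positive rates). The function names and sample data are hypothetical illustrations, not Egaleco's actual implementation.

```python
def demographic_parity_difference(y_pred, groups):
    """Gap between the highest and lowest positive-prediction rate across groups."""
    rates = {}
    for g in set(groups):
        preds = [p for p, gr in zip(y_pred, groups) if gr == g]
        rates[g] = sum(preds) / len(preds)
    vals = sorted(rates.values())
    return vals[-1] - vals[0]

def equal_opportunity_difference(y_true, y_pred, groups):
    """Gap between the highest and lowest true-positive rate (recall) across groups."""
    tpr = {}
    for g in set(groups):
        # Restrict to truly positive cases within this group.
        pairs = [(t, p) for t, p, gr in zip(y_true, y_pred, groups)
                 if gr == g and t == 1]
        tpr[g] = sum(p for _, p in pairs) / len(pairs)
    vals = sorted(tpr.values())
    return vals[-1] - vals[0]

# Hypothetical predictions from a binary healthcare-coverage model,
# split across two demographic groups "A" and "B".
y_true = [1, 1, 0, 0, 1, 1, 0, 0]
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(demographic_parity_difference(y_pred, groups))      # 0.5 (0.75 for A vs 0.25 for B)
print(equal_opportunity_difference(y_true, y_pred, groups))  # 0.5 (TPR 1.0 for A vs 0.5 for B)
```

A metric value of 0 would indicate parity between groups on that measure; the further from 0, the larger the disparity. Which metric is appropriate depends on context, which is exactly the kind of guidance the tool's embedded narratives are meant to provide.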