Procurement as Policy: Administrative Process for Machine Learning
At every level of government, officials contract for technical systems that employ machine learning — systems that perform tasks without explicit instructions, relying instead on patterns and inference. These systems frequently displace discretion previously exercised by policymakers or individual front-line government employees with an opaque logic that bears no resemblance to the reasoning processes of agency personnel. Yet because agencies acquire these systems through government procurement processes, agencies and the public have little input into — or even knowledge about — their design, or about how well that design aligns with public goals and values.
This talk explains how the decisions about goals, values, risk, certainty, and the elimination of case-by-case discretion inherent in machine-learning system design make policy — not just once when systems are designed, but over time as they adapt and change. When the adoption of these systems is governed by procurement, the policies they embed receive little or no scrutiny from agency or outside experts beyond that provided by the vendor. There is no public participation, no reasoned deliberation, and no factual record. Design decisions are left to private third-party developers. Government responsibility for policymaking is abdicated.
I argue for a move from a procurement mindset to a policymaking mindset. When policy decisions are made through system design, processes suitable for substantive administrative determinations should be used: processes that foster deliberation reflecting both technocratic demands for reason and rationality informed by expertise, and democratic demands for public participation and political accountability. Specifically, I propose administrative law as the framework to guide the adoption of machine learning in governance. I describe specific ways that the policy choices embedded in machine learning system design fail the prohibition against arbitrary and capricious agency action absent a reasoned decisionmaking process: one that both enlists the expertise necessary for reasoned deliberation about, and justification for, such choices, and makes visible the political choices being made.
Finally, I sketch models for machine learning adoption processes that satisfy the prohibition against arbitrary and capricious actions. I explore processes by which agencies might garner technical expertise and overcome problems of system opacity, satisfying administrative law’s technocratic demand for reasoned expert deliberation. I further propose institutional and engineering design solutions to the challenge of policymaking opacity, offering process paradigms to ensure the “political visibility” required for public input and political oversight. In doing so, I also emphasize the importance of “contestable design” — design that exposes value-laden features and parameters and provides for iterative human involvement in system evolution and deployment. Together, these institutional and design approaches further both administrative law’s technocratic and democratic mandates.
Deirdre Mulligan is an associate professor in the School of Information working on privacy, fairness, human rights, cybersecurity, technology and governance, and values in design.
The I School Research Exchange offers I School faculty and Ph.D. students opportunities to learn about, discuss, and contribute to research developing within the school, across the campus, and in the region.
Meetings are open to I School faculty, I School Ph.D. students, I School visiting scholars, and invited guests.
Lunch, for those who have signed up, will be served at 12:00. The talks start at 12:30 and we try to wrap up between 1:45 and 2:00.