Calibrating Trust in AI-Assisted Decision Making
AI helps us make decisions ranging from the mundane to the monumental, from what movie to watch to whether someone has cancer, but it is not always successful. A partnership between humans and AI can enable better decisions than either party could make alone. For this partnership to be effective, however, people need to know if and when to trust an AI's prediction. The key to successful AI-assisted decision making is therefore trust calibration, in which a person's trust in the AI matches the AI's actual capabilities. Misuse occurs when people rely on AI uncritically, trusting it in scenarios where they shouldn't. Disuse occurs when people reject the AI's capabilities, not trusting it in scenarios where they should.
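To make the misuse/disuse distinction concrete, here is a minimal sketch (not part of our study; the `Decision` fields and labels are illustrative assumptions) that labels each decision by comparing whether the person relied on the AI with whether the AI's prediction turned out to be correct.

```python
# Illustrative sketch only: classify decisions as calibrated, misuse, or disuse.
from dataclasses import dataclass

@dataclass
class Decision:
    relied_on_ai: bool    # did the person accept the AI's recommendation?
    ai_was_correct: bool  # was the AI's recommendation actually right?

def calibration_label(d: Decision) -> str:
    if d.relied_on_ai and not d.ai_was_correct:
        return "misuse"      # trusted the AI when they shouldn't have
    if not d.relied_on_ai and d.ai_was_correct:
        return "disuse"      # rejected the AI when they should have trusted it
    return "calibrated"      # trust matched the AI's capability

decisions = [
    Decision(relied_on_ai=True,  ai_was_correct=True),
    Decision(relied_on_ai=True,  ai_was_correct=False),
    Decision(relied_on_ai=False, ai_was_correct=True),
]
print([calibration_label(d) for d in decisions])
# ['calibrated', 'misuse', 'disuse']
```

In practice, of course, a person rarely knows at decision time whether the AI is right; this toy labeling is only useful in hindsight, which is exactly why calibrating trust up front is hard.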
Poor calibration, where there is a mismatch between trust and AI capability, can result in consequences that are costly and sometimes catastrophic. Evidence of poor calibration ranges from unclear GPS directions (turn right or bear right?) to reliance on tools like COMPAS, which is used to predict the likelihood that a defendant will re-offend but has been shown to have alarming biases. These scenarios raise questions that can only be answered if the AI is explainable.
Explanations help provide transparency, enable assessment of accountability, demonstrate fairness, and facilitate understanding. Current research in explainable AI primarily focuses on making the models themselves interpretable. While this is a significant and important contribution, it often relies on a researcher's intuition of what constitutes a good explanation. As such, the human side of explainability remains an open question: are explanations usable and practical in real-world situations?
Our contribution to the human side of explainability is twofold:
1. Experiment: An empirical study of how people respond to explanations in a scenario with high uncertainty and risk.
2. Website: A resource that bridges the gap between research and industry, translating research such as our experiment into a human-centered design strategy and brainstorming tool for creating explainable interfaces.