On the Fairness of Machine-Assisted Human Decisions
- Date: Jun 30, 2025
- Time: 04:00 PM (Local Time Germany)
- Speaker: Talia B. Gillis (ETH Zürich)
- Room: Basement
When machine-learning algorithms are deployed in
high-stakes decisions, ensuring fair and equitable outcomes is critical. This
concern has motivated a growing body of literature focused on diagnosing and
addressing disparities in machine predictions. However, many machine
predictions are used to assist rather than replace human
decision-makers. In this article, we explore, through a formal model and lab
experiment, how the design of machine-learning algorithms impacts the accuracy
and fairness of human decisions. By explicitly modeling the human
decision-maker’s beliefs, preferences, and updating based on algorithmic aids,
we show that assumptions about accuracy–fairness trade-offs, such as those
involving the inclusion or exclusion of protected characteristics like race or
gender, differ under assistance compared to automation.
Specifically, we find that excluding group information may not reduce
disparities and may even increase them in certain contexts. Our lab experiment
confirms that excluding protected characteristics from algorithmic decision
aids can have unintended consequences for fairness, underscoring the need for
context-sensitive design and evaluation of algorithms in these settings.
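
The mechanism behind this finding can be illustrated with a small simulation. The sketch below is a hypothetical toy model, not the paper's formal model or lab experiment: it assumes two groups with different base rates, a noisy algorithmic score that either includes or excludes group membership, and a human decision-maker who, when the aid is blinded, falls back on an exaggerated group stereotype. All parameter values (base rates, noise scale, the 1.5x stereotype factor) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy population: two groups with different outcome base rates.
# All numbers below are illustrative assumptions, not values from the paper.
n = 200_000
group = rng.integers(0, 2, n)                      # 0 = group A, 1 = group B
base_rate = np.where(group == 0, 0.65, 0.45)       # assumed group base rates
group_logit = np.log(base_rate / (1 - base_rate))  # group-level log-odds
signal = rng.normal(0.0, 1.0, n)                   # individual-level information

# Stylized algorithmic scores (noisy log-odds estimates).
noise = rng.normal(0.0, 0.5, n)
score_with_group = group_logit + signal + noise    # group info included
score_blinded = signal + noise                     # group info excluded

# Stylized assisted human: when the aid is blinded, they compensate with an
# exaggerated group stereotype (assumed to be 1.5x the true group log-odds).
stereotype = 1.5 * group_logit

def approval_gap(decisions):
    """Approval-rate difference between group A and group B."""
    return decisions[group == 0].mean() - decisions[group == 1].mean()

automation_blinded = score_blinded > 0               # machine decides alone
assisted_unblinded = score_with_group > 0            # human follows the aid
assisted_blinded = (score_blinded + stereotype) > 0  # human adds own prior

print(f"automation, blinded aid  : gap = {approval_gap(automation_blinded):+.3f}")
print(f"assistance, unblinded aid: gap = {approval_gap(assisted_unblinded):+.3f}")
print(f"assistance, blinded aid  : gap = {approval_gap(assisted_blinded):+.3f}")
```

Under these assumptions, blinding eliminates the approval gap when the algorithm decides alone, but under assistance the gap persists and can grow, because the human reintroduces group information through their own prior.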