Enhancing fairness in high-stakes decisions: A framework for mitigating human bias
The use of artificial intelligence and algorithms has increased, including in high-stakes situations where algorithms can provide systematic, easily interpretable advice. While efforts have been made to reduce algorithmic bias, little research has investigated the human factor in decision-making. In critical fields such as health care, algorithms serve only as advisory tools — humans, such as doctors, make the final decisions and can choose to ignore the automated advice. Even the least biased algorithm cannot eliminate human prejudice.
To investigate this issue, Professor Christopher Sun has received a Natural Sciences and Engineering Research Council of Canada Discovery Grant for his project titled “Fairness in Systems with Human-Algorithm Decision Making through Optimization and Machine Learning.” He aims to reduce the harm caused by human bias in high-stakes situations, and to create frameworks that improve fairness by taking that bias into consideration.
“We repeatedly see individuals slip through the cracks of systems or receive disparate treatment, both in everyday life and in high-stakes domains. How can we improve these decision-making processes?” says Sun. “We have the opportunity to develop and use analytical tools to make decisions more equitably, while ensuring it is done reliably and transparently.”
A new approach across many sectors
Sun aims to develop a framework to optimize the assignment of human subjects (such as patients) to decision-makers (such as doctors). Uniquely, it will consider the decision-makers' historical actions and tendencies when assigning human subjects, thus reducing the influence of decision-makers' biases.
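To make the idea concrete, a bias-aware assignment can be framed as a standard assignment problem whose cost combines a clinical mismatch term with a penalty reflecting each decision-maker's historical tendencies toward a subject's group. The sketch below is purely illustrative — the cost numbers, the penalty structure, and the brute-force solver are assumptions for a toy example, not Sun's actual framework, which would use optimization and machine learning at far larger scale.

```python
from itertools import permutations

# Toy example: 3 patients, 3 doctors (all numbers are illustrative).
# base_cost[i][j]: clinical mismatch cost of assigning patient i to doctor j.
base_cost = [
    [1.0, 2.0, 3.0],
    [2.0, 1.0, 4.0],
    [3.0, 2.0, 1.0],
]
# bias_penalty[i][j]: penalty reflecting doctor j's historical pattern of
# disparate treatment toward patients from patient i's group.
bias_penalty = [
    [3.0, 0.0, 0.0],
    [0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0],
]

def best_assignment(base, penalty, weight=1.0):
    """Return the patient-to-doctor assignment minimizing total base cost
    plus weighted bias penalty (brute force; fine for small instances)."""
    n = len(base)
    best_total, best_perm = float("inf"), None
    for perm in permutations(range(n)):  # perm[i] = doctor for patient i
        total = sum(base[i][perm[i]] + weight * penalty[i][perm[i]]
                    for i in range(n))
        if total < best_total:
            best_total, best_perm = total, perm
    return best_perm, best_total

assignment, cost = best_assignment(base_cost, bias_penalty)
print(assignment, cost)  # the penalty steers patient 0 away from doctor 0
```

Ignoring the bias penalty (weight=0) yields the purely clinical optimum (0, 1, 2); with the penalty included, patient 0 is rerouted to a doctor without that historical pattern, at a small clinical cost — the trade-off such a framework would manage.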
This project introduces a new research perspective on algorithmic fairness and the concept of assignment optimization. It could redefine how human subjects are assigned across many sectors, improving well-being and facilitating equal and fair treatment overall.