Research exploring how to explain machine learning scores for child welfare screening

17 May, 2021

New research co-authored by Rhema Vaithianathan explores how best to explain machine learning scores to the workers who use them to support child welfare call screening decisions.

The paper, Sibyl: Explaining Machine Learning Models for High-Stakes Decision Making, was presented at the CHI Conference on Human Factors in Computing Systems in May. The authors explored the child welfare domain, where machine learning tools are beginning to be used but workers are not expected to have any knowledge of machine learning. They worked with collaborating US counties that receive referrals for alleged child abuse and use a screening score generated by a machine learning tool to support the decision about whether to screen in a referral for investigation.

The purpose of this work was to determine if and how the call screening process might benefit if each screening score was accompanied by an explanation.

Through observations and interviews, the authors confirmed that call screeners did want explanations, and were especially interested in getting a list of risk and protective factors that the model considered.

To investigate explanations further, the authors implemented a tool called Sibyl, which includes four explanation interfaces: a SHAP-based feature-contribution explanation, a "what-if" explanation that allows screeners to see the effects of changing feature values, a visualisation of historical feature distributions, and a global feature importance explanation. They then asked screeners to use the Sibyl interfaces to make decisions on simulated cases in a formal user study and to answer reflective questions about the different explanation approaches.
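The "what-if" interface described above can be sketched in a few lines: change one feature value for a single case and recompute the score to see how it moves. This is an illustrative toy (a hypothetical linear model with made-up weights), not the actual Sibyl implementation:

```python
import numpy as np

# Illustrative "what-if" sketch -- NOT the Sibyl tool. We assume a
# hypothetical linear scoring model with made-up weights.
coef = np.array([1.5, -2.0, 0.5])   # hypothetical model weights
intercept = 0.1

def score(x):
    """Score a single case with the toy linear model."""
    return float(x @ coef + intercept)

case = np.array([1.0, 0.0, 2.0])    # one hypothetical referral
what_if = case.copy()
what_if[1] = 1.0                    # "what if feature_b were 1 instead of 0?"

print(f"original score: {score(case):.2f}")     # 2.60
print(f"what-if score:  {score(what_if):.2f}")  # 0.60
```

The screener-facing version would wrap this recomputation in an interface, but the underlying operation is just rescoring a perturbed copy of the case.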

The research concluded that local feature contribution explanations (which explain the contribution of each feature to the final score) are the most helpful for the child welfare domain.
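The idea behind a local feature contribution explanation can be sketched with a toy linear model, where the SHAP contribution of each feature has a closed form: the model weight times the feature's deviation from its historical mean. Everything here (feature names, weights, data) is hypothetical, and real tools like Sibyl handle far more complex models:

```python
import numpy as np

# Illustrative sketch only -- not the actual Sibyl implementation.
# For a linear model, the SHAP value of feature i for one case is
# coef_i * (x_i - mean_i): how far that feature pushes the score
# away from the average prediction.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))        # hypothetical historical referrals
coef = np.array([1.5, -2.0, 0.5])    # hypothetical model weights
intercept = 0.1
scores = X @ coef + intercept

x = X[0]                             # one referral to explain
baseline = scores.mean()             # average score over history
contributions = coef * (x - X.mean(axis=0))

# Baseline plus the contributions reconstructs this case's score.
assert np.isclose(baseline + contributions.sum(), x @ coef + intercept)

for name, c in zip(["feature_a", "feature_b", "feature_c"], contributions):
    direction = "raises" if c > 0 else "lowers"
    print(f"{name} {direction} the score by {abs(c):.2f}")
```

Presenting each feature as "raises" or "lowers" mirrors the risk-and-protective-factor framing that screeners asked for in the interviews.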

In future work, the researchers plan to quantitatively evaluate the effect Sibyl has on decision making, and take further steps to deploy the tool.

CHI is an annual conference for the global human-computer interaction (HCI) community, where researchers and practitioners gather from across the world to discuss the latest in interactive technology.