How Do (Human) Child Welfare Workers Respond to Machine-Generated Risk Scores? by Martin Eiermann, Maria Fitzpatrick, Katharine Sadowski, and Christopher Wildeman (2026)


Sociological Science

Abstract:

Algorithmic risk scoring tools have been widely incorporated into governmental decision making, yet little is known about how human decision makers interact with machine-generated risk scores at the street level. We examined such human–machine interactions in the child welfare system, a high-stakes setting where caseworkers ascertain whether government interventions in family life are warranted. Using novel data—verbatim transcripts of caseworker discussions—we found that decision makers (1) disregarded scores in the middle of the distribution while paying attention to extremely high or low risk scores and (2) rationalized divergences between human decisions and machine-generated scores by highlighting the algorithm’s overemphasis on historical data and specific risk factors and its lack of contextual knowledge. As a result, caseworkers were unlikely to modify their decisions to align with risk scores. However, we did not find evidence of principled resistance to algorithmic tools. Our findings advance research on such tools by specifying how human perceptions of the utility and limitations of novel technologies shape discretionary decision making by state officials, and they help to explain the uneven and potentially modest impact of these tools on the bureaucratic management of social vulnerability.