Does artificial intelligence harm labour? Investigating the limitations of incident trackers as evidence for policymaking
DOI: https://doi.org/10.47989/ir30iConf47296
Keywords: AI, Labor, Knowledge Organization, Incident Trackers, Evidence, Policymaking
Abstract
Introduction. From the point of view of public policy, artificial intelligence (AI) is an emerging technology with as-yet-unknown risks. AI incident trackers collect harms and risks to inform policymaking. We investigate how labour is represented in two popular AI incident trackers. Our goal is to understand how well the knowledge organization of these incident trackers reveals labour-related risks of AI in the workplace, with a focus on how AI affects, and is expected to affect, workers in the United States.
Data and Analysis. We search for and analyse labour-related incidents in two AI incident trackers: the Organisation for Economic Co-operation and Development's AI Incidents Monitor (OECD AIM) and the AI Incident Database (AIID) from the Responsible AI Collaborative.
Results. The OECD AIM database categorised workers as stakeholders for 600 incidents with 6,744 associated news reports. From the AIID, we constructed a set of 57 labour-related incidents.
Discussion and Conclusions. The AI incident trackers do not facilitate ready retrieval of labour-related incidents: they make limited use of existing labour-related terminology. Moreover, the trackers' reliance on news reports risks overrepresenting some sectors and makes the evidence dependent on how those reports frame it.
License
Copyright (c) 2025 Theodore Dreyfus Ledford

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.