<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.0 20120330//EN" "http://jats.nlm.nih.gov/publishing/1.0/JATS-journalpublishing1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:mml="http://www.w3.org/1998/Math/MathML" article-type="research-article" xml:lang="en">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">IR</journal-id>
<journal-title-group>
<journal-title>Information Research</journal-title>
</journal-title-group>
<issn pub-type="epub">1368-1613</issn>
<publisher>
<publisher-name>University of Bor&#x00E5;s</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">ir30iConf47296</article-id>
<article-id pub-id-type="doi">10.47989/ir30iConf47296</article-id>
<article-categories>
<subj-group xml:lang="en">
<subject>Research article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Does artificial intelligence harm labour? Investigating the limitations of incident trackers as evidence for policymaking</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author"><name><surname>Ledford</surname><given-names>Theodore Dreyfus</given-names></name>
<xref ref-type="aff" rid="aff0001"/></contrib>
<aff id="aff0001"><bold>Theodore Dreyfus Ledford</bold> is a PhD student in Information Sciences at the University of Illinois at Urbana-Champaign. <email xlink:href="mailto:tledfo2@illinois.edu">tledfo2@illinois.edu</email></aff>
</contrib-group>
<pub-date pub-type="epub"><day>06</day><month>05</month><year>2025</year></pub-date>
<pub-date pub-type="collection"><year>2025</year></pub-date>
<volume>30</volume>
<issue>i</issue>
<fpage>486</fpage>
<lpage>499</lpage>
<permissions>
<copyright-year>2025</copyright-year>
<copyright-holder>&#x00A9; 2025 The Author(s).</copyright-holder>
<license license-type="open-access" xlink:href="https://creativecommons.org/licenses/by-nc/4.0/">
<license-p>This is an Open Access article distributed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International License (<ext-link ext-link-type="uri" xlink:href="http://creativecommons.org/licenses/by-nc/4.0/">http://creativecommons.org/licenses/by-nc/4.0/</ext-link>), permitting all non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.</license-p>
</license>
</permissions>
<abstract xml:lang="en">
<title>Abstract</title>
<p><bold>Introduction.</bold> From the point of view of public policy, artificial intelligence (AI) is an emerging technology with as-yet-unknown risks. AI incident trackers collect harms and risks to inform policymaking. We investigate how labour is represented in two popular AI incident trackers. Our goal is to understand how well the knowledge organization of these incident trackers reveals labour-related risks for AI in the workplace, with a focus on how AI is impacting and expected to impact workers within the United States.</p>
<p><bold>Data and Analysis.</bold> We search for and analyse labour-related incidents in two AI incident trackers, the Organisation for Economic Co-operation and Development&#x2019;s AI incidents monitor (OECD AIM) and the AI incident database (AIID) from the responsible AI collaborative.</p>
<p><bold>Results.</bold> The OECD AIM database categorised workers as stakeholders for 600 incidents with 6,744 associated news reports. From the AIID, we constructed a set of 57 labour-related incidents.</p>
<p><bold>Discussion and Conclusions.</bold> The AI incident trackers do not facilitate ready retrieval of labour-related incidents: they used limited existing labour-related terminology. AI incident trackers&#x2019; reliance on news reports risks overrepresenting some sectors and depends on the news reports&#x2019; framing of the evidence.</p>
</abstract>
</article-meta>
</front>
<body>
<sec id="sec1">
<title>Introduction</title>
<p>To make regulatory decisions, policymakers need to define problems and priorities in a process called agenda setting (<xref rid="R7" ref-type="bibr">Baumgartner, 2015</xref>). Issues raised in the agenda setting phase inform the evidence to be collected from empirical research (<xref rid="R16" ref-type="bibr">Head, 2010</xref>; <xref rid="R37" ref-type="bibr">Pawson et al., 2011</xref>) to shape the space of policies envisioned (<xref rid="R26" ref-type="bibr">MacKillop &#x0026; Downe, 2023</xref>; <xref rid="R46" ref-type="bibr">Schiff, 2024</xref>).</p>
<p>Artificial intelligence (AI), while long studied in the academic sphere, is, from the point of view of public policy, an emerging technology with as-yet-unknown risks. In the public consciousness, two vivid risks people envision from AI are existential threats to humanity (<xref rid="R9" ref-type="bibr">Cameron, 1984</xref>) and the risk of being replaced by machines (<xref rid="R5" ref-type="bibr">Autor, 2015</xref>; 2022). In the past few years, multiple groups have introduced AI incident trackers (<xref rid="R1" ref-type="bibr">Abercrombie et al., 2024</xref>; <xref rid="R19" ref-type="bibr">Hutiri et al., 2024</xref>; <xref rid="R28" ref-type="bibr">McGregor, 2021</xref>; <xref rid="R35" ref-type="bibr">OECD, 2024</xref>) and taxonomies (<xref rid="R3" ref-type="bibr">Arda, 2024</xref>; <xref rid="R10" ref-type="bibr">Cattell et al., 2024</xref>; <xref rid="R11" ref-type="bibr">Critch &#x0026; Russell, 2023</xref>; Shelby et al., 2022; <xref rid="R55" ref-type="bibr">Weidinger, 2022</xref>) to analyse the potential harms and risks of AI.</p>
<p>In this paper, we focus on labour relations with AI (<xref rid="R4" ref-type="bibr">Arntz et al., 2016</xref>), sometimes called the &#x2018;future of work&#x2019; (<xref rid="R24" ref-type="bibr">Laker, 2023</xref>). Specifically, we investigate how two popular AI incident trackers represent labour-related risks for AI in the workplace.</p>
</sec>
<sec id="sec2">
<title>Background</title>
<sec id="sec2_1">
<title>AI and labour</title>
<p>Policymakers seek to mitigate the effects of emerging technology on labour displacement (<xref rid="R25" ref-type="bibr">Lane &#x0026; Saint-Martin, 2021</xref>), deskilling and economic trade competition (<xref rid="R33" ref-type="bibr">OECD, 2022b</xref>). In 2020, US policymakers defined AI as a <italic>&#x2018;machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments&#x2019;</italic> (ScienceIsUS, 2024). As early as the 20th century, governments and research experts forecasted the effects of automation on the workforce (Acemoglu and Restrepo, 2019). More recently, Frey &#x0026; Osborne (2017) measured which occupations are at the most risk of automation.</p>
</sec>
<sec id="sec2_2">
<title>Incident tracking</title>
<p>We identified a proliferation of AI incident trackers (e.g., <xref rid="R19" ref-type="bibr">Hutiri et al., 2024</xref>; <xref rid="R28" ref-type="bibr">McGregor, 2021</xref>; <xref rid="R20" ref-type="bibr">Rodrigues et al., 2023</xref>; <xref rid="R50" ref-type="bibr">Shrishak, 2023</xref>). These AI incident trackers are informed by earlier incident reporting strategies to address system failures and risks in aviation (<xref rid="R31" ref-type="bibr">NASA Aviation Safety Reporting System, n.d.</xref>; <xref rid="R42" ref-type="bibr">Reynard, 1986</xref>), healthcare (<xref rid="R22" ref-type="bibr">Kohn et al., 2000</xref>; <xref rid="R27" ref-type="bibr">Macrae, 2016</xref>), software development (<xref rid="R8" ref-type="bibr">Booth et al., 2013</xref>) and cybersecurity (<xref rid="R54" ref-type="bibr">van der Kleij et al., 2022</xref>). However, current AI trackers <italic>&#x2018;rely heavily on news coverage of AI incidents&#x2019;</italic> (<xref rid="R53" ref-type="bibr">Turri &#x0026; Dzombak, 2023</xref>). There is no standard structure for incident tracking. An incident begins as the record of an event deemed worth reporting. Incident tracking depends on a predefined documentation procedure specifying what information may signal failure or risk within a system and is <italic>&#x2018;worth&#x2019;</italic> knowing (<xref rid="R53" ref-type="bibr">Turri &#x0026; Dzombak, 2023</xref>). Recording an event as an incident transforms it into labelled data. Experts standardise the vocabulary that guides how incident trackers can assess the implications of a technology or policy (<xref rid="R17" ref-type="bibr">Hoffmann &#x0026; Frase, 2023</xref>; <xref rid="R32" ref-type="bibr">OECD, 2022a</xref>).</p>
</sec>
</sec>
<sec id="sec3">
<title>Aims</title>
<p>To determine how well AI incident trackers inform policymaking, we study how well the knowledge organization of different incident trackers reveals labour-related risks for AI in the workplace. We search for and analyse labour-related incidents in two AI incident trackers, the AI incidents monitor and the AI incident database, with a focus on how AI is impacting and is expected to impact workers within the United States.</p>
</sec>
<sec id="sec4">
<title>Data collection and analysis</title>
<sec id="sec4_1">
<title>AI incident tracker 1: OECD AI incidents monitor (AIM)</title>
<p>The Organisation for Economic Co-operation and Development (OECD) is an intergovernmental organization of 38 mostly high-income, industrialised countries. The OECD AI incidents monitor (AIM) launched publicly in 2023. OECD AIM was developed as part of the OECD&#x2019;s efforts on AI governance to <italic>&#x2018;establish a knowledge foundation&#x2026; and&#x2026; terminology&#x2019;</italic> for an interactive evidence base that could help policymakers to define the scope of AI (<xref rid="R34" ref-type="bibr">OECD, 2023</xref>). The incident tracker&#x2019;s backend process is supported by the Event Registry (http://eventregistry.org) digital service, a third-party commercial entity. Event Registry monitors news reports worldwide, drawing on an expert-created category system and machine learning algorithms to group news into incidents and automatically classify harms. For OECD AIM, each incident is automatically assigned a summary and headline from the primary news report (the news report from the source with the highest Alexa traffic rank).</p>
<fig id="F1">
<label>Figure 1.</label>
<caption><p>The OECD&#x2019;s AIM database query portal, showing some of our query settings</p></caption>
<graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="images/c40-fig1.jpg"><alt-text>none</alt-text></graphic>
</fig>
<p>To collect data, we downloaded query results (incidents, summaries, headlines) from the OECD&#x2019;s AIM database (<xref ref-type="fig" rid="F1">Figure 1</xref>) on August 1, 2024. We had to depend on the online portal because the most important data for our project, <italic>&#x2018;affected stakeholders&#x2019;</italic>, was only available through this channel. (The batch data download option only provided the incident ID, title, summary, date of creation, concepts, and companies.) We always set <italic>&#x2018;affected stakeholders&#x2019;</italic> to workers, <italic>&#x2018;country&#x2019;</italic> to United States and <italic>&#x2018;date&#x2019;</italic> from January 1, 2023, through August 1, 2024.</p>
<p>Our queries retrieved 600 incidents with 6,744 associated news reports that the OECD AIM database categorised as having <italic>workers</italic> as <italic>&#x2018;affected stakeholders&#x2019;</italic>. We searched separately for each <italic>&#x2018;industry&#x2019;</italic> in the OECD AIM industry taxonomy and calculated the percentage of incidents with <italic>workers</italic> as stakeholders. We searched with <italic>&#x2018;future threat only&#x2019;</italic> both unchecked and checked and calculated the percentage of future threats. We grouped industrial sectors by comparing the percentage of incidents that involved future threats, the percentage of incidents that involved workers as stakeholders, and to what extent these intersected (e.g., workers as stakeholders AND future threats; neither; or just one or the other).</p>
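<p>The two per-industry shares described above can be sketched as a small computation. The counts below are hypothetical placeholders, not OECD AIM data (the real values come from manual queries of the online portal); the sketch only illustrates how the two derived percentages relate to the raw counts.</p>

```python
# Hypothetical per-industry counts standing in for manual OECD AIM portal
# queries; only the two derived percentages mirror our analysis.
industries = {
    "arts, entertainment, and recreation": {"total": 40, "workers": 24, "workers_future": 17},
    "real estate": {"total": 10, "workers": 1, "workers_future": 0},
}

def shares(counts):
    """Percent of incidents with workers as stakeholders, and percent of
    those worker incidents also flagged as future threats."""
    workers_pct = 100 * counts["workers"] / counts["total"]
    future_pct = (100 * counts["workers_future"] / counts["workers"]
                  if counts["workers"] else 0.0)
    return round(workers_pct), round(future_pct)

for name, counts in industries.items():
    w, f = shares(counts)
    print(f"{name}: {w}% of incidents involve workers; {f}% of those are future threats")
```

<p>Grouping sectors then amounts to comparing where each industry falls on these two axes, including their intersection.</p>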
</sec>
<sec id="sec4_2">
<title>AI incident tracker 2: AI incident database (AIID)</title>
<p>The AI Incident Database (AIID) (https://incidentdatabase.ai) was launched in 2020 with support from the Partnership on AI (https://partnershiponai.org/) (<xref rid="R28" ref-type="bibr">McGregor, 2021</xref>). Drawing from crowdsourced submissions, the AIID depends on a taxonomy and annotation guidance developed by the Georgetown University Center for Security and Emerging Technology (<xref rid="R17" ref-type="bibr">Hoffmann &#x0026; Frase, 2023</xref>; <xref rid="R41" ref-type="bibr">Responsible AI Collaborative, 2022</xref>). Each news report is assigned to a single incident, but each incident may collect multiple related news reports.</p>
<fig id="F2">
<label>Figure 2.</label>
<caption><p>Screenshot of the AIID&#x2019;s <italic>&#x2018;discovery&#x2019;</italic> query tool</p></caption>
<graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="images/c40-fig2.jpg"><alt-text>none</alt-text></graphic>
</fig>
<p>Since AIID did not provide a direct method for filtering incidents involving <italic>workers</italic>, we downloaded the whole dataset of 3,545 full-text news reports collected into 721 incidents as bulk data on August 1, 2024. Initially, we explored the download by querying for each of the three keywords <italic>labor, jobs</italic>, and <italic>worker</italic>; we identified additional terminology from the incidents with these keywords in their associated news reports and ultimately constructed a dataset by searching for the following queries:
<list list-type="bullet">
<list-item><p><italic>Compensation</italic></p></list-item>
<list-item><p><italic>Firing</italic></p></list-item>
<list-item><p><italic>Hiring</italic></p></list-item>
<list-item><p><italic>Scheduling</italic></p></list-item>
<list-item><p><italic>Unemployment</italic></p></list-item>
<list-item><p><italic>Worker death</italic></p></list-item>
<list-item><p><italic>Workplace</italic></p></list-item>
</list></p>
<p>We recorded the number of results for each query and deduplicated, retrieving 304 keyword matches in 266 news reports grouped into 126 incidents. These incidents were associated with a total of 1,183 news reports, including both reports retrieved by keywords and other reports grouped under the same incidents. We calculated the number of news reports retrieved by each keyword and grouped by incident.</p>
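<p>The retrieval, deduplication, and grouping steps above can be sketched as follows. The report records here are invented stand-ins (the real input is the AIID bulk download of 3,545 full-text news reports); the sketch shows how keyword hits are counted, deduplicated at the report level, and grouped by incident.</p>

```python
# Invented stand-in records; the real input is the AIID bulk download.
reports = [
    {"report_id": 1, "incident_id": 37, "text": "The hiring tool also automated scheduling."},
    {"report_id": 2, "incident_id": 37, "text": "Amazon scrapped the hiring algorithm."},
    {"report_id": 3, "incident_id": 52, "text": "Reports described it as a worker death."},
]
keywords = ["compensation", "firing", "hiring", "scheduling",
            "unemployment", "worker death", "workplace"]

# keyword -> ids of news reports whose full text contains the keyword
matches = {kw: {r["report_id"] for r in reports if kw in r["text"].lower()}
           for kw in keywords}

total_matches = sum(len(ids) for ids in matches.values())  # one hit per keyword per report
deduplicated = set().union(*matches.values())              # unique matching reports
matched_incidents = {r["incident_id"] for r in reports
                     if r["report_id"] in deduplicated}    # reports grouped by incident
```

<p>Note that naive substring matching like this can over-match (e.g., <italic>firing</italic> inside <italic>backfiring</italic>), which is one reason we manually reviewed the retrieved incidents rather than relying on keyword counts alone.</p>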
<p>Ultimately, we categorised an incident as labour related only when a news report retrieved by a keyword indicated how AI has already contributed to harms or risks for workers&#x2019; well-being in the workplace. Underlying our decision of whether an incident was eligible for inclusion in our labour-related dataset was the question: did automation or implementation of AI in the workplace either replace, disrupt, or augment the tasks and prospects of safe employment for workers? For instance, for celebrities and creative artists, we considered their capacity to derive value from their bodies, personalities, styles, or likenesses, treating these as an intrinsic part of their labour.</p>
<p>We kept an inventory of thematic codes to classify recurring types of AI technologies, examples of workplace tasks, reported harms and risk mitigation topics included in the labour-related dataset. We excluded speculated harms that have not occurred, or that do not unambiguously bear directly on the workplace, such as gender or racial bias in search results for professions. Since we wanted to focus on AI&#x2019;s own impact on labour, we also excluded from the labour-related dataset:
<list list-type="order">
<list-item><p>Incidents related to the internal management of AI-focused companies (e.g., hiring, firing and leadership changes)</p></list-item>
<list-item><p>Incidents that originated outside employment, for instance related to:</p>
<list list-type="bullet"><list-item><p>Administration of social benefits, governance, justice, and law enforcement. Though relevant to labour, the administration of unemployment and social security benefits were considered to be outside the scope of the workplace.</p></list-item>
<list-item><p>Harmful representations pertaining to culture (e.g., generative AIs return violent imagery), content delivery and information retrieval (e.g., image search for <italic>&#x2018;CEO&#x2019;</italic> returns predominantly male-presenting results).</p></list-item>
<list-item><p>Anti-competition and monopolistic firm behaviour, except those specific to the job market</p></list-item>
<list-item><p>Deep fakes, except those generated as a condition of employment</p></list-item>
</list></list-item></list></p>
<p>Ultimately, we manually classified 57 incidents (with 184 keyword matches in 112 of their associated news reports &#x2013; see <xref ref-type="fig" rid="F3">Figure 3</xref>) as directly involving the effects of AI on labour. We compared these manually classified labour-related incidents with a keyword-based retrieval strategy: we counted the news reports retrieved by each keyword, grouped them by incident, and gathered all news reports associated with the matching incidents.</p>
<fig id="F3">
<label>Figure 3.</label>
<caption><p>Sometimes multiple keyword searches retrieve a single news report: both <italic>&#x2018;scheduling&#x2019;</italic> and <italic>&#x2018;hiring&#x2019;</italic> are found in the news report (<xref rid="R49" ref-type="bibr">Short, 2018</xref>). The news report Short (<xref rid="R49" ref-type="bibr">2018</xref>) is associated with AIID&#x2019;s Incident 37 &#x2018;female applicants down-ranked by Amazon recruiting tool.&#x2019;</p></caption>
<graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="images/c40-fig3.jpg"><alt-text>none</alt-text></graphic>
</fig>
</sec>
</sec>
<sec id="sec5">
<title>Results</title>
<sec id="sec5_1">
<title>AI incident tracker 1: OECD AIM</title>
<p>We compared the OECD AIM database as a whole to the subset of workers-related incidents (600 incidents with 6,744 associated news reports that the OECD AIM database categorised as having <italic>workers</italic> as <italic>&#x2018;affected stakeholders&#x2019;</italic>). Future threats are more pronounced for <italic>workers</italic>: we found that 35% of <italic>workers</italic>-related incidents are marked as <italic>&#x2018;future threat&#x2019;</italic> compared to only 25% of incidents in the OECD AIM database as a whole. <xref ref-type="fig" rid="F4">Figure 4</xref> shows the percentage of all incidents involving workers, broken down by industry sector. The share of future threats with <italic>workers</italic> listed as affected stakeholders (<xref ref-type="fig" rid="F4">Figure 4</xref> column 3) is particularly high for two sectors [Group 4]: <italic>arts, entertainment, and recreation</italic> (70%) and <italic>business processes and support services</italic> (49%). Some industries [Group 1] have a larger percentage of future threat incidents but face little AI-related threat specifically to <italic>workers</italic>, now or in the future: <italic>agriculture; energy, raw materials, and utilities; real estate;</italic> and <italic>environmental services</italic>. Other sectors [Group 2] not only have incidents with minimal or zero AI impact for workers (<xref ref-type="fig" rid="F4">Figure 4</xref> column 2), but relatively few future threats generally (<xref ref-type="fig" rid="F4">Figure 4</xref> column 4): <italic>food and beverages; construction and air conditioning; consumer products; consumer services;</italic> and <italic>travel, leisure, and hospitality</italic>. Other industry sectors [Group 5] contain a moderate percentage of future threats, but their incidents do not proportionally affect <italic>workers</italic> more than other stakeholders (<xref ref-type="fig" rid="F4">Figure 4</xref> column 2); however, for this group, an incident is more likely to be a future threat when it involves <italic>workers</italic> as stakeholders (<xref ref-type="fig" rid="F4">Figure 4</xref> column 3): <italic>healthcare, drugs and biotechnology</italic> (69%); <italic>government, security and defence</italic> (63%); <italic>financial and insurance</italic> (57%); <italic>IT infrastructure and hosting</italic> (58%); <italic>logistics, wholesale, retail</italic> (50%); <italic>robots, sensors, IT hardware</italic> (53%). For some other industries [Group 3], <italic>workers</italic> have been identified as stakeholders in incidents, but those incidents are less likely to be future threats (<xref ref-type="fig" rid="F4">Figure 4</xref> column 3): <italic>digital security</italic> (41%); <italic>education and training</italic> (24%); <italic>media, social platforms, marketing</italic> (30%); <italic>mobility and autonomous vehicles</italic> (28%). These sectors face a moderate percentage of future threats that mostly concern stakeholders other than <italic>workers</italic>, as seen in <xref ref-type="fig" rid="F4">Figure 4</xref> column 2 for Group 3.</p>
<fig id="F4">
<label>Figure 4.</label>
<caption><p>The proportion of OECD AIM incidents with workers as a stakeholder, broken down by each OECD-classified Industry, indicating the proportion that the database classified as <italic>&#x2018;future threat&#x2019;.</italic></p></caption>
<graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="images/c40-fig4.jpg"><alt-text>none</alt-text></graphic>
</fig>
</sec>
<sec id="sec5_2">
<title>AI incident tracker 2: AIID</title>
<p>The 57 labour-related incidents were not the only incidents retrieved with the query terms we chose; <xref ref-type="table" rid="T1">Table 1</xref> shows that the percentage of labour-related incidents retrieved varied widely by query term, from 6/6 (100%) for <italic>scheduling</italic> to 11/33 (33%) for <italic>compensation</italic>. While we classified incidents as labour-related or not labour-related at the incident level, different news reports associated with the same incident framed it differently.</p>
<p>To examine the strength of the keyword-based signal that the incident should be interpreted as labour-related, we examined how many news reports associated with labour-related incidents were returned by a given keyword, also shown in <xref ref-type="table" rid="T1">Table 1</xref>.</p>
<table-wrap id="T1">
<label>Table 1.</label>
<caption><p>The number of incidents and labour-related incidents returned for each query term in AIID. We deduplicated the total since the same news report may be retrieved by multiple keyword searches.</p></caption>
<table>
<thead>
<tr>
<th align="center" valign="top"></th>
<th align="left" valign="top"></th>
<th align="left" valign="top"><bold>compensation</bold></th>
<th align="left" valign="top"><bold>firing</bold></th>
<th align="left" valign="top"><bold>hiring</bold></th>
<th align="left" valign="top"><bold>scheduling</bold></th>
<th align="left" valign="top"><bold>unemployment</bold></th>
<th align="left" valign="top"><bold>worker death</bold></th>
<th align="left" valign="top"><bold>workplace</bold></th>
<th align="left" valign="top"><bold>total matches</bold></th>
<th align="left" valign="top"><bold>deduplicated</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top" rowspan="2"><bold>Number of news reports retrieved by the keyword</bold></td>
<td align="left" valign="top">From Labour-related Incidents</td>
<td align="left" valign="top">16</td>
<td align="left" valign="top">20</td>
<td align="left" valign="top">55</td>
<td align="left" valign="top">15</td>
<td align="left" valign="top">4</td>
<td align="left" valign="top">30</td>
<td align="left" valign="top">46</td>
<td align="left" valign="top">186</td>
<td align="left" valign="top"><bold>157</bold></td>
</tr>
<tr>
<td align="left" valign="top">From All Incidents</td>
<td align="left" valign="top">51</td>
<td align="left" valign="top">28</td>
<td align="left" valign="top">84</td>
<td align="left" valign="top">15</td>
<td align="left" valign="top">19</td>
<td align="left" valign="top">40</td>
<td align="left" valign="top">68</td>
<td align="left" valign="top">304</td>
<td align="left" valign="top"><bold>266</bold></td>
</tr>
<tr>
<td align="center" valign="top" rowspan="2"><bold>Number of matching incidents (news reports retrieved by the keyword, grouped by incident)</bold></td>
<td align="left" valign="top">Labour-related Incidents</td>
<td align="left" valign="top">11</td>
<td align="left" valign="top">11</td>
<td align="left" valign="top">21</td>
<td align="left" valign="top">6</td>
<td align="left" valign="top">4</td>
<td align="left" valign="top">8</td>
<td align="left" valign="top">22</td>
<td align="left" valign="top">86</td>
<td align="left" valign="top"><bold>57</bold></td>
</tr>
<tr>
<td align="left" valign="top">All Incidents</td>
<td align="left" valign="top">33</td>
<td align="left" valign="top">19</td>
<td align="left" valign="top">44</td>
<td align="left" valign="top">6</td>
<td align="left" valign="top">8</td>
<td align="left" valign="top">17</td>
<td align="left" valign="top">37</td>
<td align="left" valign="top">164</td>
<td align="left" valign="top"><bold>127</bold></td>
</tr>
<tr>
<td align="left" valign="top" rowspan="2"><bold>Total news reports associated with matching incidents</bold></td>
<td align="left" valign="top">From Labour-related Incidents</td>
<td align="left" valign="top">86</td>
<td align="left" valign="top">164</td>
<td align="left" valign="top">189</td>
<td align="left" valign="top">52</td>
<td align="left" valign="top">40</td>
<td align="left" valign="top">84</td>
<td align="left" valign="top">353</td>
<td align="left" valign="top">968</td>
<td align="left" valign="top"><bold>533</bold></td>
</tr>
<tr>
<td align="left" valign="top">From All Incidents</td>
<td align="left" valign="top">320</td>
<td align="left" valign="top">246</td>
<td align="left" valign="top">425</td>
<td align="left" valign="top">52</td>
<td align="left" valign="top">108</td>
<td align="left" valign="top">137</td>
<td align="left" valign="top">519</td>
<td align="left" valign="top">1,807</td>
<td align="left" valign="top"><bold>1,183</bold></td>
</tr>
</tbody>
</table>
</table-wrap>
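<p>The contrast between keyword retrieval and our manual classification can be read as a per-keyword precision figure. A minimal sketch, using only the incident counts reported in Table 1 (labour-related matching incidents versus all matching incidents per keyword):</p>

```python
# (labour-related incidents, all matching incidents) per keyword, from Table 1.
incident_counts = {
    "compensation": (11, 33),
    "firing": (11, 19),
    "hiring": (21, 44),
    "scheduling": (6, 6),
    "unemployment": (4, 8),
    "worker death": (8, 17),
    "workplace": (22, 37),
}

def precision(labour, total):
    """Share of a keyword's matching incidents that we classified as labour-related."""
    return labour / total

# Rank keywords from most to least precisely labour-focused
ranked = sorted(incident_counts,
                key=lambda kw: precision(*incident_counts[kw]), reverse=True)
```

<p>Recall is harder to estimate from these counts alone, since labour-related incidents missed by every keyword never enter the denominator.</p>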
<p>We iteratively classified AIID&#x2019;s labour-related AI incidents as shown in <xref ref-type="table" rid="T2">Table 2</xref>. This resulted in a total of 33 labels, which we grouped into four categories: technology, workplace task, risks, and policy domain.</p>
<table-wrap id="T2">
<label>Table 2.</label>
<caption><p>Our own categorization of AIID&#x2019;s labour-related AI incidents into 33 labels in 4 categories</p></caption>
<table>
<thead>
<tr>
<th align="center" valign="top"><bold>Technology</bold></th>
<th align="center" valign="top"><bold>Workplace Task</bold></th>
<th align="center" valign="top"><bold>Risks</bold></th>
<th align="center" valign="top"><bold>Policy Domain</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td align="center" valign="top"><list list-type="bullet">
<list-item><p>Robotics</p></list-item>
<list-item><p>Algorithmic design</p></list-item>
<list-item><p>Autonomous driving</p></list-item>
<list-item><p>Predictive analytics</p></list-item>
<list-item><p>Generative AI</p></list-item>
<list-item><p>Computer vision</p></list-item>
<list-item><p>Networks</p></list-item>
<list-item><p>Manual data classification</p></list-item></list></td>
<td align="left" valign="top"><list list-type="bullet">
<list-item><p>Warehouse operations</p></list-item>
<list-item><p>Recruitment, Personnel and hiring decisions</p></list-item>
<list-item><p>Assembly</p></list-item>
<list-item><p>Automating contracts and business rules</p></list-item>
<list-item><p>Security monitoring and surveillance</p></list-item>
<list-item><p>Termination decisions</p></list-item>
<list-item><p>Data classification</p></list-item>
<list-item><p>Job performance assessment</p></list-item>
<list-item><p>Journalism and reporting</p></list-item>
<list-item><p>Health and safety protocol</p></list-item>
<list-item><p>Creative content production</p></list-item>
<list-item><p>Content filtering</p></list-item>
<list-item><p>Jurisprudence</p></list-item>
<list-item><p>Chatbot</p></list-item>
<list-item><p>Conduct and behaviour</p></list-item></list></td>
<td align="left" valign="top"><list list-type="bullet">
<list-item><p>Occupational hazard: overwork or fatigue</p></list-item>
<list-item><p>Physical harm</p></list-item>
<list-item><p>No human override</p></list-item>
<list-item><p>Gender inequity</p></list-item>
<list-item><p>Racial inequity</p></list-item>
<list-item><p>Unreliable information</p></list-item>
<list-item><p>Psychological harm</p></list-item>
<list-item><p>Technological under-comprehension</p></list-item>
<list-item><p>Human attribution issues</p></list-item>
<list-item><p>Financial harm or reputational harm</p></list-item>
<list-item><p>Political harm</p></list-item></list></td>
<td align="left" valign="top"><list list-type="bullet">
<list-item><p>Labour regulations and workplace protections</p></list-item>
<list-item><p>Worker privacy, likeness, and labour ownership</p></list-item>
<list-item><p>Scaling automation versus labour cost benefit</p></list-item>
<list-item><p>Employment, contracting and termination</p></list-item>
<list-item><p>Unreasonable job expectations, assessment, and disciplining</p></list-item>
<list-item><p>Worker integrity and whistleblowing protections</p></list-item>
<list-item><p>Content moderation/Data labour</p></list-item></list></td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
</sec>
<sec id="sec6">
<title>Discussion</title>
<p>The AI incident trackers do not facilitate ready retrieval of labour-related incidents. Without ready visibility into the incidents related to labour, it is impossible to understand what problems are associated with AI-related risks and harms to labour.</p>
<p>The two AI incident trackers we examined used limited existing labour-related terminology. In OECD AIM, we found only one relevant term&#x2014;the stakeholder <italic>workers</italic>&#x2014;which is applied not only to AI end-users whose labour is replaced or augmented by AI systems, but also to AI producers (e.g., workers under pressure to engineer AI systems).</p>
<p>In AIID we identified three relevant term sets&#x2014;<italic>&#x2018;data input&#x2019;</italic>, <italic>&#x2018;AI task&#x2019;</italic> and <italic>&#x2018;end user amateur/expert&#x2019;</italic>&#x2014;from the CSET taxonomy manual for incident classification. However, <italic>&#x2018;data input&#x2019;</italic> did not consider the human labour and roles involved in producing input data (such as the training data that the AI system was trained on). <italic>&#x2018;AI task&#x2019;</italic> considers tasks such as <italic>&#x2018;resume reading&#x2019;</italic> or <italic>&#x2018;chatbot&#x2019;,</italic> which can be viewed as labour replacement (<xref rid="R17" ref-type="bibr">Hoffmann &#x0026; Frase, 2023</xref>). Staffing a support hotline for eating disorders with chatbots instead of a human call centre workforce, for example, violates the expectation that therapeutic applications require emotional labour (<xref rid="R40" ref-type="bibr">Posada, 2020</xref>); yet <italic>&#x2018;AI task&#x2019;</italic> elides emotional labour.</p>
<p>While the OECD AIM has a taxonomy for industry sectors, its reliance on news reports risks overrepresenting some sectors, such as entertainment. <italic>Workers</italic> stakeholder incidents tend to concentrate in the <italic>arts, entertainment, and recreation</italic> industry sector, covering workers&#x2019; copyright infringement, data protections and personal likeness. The challenges of generative AI in the workplace have also been raised by recent Hollywood strikes calling for controls over licensing creative content. More attention should be paid to understanding AI&#x2019;s effects on different sectors, especially given the likelihood that some sectors are overrepresented. It was also unclear how classification terms like <italic>workers</italic> and <italic>&#x2018;future threat&#x2019;</italic> should be interpreted when combined. Should policymakers rely on OECD AIM&#x2019;s classification technique when it excludes workers as stakeholders yet simultaneously designates incidents in land-related industrial sectors (such as real estate, energy and agriculture) as <italic>&#x2018;future threat&#x2019;</italic>? Representing incidents in this manner carries its own risk and high stakes: can policymakers properly gauge why heightened future risks in a sector should not portend harm for workers?</p>
<p>The heavy reliance on news coverage common to current AI trackers (<xref rid="R53" ref-type="bibr">Turri &#x0026; Dzombak, 2023</xref>) creates several challenges. News reports can frame different stories from the same evidence by choosing what to ignore, background or highlight. When Amazon&#x2019;s implementation of a predictive hiring algorithm resulted in gender discrimination (<xref rid="R12" ref-type="bibr">Dastin, 2018</xref>), some news reports attributed the harm to sexist input data (<xref rid="R23" ref-type="bibr">Kraus, 2018</xref>) or mischaracterised output from reinforcement learning algorithms as outperforming human judgment (<xref rid="R49" ref-type="bibr">Short, 2018</xref>) and free from <italic>&#x2018;user abuse&#x2019;</italic> (<xref rid="R14" ref-type="bibr">Doctorow, 2018</xref>; <xref rid="R36" ref-type="bibr">Papadopoulos, 2018</xref>). Current AI incident reporting approaches group news reports into incidents that span geography and time, which reduces the structured data available about where and when incidents took place. Consequently, in AIID, determining which incidents are labour-related requires a substantial amount of judgement. A single news report framing an event as a labour issue makes the case that the incident is labour-related; yet each incident collects multiple news reports, and labour was often in the background or used as a supplemental frame. Finding incidents for labour-related policy topics was different from finding mentions of keywords. Some keywords have high precision, retrieving only labour-related incidents, but low recall, giving a limited picture of the risks and harms. For instance, <italic>&#x2018;scheduling&#x2019;</italic> only matched incidents about controversial algorithmically driven worker shift management software. Keywords with stronger recall, like <italic>&#x2018;compensation&#x2019;</italic> and <italic>&#x2018;hiring&#x2019;</italic>, tended to be less precisely labour-focused, with matching incidents often referring to internal AI enterprise practices and consumer settlements.</p>
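This precision/recall tradeoff in keyword-based incident retrieval can be made concrete with a toy sketch. The incident snippets, labels and the <italic>keyword_metrics</italic> helper below are invented for illustration; they are not drawn from AIID or OECD AIM data.

```python
# Toy illustration of the precision/recall tradeoff in keyword-based
# incident retrieval. Snippets and labour-relatedness labels are invented.

incidents = [
    ("shift scheduling software cut warehouse workers' hours", True),
    ("hiring algorithm penalised resumes mentioning women's colleges", True),
    ("chatbot gave harmful advice on an eating-disorder hotline", True),
    ("consumer won compensation after a faulty smart-home device", False),
    ("company revised its internal AI hiring of vendors", False),
]

def keyword_metrics(keyword, incidents):
    """Precision and recall of one keyword against labelled incidents."""
    matched = [labour for text, labour in incidents if keyword in text]
    relevant = sum(labour for _, labour in incidents)
    if not matched:
        return 0.0, 0.0
    precision = sum(matched) / len(matched)   # matches that are labour-related
    recall = sum(matched) / relevant          # labour incidents retrieved
    return precision, recall

for kw in ("scheduling", "hiring", "compensation"):
    p, r = keyword_metrics(kw, incidents)
    print(f"{kw!r}: precision={p:.2f} recall={r:.2f}")
```

In this toy data, <italic>&#x2018;scheduling&#x2019;</italic> is perfectly precise but retrieves only one of three labour-related incidents, while <italic>&#x2018;hiring&#x2019;</italic> also matches an internal enterprise practice, mirroring the pattern described above.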
<p>Power relationships between the working class and management (and, by extension, capital) are salient in the labour-related incidents we flagged: Who introduces AI into the workplace? Who is responsible for AI when it backfires? Automated staffing decisions using AI-supported predictive analytics led to multiple problems (e.g., Southwest Airlines&#x2019; flight cancellations (<xref rid="R51" ref-type="bibr">Sider, 2022</xref>) and Tesla&#x2019;s factory delays (<xref rid="R15" ref-type="bibr">Duhigg, 2018</xref>)). The lack of human override and management&#x2019;s limited technical comprehension exacerbated these problems. Values need to be taken into consideration to determine which labour conflicts themselves qualify as harms.</p>
</sec>
<sec id="sec7">
<title>Future work</title>
<p>Future work should examine how the distribution of news reports used in AI incident trackers varies over time, across industries and in relation to AI principles, news media sources and audiences. The ratio of news reports to incidents varies, with some incidents (such as the Hollywood strikes) heavily reported regardless of the actual exposure to AI-related harms in a given industrial sector. Comparing how news reports frame the same AI incident (e.g., with framing analysis) would be valuable. Likewise, researchers should seek to understand how different groupings into incidents can contribute to policymakers&#x2019; problem definitions, perhaps by examining the variation in how news reports are grouped in different AI incident trackers.</p>
<p>Future research could systematically identify which query terms to use, including testing stemming for words such as hiring, firing, and scheduling, as well as considering additional terminology such as <italic>&#x2018;crowd labor&#x2019;</italic>. The use of terminology in AI governance could also be fruitfully examined, e.g., <italic>&#x2018;trustworthy AI&#x2019;, &#x2018;ethical AI&#x2019;, &#x2018;AI for good&#x2019;, &#x2018;beneficial AI&#x2019;,</italic> and <italic>&#x2018;responsible AI&#x2019;</italic> (<xref rid="R52" ref-type="bibr">Stix, 2022</xref>).</p>
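To indicate what testing stemmed query terms might involve, a naive suffix-stripping sketch follows. A real study would use an established stemmer (e.g., the Porter stemmer); the <italic>naive_stem</italic> function here is a deliberately simplified stand-in.

```python
# Naive suffix-stripping sketch of how stemming could widen query matching.
# Only a few common English suffixes are handled; a real evaluation would
# use an established stemmer such as Porter's.

def naive_stem(word):
    for suffix in ("ing", "ed", "s"):
        # only strip when a reasonably long stem remains
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

# 'hiring' and 'hired' collapse to the same stem, so one stemmed query
# could retrieve incidents that a literal keyword match would miss.
terms = ["hiring", "hired", "hires", "firing", "scheduling"]
stems = {t: naive_stem(t) for t in terms}
```

Because stemming collapses inflected variants, it trades some precision for recall, which is exactly the dimension along which labour-related query terms were found to differ.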
<p>Current definitions for harm or hazard (<xref rid="R39" ref-type="bibr">Placani, 2017</xref>; <xref rid="R44" ref-type="bibr">Rowe, 2021</xref>) need to be revisited when evaluating the relationship between AI and labour to enable incident risk prevention (<xref rid="R29" ref-type="bibr">Meyer, 2024</xref>). Incident reporting, which has roots in risk management and safety (<xref rid="R20" ref-type="bibr">Johnson, 2003</xref>), may not be sufficient for understanding AI risks to workers. Alternative conceptual frameworks for risk management from epidemiology, audit culture and social work may be more suitable for centring workers.</p>
<p>Future work should investigate what harm, incident, potential harm, or hazard mean in the context of labour. For instance, are <italic>&#x2018;disruptions&#x2019;</italic> simply to be weathered as part of the normal functioning of the economy? Tradeoffs must be weighed from multiple perspectives (e.g., worker, manager, capital) in the political economy of labour.</p>
<p>Sociotechnical perspectives will be needed. Above all, it is important to consider <italic>&#x2018;how algorithms may reshape organizational control&#x2019;</italic>, as <xref rid="R21" ref-type="bibr">Kellogg et al. (2020)</xref> review. Sociotechnical analysis of fairness in machine learning (<xref rid="R47" ref-type="bibr">Selbst et al., 2019</xref>) can inspire new approaches for attending to the power dynamics of AI and labour, drawing on fields such as organizational behaviour and management science, human resources management, labour law and the sociology of labour.</p>
<p>Future classifications of AI and labour should document when a system extracts labour value unjustly. Key examples would be when a <italic>&#x2018;worker&#x2019;</italic> can claim direct credit or likeness to the source data (such as Scarlett Johansson&#x2019;s voice appropriated by OpenAI (<xref rid="R38" ref-type="bibr">Pisani &#x0026; Albert, 2024</xref>)) or when input data is created with the express purpose of serving an AI system (<xref rid="R45" ref-type="bibr">Satariano &#x0026; Mozur, 2023</xref>) (see our content moderation/data labour category in Table 3). Taxonomies for organizing how AI interacts with prior types of &#x2018;people problems&#x2019; inside the workplace (<xref rid="R30" ref-type="bibr">Moore, 2019</xref>) may be helpful since both AI and AI incident reporting ultimately depend on how people are organised to share information within a firm. Standardised terminologies and definitions for workplace safety can be informed by International Labour Organization worker protections requiring information disclosure in the event of mass dismissal and redundancy (<xref rid="R13" ref-type="bibr">De Stefano, 2019</xref>). Incidents are made visible through the power structures upheld by a given taxonomy: the relationship between naming and power deserves specific consideration.</p>
</sec>
<sec id="sec8">
<title>Conclusions</title>
<p>The two incident trackers we investigated do not adequately capture the nuanced impacts of AI on labour. Particular attention needs to be paid to power imbalances that may increase risks for harm in the workplace. Better definitions are needed to capture labour-related AI incidents, to help policymakers gather evidence to anticipate and mitigate the risks and harms threatening workers across diverse industries. AI incident trackers&#x2019; reliance on news reports and limited vocabulary for identifying worker-related harms and risks leads to gaps in understanding AI for policy formulation and problem definition.</p>
<p>Are the labour-related risks of AI intelligible in the incident trackers with a level of detail and reliability that could support comprehensive policy and problem definition for workers&#x2019; protection from the harms of AI? Our answer is no.</p>
</sec>
</body>
<back>
<ack>
<title>Acknowledgements</title>
<p>This research is funded by the United States Institute of Museum and Library Services RE-250162-OLS-21. Thanks to Zachary Kilhoffer for discussions of AI trackers and to Corinne McCumber, Heng Zheng, Yuanxi Fu and Zhixuan &#x2018;Kyrie&#x2019; Zhou for providing feedback on a draft. Jodi Schneider was supported in part as the 2024&#x2013;2025 Perrin Moorhead Grayson and Bruns Grayson Fellow, Harvard Radcliffe Institute for Advanced Study. CRediT roles: TL: conceptualization, data curation, formal analysis, methodology, visualization, writing &#x2013; original draft, writing &#x2013; review and editing; Jodi Schneider: conceptualization, funding acquisition, supervision, writing &#x2013; review and editing.</p>
</ack>
<ref-list>
<title>References</title>
<ref id="R1"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Abercrombie</surname><given-names>G.</given-names></name><name><surname>Benbouzid</surname><given-names>D.</given-names></name><name><surname>Giudici</surname><given-names>P.</given-names></name><name><surname>Golpayegani</surname><given-names>D.</given-names></name><name><surname>Hernandez</surname><given-names>J.</given-names></name><name><surname>Noro</surname><given-names>P.</given-names></name><name><surname>Pandit</surname><given-names>H.</given-names></name><name><surname>Paraschou</surname><given-names>E.</given-names></name><name><surname>Pownall</surname><given-names>C.</given-names></name><name><surname>Prajapati</surname><given-names>J.</given-names></name><name><surname>Sayre</surname><given-names>M. A.</given-names></name><name><surname>Sengupta</surname><given-names>U.</given-names></name><name><surname>Suriyawongkul</surname><given-names>A.</given-names></name><name><surname>Thelot</surname><given-names>R.</given-names></name><name><surname>Vei</surname><given-names>S.</given-names></name><name><surname>Waltersdorfer</surname><given-names>L.</given-names></name></person-group><year>2024</year><source>A collaborative, human-centred taxonomy of AI, algorithmic, and automation harms</source><comment>[Preprint]. arXiv</comment><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.48550/arXiv.2407.01294">https://doi.org/10.48550/arXiv.2407.01294</ext-link></element-citation></ref>
<ref id="R2"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Acemoglu</surname><given-names>D.</given-names></name><name><surname>Restrepo</surname><given-names>P.</given-names></name></person-group><year>2019</year><article-title>Automation and new tasks: How technology displaces and reinstates labor</article-title><source>Journal of Economic Perspectives</source><volume>33</volume><issue>2</issue><fpage>3</fpage><lpage>30</lpage><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1257/jep.33.2.3">https://doi.org/10.1257/jep.33.2.3</ext-link></element-citation></ref>
<ref id="R3"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Arda</surname><given-names>S.</given-names></name></person-group><year>2024</year><source>Taxonomy to regulation: A (geo)political taxonomy for AI risks and regulatory measures in the EU AI act</source><comment>[Preprint]. arXiv</comment><ext-link ext-link-type="uri" xlink:href="http://arxiv.org/abs/2404.11476">http://arxiv.org/abs/2404.11476</ext-link></element-citation></ref>
<ref id="R4"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Arntz</surname><given-names>M.</given-names></name><name><surname>Gregory</surname><given-names>T.</given-names></name><name><surname>Zierahn</surname><given-names>U.</given-names></name></person-group><year>2016</year><article-title>The risk of automation for jobs in OECD countries: A comparative analysis (OECD Social, Employment and Migration Working Papers, 189)</article-title><comment>OECD</comment><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1787/5jlz9h56dvq7-en">https://doi.org/10.1787/5jlz9h56dvq7-en</ext-link></element-citation></ref>
<ref id="R5"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Autor</surname><given-names>D. H.</given-names></name></person-group><year>2015</year><article-title>Why are there still so many jobs? The history and future of workplace automation</article-title><source>Journal of Economic Perspectives</source><volume>29</volume><issue>3</issue><fpage>3</fpage><lpage>30</lpage><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1257/jep.29.3.3">https://doi.org/10.1257/jep.29.3.3</ext-link></element-citation></ref>
<ref id="R6"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Autor</surname><given-names>D.H.</given-names></name></person-group><year>2022</year><article-title>The labor market impacts of technological change: From unbridled enthusiasm to qualified optimism to vast uncertainty (Working Paper w30074)</article-title><source>National Bureau of Economic Research</source><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3386/w30074">https://doi.org/10.3386/w30074</ext-link></element-citation></ref>
<ref id="R7"><element-citation publication-type="book"><person-group person-group-type="author"><name><surname>Baumgartner</surname><given-names>F. R.</given-names></name><name><surname>Jones</surname><given-names>B. D.</given-names></name></person-group><year>2015</year><source>The politics of information: Problem definition and the course of public policy in America</source><publisher-name>University of Chicago Press</publisher-name></element-citation></ref>
<ref id="R8"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Booth</surname><given-names>H.</given-names></name><name><surname>Rike</surname><given-names>D.</given-names></name><name><surname>Witte</surname><given-names>G. A.</given-names></name></person-group><year>2013</year><comment>December</comment><article-title>The National Vulnerability Database (NVD): Overview (Information Technology Laboratory Bulletin Series)</article-title><source>National Institute of Standards</source><ext-link ext-link-type="uri" xlink:href="https://www.nist.gov/publications/national-vulnerability-database-nvd-overview">https://www.nist.gov/publications/national-vulnerability-database-nvd-overview</ext-link></element-citation></ref>
<ref id="R9"><element-citation publication-type="book"><person-group person-group-type="author"><name><surname>Cameron</surname><given-names>J.</given-names></name></person-group><comment>(Director)</comment><year>1984</year><comment>October 26</comment><chapter-title>The Terminator [Film]</chapter-title><source>Cinema &#x2019;84; Euro Film Funding</source><publisher-loc>Hemdale</publisher-loc></element-citation></ref>
<ref id="R10"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Cattell</surname><given-names>S.</given-names></name><name><surname>Ghosh</surname><given-names>A.</given-names></name><name><surname>Kaffee</surname><given-names>L.-A.</given-names></name></person-group><year>2024</year><article-title>Coordinated flaw disclosure for AI: Beyond security vulnerabilities [Preprint]</article-title><source>arXiv</source><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.48550/arXiv.2402.07039">https://doi.org/10.48550/arXiv.2402.07039</ext-link></element-citation></ref>
<ref id="R11"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Critch</surname><given-names>A.</given-names></name><name><surname>Russell</surname><given-names>S.</given-names></name></person-group><year>2023</year><article-title>TASRA: A Taxonomy and Analysis of Societal-Scale Risks from AI [Preprint]</article-title><source>arXiv</source><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.48550/arXiv.2306.06924">https://doi.org/10.48550/arXiv.2306.06924</ext-link></element-citation></ref>
<ref id="R12"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Dastin</surname><given-names>J.</given-names></name></person-group><year>2018</year><comment>October 10</comment><article-title>Insight&#x2014;Amazon scraps secret AI recruiting tool that showed bias against women</article-title><source>Reuters</source><ext-link ext-link-type="uri" xlink:href="https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G/">https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G/</ext-link></element-citation></ref>
<ref id="R13"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>De Stefano</surname><given-names>V.</given-names></name></person-group><year>2019</year><article-title>&#x2018;Negotiating the algorithm&#x2019;: Automation, artificial intelligence, and labor protection</article-title><source>Comparative Labor Law &#x0026; Policy Journal</source><volume>41</volume><issue>1</issue><fpage>15</fpage><lpage>46</lpage></element-citation></ref>
<ref id="R14"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Doctorow</surname><given-names>C.</given-names></name></person-group><year>2018</year><comment>October 11</comment><article-title>Amazon trained a sexism-fighting, resume-screening AI with sexist hiring data, so the bot became sexist</article-title><comment>Boing Boing</comment><ext-link ext-link-type="uri" xlink:href="https://boingboing.net/2018/10/11/garbage-conclusions-out.html">https://boingboing.net/2018/10/11/garbage-conclusions-out.html</ext-link></element-citation></ref>
<ref id="R15"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Duhigg</surname><given-names>C.</given-names></name></person-group><year>2018</year><comment>December 13</comment><article-title>Dr. Elon &#x0026; Mr. Musk: Life inside Tesla&#x2019;s production hell</article-title><source>Wired</source><ext-link ext-link-type="uri" xlink:href="https://www.wired.com/story/elon-musk-tesla-life-inside-gigafactory/">https://www.wired.com/story/elon-musk-tesla-life-inside-gigafactory/</ext-link></element-citation></ref>
<ref id="R16"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Head</surname><given-names>B. W.</given-names></name></person-group><year>2010</year><article-title>Reconsidering evidence-based policy: Key issues and challenges</article-title><source>Policy and Society</source><volume>29</volume><issue>2</issue><fpage>77</fpage><lpage>94</lpage><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1016/j.polsoc.2010.03.001">https://doi.org/10.1016/j.polsoc.2010.03.001</ext-link></element-citation></ref>
<ref id="R17"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Hoffmann</surname><given-names>M.</given-names></name><name><surname>Frase</surname><given-names>H.</given-names></name></person-group><year>2023</year><article-title>Adding structure to AI harm</article-title><source>Center for Security and Emerging Technology</source><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.51593/20230022">https://doi.org/10.51593/20230022</ext-link></element-citation></ref>
<ref id="R18"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Hoffmann</surname><given-names>M.</given-names></name><name><surname>Narayanan</surname><given-names>M.</given-names></name><name><surname>Mitra</surname><given-names>A.</given-names></name><name><surname>Liao</surname><given-names>Y.-J.</given-names></name><name><surname>Frase</surname><given-names>H.</given-names></name></person-group><year>2023</year><article-title>CSET AI Harm Taxonomy for AIID and Annotation Guide</article-title><ext-link ext-link-type="uri" xlink:href="https://github.com/georgetown-cset/CSET-AIID-harm-taxonomy">https://github.com/georgetown-cset/CSET-AIID-harm-taxonomy</ext-link></element-citation></ref>
<ref id="R19"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Hutiri</surname><given-names>W.</given-names></name><name><surname>Papakyriakopoulos</surname><given-names>O.</given-names></name><name><surname>Xiang</surname><given-names>A.</given-names></name></person-group><year>2024</year><article-title>Not my voice! A taxonomy of ethical and safety harms of speech generators</article-title><source>Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency</source><fpage>359</fpage><lpage>376</lpage><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1145/3630106.3658911">https://doi.org/10.1145/3630106.3658911</ext-link></element-citation></ref>
<ref id="R20"><element-citation publication-type="book"><person-group person-group-type="author"><name><surname>Johnson</surname><given-names>C.</given-names></name></person-group><year>2003</year><source>Failure in safety critical systems: A handbook of accident and incident reporting</source><publisher-name>University of Glasgow Press</publisher-name></element-citation></ref>
<ref id="R21"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Kellogg</surname><given-names>K. C.</given-names></name><name><surname>Valentine</surname><given-names>M. A.</given-names></name><name><surname>Christin</surname><given-names>A.</given-names></name></person-group><year>2020</year><article-title>Algorithms at work: The new contested terrain of control</article-title><source>Academy of Management Annals</source><volume>14</volume><issue>1</issue><fpage>366</fpage><lpage>410</lpage><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.5465/annals.2018.0174">https://doi.org/10.5465/annals.2018.0174</ext-link></element-citation></ref>
<ref id="R22"><element-citation publication-type="book"><person-group person-group-type="editor"><name><surname>Kohn</surname><given-names>L. T.</given-names></name><name><surname>Corrigan</surname><given-names>J. M.</given-names></name><name><surname>Donaldson</surname><given-names>M. S.</given-names></name></person-group><year>2000</year><chapter-title>Creating safety systems in health care organizations</chapter-title><source>To err is human: Building a safer health system</source><fpage>155</fpage><lpage>204</lpage><publisher-name>National Academies Press</publisher-name><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.17226/9728">https://doi.org/10.17226/9728</ext-link></element-citation></ref>
<ref id="R23"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Kraus</surname><given-names>R.</given-names></name></person-group><year>2018</year><comment>October 10</comment><article-title>Amazon&#x2019;s sexist recruiting algorithm reflects a larger gender bias</article-title><source>Mashable</source><ext-link ext-link-type="uri" xlink:href="https://mashable.com/article/amazon-sexist-recruiting-algorithm-gender-bias-ai">https://mashable.com/article/amazon-sexist-recruiting-algorithm-gender-bias-ai</ext-link></element-citation></ref>
<ref id="R24"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Laker</surname><given-names>B.</given-names></name></person-group><year>2023</year><comment>October 30</comment><article-title>The future of work: Navigating the complex landscape of flexibility</article-title><source>Forbes</source><ext-link ext-link-type="uri" xlink:href="https://www.forbes.com/sites/benjaminlaker/2023/10/30/the-future-of-work-navigating-the-complex-landscape-of-flexibility/">https://www.forbes.com/sites/benjaminlaker/2023/10/30/the-future-of-work-navigating-the-complex-landscape-of-flexibility/</ext-link></element-citation></ref>
<ref id="R25"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Lane</surname><given-names>M.</given-names></name><name><surname>Saint-Martin</surname><given-names>A.</given-names></name></person-group><year>2021</year><article-title>The impact of Artificial Intelligence on the labour market: What do we know so far?</article-title><comment>(OECD Social, Employment and Migration Working Papers 256)</comment><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1787/7c895724-en">https://doi.org/10.1787/7c895724-en</ext-link></element-citation></ref>
<ref id="R26"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>MacKillop</surname><given-names>E.</given-names></name><name><surname>Downe</surname><given-names>J.</given-names></name></person-group><year>2023</year><article-title>What counts as evidence for policy? An analysis of policy actors&#x2019; perceptions</article-title><source>Public Administration Review</source><volume>83</volume><issue>5</issue><fpage>1037</fpage><lpage>1050</lpage><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1111/puar.13567">https://doi.org/10.1111/puar.13567</ext-link></element-citation></ref>
<ref id="R27"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Macrae</surname><given-names>C.</given-names></name></person-group><year>2016</year><article-title>The problem with incident reporting</article-title><source>BMJ Quality &#x0026; Safety</source><volume>25</volume><issue>2</issue><fpage>71</fpage><lpage>75</lpage><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1136/bmjqs-2015-004732">https://doi.org/10.1136/bmjqs-2015-004732</ext-link></element-citation></ref>
<ref id="R28"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>McGregor</surname><given-names>S.</given-names></name></person-group><year>2021</year><article-title>Preventing repeated real world AI failures by cataloguing incidents: The AI incident database</article-title><source>Proceedings of the AAAI Conference on Artificial Intelligence</source><volume>35</volume><issue>17</issue><fpage>15458</fpage><lpage>15463</lpage><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1609/aaai.v35i17.17817">https://doi.org/10.1609/aaai.v35i17.17817</ext-link></element-citation></ref>
<ref id="R29"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Meyer</surname><given-names>C. O.</given-names></name></person-group><year>2024</year><article-title>Can one &#x2018;prove&#x2019; that a harmful event was preventable? Conceptualizing and addressing epistemological puzzles in post incident reviews and investigations</article-title><source>Risk, Hazards &#x0026; Crisis in Public Policy</source><volume>15</volume><issue>3</issue><fpage>374</fpage><lpage>392</lpage><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1002/rhc3.12281">https://doi.org/10.1002/rhc3.12281</ext-link></element-citation></ref>
<ref id="R30"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Moore</surname><given-names>P. V.</given-names></name></person-group><year>2019</year><article-title>The mirror for (artificial) intelligence: In whose reflection? Automation, artificial intelligence, &#x0026; labor law</article-title><source>Comparative Labor Law &#x0026; Policy Journal</source><volume>41</volume><issue>1</issue><fpage>47</fpage><lpage>68</lpage></element-citation></ref>
<ref id="R31"><element-citation publication-type="other"><person-group person-group-type="author"><collab>NASA Aviation Safety Reporting System</collab></person-group><comment>n.d.</comment><article-title>ASRS: the case for confidential incident reporting systems (ASRS Research Papers 60)</article-title><source>NASA Aviation Safety Reporting System</source><ext-link ext-link-type="uri" xlink:href="https://asrs.arc.nasa.gov/docs/rs/60_Case_for_Confidential_Incident_Reporting.pdf">https://asrs.arc.nasa.gov/docs/rs/60_Case_for_Confidential_Incident_Reporting.pdf</ext-link></element-citation></ref>
<ref id="R32"><element-citation publication-type="other"><person-group person-group-type="author"><collab>OECD</collab></person-group><year>2022</year><comment>a</comment><article-title>OECD framework for the classification of AI systems (OECD Digital Economy Papers 323)</article-title><comment>OECD</comment><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1787/cb6d9eca-en">https://doi.org/10.1787/cb6d9eca-en</ext-link></element-citation></ref>
<ref id="R33"><element-citation publication-type="other"><person-group person-group-type="author"><collab>OECD</collab></person-group><year>2022</year><comment>b</comment><article-title>Harnessing the power of AI and emerging technologies: Background paper for the CDEP Ministerial meeting (OECD Digital Economy Papers 340)</article-title><comment>OECD</comment><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1787/f94df8ec-en">https://doi.org/10.1787/f94df8ec-en</ext-link></element-citation></ref>
<ref id="R34"><element-citation publication-type="other"><person-group person-group-type="author"><collab>OECD</collab></person-group><year>2023</year><article-title>Stocktaking for the development of an AI incident definition (OECD Artificial Intelligence Papers 4)</article-title><comment>OECD</comment><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1787/c323ac71-en">https://doi.org/10.1787/c323ac71-en</ext-link></element-citation></ref>
<ref id="R35"><element-citation publication-type="book"><person-group person-group-type="author"><collab>OECD</collab></person-group><year>2024</year><source>Defining AI incidents and related terms (OECD Artificial Intelligence Papers 16)</source><publisher-name>OECD</publisher-name><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1787/d1a8d965-en">https://doi.org/10.1787/d1a8d965-en</ext-link></element-citation></ref>
<ref id="R36"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Papadopoulos</surname><given-names>L.</given-names></name></person-group><year>2018</year><comment>October 12</comment><article-title>Amazon shuts down secret AI recruiting tool that taught itself to be sexist</article-title><source>Interesting Engineering</source><ext-link ext-link-type="uri" xlink:href="https://interestingengineering.com/innovation/amazon-shuts-down-secret-ai-recruiting-tool-that-taught-itself-to-be-sexist">https://interestingengineering.com/innovation/amazon-shuts-down-secret-ai-recruiting-tool-that-taught-itself-to-be-sexist</ext-link></element-citation></ref>
<ref id="R37"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Pawson</surname><given-names>R.</given-names></name><name><surname>Wong</surname><given-names>G.</given-names></name><name><surname>Owen</surname><given-names>L.</given-names></name></person-group><year>2011</year><article-title>Known knowns, known unknowns, unknown unknowns: The predicament of evidence-based policy</article-title><source>American Journal of Evaluation</source><volume>32</volume><issue>4</issue><fpage>518</fpage><lpage>546</lpage><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1177/1098214011403831">https://doi.org/10.1177/1098214011403831</ext-link></element-citation></ref>
<ref id="R38"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Pisani</surname><given-names>J.</given-names></name><name><surname>Albert</surname><given-names>V.</given-names></name></person-group><year>2024</year><comment>May 20</comment><article-title>Scarlett Johansson rebukes OpenAI over &#x2018;Eerily Similar&#x2019; ChatGPT voice</article-title><source>Wall Street Journal</source><ext-link ext-link-type="uri" xlink:href="https://www.wsj.com/tech/ai/openai-chatgpt-sky-voice-scarlett-johansson-43d13bbf">https://www.wsj.com/tech/ai/openai-chatgpt-sky-voice-scarlett-johansson-43d13bbf</ext-link></element-citation></ref>
<ref id="R39"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Placani</surname><given-names>A.</given-names></name></person-group><year>2017</year><article-title>When the risk of harm harms</article-title><source>Law and Philosophy</source><volume>36</volume><fpage>77</fpage><lpage>100</lpage><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1007/s10982-016-9277-x">https://doi.org/10.1007/s10982-016-9277-x</ext-link></element-citation></ref>
<ref id="R40"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Posada</surname><given-names>J.</given-names></name></person-group><year>2020</year><article-title>The future of work is here: Toward a comprehensive approach to Artificial Intelligence and labour [Preprint]</article-title><source>arXiv</source><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.48550/arXiv.2007.05843">https://doi.org/10.48550/arXiv.2007.05843</ext-link></element-citation></ref>
<ref id="R41"><element-citation publication-type="other"><person-group person-group-type="author"><collab>Responsible AI Collaborative</collab></person-group><year>2022</year><article-title>Founding Report</article-title><source>Responsible AI Collaborative</source><ext-link ext-link-type="uri" xlink:href="https://asset.cloudinary.com/pai/cf01cce1af65f5fbb3d71fa092d001db">https://asset.cloudinary.com/pai/cf01cce1af65f5fbb3d71fa092d001db</ext-link></element-citation></ref>
<ref id="R42"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Reynard</surname><given-names>W. D.</given-names></name></person-group><year>1986</year><article-title>The development of the NASA aviation safety reporting system (NASA reference publication 114)</article-title><source>National Aeronautics and Space Administration, Scientific and Technical Information Branch</source><comment>Available as ASRS Research Paper 34</comment><ext-link ext-link-type="uri" xlink:href="https://asrs.arc.nasa.gov/docs/rs/34_Development_of_NASA_ASRS.pdf">https://asrs.arc.nasa.gov/docs/rs/34_Development_of_NASA_ASRS.pdf</ext-link></element-citation></ref>
<ref id="R43"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Rodrigues</surname><given-names>R.</given-names></name><name><surname>Resseguier</surname><given-names>A.</given-names></name><name><surname>Santiago</surname><given-names>N.</given-names></name></person-group><year>2023</year><article-title>When Artificial Intelligence fails: The emerging role of incident databases</article-title><source>Public Governance, Administration and Finances Law Review</source><volume>8</volume><issue>2</issue><fpage>17</fpage><lpage>28</lpage><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.53116/pgaflr.7030">https://doi.org/10.53116/pgaflr.7030</ext-link></element-citation></ref>
<ref id="R44"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Rowe</surname><given-names>T.</given-names></name></person-group><year>2021</year><article-title>Can a risk of harm itself be a harm?</article-title><source>Analysis</source><volume>81</volume><issue>4</issue><fpage>694</fpage><lpage>701</lpage><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1093/analys/anab033">https://doi.org/10.1093/analys/anab033</ext-link></element-citation></ref>
<ref id="R45"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Satariano</surname><given-names>A.</given-names></name><name><surname>Mozur</surname><given-names>P.</given-names></name></person-group><year>2023</year><comment>February 7</comment><article-title>The people onscreen are fake. The disinformation is real.</article-title><source>The New York Times</source><ext-link ext-link-type="uri" xlink:href="https://www.nytimes.com/2023/02/07/technology/artificial-intelligence-training-deepfake.html">https://www.nytimes.com/2023/02/07/technology/artificial-intelligence-training-deepfake.html</ext-link></element-citation></ref>
<ref id="R46"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Schiff</surname><given-names>D. S.</given-names></name></person-group><year>2024</year><article-title>Framing contestation and public influence on policymakers: Evidence from US artificial intelligence policy discourse</article-title><source>Policy and Society</source><volume>43</volume><issue>3</issue><fpage>255</fpage><lpage>288</lpage><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1093/polsoc/puae007">https://doi.org/10.1093/polsoc/puae007</ext-link></element-citation></ref>
<ref id="R47"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Selbst</surname><given-names>A. D.</given-names></name><name><surname>Boyd</surname><given-names>D.</given-names></name><name><surname>Friedler</surname><given-names>S. A.</given-names></name><name><surname>Venkatasubramanian</surname><given-names>S.</given-names></name><name><surname>Vertesi</surname><given-names>J.</given-names></name></person-group><year>2019</year><article-title>Fairness and abstraction in sociotechnical systems</article-title><source>Proceedings of the Conference on Fairness, Accountability, and Transparency</source><fpage>59</fpage><lpage>68</lpage><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1145/3287560.3287598">https://doi.org/10.1145/3287560.3287598</ext-link></element-citation></ref>
<ref id="R48"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Shelby</surname><given-names>R.</given-names></name><name><surname>Rismani</surname><given-names>S.</given-names></name><name><surname>Henne</surname><given-names>K.</given-names></name><name><surname>Moon</surname><given-names>AJ.</given-names></name><name><surname>Rostamzadeh</surname><given-names>N.</given-names></name><name><surname>Nicholas</surname><given-names>P.</given-names></name><name><surname>Yilla-Akbari</surname><given-names>N.</given-names></name><name><surname>Gallegos</surname><given-names>J.</given-names></name><name><surname>Smart</surname><given-names>A.</given-names></name><name><surname>Garcia</surname><given-names>E.</given-names></name><name><surname>Virk</surname><given-names>G.</given-names></name></person-group><year>2023</year><article-title>Sociotechnical harms of algorithmic systems: Scoping a taxonomy for harm reduction</article-title><source>Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society</source><fpage>723</fpage><lpage>741</lpage><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1145/3600211.3604673">https://doi.org/10.1145/3600211.3604673</ext-link></element-citation></ref>
<ref id="R49"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Short</surname><given-names>E.</given-names></name></person-group><year>2018</year><comment>October 11</comment><article-title>It turns out Amazon&#x2019;s AI hiring tool discriminated against women</article-title><source>Silicon Republic</source><ext-link ext-link-type="uri" xlink:href="https://www.siliconrepublic.com/careers/amazon-ai-hiring-tool-women-discrimination">https://www.siliconrepublic.com/careers/amazon-ai-hiring-tool-women-discrimination</ext-link></element-citation></ref>
<ref id="R50"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Shrishak</surname><given-names>K.</given-names></name></person-group><year>2023</year><article-title>How to deal with an AI near-miss: Look to the skies</article-title><source>Bulletin of the Atomic Scientists</source><volume>79</volume><issue>3</issue><fpage>166</fpage><lpage>169</lpage><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1080/00963402.2023.2199580">https://doi.org/10.1080/00963402.2023.2199580</ext-link></element-citation></ref>
<ref id="R51"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Sider</surname><given-names>A.</given-names></name></person-group><year>2022</year><comment>December 28</comment><article-title>How Southwest Airlines melted down</article-title><source>Wall Street Journal</source><ext-link ext-link-type="uri" xlink:href="https://www.wsj.com/articles/southwest-airlines-melting-down-flights-cancelled-11672257523">https://www.wsj.com/articles/southwest-airlines-melting-down-flights-cancelled-11672257523</ext-link></element-citation></ref>
<ref id="R52"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Stix</surname><given-names>C.</given-names></name></person-group><year>2022</year><article-title>Artificial intelligence by any other name: A brief history of the conceptualization of &#x201C;trustworthy artificial intelligence.&#x201D;</article-title><source>Discover Artificial Intelligence</source><volume>2</volume><fpage>26</fpage><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1007/s44163-022-00041-5">https://doi.org/10.1007/s44163-022-00041-5</ext-link></element-citation></ref>
<ref id="R53"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Turri</surname><given-names>V.</given-names></name><name><surname>Dzombak</surname><given-names>R.</given-names></name></person-group><year>2023</year><article-title>Why we need to know more: Exploring the state of AI incident documentation practices</article-title><source>Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society</source><fpage>576</fpage><lpage>583</lpage><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1145/3600211.3604700">https://doi.org/10.1145/3600211.3604700</ext-link></element-citation></ref>
<ref id="R54"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>van der Kleij</surname><given-names>R.</given-names></name><name><surname>Schraagen</surname><given-names>J. M.</given-names></name><name><surname>Cadet</surname><given-names>B.</given-names></name><name><surname>Young</surname><given-names>H.</given-names></name></person-group><year>2022</year><article-title>Developing decision support for cybersecurity threat and incident managers</article-title><source>Computers &#x0026; Security</source><volume>113</volume><fpage>102535</fpage><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1016/j.cose.2021.102535">https://doi.org/10.1016/j.cose.2021.102535</ext-link></element-citation></ref>
<ref id="R55"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Weidinger</surname><given-names>L.</given-names></name><name><surname>Uesato</surname><given-names>J.</given-names></name><name><surname>Rauh</surname><given-names>M.</given-names></name><name><surname>Griffin</surname><given-names>C.</given-names></name><name><surname>Huang</surname><given-names>P.-S.</given-names></name><name><surname>Mellor</surname><given-names>J.</given-names></name><name><surname>Glaese</surname><given-names>A.</given-names></name><name><surname>Cheng</surname><given-names>M.</given-names></name><name><surname>Balle</surname><given-names>B.</given-names></name><name><surname>Kasirzadeh</surname><given-names>A.</given-names></name><name><surname>Biles</surname><given-names>C.</given-names></name><name><surname>Brown</surname><given-names>S.</given-names></name><name><surname>Kenton</surname><given-names>Z.</given-names></name><name><surname>Hawkins</surname><given-names>W.</given-names></name><name><surname>Stepleton</surname><given-names>T.</given-names></name><name><surname>Birhane</surname><given-names>A.</given-names></name><name><surname>Hendricks</surname><given-names>L. A.</given-names></name><name><surname>Rimell</surname><given-names>L.</given-names></name><name><surname>Isaac</surname><given-names>W.</given-names></name><name><surname>Gabriel</surname><given-names>I.</given-names></name></person-group><year>2022</year><article-title>Taxonomy of risks posed by language models</article-title><source>Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency</source><fpage>214</fpage><lpage>229</lpage><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1145/3531146.3533088">https://doi.org/10.1145/3531146.3533088</ext-link></element-citation></ref>
</ref-list>
<app-group>
<app id="app1">
<title>Appendix</title>
<sec id="app_sec1_1">
<title>Data availability</title>
<p>Data underlying this research are available at <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.13012/B2IDB-1156758_V1">https://doi.org/10.13012/B2IDB-1156758_V1</ext-link></p>
</sec>
</app>
</app-group>
</back>
</article>