<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.0 20120330//EN" "http://jats.nlm.nih.gov/publishing/1.0/JATS-journalpublishing1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:mml="http://www.w3.org/1998/Math/MathML" article-type="research-article" xml:lang="en">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">IR</journal-id>
<journal-title-group>
<journal-title>Information Research</journal-title>
</journal-title-group>
<issn pub-type="epub">1368-1613</issn>
<publisher>
<publisher-name>University of Bor&#x00E5;s</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">ir30iConf47143</article-id>
<article-id pub-id-type="doi">10.47989/ir30iConf47143</article-id>
<article-categories>
<subj-group xml:lang="en">
<subject>Research article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Deskilling and upskilling with AI systems</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author"><name><surname>Crowston</surname><given-names>Kevin</given-names></name>
<xref ref-type="aff" rid="aff0001"/></contrib>
<contrib contrib-type="author"><name><surname>Bolici</surname><given-names>Francesco</given-names></name>
<xref ref-type="aff" rid="aff0002"/></contrib>
<aff id="aff0001"><bold>Kevin Crowston</bold> is a Professor of Information Science in the School of Information Studies, Syracuse University, Syracuse, New York, USA. He received his Ph.D. from the Massachusetts Institute of Technology and his research interests are in the futures of work using advanced technologies. He can be contacted at <email xlink:href="crowston@syr.edu">crowston@syr.edu</email></aff>
<aff id="aff0002"><bold>Francesco Bolici</bold> is a Professor of Organization Studies and Information Systems and Director of OrgLab at the Department of Economics and Law, Universit&#x00E0; degli studi di Cassino, Italy. Holding a Ph.D. from Luiss Guido Carli University, Rome, his research examines how information technology enables new possibilities for reimagining work design and organizational practices. He can be contacted at <email xlink:href="f.bolici@unicas.it">f.bolici@unicas.it</email></aff>
</contrib-group>
<pub-date pub-type="epub"><day>06</day><month>05</month><year>2025</year></pub-date>
<pub-date pub-type="collection"><year>2025</year></pub-date>
<volume>30</volume>
<issue>i</issue>
<fpage>1009</fpage>
<lpage>1023</lpage>
<permissions>
<copyright-year>2025</copyright-year>
<copyright-holder>&#x00A9; 2025 The Author(s).</copyright-holder>
<license license-type="open-access" xlink:href="https://creativecommons.org/licenses/by-nc/4.0/">
<license-p>This is an Open Access article distributed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International License (<ext-link ext-link-type="uri" xlink:href="http://creativecommons.org/licenses/by-nc/4.0/">http://creativecommons.org/licenses/by-nc/4.0/</ext-link>), permitting all non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.</license-p>
</license>
</permissions>
<abstract xml:lang="en">
<title>Abstract</title>
<p><bold>Introduction.</bold> Deskilling is a long-standing prediction of the use of information technology, raised anew by the increased capabilities of artificial intelligence (AI) systems. A review of studies of AI applications suggests that deskilling (or levelling of ability) is a common outcome, but systems can also require new skills, i.e., upskilling.</p>
<p><bold>Method.</bold> To identify which settings are more likely to yield deskilling vs. upskilling, we propose a model of a human interacting with an AI system for a task. The model highlights the possibility for a worker to develop and exhibit (or not) skills in prompting for, evaluating and editing system output, thus yielding upskilling or deskilling.</p>
<p><bold>Findings.</bold> We illustrate these model-predicted effects on work with examples of current studies of AI-based systems.</p>
<p><bold>Conclusions.</bold> We discuss organizational implications of systems that deskill or upskill workers and suggest future research directions. </p>
</abstract>
</article-meta>
</front>
<body>
<sec id="sec1">
<title>Introduction</title>
<p>The increased capability of modern artificial intelligence (AI) systems, generative AI in particular, has heightened concerns about their impact. We define AI as &#x2018;<italic>systems that build on machine learning, computation, and statistical techniques, as well as rely on large data sets to generate responses, classifications, or dynamic predictions</italic>&#x2019; (<xref rid="R14" ref-type="bibr">Faraj, Pachidi, &#x0026; Sayegh, 2018</xref>, p. 62). In this paper we focus on a long-standing concern about the impact of automation, namely deskilling, meaning that the work left for the humans requires a lower level of skill than the original job. AI raises the question of deskilling anew since, as a general-purpose technology, it could impact more kinds of work (<xref rid="R30" ref-type="bibr">Sison, Daza, Gozalo-Brizuela, &#x0026; Garrido-Merch&#x00E1;n, 2023</xref>). Consistent with the fear of deskilling, many AI applications are described as having a levelling effect, meaning that they help novices more than experts (i.e., levelling ability), which we interpret as deskilling. For instance, Brynjolfsson, Li, and Raymond (<xref rid="R4" ref-type="bibr">2023</xref>) found that a chatbot supporting customer service workers enabled less experienced operators to work at the level of more experienced ones.</p>
<p>However, as Crowston and Bolici (<xref rid="R7" ref-type="bibr">2019</xref>) emphasize, automation does not always entail replacing human effort entirely. Instead, it often involves diverse patterns such as decision support or blended decision-making, where human expertise remains integral. This perspective highlights a more nuanced view of AI&#x2019;s role, suggesting that its impact on skills depends on how systems are designed and integrated into tasks. At the same time, some applications demonstrate that AI&#x2019;s benefits are not uniform across all users. While certain tools primarily support novices, others prove more powerful for experienced users, not causing deskilling but rather enhancing those users&#x2019; skills. Indeed, some applications might even require new skills to use effectively, another form of upskilling.</p>
<p>The question we seek to address in this paper is, under what conditions do these two outcomes emerge? What are the characteristics of tasks that, when automated in particular ways, lead to a levelling effect of technology versus those where technology better supports more experienced users? This question is important for identifying the implications for workers as AI capabilities are built into more systems. The answer also matters for how organizations might staff functions using the system and for the longer-term consequences of system usage.</p>
</sec>
<sec id="sec2">
<title>Literature review</title>
<p>A common and long-standing predicted effect of computerization is deskilling, meaning the replacement of skilled workers by those with less skill or reduced opportunities for the same workers to exercise particular skills. Concern about deskilling has been raised since the dawn of computing (<xref rid="R21" ref-type="bibr">Mann &#x0026; Williams, 1960</xref>; <xref rid="R34" ref-type="bibr">Whisler, 1970</xref>). Use of computer systems can strip a job of its content, leaving only a dull routine. For example, instead of solving a problem, a worker might instead feed relevant data to a computer and have it solve the problem. As a result, workers lose the opportunity or time to develop their skills through experimentation or on-the-job learning, or even to maintain skills previously acquired (<xref rid="R2" ref-type="bibr">Ardichvili, 2022</xref>; <xref rid="R19" ref-type="bibr">Li, Zhang, Niu, Chen, &#x0026; Zhou, 2023</xref>). For instance, Rinta-Kahila et al. (<xref rid="R28" ref-type="bibr">2023</xref>) found that a company&#x2019;s reliance on an accounting package with sophisticated automation rendered its accountants&#x2014;and consequently the organization as a whole&#x2014;unable to perform a specific accounting process without the software, which they refer to as skill erosion. Organizational disruption ensued when the software was replaced with another, less automated system.</p>
<p>Deskilling has knock-on effects for the nature of the work, which can reinforce skill loss. As the flow of work becomes more like an assembly line, an individual worker&#x2019;s pace becomes regulated by the needs of processes on either side, and the need for interaction and resulting opportunity for social ties are reduced. Glenn and Feldberg (<xref rid="R15" ref-type="bibr">1977</xref>) described this process as the &#x2018;<italic>proletarianization of clerical work</italic>&#x2019;. They noted that even fifty years ago, clerical jobs were becoming more like factory jobs, with increased subdivision of work and specialization of workers due to automation and use of scientific management principles from classic organization theory as management attempted to control workers and reduce the variability of their output. Zuboff (<xref rid="R40" ref-type="bibr">1988</xref>) pointed out that a system embodies assumptions about how the work should be done, resulting in a loss of flexibility for the worker. Formal rules replace discretion or specific knowledge, reducing workers&#x2019; opportunities to display their mastery of their jobs. More recently, Holm and Lorenz (<xref rid="R16" ref-type="bibr">2022</xref>) found that when computers were used to give orders, the results for workers were increased work pace, constraints and decreased autonomy, an effect that was more pronounced for medium-skilled jobs. These changes in job content can lead to a loss of overview of the whole process (<xref rid="R2" ref-type="bibr">Ardichvili, 2022</xref>), further reducing workers&#x2019; ability to learn and maintain appropriate skills.</p>
<p>The opposite prediction is upskilling. Computers can be used to automate the repetitive parts of a worker&#x2019;s job, leaving the more interesting components for the human and producing a more desirable job requiring a higher level of skill or carrying more responsibility. For example, Zuboff (<xref rid="R40" ref-type="bibr">1988</xref>) presented a case in which the automation of a paper mill increased the role of the first-line production workers since they could control more of the process than the single functions they previously handled. The jobs, therefore, required more skill, and the operators began to perform some of the functions of the managers. Even 60 years ago, Mann and Williams (<xref rid="R21" ref-type="bibr">1960</xref>) found some cases of job enlargement, noting that systems eliminated many routine jobs. Moreover, Sofia et al. (2023) (among many others) suggested that implementing AI will require new skills. They proposed that companies should help workers to identify which skills transfer and to develop needed new skills.</p>
<p>In practice, both effects, deskilling and upskilling, seem likely to occur simultaneously. There is some recent evidence from firm-level data of both effects. For example, Xue et al. (<xref rid="R35" ref-type="bibr">2022</xref>) found that Chinese companies reporting AI applications hire more employees without formal college education. However, McGuinness, Pouliakas, and Redmond (<xref rid="R22" ref-type="bibr">2023</xref>) found that skill-displacing technologies were positively associated with task variety and job-skill complexity, suggesting upskilling, though mostly for higher-skilled jobs. Zhang, Lai, and Gong (<xref rid="R39" ref-type="bibr">2024</xref>) also suggested an increase in employment for those with higher cognitive skills.</p>
<p>In past studies, deskilling or upskilling has often been viewed as dependent on deliberate choices about how to implement systems, driven by managers&#x2019; preferences, e.g., for controlling versus working with workers (<xref rid="R38" ref-type="bibr">Zetka Jr, 1991</xref>). However, there is also a technical component to the decision as it is possible to design systems to promote hybrid intelligence (<xref rid="R26" ref-type="bibr">Rafner et al., 2022</xref>; <xref rid="R32" ref-type="bibr">Wahlstr&#x00F6;m et al., 2024</xref>) and thus avoid deskilling. For example, Schemmer, K&#x00FC;hl, and Satzger (<xref rid="R29" ref-type="bibr">2022</xref>) described a decision support system that provides advice but requires users to make the final decision, thereby maintaining skill levels. Similarly, Arnold et al. (<xref rid="R3" ref-type="bibr">2023</xref>) designed a system with an interface based on expert knowledge representations and explanations, which improved novices&#x2019; skills. And finally, the nature of the task itself is an important factor, interacting with managerial and technical impetuses.</p>
</sec>
<sec id="sec3">
<title>Deskilling and upskilling due to AI</title>
<p>More recently, there have been a few studies that specifically include the differential impact of AI systems based on experience. We report on several that serve as the basis for our thinking. <xref ref-type="table" rid="T1">Table 1</xref> summarizes these examples.</p>
<p>Brynjolfsson et al. (<xref rid="R4" ref-type="bibr">2023</xref>) reported on a study of an AI-based conversational assistant that supported the work of customer service agents by monitoring their chats with customers and suggesting possibly relevant documents to address the customers&#x2019; problems. In a study with 5,179 customer support agents, they found that access to the tool boosted productivity, as indicated by a 14% increase in issues resolved per hour, while also increasing customer and worker satisfaction. However, the productivity increase was restricted to novice and low-skilled workers, who saw a 34% improvement; experienced and highly skilled workers experienced minimal benefit. They suggested that the AI model spreads the best practices of more proficient workers, which we interpret as evidence for deskilling because a worker need not be as skilled to perform well.</p>
<table-wrap id="T1">
<label>Table 1.</label>
<caption><p>Summary of results from the literature.</p></caption>
<table>
<thead>
<tr>
<th align="center" valign="top"><bold>Paper</bold></th>
<th align="center" valign="top"><bold>Input</bold></th>
<th align="center" valign="top"><bold>Evaluation</bold></th>
<th align="center" valign="top"><bold>Editing</bold></th>
<th align="center" valign="top"><bold>Impact</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top">Brynjolfsson et al. (<xref rid="R4" ref-type="bibr">2023</xref>)</td>
<td align="left" valign="top">Extracted from customer chat</td>
<td align="left" valign="top">Relevance of document to problem</td>
<td align="left" valign="top">None</td>
<td align="left" valign="top">Deskilling</td>
</tr>
<tr>
<td align="left" valign="top">Dell&#x2019;Acqua et al. (2023)</td>
<td align="left" valign="top">Prompt for problem to solve</td>
<td align="left" valign="top">Evaluation of suitability of suggestions</td>
<td align="left" valign="top">Output text lightly edited</td>
<td align="left" valign="top">Deskilling</td>
</tr>
<tr>
<td align="left" valign="top">Noy and Zhang (<xref rid="R24" ref-type="bibr">2023</xref>)</td>
<td align="left" valign="top">Used task prompts unchanged</td>
<td align="left" valign="top">Evaluate suitability of output</td>
<td align="left" valign="top">Output text lightly or not edited</td>
<td align="left" valign="top">Deskilling</td>
</tr>
<tr>
<td align="left" valign="top">Campero et al. (2022)</td>
<td align="left" valign="top">Prompt for UI element</td>
<td align="left" valign="top">Visual evaluation of appearance</td>
<td align="left" valign="top">Graphical interface to position; option to edit code</td>
<td align="left" valign="top">Deskilling</td>
</tr>
<tr>
<td align="left" valign="top">Peng et al. (<xref rid="R25" ref-type="bibr">2023</xref>) and Cui et al. (2024)</td>
<td align="left" valign="top">Prompt for needed code</td>
<td align="left" valign="top">Evaluate suitability of code</td>
<td align="left" valign="top">Edit code to fix bugs and adapt to need</td>
<td align="left" valign="top">Deskilling <sup>a</sup></td>
</tr>
<tr>
<td align="left" valign="top">Luo et al. (<xref rid="R20" ref-type="bibr">2021</xref>)</td>
<td align="left" valign="top">Extracted from calls</td>
<td align="left" valign="top">Evaluate coaching advice</td>
<td align="left" valign="top">Implement coaching advice</td>
<td align="left" valign="top">Deskilling <sup>b</sup></td>
</tr>
<tr>
<td align="left" valign="top">Wang et al. (2023)</td>
<td align="left" valign="top">Extracted from medical record</td>
<td align="left" valign="top">Determine suitability of billing code</td>
<td align="left" valign="top">Accept or reject code and possibly add others</td>
<td align="left" valign="top">Skill maintaining</td>
</tr>
<tr>
<td align="left" valign="top">Choudhury, Starr, and Agarwal (<xref rid="R6" ref-type="bibr">2020</xref>)</td>
<td align="left" valign="top">Extracted from patent application</td>
<td align="left" valign="top">Determine suitability of search terms</td>
<td align="left" valign="top">Possibly add additional search terms</td>
<td align="left" valign="top">Skill maintaining</td>
</tr>
<tr>
<td align="left" valign="top">Dell&#x2019;Acqua (<xref rid="R11" ref-type="bibr">2024</xref>)</td>
<td align="left" valign="top">Extracted from the job application</td>
<td align="left" valign="top">Evaluate system suggestion</td>
<td align="left" valign="top">Accept or reject suggestion</td>
<td align="left" valign="top">Skill maintaining <sup>c</sup></td>
</tr>
<tr>
<td align="left" valign="top">Kim and Kang (<xref rid="R18" ref-type="bibr">2024</xref>)</td>
<td align="left" valign="top">Preset inputs</td>
<td align="left" valign="top">Evaluate recommendation and important factors</td>
<td align="left" valign="top">Incorporate factors in report</td>
<td align="left" valign="top">Skill maintaining</td>
</tr>
<tr>
<td align="left" valign="top">Jia, Luo, Fang, and Liao (<xref rid="R17" ref-type="bibr">2024</xref>)</td>
<td align="left" valign="top">Preset inputs</td>
<td align="left" valign="top">None</td>
<td align="left" valign="top">None</td>
<td align="left" valign="top">Automation &#x0026; upskilling</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<fn><p><sup>a</sup> Not statistically significant</p></fn>
<fn><p><sup>b</sup> When volume of suggestions matched to user abilities</p></fn>
<fn><p><sup>c</sup> With less accurate system</p></fn>
</table-wrap-foot>
</table-wrap>
<p>Dell&#x2019;Acqua et al. (2023) reported on two experiments with Boston Consulting Group consultants. We focus on the first, in which 385 consultants carried out a set of 18 realistic consulting tasks designed to be within the capabilities of AI, namely, to conceptualize and develop new product ideas. Consultants had either no AI support, access to GPT-4, or access to GPT-4 with a prompt engineering overview. Consultants with access to GPT-4 were more productive and produced higher quality output, with a much stronger effect for consultants who performed lower on an initial assessment task. Those who received training in prompt engineering performed somewhat better than those who did not. These results show levelling and deskilling, as the system enables those who previously displayed less skill to operate at a higher level.</p>
<p>In an experimental study with a writing task carried out by 453 college-educated professionals, Noy and Zhang (<xref rid="R24" ref-type="bibr">2023</xref>) found that those using ChatGPT both saved time and increased quality. Subjects with access to ChatGPT were given examples of prompts as a form of training. The authors reported that subjects seem to have mostly used ChatGPT&#x2019;s output as is, with little or no editing. Those who scored worse on an initial task improved their quality more, again evidence for levelling and deskilling.</p>
<p>Campero et al. (2022) explored having 200 programmers develop HTML code to replicate a web page, half using GPT-3 with prior conditioning to generate relevant HTML code. The users had a graphical interface with which they could reposition the element and could edit the generated HTML if desired. The programmers using GPT-3 completed the task about 30% faster. Interestingly, when they had 50 non-programmers do the same task with GPT-3, they found that 95% of them finished the task in about the same time as the professional programmers. They concluded that this use of AI can &#x2018;<italic>be seen as a form of deskilling for the programmers whose jobs could now be performed by people with less skill&#x2014;and for lower compensation</italic>&#x2019;.</p>
<p>Peng et al. (<xref rid="R25" ref-type="bibr">2023</xref>) reported on a study of the productivity impacts of GitHub Copilot in which 95 programmers were recruited to write a simple HTTP server in JavaScript, 45 using Copilot and 50 without it. The treatment group finished in less than half the time, with roughly the same level of success. Though the effect was only marginally significant, they also found that developers with fewer years of experience benefited more, evidence again for deskilling.</p>
<p>Cui et al. (2024) reported on three experiments conducted by three large companies that randomly assigned developers to use Copilot. They concluded that use of the tool led to a 26% increase in weekly coding tasks completed across all three companies. Focusing just on Microsoft, they found that Copilot adoption was higher for junior developers and developers with lower tenure in the company. Further, the increase in pull requests, commits and builds was roughly three times higher for lower-tenure developers, though the differences were not statistically significant. Overall, these results seem consistent with deskilling.</p>
<p>Luo et al. (<xref rid="R20" ref-type="bibr">2021</xref>) reported on an AI coaching system for sales representatives. The system analyses the agents&#x2019; calls to give advice about improving the interaction with customers. In an experiment with 429 agents, they found that the system helped middle-ranked agents increase their sales rate the most, to nearly the level of higher-ranked agents. Lower-ranked agents were unable to absorb the volume of suggestions, while higher-ranked agents were averse to AI-generated advice. When the volume of suggestions was reduced, lower-ranked agents also improved, i.e., further levelling.</p>
<p>Wang, Gao, and Agarwal (2023) reported on the effects of an AI system to support coding of medical records. From a study with 80 coders using the system and 468 in the control group, they found that the system increased the productivity of all workers (reportedly with no impact on quality), but more so for those with more task experience, who could more quickly evaluate the proposed codes for suitability. They noted that &#x2018;<italic>If an AI is successfully trained on a task-specific data set, AI can substitute for a worker&#x2019;s task experience</italic>&#x2019;. However, in their study, the benefit went to those with more task experience. On the other hand, the system does not seem to have changed the nature of the skills required. In summary, this study seems to find neither deskilling nor upskilling, rather maintaining the advantage of having more experience.</p>
<p>Choudhury et al. (<xref rid="R6" ref-type="bibr">2020</xref>) examined how expert knowledge can address system shortcomings. Specifically, they studied a system that processes patent applications to suggest search terms to find relevant existing patents. Since patent applications may be deliberately designed to look different from the prior art, an experienced human patent examiner can complement the system by expanding the search. In an experiment, 221 MBA students examined five patent claims that were invalidated by an existing patent that used different language than the application. To simulate expertise, half of the subjects were given expert advice about how to search that included advice on adding terms. The experiment showed that those using the new system found a more precise set of relevant patents that were more like the application, as intended, but that the search was unlikely to find the invalidating patent. The expert advice made it more likely that the patent would be found, again suggesting the importance of existing expertise for this task.</p>
<p>Kim and Kang (<xref rid="R18" ref-type="bibr">2024</xref>) studied 97 mutual fund analysts who write reports rating mutual funds including an explanation for the rating. Half had access to a proprietary rating algorithm that rated the fund and identified the factors that were important for the prediction. They found that access to the predictions improved recommendation quality (i.e., whether the prediction matched the outcome) for simple cases, but had a negative impact on explanation quality, especially for junior analysts. New analysts were less likely to have algorithmic aversion but found it hard to incorporate the system results into their thinking and so wrote shorter reports that were less coherent and included more causal drivers in the explanation. We interpret this case as the system better supporting more skilled users, since skill was needed to make proper use of the system outputs.</p>
<p>In an experiment with recruiters evaluating job applications, Dell&#x2019;Acqua (<xref rid="R11" ref-type="bibr">2024</xref>) varied the quality of the AI support provided. They found that expert recruiters using a less accurate system were more likely to carefully evaluate the applications themselves rather than simply taking the system suggestion, resulting in a more accurate evaluation than those using the more accurate (but imperfect) system without a careful evaluation. Indeed, more experienced recruiters using the better system performed worse than less experienced recruiters. They conclude that &#x2018;<italic>an AI that is &#x2018;too good&#x2019; may induce workers to mindlessly follow algorithmic advice and lead to over-delegation</italic>&#x2019; and suggest that &#x2018;<italic>collaboration should be designed with the goal of keeping humans attentive in tasks where their focus is necessary to improve performance</italic>&#x2019;.</p>
<p>Finally, Jia et al. (<xref rid="R17" ref-type="bibr">2024</xref>) studied 40 sales agents interacting with 3,144 potential customers to sell credit cards, working in two phases: first qualifying leads by assessing interest and then engaging to make a sale. Half of the agents used an AI telephone conversational system that autonomously did the first step, while the other half did it themselves. They found that agents using the AI system were more likely to make a sale because the system screened out likely-uninterested leads, allowing them to focus on better prospects. However, top agents were 2.8 times more likely to make a sale than bottom agents, which the authors attributed to the top agents&#x2019; ability to develop better sales scripts and to answer questions for which they had not been trained, which bottom agents did not do. This case is evidence for upskilling: by taking over the routine part of a job, the system leaves work that requires more skill to perform at a high level.</p>
</sec>
<sec id="sec4">
<title>Model development</title>
<p>As a basis for analysing different applications of AI, we propose a simple model of the interaction among human, technology and task. In our model, the user performs a task that involves problem assessment and the creation of some output. In this paper, we focus on information tasks rather than physical tasks, covering broad categories such as decision-making, customer care and brainstorming of ideas. The model structure aligns with Crowston and Bolici&#x2019;s (2020) framework, which identifies three patterns of machine learning use&#x2014;decision support, blended decision-making, and complete automation&#x2014;and highlights how automation can affect not only specific tasks but also interdependent processes and coordination mechanisms. When using a system to support a task, rather than performing the task directly, users follow a process including:</p>
<list list-type="order">
<list-item><p>assessing the task that should be executed,</p></list-item>
<list-item><p>possibly formulating an input and providing it to the AI system,</p></list-item>
<list-item><p>assessing the result,</p></list-item>
<list-item><p>accepting, regenerating, or editing the output, and</p></list-item>
<list-item><p>completing the task.</p></list-item>
</list>
<p>For example, a human interacting with a document repository to find an answer to a problem will formulate a query (or use a query generated by the system), look at results to see if they meet the requirements, and pick one or redo the query and try again. For interaction with a large language model (LLM) such as ChatGPT, the human will formulate a prompt, evaluate the generated results, tweak the prompt if the results are unsatisfactory, and possibly edit the output to improve it to complete the task.</p>
<sec id="sec4_1">
<title>Model components</title>
<p>Understanding the deskilling or upskilling impacts of AI requires a comprehensive model that captures the interaction between four main elements: 1) Humans, 2) Systems, 3) the Outputs generated, and 4) the Tasks that must be performed in the organization by humans and/or system. The model is based on previous research on the roles of expertise, prompting, system accuracy and task nature in assessing AI-enabled work. For example, Zuboff (<xref rid="R40" ref-type="bibr">1988</xref>) observed that systems reflect work assumptions, limiting flexibility and eroding skills (Human&#x2013;Task; Human&#x2013;AI), while Rinta-Kahila et al. (<xref rid="R28" ref-type="bibr">2023</xref>) demonstrated skill erosion from over-reliance on automation (Human&#x2013;AI; Human&#x2013;Task). Conversely, application of hybrid intelligence (<xref rid="R29" ref-type="bibr">Schemmer, K&#x00FC;hl and Satzger, 2022</xref>) mitigates deskilling by involving humans in decision-making (Human&#x2013;AI; Human&#x2013;Task). Brynjolfsson et al. (<xref rid="R4" ref-type="bibr">2023</xref>) illustrated a levelling effect, enabling novices to reach intermediate performance, though not expert levels (Human&#x2013;Outputs; Human&#x2013;Task). Wang, Gao, and Agarwal (2023) emphasized human expertise in refining AI outputs, underscoring the dual-edged nature of AI&#x2019;s impact on skills and performance (Human&#x2013;Outputs; Human&#x2013;AI). Our model is intended to analyse these interactions and their implications for skill development, use and retention. The proposed model highlights the interplay between human expertise, system capabilities, and task requirements in shaping task performance outcomes.</p>
<list list-type="order">
<list-item><p><italic>HUMAN</italic>: the persons who must perform the task and who decide if and how to use a system to support or substitute for their work. We focus on two main characteristics that have an impact on the process:
<list list-type="lower-alpha">
<list-item><p>Domain knowledge: the extent to which the human is an expert in the specific domain relevant to the task. Higher domain knowledge enables better assessment and refinement of system outputs.</p></list-item>
<list-item><p>Input formulation knowledge: the human&#x2019;s ability to effectively formulate inputs for the AI system. For instance, when using an LLM, expertise in prompting can significantly influence the quality and relevance of the system&#x2019;s outputs.</p></list-item>
</list></p></list-item>
<list-item><p><italic>AI</italic>: the specific system that the human can access during task execution, for which we consider two characteristics:
<list list-type="lower-alpha">
<list-item><p>Input variability: The variability of input to the system. Some systems take a fixed set of variables as input; in contrast, an LLM can accept nearly any text.</p></list-item>
<list-item><p>Accuracy/limitations: The constraints of the system, such as the propensity for generating errors or the need for human intervention to correct and refine outputs.</p></list-item>
</list></p></list-item>
<list-item><p><italic>OUTPUTS</italic> of the system: the answers that the system provides in response to the input, which have at least the following characteristics:
<list list-type="lower-alpha">
<list-item><p>Quality: The accuracy, relevance, and usability of the system-generated outputs. High-quality outputs require less modification and are more useful for completing the task.</p></list-item>
<list-item><p>Speed: We presume that the system will be able to generate an answer more quickly than the human, leading to the observed increases in speed.</p></list-item>
</list></p></list-item>
<list-item><p><italic>TASK</italic>: the activity that must be performed and for which we can consider:
<list list-type="lower-alpha">
<list-item><p>Nature of the task: the specific characteristics of the task, including whether it is creative, analytical, or procedural.</p></list-item>
<list-item><p>Task division: how the task is split between the human and the system. This could involve the human performing the entire task, the system performing the entire task, or a collaborative effort where both the human and system contribute.</p></list-item>
</list></p></list-item>
</list>
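<p>To make the four elements and their characteristics concrete, they can be encoded as a minimal data-structure sketch (the class and attribute names below are our own illustrative choices, not part of a formal specification of the model):</p>

```python
from dataclasses import dataclass

# Illustrative encoding of the model's four elements; names are our own.
@dataclass
class Human:
    domain_knowledge: float    # expertise in the task's domain (0-1)
    input_formulation: float   # skill at formulating inputs, e.g. prompts (0-1)

@dataclass
class AISystem:
    input_variability: str     # 'fixed' (set variables) or 'free' (e.g. LLM text)
    accuracy: float            # propensity to produce correct outputs (0-1)

@dataclass
class Output:
    quality: float             # accuracy, relevance and usability (0-1)
    speed_gain: float          # how much faster than unaided human work

@dataclass
class Task:
    nature: str                # 'creative', 'analytical' or 'procedural'
    division: str              # 'human', 'system' or 'collaborative'
```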
</sec>
<sec id="sec4_2">
<title>Model phases</title>
<p>Considering the model as a representation of the dynamics of interaction between the various elements of the process, we can distinguish four different temporal phases.</p>
<list list-type="order">
<list-item><p><italic>Phase One</italic>: the human responsible for carrying out the task decides whether to be supported by a system and, if so, in what way. However, usage may be non-discretionary, meaning the human user is obligated to use the system.</p></list-item>
<list-item><p><italic>Phase Two</italic>: the human utilizes their domain knowledge and skills to interact with the system to obtain support. It may be that the inputs to the system are predetermined by the task, or the user may have freedom to craft an input.</p></list-item>
<list-item><p><italic>Phase Three</italic>: the results generated by the system are assessed and interpreted by the human, who decides whether to accept them or to refine them, either manually or through additional interaction with the system.</p></list-item>
<list-item><p><italic>Phase Four:</italic> the final results are used to execute the task, either in support of or as a substitute for direct human involvement.</p></list-item>
</list>
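<p>Read as a control flow, the four phases can be sketched as follows (a hedged illustration; the callables stand in for human and system actions and the names are our own):</p>

```python
def perform_task(decide_use, formulate_input, generate, assess, refine,
                 apply_result, must_use_system=False):
    """Illustrative walk through the model's four phases, parameterized by
    callables standing in for human and system actions."""
    # Phase One: the human decides whether to use the system, unless mandated.
    if not (must_use_system or decide_use()):
        return apply_result(None)   # task performed without system support
    # Phase Two: the human formulates the input (fixed by the task, or crafted).
    prompt = formulate_input()
    # Phase Three: outputs are assessed and refined until accepted.
    output = generate(prompt)
    while not assess(output):
        output = refine(output)
    # Phase Four: the final results are used to execute the task.
    return apply_result(output)
```

For example, a worker who always opts in, whose refinement consists of appending a correction, would run through all four phases before the result is applied.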
</sec>
<sec id="sec4_3">
<title>Interaction among the components</title>
<p>The interplay between the components of the model determines the overall impact on skill levels and task performance.</p>
<list list-type="bullet">
<list-item><p><italic>Human and System Interaction</italic>: the effectiveness of the system may depend on the human&#x2019;s ability to craft an input. Those with high knowledge can elicit better initial responses from the system or be better at refining the outputs by iteratively tuning the prompts. Indeed, prompts may be created by the experts who develop a system rather than by the end-user applying it to a task.</p></list-item>
<list-item><p><italic>System and Output</italic>: the system&#x2019;s capabilities and limitations directly affect the quality and adaptability of the outputs. High-quality outputs reduce the need for extensive human intervention and can be applied to a variety of tasks, enhancing productivity.</p></list-item>
<list-item><p><italic>Human and Output</italic>: the human&#x2019;s role in assessing and interpreting the system&#x2019;s output depends (again) on their domain knowledge. High domain knowledge allows for quicker and more accurate assessment of the output, reducing the risk of simply accepting a wrong or incomplete result. Experts in the domain can better refine system outputs if they are not suitable.</p></list-item>
<list-item><p><italic>Outputs and Task</italic>: the nature of the outputs influences how the task is performed. High-quality, adaptable outputs can enhance productivity and potentially upskill workers by allowing them to focus on higher-level refinements. On the other hand, poor outputs can lead to deskilling if the human&#x2019;s role is reduced to merely accepting or rejecting system-generated content without substantial engagement.</p></list-item>
</list>
<p>This phased approach highlights the iterative and interactive nature of the model, emphasizing the crucial role of human expertise at each stage to maximize the effectiveness of the system and ensure the successful completion of the task. The outcome of this process can have different impacts on the need for expertise.</p>
<list list-type="bullet">
<list-item><p><italic>No Effect</italic>: in this scenario, the use of AI has no impact on the skills of the individuals involved. The task is performed similarly whether or not the AI is used, and the human&#x2019;s existing knowledge and skills remain unchanged. However, the system may have other benefits, e.g., for speed or quality.</p></list-item>
<list-item><p><italic>Levelling Effect</italic>: this scenario occurs when AI minimizes the importance of the human&#x2019;s knowledge for task performance. The use of AI flattens the importance of prior knowledge, as a novice using AI can achieve task performance comparable to that of an expert. In this case, the AI levels the playing field, reducing the skill gap between novices and experts.</p></list-item>
<list-item><p><italic>Multiplier Effect</italic>: in this scenario, the use of AI acts as a multiplier on the human&#x2019;s existing knowledge, thereby increasing the performance gap between novices and experts. The AI enhances the capabilities of those with higher prior knowledge, leading to significantly better task performance compared to novices. This effect underscores the role of AI in amplifying the skills and expertise of experienced users.</p></list-item>
</list>
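<p>The three effects can be illustrated numerically. In the hedged sketch below, performance is a toy function of worker skill; the linear forms and coefficients are our own illustrative assumptions, not estimates from the studies reviewed:</p>

```python
def performance(skill, effect):
    """Toy performance curves for the three scenarios (illustrative only).
    skill is in [0, 1]; unaided performance is taken to equal skill."""
    if effect == 'none':
        return skill                  # AI use leaves performance unchanged
    if effect == 'levelling':
        return 0.7 + 0.3 * skill      # novices lifted near expert levels
    if effect == 'multiplier':
        return min(1.0, 2.0 * skill)  # gains proportional to prior skill
    raise ValueError(effect)

novice, expert = 0.2, 0.6
for effect in ('none', 'levelling', 'multiplier'):
    gap = performance(expert, effect) - performance(novice, effect)
    print(f"{effect:10s} expert-novice gap: {gap:.2f}")
```

Under these assumptions, the expert&#x2013;novice gap shrinks in the levelling scenario and widens in the multiplier scenario, relative to the no-effect baseline.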
<p>By understanding these different scenarios, we can better anticipate the implications of AI integration into various workflows and design strategies to optimize both human and AI contributions to task performance.</p>
<p>We speculate that a system with pre-formed prompts, whose results are easy to assess and need little editing, is more likely to produce levelling and so deskilling, as a more expert worker has no opportunity to employ their expertise. On the other hand, a system could allow more flexibility in prompting or require more assessment and editing of outputs, tasks that experts can perform more quickly and accurately. To the extent that a task has these characteristics, it is more likely to benefit from expertise and thus to help experts more than non-experts.</p>
</sec>
</sec>
<sec id="sec5">
<title>Findings: deskilling and upskilling due to AI</title>
<p>We illustrate our model by analysing some of the studies surveyed above. For instance, in Brynjolfsson et al. (<xref rid="R4" ref-type="bibr">2023</xref>)&#x2019;s study, the prompt is taken from the customer chat, not from the agent. The agent needs to assess whether a proposed document is apropos but can also suggest it and let the customer assess. If appropriate, the solution is provided to the customer as is. Therefore, our model suggests that the effect of the system will be levelling, as found: the system can provide solutions that a more experienced employee would suggest, but without requiring the same level of expertise. Similarly, in Noy and Zhang (<xref rid="R24" ref-type="bibr">2023</xref>)&#x2019;s study, subjects using ChatGPT seem to have copied the writing prompts from the problem and used ChatGPT&#x2019;s output largely unchanged. They had to evaluate whether the output was suitable, but given the similarity of the task to their regular work, we expect this evaluation to have been straightforward (that is, subjects differed in the quality of their writing, but, we think, not in the ability to assess the suitability of the output).</p>
<p>In about half of our examples, the input to the system was generated from the task, eliminating the need or ability of a user to develop expertise in directing the system. However, these cases include our two examples of skill maintenance. In other cases, users reportedly used the given task prompts unchanged, again not developing skills. In two of the cases we reviewed, users could prompt more freely (<xref rid="R10" ref-type="bibr">Dell&#x2019;Acqua et al., 2023</xref>; <xref rid="R25" ref-type="bibr">Peng et al., 2023</xref>). Interestingly though, those cases still resulted in deskilling. In summary, the cases reviewed do not suggest that the ability to better prompt an AI system is as yet a distinguishing characteristic in task performance.</p>
<p>In contrast, ability to evaluate and make use of a system&#x2019;s output does seem to play a role in several cases. Wang et al. (2023)&#x2019;s study provides an interesting example. In this case, the search is based on sentences in the medical record. However, the authors report that evaluation of the suggestions was required to rule out false positives, which was quicker for more experienced workers. In this case, the system does not require new skills (e.g., for prompting) and maintains the value of existing skills (evaluation). In Kim and Kang (<xref rid="R18" ref-type="bibr">2024</xref>)&#x2019;s study, expertise was seemingly needed to evaluate the system&#x2019;s output and to incorporate it into the final product, making the system more useful for more experienced workers. Choudhury et al. (<xref rid="R6" ref-type="bibr">2020</xref>) found that the search terms identified by the system needed to be augmented, which required expertise to do successfully. Luo et al. (<xref rid="R20" ref-type="bibr">2021</xref>) found an inverted U-shape effect for the initial coaching system: it did not help experienced workers, who already knew the job, nor inexperienced workers, who could not cope with the volume of suggestions, but did help people in between, the latter effect again highlighting the importance of being able to assess and incorporate system output.</p>
<p>Finally, the case of Jia et al. (<xref rid="R17" ref-type="bibr">2024</xref>) is one of few studies we found that reportedly resulted in upskilling. Interestingly, this case is also one in which one subtask was completely automated, namely the initial screening call with a potential customer, while the remaining subtask left to the human is performed without support, a subtask for which greater skill translates into better performance. (In other words, the analysis in <xref ref-type="table" rid="T1">Table 1</xref> does not describe the task the human performs.) We are curious what the impact would be of supporting the sales task and speculate that it could lead to levelling, as found by Luo et al. (<xref rid="R20" ref-type="bibr">2021</xref>).</p>
<p>Overall, we perceive a general pattern: if you have too little skill, you can&#x2019;t make use of the system outputs. If you have moderate skill, the system generally seems to help achieve better performance. If you have a lot of skill, the system doesn&#x2019;t help as much and may even be resisted (Wang et al., 2023).</p>
</sec>
<sec id="sec6">
<title>Discussion</title>
<p>The model and the studies reviewed more broadly suggest several points for consideration.</p>
<p>First, reviewing the papers identified, we note a lacuna: we identified no studies in which better prompting skills gave more experienced workers an advantage. We expected that using Copilot to support programming would have these effects. For instance, Mozannar et al. (<xref rid="R23" ref-type="bibr">2024</xref>) observed that programmers using Copilot spent over 20% of their time thinking about or verifying a Copilot suggestion, about 10% of the time editing a suggestion, and about 10% crafting prompts. Prompt crafting is often iterative: write a prompt, assess the output, tweak the prompt. Often, suggestions were accepted in order to fully evaluate and tweak them, not necessarily because they were correct. Dibia et al. (2022) found that experienced programmers still found code suggestions from Copilot useful even if the code was not entirely correct, as it could be modified with little effort, thereby increasing productivity. Similarly, Zamfirescu-Pereira et al. (<xref rid="R37" ref-type="bibr">2023</xref>) found that while the code generated by Copilot often had errors, they were easier to fix than errors in code generated by humans. They concluded that &#x2018;<italic>CoPilot can become a liability if it is used by novice developers who may fail to filter its buggy or non-optimal solutions due to a lack of expertise</italic>&#x2019; (<xref rid="R37" ref-type="bibr">Zamfirescu-Pereira et al., 2023</xref>). Randazzo et al. (2024) suggest that ChatGPT users who retain overall control of the task, strategically deciding which tasks to delegate, perform better than those who direct the system through the whole task, and much better than those who delegate entirely.
However, we only found two studies (<xref rid="R9" ref-type="bibr">Cui et al., 2024</xref>; <xref rid="R25" ref-type="bibr">Peng et al., 2023</xref>) that examined the impacts of individual differences using this technology and unfortunately, these studies do not provide much detail about how developers interacted with the system.</p>
<p>Second, questions about deskilling and upskilling have important organizational implications that need to be considered. For instance, if the system results in deskilling, organizations may be tempted to hire less skilled workers or to invest less in training since performance with the system will still be satisfactory. These temptations will likely be greater for jobs that face high turnover, such as customer support. A consideration is that managers tend to systematically underestimate the expertise needed to do the work of their employees, meaning that they may classify more work as replaceable or low skilled than is appropriate. This consideration reinforces the importance of involving the people doing the work in system design. A further consideration is the implications for organizational learning. If the problem is not static, but the system has a levelling effect, then who will learn the answers to the new questions, if there are no longer any experts doing the tasks? Relatedly, it is an open question whether non-experts using a system learn to do the task independently or whether the system obviates the need to learn.</p>
<p>On the flip side, systems that reward expertise also raise concerns. If expertise is more valued, organizations need to consider how it is developed. For instance, there are anecdotal reports of companies no longer hiring entry-level workers to do what LLMs can do (<xref rid="R13" ref-type="bibr">Edwards, 2023</xref>; <xref rid="R36" ref-type="bibr">Yegge, 2024</xref>). If the work of entry-level positions can be largely automated, organizations will face the problem of how new hires develop the necessary expertise to oversee the automated work.</p>
<p>Third, system use may require users to develop new skills in prompt crafting and in the evaluation and use of system output, rather than in the manual creation of output. There is some evidence for these effects, e.g., the small improvements found by Dell&#x2019;Acqua et al. (2023) for the short prompt-crafting training, and the several studies in which expertise was needed to evaluate outputs (<xref rid="R6" ref-type="bibr">Choudhury et al., 2020</xref>; <xref rid="R18" ref-type="bibr">Kim &#x0026; Kang, 2024</xref>; <xref rid="R20" ref-type="bibr">Luo et al., 2021</xref>; Wang et al., 2023). However, Dell&#x2019;Acqua (<xref rid="R11" ref-type="bibr">2024</xref>) raises the issue of needing to motivate workers to be critical about systems, ironically more critical for systems that work better.</p>
<p>Fourth, the future of work with AI and the related necessary skills requires consideration of the inherent nature of AI, which is best able to provide answers for problems and solutions that frequently appear in its training data. This limitation could create a need for workers skilled primarily in identifying corner cases that the AI cannot handle and in devising their possible solutions. Understanding and designing how to support these types of skills in workers remains an open question. If the management (identification of problems and solutions) of the most common cases is done through or with AI, it cannot serve as a learning field for new workers, who will nevertheless need to learn to handle specific and less frequent corner cases.</p>
<p>Fifth, our model also posits limits to the impacts of technology support. Specifically, Amdahl&#x2019;s law (<xref rid="R1" ref-type="bibr">Amdahl, 1967</xref>) says that the speed-up due to a new system component is limited by the fraction of time the new component is used. As an example, if only 10% of a job is automated, the maximum speed-up is 1/90%, or about an 11% speed-up. Reasoning in reverse, making someone 2x faster at their work (i.e., a multiplier effect), as Peng et al. (<xref rid="R25" ref-type="bibr">2023</xref>) found for programmers using Copilot, requires eliminating 50% of what they do. We speculate that such a result implies that programmers are seeing benefits by having the system write entire functions at a time, rather than writing lines of code. Our model does not as yet capture the possibly transformative effects of entirely changing the nature of the task performed.</p>
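<p>The arithmetic in this paragraph can be checked directly; a minimal sketch of Amdahl&#x2019;s law applied to task automation (the function name is our own):</p>

```python
def amdahl_speedup(automated_fraction, component_speedup=float('inf')):
    """Overall speed-up when a fraction of the work is accelerated.
    With an infinitely fast component, speed-up = 1 / (1 - automated_fraction)."""
    remaining = (1 - automated_fraction) + automated_fraction / component_speedup
    return 1 / remaining

# Automating 10% of a job yields at most ~1.11x, i.e. about an 11% speed-up.
print(amdahl_speedup(0.10))   # ≈ 1.111
# Reasoning in reverse: a 2x speed-up requires eliminating 50% of the work.
print(amdahl_speedup(0.50))   # 2.0
```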
<p>Finally, our model has at least one design implication. As prompting is a new skill that has a potential to make a difference to the results, it might be beneficial to let users tweak the prompts, even if they are mostly preset or derived from the task data. The visibility may help people to develop expertise in prompting and use this new skill to improve results.</p>
</sec>
<sec id="sec7">
<title>Conclusion</title>
<p>We conclude with some ideas for future research. First, this model is based on examination of a few sample implementations of AI to support work. More systematic studies across a wider range of tasks would help refine it and demonstrate its utility. Carrying out these studies will require more detail about the nature of the task, the technology and the workers&#x2019; interactions with the system, to understand where skills make a difference. For instance, it could be that crafting a good query for a search is a more important skill than getting an LLM prompt exactly right, given the latter&#x2019;s flexibility and interpretive abilities.</p>
<p>Second, use of the model could guide studies of new systems. For instance, it would be interesting to vary the level of prompt crafting possible for the same task and explore the impact on workers with varying skill levels. We expect more-skilled workers to use these capabilities to further extend their advantage over less-skilled workers, but this is an empirical question that needs study.</p>
<p>Third, it is important to consider that this model, like most studies focused on AI to date, focused on the relationship between work and AI within a single task. However, work is a process made up of multiple interdependent tasks. Therefore, the potential impact of AI on work should be examined within the context of how AI is used across a set of tasks that need to be coordinated. This broader perspective acknowledges that the integration of AI affects not only individual tasks but also the overall workflow, requiring a broader approach to understand its full implications on job performance and skill requirements.</p>
</sec>
</body>
<back>
<ack>
<title>Acknowledgements</title>
<p>Kevin Crowston was partly supported by a grant from the United States National Science Foundation, Grant 21-29047.</p>
</ack>
<ref-list>
<title>References</title>
<ref id="R1"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Amdahl</surname><given-names>G. M.</given-names></name></person-group><year>1967</year><comment>April 18-20</comment><article-title>Validity of the single processor approach to achieving large scale computing capabilities</article-title><source>Proceedings of the Spring Joint Computer Conference</source><fpage>483</fpage><lpage>485</lpage></element-citation></ref>
<ref id="R2"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Ardichvili</surname><given-names>A.</given-names></name></person-group><year>2022</year><article-title>The impact of artificial intelligence on expertise development: Implications for HRD</article-title><source>Advances in Developing Human Resources</source><volume>24</volume><issue>2</issue><fpage>78</fpage><lpage>98</lpage><ext-link ext-link-type="uri" xlink:href="https://dx.doi.org/10.1177/15234223221077304">https://dx.doi.org/10.1177/15234223221077304</ext-link></element-citation></ref>
<ref id="R3"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Arnold</surname><given-names>V.</given-names></name><name><surname>Collier</surname><given-names>P. A.</given-names></name><name><surname>Leech</surname><given-names>S. A.</given-names></name><name><surname>Rose</surname><given-names>J. M.</given-names></name><name><surname>Sutton</surname><given-names>S. G.</given-names></name></person-group><year>2023</year><article-title>Can knowledge-based systems be designed to counteract deskilling effects?</article-title> <source>International Journal of Accounting Information Systems</source><volume>50</volume><fpage>100638</fpage><ext-link ext-link-type="uri" xlink:href="https://dx.doi.org/10.1016/j.accinf.2023.100638">https://dx.doi.org/10.1016/j.accinf.2023.100638</ext-link></element-citation></ref>
<ref id="R4"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Brynjolfsson</surname><given-names>E.</given-names></name><name><surname>Li</surname><given-names>D.</given-names></name><name><surname>Raymond</surname><given-names>L. R.</given-names></name></person-group><year>2023</year><article-title>Generative AI at work (Tech. Rep. No. Working Paper 31161)</article-title><source>National Bureau of Economic Research</source></element-citation></ref>
<ref id="R5"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Campero</surname><given-names>A.</given-names></name><name><surname>Vaccaro</surname><given-names>M.</given-names></name><name><surname>Song</surname><given-names>J.</given-names></name><name><surname>Wen</surname><given-names>H.</given-names></name><name><surname>Almaatouq</surname><given-names>A.</given-names></name><name><surname>Malone</surname><given-names>T. W.</given-names></name></person-group><year>2022</year><article-title>A test for evaluating performance in human-computer systems</article-title><ext-link ext-link-type="uri" xlink:href="https://arxiv.org/abs/2206.12390">https://arxiv.org/abs/2206.12390</ext-link></element-citation></ref>
<ref id="R6"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Choudhury</surname><given-names>P.</given-names></name><name><surname>Starr</surname><given-names>E.</given-names></name><name><surname>Agarwal</surname><given-names>R.</given-names></name></person-group><year>2020</year><article-title>Machine learning and human capital complementarities: Experimental evidence on bias mitigation</article-title><source>Strategic Management Journal</source><volume>41</volume><issue>8</issue><fpage>1381</fpage><lpage>1411</lpage><ext-link ext-link-type="uri" xlink:href="https://dx.doi.org/10.1002/smj.3152">https://dx.doi.org/10.1002/smj.3152</ext-link></element-citation></ref>
<ref id="R7"><element-citation publication-type="book"><person-group person-group-type="author"><name><surname>Crowston</surname><given-names>K.</given-names></name><name><surname>Bolici</surname><given-names>F.</given-names></name></person-group><year>2019</year><chapter-title>Impacts of machine learning on work</chapter-title><source>Proceedings of the Hawaii International Conference on System Sciences</source><publisher-loc>Hawai&#x2019;i, USA</publisher-loc><ext-link ext-link-type="uri" xlink:href="https://dx.doi.org/10.24251/HICSS.2019.719">https://dx.doi.org/10.24251/HICSS.2019.719</ext-link></element-citation></ref>
<ref id="R8"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Crowston</surname><given-names>K.</given-names></name><name><surname>Bolici</surname><given-names>F.</given-names></name></person-group><year>2020</year><article-title>Impacts of the Use of Machine Learning on Work Design</article-title><source>Proceedings of the 8th International Conference on Human-Agent Interaction</source><fpage>163</fpage><lpage>170</lpage><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1145/3406499.3415070">https://doi.org/10.1145/3406499.3415070</ext-link></element-citation></ref>
<ref id="R9"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Cui</surname><given-names>Z.</given-names></name><name><surname>Demirer</surname><given-names>M.</given-names></name><name><surname>Jaffe</surname><given-names>S.</given-names></name><name><surname>Musolff</surname><given-names>L.</given-names></name><name><surname>Peng</surname><given-names>S.</given-names></name><name><surname>Salz</surname><given-names>T.</given-names></name></person-group><year>2024</year><article-title>The effects of generative AI on high skilled work: Evidence from three field experiments with software developers</article-title><source>Social Science Research Network (SSRN)</source><ext-link ext-link-type="uri" xlink:href="https://dx.doi.org/10.2139/ssrn.4945566">https://dx.doi.org/10.2139/ssrn.4945566</ext-link></element-citation></ref>
<ref id="R10"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Dell&#x2019;Acqua</surname><given-names>F.</given-names></name><name><surname>McFowland</surname><given-names>E.</given-names></name><name><surname>Mollick</surname><given-names>E. R.</given-names></name><name><surname>Lifshitz-Assaf</surname><given-names>H.</given-names></name><name><surname>Kellogg</surname><given-names>K.</given-names></name><name><surname>Rajendran</surname><given-names>S.</given-names></name><name><surname>Krayer</surname><given-names>L.</given-names></name><name><surname>Candelon</surname><given-names>F.</given-names></name><name><surname>Lakhani</surname><given-names>K. R.</given-names></name></person-group><year>2023</year><article-title>Navigating the jagged technological frontier: Field experimental evidence of the effects of AI on knowledge worker productivity and quality (Tech. Rep. No. 24-013)</article-title><source>Harvard Business School Technology &#x0026; Operations Management Unit</source><ext-link ext-link-type="uri" xlink:href="https://www.hbs.edu/ris/Publication%20Files/24-013_d9b45b68-9e74-42d6-a1c6-c72fb70c7282.pdf">https://www.hbs.edu/ris/Publication%20Files/24-013_d9b45b68-9e74-42d6-a1c6-c72fb70c7282.pdf</ext-link></element-citation></ref>
<ref id="R11"><element-citation publication-type="book"><person-group person-group-type="author"><name><surname>Dell&#x2019;Acqua</surname><given-names>F.</given-names></name></person-group><year>2024</year><chapter-title>Falling asleep at the wheel: Human/AI collaboration in a field experiment on HR recruiters</chapter-title><source>Laboratory for Innovation Science</source><publisher-name>Harvard Business School</publisher-name><ext-link ext-link-type="uri" xlink:href="https://www.fabriziodellacqua.com/">https://www.fabriziodellacqua.com/</ext-link></element-citation></ref>
<ref id="R12"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Dibia</surname><given-names>V.</given-names></name><name><surname>Fourney</surname><given-names>A.</given-names></name><name><surname>Bansal</surname><given-names>G.</given-names></name><name><surname>Poursabzi-Sangdeh</surname><given-names>F.</given-names></name><name><surname>Liu</surname><given-names>H.</given-names></name><name><surname>Amershi</surname><given-names>S.</given-names></name></person-group><year>2022</year><article-title>Aligning offline metrics and human judgments of value of AI-pair programmers</article-title><ext-link ext-link-type="uri" xlink:href="https://arxiv.org/abs/2210.16494">https://arxiv.org/abs/2210.16494</ext-link></element-citation></ref>
<ref id="R13"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Edwards</surname><given-names>B.</given-names></name></person-group><year>2023</year><article-title>IBM plans to replace 7,800 jobs with AI over time, pauses hiring certain positions</article-title><ext-link ext-link-type="uri" xlink:href="https://arstechnica.com/information-technology/2023/05/ibm-pauses-hiring-around-7800-ro">https://arstechnica.com/information-technology/2023/05/ibm-pauses-hiring-around-7800-ro</ext-link></element-citation></ref>
<ref id="R14"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Faraj</surname><given-names>S.</given-names></name><name><surname>Pachidi</surname><given-names>S.</given-names></name><name><surname>Sayegh</surname><given-names>K.</given-names></name></person-group><year>2018</year><article-title>Working and organizing in the age of the learning algorithm</article-title><source>Information and Organization</source><volume>28</volume><issue>1</issue><fpage>62</fpage><lpage>70</lpage><ext-link ext-link-type="uri" xlink:href="https://dx.doi.org/10.1016/j.infoandorg.2018.02.005">https://dx.doi.org/10.1016/j.infoandorg.2018.02.005</ext-link></element-citation></ref>
<ref id="R15"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Glenn</surname><given-names>E. N.</given-names></name><name><surname>Feldberg</surname><given-names>R. L.</given-names></name></person-group><year>1977</year><article-title>Degraded and deskilled: The proletarianization of clerical work</article-title><source>Social Problems</source><volume>25</volume><issue>1</issue><fpage>52</fpage><lpage>64</lpage></element-citation></ref>
<ref id="R16"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Holm</surname><given-names>J. R.</given-names></name><name><surname>Lorenz</surname><given-names>E.</given-names></name></person-group><year>2022</year><article-title>The impact of artificial intelligence on skills at work in Denmark</article-title><source>New Technology, Work and Employment</source><volume>37</volume><issue>1</issue><fpage>79</fpage><lpage>101</lpage></element-citation></ref>
<ref id="R17"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Jia</surname><given-names>N.</given-names></name><name><surname>Luo</surname><given-names>X.</given-names></name><name><surname>Fang</surname><given-names>Z.</given-names></name><name><surname>Liao</surname><given-names>C.</given-names></name></person-group><year>2024</year><article-title>When and how artificial intelligence augments employee creativity</article-title><source>Academy of Management Journal</source><volume>67</volume><issue>1</issue><fpage>5</fpage><lpage>32</lpage></element-citation></ref>
<ref id="R18"><element-citation publication-type="book"><person-group person-group-type="author"><name><surname>Kim</surname><given-names>H.</given-names></name><name><surname>Kang</surname><given-names>X.</given-names></name></person-group><year>2024</year><comment>Aug</comment><chapter-title>Machine predictions and causal explanations: Evidence from a field experiment</chapter-title><source>Symposium presentation at the Academy of Management Meeting</source><publisher-loc>Chicago, IL, USA</publisher-loc><ext-link ext-link-type="uri" xlink:href="https://dx.doi.org/10.5465/AMPROC.2024.13661symposium">https://dx.doi.org/10.5465/AMPROC.2024.13661symposium</ext-link></element-citation></ref>
<ref id="R19"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Li</surname><given-names>C.</given-names></name><name><surname>Zhang</surname><given-names>Y.</given-names></name><name><surname>Niu</surname><given-names>X.</given-names></name><name><surname>Chen</surname><given-names>F.</given-names></name><name><surname>Zhou</surname><given-names>H.</given-names></name></person-group><year>2023</year><article-title>Does artificial intelligence promote or inhibit on-the-job learning? Human reactions to AI at work</article-title><source>Systems</source><volume>11</volume><issue>3</issue><fpage>114</fpage></element-citation></ref>
<ref id="R20"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Luo</surname><given-names>X.</given-names></name><name><surname>Qin</surname><given-names>M. S.</given-names></name><name><surname>Fang</surname><given-names>Z.</given-names></name><name><surname>Qu</surname><given-names>Z.</given-names></name></person-group><year>2021</year><article-title>Artificial intelligence coaches for sales agents: Caveats and solutions</article-title><source>Journal of Marketing</source><volume>85</volume><issue>2</issue><fpage>14</fpage><lpage>32</lpage><ext-link ext-link-type="uri" xlink:href="https://dx.doi.org/10.1177/0022242920956676">https://dx.doi.org/10.1177/0022242920956676</ext-link></element-citation></ref>
<ref id="R21"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Mann</surname><given-names>F. C.</given-names></name><name><surname>Williams</surname><given-names>L. K.</given-names></name></person-group><year>1960</year><article-title>Observations on the dynamics of a change to electronic data- processing equipment</article-title><source>Administrative Science Quarterly</source><fpage>217</fpage><lpage>256</lpage></element-citation></ref>
<ref id="R22"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>McGuinness</surname><given-names>S.</given-names></name><name><surname>Pouliakas</surname><given-names>K.</given-names></name><name><surname>Redmond</surname><given-names>P.</given-names></name></person-group><year>2023</year><article-title>Skills-displacing technological change and its impact on jobs: Challenging technological alarmism?</article-title> <source>Economics of Innovation and New Technology</source><volume>32</volume><issue>3</issue><fpage>370</fpage><lpage>392</lpage></element-citation></ref>
<ref id="R23"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Mozannar</surname><given-names>H.</given-names></name><name><surname>Bansal</surname><given-names>G.</given-names></name><name><surname>Fourney</surname><given-names>A.</given-names></name><name><surname>Horvitz</surname><given-names>E.</given-names></name></person-group><year>2024</year><article-title>Reading between the lines: Modelling user behaviour and costs in AI-assisted programming</article-title><source>Proceedings of the CHI Conference on Human Factors in Computing Systems</source><ext-link ext-link-type="uri" xlink:href="https://dx.doi.org/10.1145/3613904.3641936">https://dx.doi.org/10.1145/3613904.3641936</ext-link></element-citation></ref>
<ref id="R24"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Noy</surname><given-names>S.</given-names></name><name><surname>Zhang</surname><given-names>W.</given-names></name></person-group><year>2023</year><article-title>Experimental evidence on the productivity effects of generative artificial intelligence</article-title><source>Science</source><volume>381</volume><issue>6654</issue><fpage>187</fpage><lpage>192</lpage></element-citation></ref>
<ref id="R25"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Peng</surname><given-names>S.</given-names></name><name><surname>Kalliamvakou</surname><given-names>E.</given-names></name><name><surname>Cihon</surname><given-names>P.</given-names></name><name><surname>Demirer</surname><given-names>M.</given-names></name></person-group><year>2023</year><article-title>The impact of AI on developer productivity: Evidence from GitHub Copilot</article-title><ext-link ext-link-type="uri" xlink:href="https://arxiv.org/abs/2302.06590">https://arxiv.org/abs/2302.06590</ext-link></element-citation></ref>
<ref id="R26"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Rafner</surname><given-names>J.</given-names></name><name><surname>Dellermann</surname><given-names>D.</given-names></name><name><surname>Hjorth</surname><given-names>A.</given-names></name><name><surname>Veraszto</surname><given-names>D.</given-names></name><name><surname>Kampf</surname><given-names>C.</given-names></name><name><surname>MacKay</surname><given-names>W.</given-names></name><name><surname>Sherson</surname><given-names>J.</given-names></name></person-group><year>2022</year><article-title>Deskilling, upskilling, and reskilling: A case for hybrid intelligence</article-title><source>Morals &#x0026; Machines</source><volume>1</volume><issue>2</issue><fpage>24</fpage><lpage>39</lpage></element-citation></ref>
<ref id="R27"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Randazzo</surname><given-names>S.</given-names></name><name><surname>Lifshitz-Assaf</surname><given-names>H.</given-names></name><name><surname>Kellogg</surname><given-names>K.</given-names></name><name><surname>Dell&#x2019;Acqua</surname><given-names>F.</given-names></name><name><surname>Mollick</surname><given-names>E. R.</given-names></name><name><surname>Lakhani</surname><given-names>K. R.</given-names></name></person-group><year>2024</year><article-title>Cyborgs, centaurs and self automators: Human-genAI fused, directed and abdicated knowledge co-creation processes and their implications for skilling</article-title><ext-link ext-link-type="uri" xlink:href="https://ssrn.com/abstract=4921696">https://ssrn.com/abstract=4921696</ext-link></element-citation></ref>
<ref id="R28"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Rinta-Kahila</surname><given-names>T.</given-names></name><name><surname>Penttinen</surname><given-names>E.</given-names></name><name><surname>Salovaara</surname><given-names>A.</given-names></name><name><surname>Soliman</surname><given-names>W.</given-names></name><name><surname>Ruissalo</surname><given-names>J.</given-names></name></person-group><year>2023</year><article-title>The vicious circles of skill erosion: A case study of cognitive automation</article-title><source>Journal of the Association for Information Systems</source><volume>24</volume><issue>5</issue><fpage>1378</fpage><lpage>1412</lpage></element-citation></ref>
<ref id="R29"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Schemmer</surname><given-names>M.</given-names></name><name><surname>K&#x00FC;hl</surname><given-names>N.</given-names></name><name><surname>Satzger</surname><given-names>G.</given-names></name></person-group><year>2022</year><article-title>Intelligent decision assistance versus automated decision-making: Enhancing knowledge workers through explainable artificial intelligence</article-title><source>Proceedings of the 55th Hawai&#x2019;i International Conference on System Sciences. Virtual</source><ext-link ext-link-type="uri" xlink:href="https://hdl.handle.net/10125/79517">https://hdl.handle.net/10125/79517</ext-link></element-citation></ref>
<ref id="R30"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Sison</surname><given-names>A. J. G.</given-names></name><name><surname>Daza</surname><given-names>M. T.</given-names></name><name><surname>Gozalo-Brizuela</surname><given-names>R.</given-names></name><name><surname>Garrido-Merch&#x00E1;n</surname><given-names>E. C.</given-names></name></person-group><year>2023</year><article-title>ChatGPT: More than a &#x201C;weapon of mass deception&#x201D;: Ethical challenges and responses from the human-centered artificial intelligence (HCAI) perspective</article-title><source>International Journal of Human&#x2013;Computer Interaction</source><fpage>1</fpage><lpage>20</lpage></element-citation></ref>
<ref id="R31"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Sofia</surname><given-names>M.</given-names></name><name><surname>Fraboni</surname><given-names>F.</given-names></name><name><surname>De Angelis</surname><given-names>M.</given-names></name><name><surname>Puzzo</surname><given-names>G.</given-names></name><name><surname>Giusino</surname><given-names>D.</given-names></name><name><surname>Pietrantoni</surname><given-names>L.</given-names></name><etal/></person-group><year>2023</year><article-title>The impact of artificial intelligence on workers&#x2019; skills: Upskilling and reskilling in organisations</article-title><source>Informing Science: The International Journal of an Emerging Trans-discipline</source><volume>26</volume><fpage>39</fpage><lpage>68</lpage></element-citation></ref>
<ref id="R32"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Wahlstr&#x00F6;m</surname><given-names>M.</given-names></name><name><surname>Tammentie</surname><given-names>B.</given-names></name><name><surname>Salonen</surname><given-names>T.-T.</given-names></name><name><surname>Karvonen</surname><given-names>A.</given-names></name></person-group><year>2024</year><article-title>AI and the transformation of industrial work: Hybrid intelligence vs double-black box effect</article-title><source>Applied Ergonomics</source><volume>118</volume><fpage>104271</fpage></element-citation></ref>
<ref id="R33"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Wang</surname><given-names>W.</given-names></name><name><surname>Gao</surname><given-names>G.</given-names></name><name><surname>Agarwal</surname><given-names>R.</given-names></name></person-group> <comment>(In press)</comment><article-title>Friend or foe? Teaming between artificial intelligence and workers with variation in experience</article-title><source>Management Science</source><ext-link ext-link-type="uri" xlink:href="https://dx.doi.org/10.1287/mnsc.2021.00588">https://dx.doi.org/10.1287/mnsc.2021.00588</ext-link></element-citation></ref>
<ref id="R34"><element-citation publication-type="book"><person-group person-group-type="author"><name><surname>Whisler</surname><given-names>T. L.</given-names></name></person-group><year>1970</year><source>Information Technology and Organizational Change</source><publisher-loc>Belmont, CA</publisher-loc><publisher-name>Wadsworth</publisher-name></element-citation></ref>
<ref id="R35"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Xue</surname><given-names>M.</given-names></name><name><surname>Cao</surname><given-names>X.</given-names></name><name><surname>Feng</surname><given-names>X.</given-names></name><name><surname>Gu</surname><given-names>B.</given-names></name><name><surname>Zhang</surname><given-names>Y.</given-names></name></person-group><year>2022</year><article-title>Is college education less necessary with AI? Evidence from firm-level labor structure changes</article-title><source>Journal of Management Information Systems</source><volume>39</volume><issue>3</issue><fpage>865</fpage><lpage>905</lpage></element-citation></ref>
<ref id="R36"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Yegge</surname><given-names>S.</given-names></name></person-group><year>2024</year><article-title>The death of the junior developer</article-title><ext-link ext-link-type="uri" xlink:href="https://sourcegraph.com/blog/the-death-of-the-junior-developer">https://sourcegraph.com/blog/the-death-of-the-junior-developer</ext-link></element-citation></ref>
<ref id="R37"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Zamfirescu-Pereira</surname><given-names>J.</given-names></name><name><surname>Wong</surname><given-names>R. Y.</given-names></name><name><surname>Hartmann</surname><given-names>B.</given-names></name><name><surname>Yang</surname><given-names>Q.</given-names></name></person-group><year>2023</year><article-title>Why Johnny can&#x2019;t prompt: How non-AI experts try (and fail) to design LLM prompts</article-title><source>Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems</source><ext-link ext-link-type="uri" xlink:href="https://dx.doi.org/10.1145/3544548.3581388">https://dx.doi.org/10.1145/3544548.3581388</ext-link></element-citation></ref>
<ref id="R38"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Zetka Jr</surname><given-names>J. R.</given-names></name></person-group><year>1991</year><article-title>Automated technologies, institutional environments, and skilled labor processes: Toward an institutional theory of automation outcomes</article-title><source>Sociological Quarterly</source><volume>32</volume><issue>4</issue><fpage>557</fpage><lpage>574</lpage></element-citation></ref>
<ref id="R39"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Zhang</surname><given-names>W.</given-names></name><name><surname>Lai</surname><given-names>K.-H.</given-names></name><name><surname>Gong</surname><given-names>Q.</given-names></name></person-group><year>2024</year><article-title>The future of the labor force: Higher cognition and more skills</article-title><source>Humanities and Social Sciences Communications</source><volume>11</volume><issue>1</issue><fpage>1</fpage><lpage>9</lpage></element-citation></ref>
<ref id="R40"><element-citation publication-type="book"><person-group person-group-type="author"><name><surname>Zuboff</surname><given-names>S.</given-names></name></person-group><year>1988</year><source>In the Age of the Smart Machine: The Future of Work and Power</source><publisher-name>Basic Books, Inc</publisher-name></element-citation></ref>
</ref-list>
</back>
</article>