<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.0 20120330//EN" "http://jats.nlm.nih.gov/publishing/1.0/JATS-journalpublishing1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:mml="http://www.w3.org/1998/Math/MathML" article-type="research-article" xml:lang="en">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">IR</journal-id>
<journal-title-group>
<journal-title>Information Research</journal-title>
</journal-title-group>
<issn pub-type="epub">1368-1613</issn>
<publisher>
<publisher-name>University of Bor&#x00E5;s</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">ir30iConf47548</article-id>
<article-id pub-id-type="doi">10.47989/ir30iConf47548</article-id>
<article-categories>
<subj-group xml:lang="en">
<subject>Research article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Discourses of fear around AI and their implications for library and information science</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author"><name><surname>Appedu</surname><given-names>Sarah</given-names></name>
<xref ref-type="aff" rid="aff0001"/></contrib>
<contrib contrib-type="author"><name><surname>Qin</surname><given-names>Yigang</given-names></name>
<xref ref-type="aff" rid="aff0001"/></contrib>
<aff id="aff0001"><bold>Sarah Appedu</bold> is a third-year Doctoral student in Information Science and Technology at the School of Information Studies, Syracuse University. She received her master&#x2019;s in library and information science from the University of Illinois, Urbana-Champaign. Her research interests include feminist and decolonial science and technology studies and critical library, information, &#x0026; data studies. She can be contacted at <email xlink:href="spappedu@syr.edu">spappedu@syr.edu</email>.</aff>
<aff id="aff0002"><bold>Yigang Qin</bold> is a second-year Doctoral student in Information Science and Technology at the School of Information Studies, Syracuse University. He earned his B.S. in Computer Science at City University of Hong Kong. His research falls into the intersection of human-computer interaction (HCI) and science and technology studies (STS), specifically on issues of labor, gender, and political economy of technologies. He can be contacted at <email xlink:href="yqin27@syr.edu">yqin27@syr.edu</email>.</aff>
</contrib-group>
<pub-date pub-type="epub"><day>06</day><month>05</month><year>2025</year></pub-date>
<pub-date pub-type="collection"><year>2025</year></pub-date>
<volume>30</volume>
<issue>i</issue>
<fpage>180</fpage>
<lpage>188</lpage>
<permissions>
<copyright-year>2025</copyright-year>
<copyright-holder>&#x00A9; 2025 The Author(s).</copyright-holder>
<license license-type="open-access" xlink:href="https://creativecommons.org/licenses/by-nc/4.0/">
<license-p>This is an Open Access article distributed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International License (<ext-link ext-link-type="uri" xlink:href="http://creativecommons.org/licenses/by-nc/4.0/">http://creativecommons.org/licenses/by-nc/4.0/</ext-link>), permitting all non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.</license-p>
</license>
</permissions>
<abstract xml:lang="en">
<title>Abstract</title>
<p><bold>Introduction.</bold> Since its inception, the seemingly unlimited potential of artificial intelligence (AI) to alter human existence has evoked feelings of fear and amazement. Today, all sectors of industry, academia, and society are anticipating the potential changes new AI technologies are forecasted to bring and working to mitigate their harms. In this climate, there is a clear need to centre the complex interactions between discourse, power, and individual/institutional actors within sociotechnical systems and their material consequences.</p>
<p><bold>Theoretical framing.</bold> While scholars have previously made connections between discourses of fear and library and information science (LIS), there has not yet been an attempt to understand how discourses of fear may currently be shaping the field&#x2019;s response to AI. In this paper, we argue that focusing our critical gaze on the discourses of fear shaping the material interactions between LIS, technological artifacts, industry, and society better positions us to intervene in the predicted trajectory of AI innovation.</p>
<p><bold>Conclusion.</bold> We posit that cultivating discourses of refusal &#x2013; which are committed to the belief that more just worlds must be possible &#x2013; requires both individual and collective consideration of how fear has shaped and continues to shape our own responses to new technologies.</p>
</abstract>
</article-meta>
</front>
<body>
<sec id="sec1">
<title>Introduction</title>
<p>Since its inception in the minds of scientists, industrialists, and sci-fi writers, the seemingly unlimited potential of artificial intelligence (AI) to alter human existence has evoked feelings of fear and amazement (<xref rid="R16" ref-type="bibr">Cave &#x0026; Dihal, 2019</xref>). Today, all sectors of industry, academia, and society are anticipating the potential changes new AI technologies are forecasted to bring and working to mitigate their predicted and current harms. In this climate, there is a clear need to move past sensationalized narratives around the utopian and dystopian possibilities of AI <italic>&#x2018;monsters&#x2019;</italic> and centre the complex interactions between discourse, power, and individual/institutional actors within sociotechnical systems and their material consequences for human, environmental, and technological agency (<xref rid="R8" ref-type="bibr">Barad, 2007</xref>; <xref rid="R29" ref-type="bibr">Haraway, 2002</xref>). Fear plays an important role in these interactions, where it may be simultaneously triggered by personal experiences with new technologies that push the boundaries of taken-for-granted binaries like human/machine and by the discourses of fear entangled with these artifacts. While fear has previously been discussed within the context of new technologies (i.e., <italic>&#x2018;technophobia&#x2019;</italic>; <xref rid="R14" ref-type="bibr">Brosnan, 1998</xref>), it is imperative that these fears are situated within the sociotechnical arrangements that incite these feelings and the actors who benefit from them. 
In our view, dystopian and utopian narratives around AI can be summarized as discourses of fear (<xref rid="R4" ref-type="bibr">Altheide &#x0026; Michalowski, 1999</xref>), which aim to shape the social, political, and economic arrangements that support the continued dominance of <italic>&#x2018;technological elites&#x2019;</italic> (Noble &#x0026; <xref rid="R22" ref-type="bibr">Roberts, 2019</xref>) and the long-standing systems of power that sustain them (<xref rid="R9" ref-type="bibr">Barbrook &#x0026; Cameron, 1996</xref>; <xref rid="R12" ref-type="bibr">Benjamin, 2022</xref>).</p>
<p>While scholars have previously made connections between discourses of fear and library and information science (LIS) (Radford &#x0026; <xref rid="R44" ref-type="bibr">Radford, 1997</xref>, 2001), there has not yet been an attempt to understand how discourses of fear may currently be shaping the field&#x2019;s response to AI. Indeed, in its growing emphasis on data science-based research and curricula (<xref rid="R52" ref-type="bibr">Wang, 2018</xref>), LIS is inherently implicated not only in present discourse around AI, where students and faculty assimilate into and resist cultures of innovation (<xref rid="R31" ref-type="bibr">Hilbert, 2018</xref>), but also in its history, serving as part of the epistemological and ontological foundation on which AI was designed (<xref rid="R36" ref-type="bibr">Maruyama, 2021</xref>). Therefore, our framing of concerns and aspirations around AI as discourses of fear attunes us toward classification and control as central mechanisms that reinforce the images of monstrousness on which these discourses rely.</p>
<p>In this paper, we argue that focusing our critical gaze on the discourses of fear shaping the material interactions between LIS, technological artifacts, industry, and society (among others, including the military and non-human environment) better positions us to intervene in the predicted trajectory of AI innovation by moving past sensationalized narratives and centring the agency of individuals and institutions as responsible for technological change and its consequences in every regard. The literature we utilize is not meant to be exhaustive and serves to ground our theorizing of discourses of fear around AI and LIS in the scholarship we have encountered thus far as researchers and practitioners in computer science, philosophy, science and technology studies, and LIS. We also do not mean to dismiss fear as a natural and potentially reasonable response to new technologies (<xref rid="R37" ref-type="bibr">McClure, 2018</xref>), a fear that is evoked not only by fear-based discourses but from the lived experiences of harm that result from them and their predecessors (<xref rid="R10" ref-type="bibr">Bender et al., 2021</xref>). Instead, we hope to draw more acute attention to how LIS scholars and professionals shape and are shaped by discourses of fear and inspire our communities to take agency over our role in these arrangements and actively participate in discourses of resistance and refusal.</p>
</sec>
<sec id="sec2">
<title>Expanding prior conceptualizations of discourse(s) of fear</title>
<p>Fear has historically been wielded by those in power as an influential motivator for conforming to dominant ideologies (<xref rid="R3" ref-type="bibr">Altheide, 2017</xref>; <xref rid="R22" ref-type="bibr">Ericson et al., 1993</xref>). The term <italic>&#x2018;discourse of fear&#x2019;</italic> originates in communication scholarship, where it articulates how the media intentionally perpetuates fear-based narratives to undermine readers&#x2019; sense of agency and reinforce systems of social control that claim to offer protection from the unknown. Altheide and Michalowski (<xref rid="R4" ref-type="bibr">1999</xref>) conceptualize a discourse of fear as <italic>&#x2018;the pervasive communication, symbolic awareness, and expectation that danger and risk are a central feature of the environment&#x2019;</italic> (p. 476). Fear has already been discussed as a material-discursive phenomenon in the context of new technologies (<xref rid="R26" ref-type="bibr">Gleason, 2014</xref>; <xref rid="R27" ref-type="bibr">Goertzel, 2015</xref>; <xref rid="R34" ref-type="bibr">Kothare-Arora, 2020</xref>; <xref rid="R37" ref-type="bibr">McClure, 2018</xref>). Jasanoff &#x0026; Kim (2009) theorize the concept of <italic>&#x2018;sociotechnical imaginaries&#x2019;</italic> to analyse how technological development is intertwined with national interests, including the leveraging and containing of what we would call discourses of fear to influence public perception of potentially dangerous technologies, like nuclear weapons. Grenham (<xref rid="R28" ref-type="bibr">2020</xref>) discusses the public&#x2019;s <italic>&#x2018;fear and fascination&#x2019;</italic> (p. 8) after computers entered the mainstream, sparked in part by science fiction stories and in part by their perception that computers replicated abilities previously reserved for humans. 
The technology industry worked with the media to counteract these fears and convince people to bring computers into their homes and workplaces by portraying them as subservient to the human operator. Discourses of fear have also been invoked to theorize the material implications of libraries and knowledge organization systems, though this is less common. <xref rid="R43" ref-type="bibr">Radford &#x0026; Radford (2001</xref>) conceptualize the discourse of fear from a Foucauldian perspective to examine portrayals of librarians and libraries as enforcers of social control. Finally, Andersen and Skouvig (2006) use a similar framework to assess the discursive power of knowledge organization systems and better account for how they shape political and social realities.</p>
<p>These perspectives represent valuable contributions to our theorization of discourses of fear surrounding AI. However, they do not always account for the multi-layered and entangled nature of these discourses (<xref rid="R8" ref-type="bibr">Barad, 2007</xref>), ignoring their co-constitutive nature across academic and professional sectors, nor do they always explicitly acknowledge the actors that manipulate these discourses to evade accountability and profit off their consequences. Therefore, we expand prior conceptualizations to theorize this phenomenon as multiple, dynamic discourses of fear, which simultaneously attunes us toward the intersecting dimensions of technological artifacts, the rhetoric through which they are presented, the actors that create and adopt them, and the social, political, and economic factors that shape their development. Furthermore, theorizing discourses of fear requires acute attention to and accountability for their harms and consequences, both tangible and predicted. Therefore, to contribute meaningfully and ethically to critical conversations around AI and its impacts, we must understand how discourses of fear may be resisted and refused within our own research activities in LIS, particularly regarding the quickly growing conversations about the influence of new AI technologies and the possible worlds they might create or prevent.</p>
<p>Next, we describe how narratives around AI function as discourses of fear. While a full investigation of these discourses within LIS is beyond the scope of this paper, we suggest implications of discourses of fear around AI on the field and directions for future empirical research.</p>
</sec>
<sec id="sec3">
<title>Theorising discourses of fear around AI</title>
<p>A sentient, all-knowing, and often hostile AI entity has been the subject of sci-fi novels and films for decades and is increasingly becoming a common way of discussing the AI technologies now being released for public use (<xref rid="R17" ref-type="bibr">Compagna &#x0026; Steinhart, 2020</xref>; <xref rid="R20" ref-type="bibr">Dishon, 2024</xref>). For many, the AI <italic>&#x2018;monster&#x2019;</italic> is turning from a product of pure imagination into a very real and concerning possibility. For example, The New York Times published a story about a reporter&#x2019;s <italic>&#x2018;scary&#x2019;</italic> interaction with Bing&#x2019;s then newly released AI chatbot. While their conversation started out harmlessly enough, the author ultimately felt disturbed by the encounter after the bot told him it loved him and tried to convince him to leave his wife (<xref rid="R47" ref-type="bibr">Roose, 2023</xref>).</p>
<p>However, these fears do not just stem from sci-fi stories and personal encounters with unfamiliar and aggressive AI technologies. Industry leaders are responsible for participating in this monstrous rhetoric, like Elon Musk, who first warned against summoning an AI <italic>&#x2018;demon&#x2019;</italic> (<xref rid="R38" ref-type="bibr">McFarland, 2016</xref>) and then contradictorily invested billions of dollars into its creation (NBC News, 2024). These leaders reinforce feelings of fear through dystopic images of the world if AI were to become sentient, including the risk of human extinction (<xref rid="R13" ref-type="bibr">Bostrom, 2002</xref>; <xref rid="R42" ref-type="bibr">Ord, 2020</xref>), while paradoxically promoting dystopian views of the world if technologists let these risks stop them from advancing AI, including economic collapse, social unrest, and climate disaster &#x2013; as if most people globally are not already experiencing these near-apocalyptic conditions (<xref rid="R2" ref-type="bibr">Ali, 2019</xref>). Concentrating the public&#x2019;s fear on the AI <italic>&#x2018;monster&#x2019;</italic> is useful for generating support for their continued control over this trajectory, as supposedly the only ones with the knowledge and money to save humanity from the <italic>&#x2018;existential risks&#x2019;</italic> associated with AI development. Discourses of fear also conveniently distract from the material harms being perpetuated by the leaders of this industry and the social, political, and economic systems of inequality that sustain their efforts.</p>
<p>Critical scholars have revealed how fear is deliberately perpetuated by discourses that catalyse AI technologies&#x2019; development and implementation (<xref rid="R15" ref-type="bibr">Cave et al., 2019</xref>; <xref rid="R16" ref-type="bibr">Cave &#x0026; Dihal, 2019</xref>; <xref rid="R39" ref-type="bibr">Nguyen &#x0026; Hekman, 2022</xref>; <xref rid="R51" ref-type="bibr">Wajcman, 2017</xref>). Several have recognized the legacies of racism, capitalism, and colonialism that have shaped AI innovation, and the leveraging of discourses of containment, classification, and control that justify these pursuits (<xref rid="R2" ref-type="bibr">Ali, 2019</xref>; <xref rid="R11" ref-type="bibr">Benjamin, 2019</xref>; <xref rid="R24" ref-type="bibr">Eubanks, 2018</xref>; <xref rid="R50" ref-type="bibr">Tacheva &#x0026; Ramasubramanian, 2023</xref>). In their landmark paper, Torres &#x0026; Gebru (2024) <italic>&#x2018;trace the goal of building AGI to the Anglo-American eugenics movement&#x2019;</italic> (p. 2). While they do not name fear as an element of the TESCREAL bundle, its apocalyptic predictions, where AI is simultaneously the harbinger of Armageddon and the sole saviour of humanity from all imaginable apocalyptic futures, implicitly evoke fear to convince the public that their lives are in technologists&#x2019; hands. A supposedly alternative conception of AI that may appear to remedy this discourse is techno-optimism, where technological advancement is seen as <italic>&#x2018;...the glory of human ambition and achievement, the spearhead of progress, and the realization of our potential&#x2019;</italic> (<xref rid="R6" ref-type="bibr">Andreessen, 2023</xref>). However, the logic underlying these sentiments maintains capitalism, colonialism, and heteropatriarchy as the necessary mechanisms for human prosperity and similarly relies on long-standing discourses of fear to justify this claim (<xref rid="R1" ref-type="bibr">Alexander &#x0026; Rutherford, 2019</xref>). 
These critical perspectives can be understood as revealing both the deep historical roots AI innovation shares with other social and political discourses of fear that serve the ends of containment, classification, and control, as well as the present-day consequences that may be exacerbated if these discourses are not interrogated within our own research practices in LIS.</p>
</sec>
<sec id="sec4">
<title>Implications for library and information science</title>
<p>Given the dire consequences associated with AI-related discourses of fear, we must take a critical look at how LIS, as a material and discursive field of scholarship and practice, may be implicated in these discourses if we are to resist them successfully. As mentioned previously, the fields of LIS are deeply entangled with the technology industry and its <italic>&#x2018;innovations&#x2019;</italic> through our professional practices, research epistemologies, curriculum, funding sources, and iSchool-to-industry professional pipeline (<xref rid="R19" ref-type="bibr">Detlefsen, 2007</xref>; <xref rid="R31" ref-type="bibr">Hilbert, 2018</xref>; <xref rid="R36" ref-type="bibr">Maruyama, 2021</xref>). Some prior examples examining discourses of fear within librarianship and knowledge organization exist in LIS literature, yet little attention has been paid to how discourses of fear may be presently shaping and shaped by our professional and personal responses to AI. Notably, <xref rid="R43" ref-type="bibr">Radford &#x0026; Radford (2001</xref>) described libraries as devoted to discursive domination, particularly through practices of classification and surveillance that serve to control patrons&#x2019; behaviours. They assert that <italic>&#x2018;it is not the contents of the library that inspire awe; it is the practices that occur within the institution&#x2019;</italic> (p. 303). In today&#x2019;s climate, we must consider how libraries&#x2019; use of and response to AI may echo this history of discursive control and order.</p>
<p>One way to conceive of librarians&#x2019; role in the AI era is as responsible for preserving the moral order of a democracy threatened by increased mis- and disinformation by teaching their communities digital and information literacy skills, including those needed to critically use AI and navigate its impacts (<xref rid="R18" ref-type="bibr">Cooke, 2018</xref>; <xref rid="R32" ref-type="bibr">Jaeger &#x0026; Taylor, 2021</xref>; <xref rid="R46" ref-type="bibr">Ridley &#x0026; Pawlick-Potts, 2021</xref>). While this mission aligns with librarians&#x2019; professional values, we should be wary of how discourses of fear may shape our narratives about AI and render patrons ultimately powerless to confront AI without the intervention of the librarian. Ettarh (<xref rid="R23" ref-type="bibr">2018</xref>) coined the term <italic>&#x2018;vocational awe&#x2019;</italic> to summarize the ways in which librarians historically have held saviour roles in their communities that were defined by the racialized and gendered expectations of upper-class white women preserving the social order of racism and patriarchy. These roles can be reprised in discourses around AI that prioritize the ways in which information professionals may <italic>&#x2018;save&#x2019;</italic> society from the harms of AI without interrogating how we may be complicit in them. Instead, librarians should consider how they might resist and refuse AI adoption in their local contexts by recognizing their own agency within complex sociotechnical systems (Appedu &#x0026; Tacheva, in press).</p>
<p>Beyond librarianship, knowledge organization and information management scholarship and practice must be interrogated for how they may implicitly converse with the same systems of power that inform discourses of fear around AI. While classification may be vital to information organization, access, and retrieval (<xref rid="R35" ref-type="bibr">Kwasnik, 1999</xref>), its discursive and material association with fear and control cannot be overlooked (<xref rid="R50" ref-type="bibr">Tacheva &#x0026; Ramasubramanian, 2023</xref>; Torres &#x0026; Gebru, 2024). Andersen &#x0026; Skouvig (2006) previously acknowledged that the role of information classification in reinforcing social and political structures has been overlooked among information scientists, thereby allowing them to evade accountability for their role in preserving discourses of fear. They advocate for a <italic>&#x2018;change in consciousness&#x2019;</italic> (p. 318), imploring scholars in information management to attend to the social and political order of society as the fundamental knowledge organization system and take ownership of how this informs our ontologies and the technologies that render them material. We agree with Hauser&#x2019;s (<xref rid="R30" ref-type="bibr">2023</xref>) portrayal of sociotechnical information systems, or <italic>&#x2018;systems of record&#x2019;</italic>, <italic>&#x2018;as a critical site for projects that hold justice to be a central concern within information studies&#x2019;</italic> (p. 2) and argue that conceptualizing these systems as part of larger discourses of fear allows us to direct our attention towards the interlocking and mutually constitutive nature of information systems, technological designers, knowledge managers, and society. 
Future research must investigate how else discourses of fear may manifest in and contribute to our work across the information sciences utilizing theoretical frameworks that acknowledge not only the entangled agencies of humans, technologies, and social and natural structures, but the ethical obligations they make visible (<xref rid="R8" ref-type="bibr">Barad, 2007</xref>; <xref rid="R29" ref-type="bibr">Haraway, 2002</xref>; <xref rid="R45" ref-type="bibr">Ricaurte, 2023</xref>). Such work is necessary for not just living in, but actively and ethically shaping our algorithmic, yet always material, world.</p>
</sec>
<sec id="sec5">
<title>Conclusion</title>
<p>This paper contributes to discussions about the implications of AI on LIS research and practice through critically conceptualizing discourses of fear as a material-discursive strategy that aims to enact social control through various sociotechnical systems. Among others, <xref rid="R43" ref-type="bibr">Radford &#x0026; Radford (2001</xref>) have recognized that these discourses reveal a more potent fear held by those who curate them &#x2013; that the very groups over which technological elites desire control will recognize their capacity to resist. As they argued two decades ago, <italic>&#x2018;The fear [of those in power] is of the potential to transgress dominant discursive forms and of making all things and all worlds possible&#x2019;</italic> (p. 307). If we take this to be true, the power of individuals working in collective solidarity to shape technological futures becomes clear. What would it look like to accept Benjamin&#x2019;s (<xref rid="R11" ref-type="bibr">2019</xref>) <italic>&#x2018;invitation to refuse the illusion of inevitability in which technologies of race come wrapped and to &#x2018;hotwire&#x2019; more habitable forms of social organization in the process&#x2019;</italic> (p. 23)? These questions cause us to reckon with the reality that, while it has certainly led to new ways of building community (Kennedy, 2018), technological advancement has always been used to maintain the power of the elite and disempower those on whose subjugation they rely. Furthermore, they allow us to embody the many other possible responses to new technologies beyond fear, including resistance and refusal. 
We posit that cultivating discourses of refusal &#x2013; which are committed to the belief that more just worlds must be possible (<xref rid="R41" ref-type="bibr">Nxumalo, 2021</xref>; <xref rid="R48" ref-type="bibr">Simpson, 2007</xref>, <xref rid="R49" ref-type="bibr">2014</xref>) &#x2013; requires both individual and collective consideration of how fear has shaped and continues to shape our own responses to new technologies.</p>
<p>We have already seen how these discourses manifest in our field through our studies, professional activities, and personal conversations. Future directions for this research include systematic examination of the impact of discourses of fear within LIS and the ways in which information professionals across sectors are already participating in this resistance. This research will allow us to further develop our theorization of discourses of fear around AI and its implications for LIS, as well as contribute to the broader global movement of resistance against oppressive systems within our professional work. Methodologically, this paradigmatic shift requires a diffractive approach that moves our gaze beyond individual actors and structures and sees technological change as a complex apparatus for which we are all accountable (<xref rid="R8" ref-type="bibr">Barad, 2007</xref>). While computers may be incapable of <italic>&#x2018;thinking&#x2019;</italic> beyond classification, people working in solidarity have the potential to imagine new ways of orienting ourselves to the world and to one another that do not overlook our deeply entangled futures, seeing technological innovation as inseparable from our quest for liberation from oppressive systems (<xref rid="R26" ref-type="bibr">Gleason, 2014</xref>; <xref rid="R29" ref-type="bibr">Haraway, 2002</xref>).</p>
</sec>
</body>
<back>
<ack>
<title>Acknowledgements</title>
<p>We would like to thank our colleagues at the Critical AI Research and Education Lab at Syracuse University (Dr. Jasmina Tacheva, Mirakle Wright, and Jeongbae Choi) for their feedback on early drafts of this paper. We also thank the reviewers for their detailed insights.</p>
</ack>
<ref-list>
<title>References</title>
<ref id="R1"><element-citation publication-type="book"><person-group person-group-type="author"><name><surname>Alexander</surname><given-names>S.</given-names></name><name><surname>Rutherford</surname><given-names>J.</given-names></name></person-group> <year>(2019)</year> <article-title>A critique of techno-optimism: Efficiency without sufficiency is lost</article-title><source>Routledge Handbook of Global Sustainability Governance</source><publisher-name>Routledge</publisher-name></element-citation></ref>
<ref id="R2"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Ali</surname><given-names>S. M.</given-names></name></person-group> <year>(2019)</year> <article-title>&#x2018;White Crisis&#x2019; and/as &#x2018;Existential Risk,&#x2019; or the Entangled Apocalypticism of Artificial Intelligence</article-title><source>Zygon&#x00AE;</source><volume>54</volume><issue>1</issue><fpage>207</fpage><lpage>224</lpage><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1111/zygo.12498">https://doi.org/10.1111/zygo.12498</ext-link></element-citation></ref>
<ref id="R3"><element-citation publication-type="book"><person-group person-group-type="author"><name><surname>Altheide</surname><given-names>D. L.</given-names></name></person-group> <year>(2017)</year> <source>Creating Fear: News and the Construction of Crisis</source><publisher-name>Routledge</publisher-name><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.4324/9780203794494">https://doi.org/10.4324/9780203794494</ext-link></element-citation></ref>
<ref id="R4"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Altheide</surname><given-names>D. L.</given-names></name><name><surname>Michalowski</surname><given-names>R. S.</given-names></name></person-group> <year>(1999)</year> <article-title>Fear in the News: A Discourse of Control</article-title><source>The Sociological Quarterly</source><volume>40</volume><issue>3</issue><fpage>475</fpage><lpage>503</lpage></element-citation></ref>
<ref id="R5"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Andersen</surname><given-names>J.</given-names></name><name><surname>Skouvig</surname><given-names>L.</given-names></name></person-group> <year>(2006)</year> <article-title>Knowledge Organization: A Sociohistorical Analysis and Critique</article-title><source>The Library Quarterly: Information, Community, Policy</source><volume>76</volume><issue>3</issue><fpage>300</fpage><lpage>322</lpage><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1086/511139">https://doi.org/10.1086/511139</ext-link></element-citation></ref>
<ref id="R6"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Andreessen</surname><given-names>M.</given-names></name></person-group> <year>(2023)</year> <comment>October 16</comment><article-title>The Techno-Optimist Manifesto</article-title><source>Andreessen Horowitz</source><ext-link ext-link-type="uri" xlink:href="https://a16z.com/the-techno-optimist-manifesto/">https://a16z.com/the-techno-optimist-manifesto/</ext-link></element-citation></ref>
<ref id="R7"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Appedu</surname><given-names>S.</given-names></name><name><surname>Tacheva</surname><given-names>J.</given-names></name></person-group><comment>(In press)</comment><article-title>Transcending Binaries of Agency through Librarians&#x2019; Discursive Representations of AI</article-title><source>Library Trends</source><volume>73</volume><issue>3</issue></element-citation></ref>
<ref id="R8"><element-citation publication-type="book"><person-group person-group-type="author"><name><surname>Barad</surname><given-names>K.</given-names></name></person-group> <year>(2007)</year> <source>Meeting the Universe Halfway: Quantum Physics and the Entanglement of Matter and Meaning</source><publisher-name>Duke University Press</publisher-name></element-citation></ref>
<ref id="R9"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Barbrook</surname><given-names>R.</given-names></name><name><surname>Cameron</surname><given-names>A.</given-names></name></person-group> <year>(1996)</year> <article-title>The Californian ideology</article-title><source>Science as Culture</source><volume>6</volume><issue>1</issue><fpage>44</fpage><lpage>72</lpage><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1080/09505439609526455">https://doi.org/10.1080/09505439609526455</ext-link></element-citation></ref>
<ref id="R10"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Bender</surname><given-names>E. M.</given-names></name><name><surname>Gebru</surname><given-names>T.</given-names></name><name><surname>McMillan-Major</surname><given-names>A.</given-names></name><name><surname>Shmitchell</surname><given-names>S.</given-names></name></person-group> <year>(2021)</year> <article-title>On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?</article-title><source>Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency</source><fpage>610</fpage><lpage>623</lpage><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1145/3442188.3445922">https://doi.org/10.1145/3442188.3445922</ext-link></element-citation></ref>
<ref id="R11"><element-citation publication-type="book"><person-group person-group-type="author"><name><surname>Benjamin</surname><given-names>R.</given-names></name></person-group> <year>(2019)</year> <source>Race After Technology: Abolitionist Tools for the New Jim Code</source><publisher-name>Polity</publisher-name></element-citation></ref>
<ref id="R12"><element-citation publication-type="book"><person-group person-group-type="author"><name><surname>Benjamin</surname><given-names>R.</given-names></name></person-group> <year>(2022)</year> <source>Viral Justice</source><publisher-name>Princeton University Press</publisher-name><ext-link ext-link-type="uri" xlink:href="https://press.princeton.edu/books/hardcover/9780691222882/viral-justice">https://press.princeton.edu/books/hardcover/9780691222882/viral-justice</ext-link></element-citation></ref>
<ref id="R13"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Bostrom</surname><given-names>N.</given-names></name></person-group> <year>(2002)</year> <article-title>Existential risks: Analyzing human extinction scenarios and related hazards</article-title><source>Journal of Evolution and Technology</source><volume>9</volume><ext-link ext-link-type="uri" xlink:href="https://ora.ox.ac.uk/objects/uuid:827452c3-fcba-41b8-86b0-407293e6617c">https://ora.ox.ac.uk/objects/uuid:827452c3-fcba-41b8-86b0-407293e6617c</ext-link></element-citation></ref>
<ref id="R14"><element-citation publication-type="book"><person-group person-group-type="author"><name><surname>Brosnan</surname><given-names>M. J.</given-names></name></person-group> <year>(1998)</year> <source>Technophobia: The Psychological Impact of Information Technology</source><publisher-name>Routledge</publisher-name><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.4324/9780203436707">https://doi.org/10.4324/9780203436707</ext-link></element-citation></ref>
<ref id="R15"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Cave</surname><given-names>S.</given-names></name><name><surname>Coughlan</surname><given-names>K.</given-names></name><name><surname>Dihal</surname><given-names>K.</given-names></name></person-group> <year>(2019)</year> <article-title>&#x201C;Scary Robots&#x201D;: Examining Public Responses to AI</article-title><source>Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society</source><fpage>331</fpage><lpage>337</lpage><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1145/3306618.3314232">https://doi.org/10.1145/3306618.3314232</ext-link></element-citation></ref>
<ref id="R16"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Cave</surname><given-names>S.</given-names></name><name><surname>Dihal</surname><given-names>K.</given-names></name></person-group> <year>(2019)</year> <article-title>Hopes and fears for intelligent machines in fiction and reality</article-title><source>Nature Machine Intelligence</source><volume>1</volume><issue>2</issue><fpage>74</fpage><lpage>78</lpage><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1038/s42256-019-0020-9">https://doi.org/10.1038/s42256-019-0020-9</ext-link></element-citation></ref>
<ref id="R17"><element-citation publication-type="book"><person-group person-group-type="author"><name><surname>Compagna</surname><given-names>D.</given-names></name><name><surname>Steinhart</surname><given-names>S.</given-names></name></person-group> <year>(2020)</year> <source>Monsters, Monstrosities, and the Monstrous in Culture and Society</source><publisher-name>Vernon Press</publisher-name></element-citation></ref>
<ref id="R18"><element-citation publication-type="book"><person-group person-group-type="author"><name><surname>Cooke</surname><given-names>N. A.</given-names></name></person-group> <year>(2018)</year> <source>Fake News and Alternative Facts: Information Literacy in a Post-Truth Era</source><publisher-name>ALA Editions</publisher-name></element-citation></ref>
<ref id="R19"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Detlefsen</surname><given-names>E. G.</given-names></name></person-group> <year>(2007)</year> <article-title>The pipeline problem: Where do we go from here?</article-title><source>Journal of the Medical Library Association</source><volume>95</volume><issue>2</issue><fpage>115</fpage><lpage>116</lpage><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3163/1536-5050.95.2.115">https://doi.org/10.3163/1536-5050.95.2.115</ext-link></element-citation></ref>
<ref id="R20"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Dishon</surname><given-names>G.</given-names></name></person-group> <year>(2024)</year> <article-title>From Monsters to Mazes: Sociotechnical Imaginaries of AI Between Frankenstein and Kafka</article-title><source>Postdigital Science and Education</source><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1007/s42438-024-00482-4">https://doi.org/10.1007/s42438-024-00482-4</ext-link></element-citation></ref>
<ref id="R21"><element-citation publication-type="other"><article-title>Elon Musk&#x2019;s AI start-up now valued at $24 billion after fresh funding</article-title> <year>(2024)</year> <comment>May 27</comment><source>NBC News</source><ext-link ext-link-type="uri" xlink:href="https://www.nbcnews.com/business/business-news/elon-musks-ai-startup-now-valued-24-billion-fresh-funding-rcna154171">https://www.nbcnews.com/business/business-news/elon-musks-ai-startup-now-valued-24-billion-fresh-funding-rcna154171</ext-link></element-citation></ref>
<ref id="R22"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Ericson</surname><given-names>R.</given-names></name><name><surname>Baranek</surname><given-names>P.</given-names></name><name><surname>Chan</surname><given-names>J.</given-names></name></person-group> <year>(1993)</year> <article-title>Negotiating Control: A Study of News Sources</article-title><source>Canadian Journal of Communication</source><volume>18</volume><issue>1</issue><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.22230/cjc.1993v18n1a733">https://doi.org/10.22230/cjc.1993v18n1a733</ext-link></element-citation></ref>
<ref id="R23"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Ettarh</surname><given-names>F.</given-names></name></person-group> <year>(2018)</year> <article-title>Vocational Awe and Librarianship: The Lies We Tell Ourselves</article-title><source>In the Library with the Lead Pipe</source><ext-link ext-link-type="uri" xlink:href="https://www.inthelibrarywiththeleadpipe.org/2018/vocational-awe/">https://www.inthelibrarywiththeleadpipe.org/2018/vocational-awe/</ext-link></element-citation></ref>
<ref id="R24"><element-citation publication-type="book"><person-group person-group-type="author"><name><surname>Eubanks</surname><given-names>V.</given-names></name></person-group> <year>(2018)</year> <source>Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor</source><publisher-name>St. Martin&#x2019;s Press</publisher-name></element-citation></ref>
<ref id="R25"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Gebru</surname><given-names>T.</given-names></name><name><surname>Torres</surname><given-names>&#x00C9;. P.</given-names></name></person-group> <year>(2024)</year> <article-title>The TESCREAL bundle: Eugenics and the promise of utopia through artificial general intelligence</article-title><source>First Monday</source><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.5210/fm.v29i4.13636">https://doi.org/10.5210/fm.v29i4.13636</ext-link></element-citation></ref>
<ref id="R26"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Gleason</surname><given-names>S. C.</given-names></name></person-group> <year>(2014)</year> <article-title>Don&#x2019;t Fear the Cyborg: Toward Embracing Posthuman and Feminist Cyborg Discourses in Teacher Education and Educational Technology Research</article-title><source>Canadian Journal of Science, Mathematics and Technology Education</source><volume>14</volume><issue>2</issue><fpage>120</fpage><lpage>134</lpage><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1080/14926156.2014.903320">https://doi.org/10.1080/14926156.2014.903320</ext-link></element-citation></ref>
<ref id="R27"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Goertzel</surname><given-names>B.</given-names></name></person-group> <year>(2015)</year> <article-title>Superintelligence: Fears, Promises and Potentials: Reflections on Bostrom&#x2019;s Superintelligence, Yudkowsky&#x2019;s from AI to Zombies, and Weaver and Veitas&#x2019;s &#x2018;Open-Ended Intelligence.&#x2019;</article-title><source>Journal of Ethics and Emerging Technologies</source><volume>25</volume><issue>2</issue><fpage>55</fpage><lpage>87</lpage><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.55613/jeet.v25i2.48">https://doi.org/10.55613/jeet.v25i2.48</ext-link></element-citation></ref>
<ref id="R28"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Grenham</surname><given-names>H.</given-names></name></person-group> <year>(2020)</year> <article-title>The mechanical monster and discourses of fear and fascination in the early history of the computer</article-title><source>Humanities and Social Sciences Communications</source><volume>7</volume><issue>1</issue><fpage>1</fpage><lpage>11</lpage><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1057/s41599-020-00650-4">https://doi.org/10.1057/s41599-020-00650-4</ext-link></element-citation></ref>
<ref id="R29"><element-citation publication-type="book"><person-group person-group-type="author"><name><surname>Haraway</surname><given-names>D.</given-names></name></person-group> <year>(2002)</year> <article-title>&#x2018;A Manifesto for Cyborgs: Science, Technology and Socialist Feminism in the 1980s&#x2019; (1985)</article-title><person-group person-group-type="editor"><name><surname>Nicol</surname><given-names>B.</given-names></name></person-group><source>Postmodernism and the Contemporary Novel</source><fpage>396</fpage><lpage>420</lpage><publisher-name>Edinburgh University Press</publisher-name></element-citation></ref>
<ref id="R30"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Hauser</surname><given-names>E.</given-names></name></person-group> <year>(2023)</year> <article-title>Facts in the machine: Systems of record and the performance of sociotechnical truth</article-title><source>Journal of the Association for Information Science and Technology</source><comment>Advance online publication</comment><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1002/asi.24820">https://doi.org/10.1002/asi.24820</ext-link></element-citation></ref>
<ref id="R31"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Hilbert</surname><given-names>M.</given-names></name></person-group> <year>(2018)</year> <article-title>Post-human condition&#x2013;Epistemic disruption: How Information (Science) lost its body to Data-Based Knowledge</article-title><source>The Future of Education in Information Science</source><fpage>86</fpage><lpage>93</lpage></element-citation></ref>
<ref id="R32"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Jaeger</surname><given-names>P. T.</given-names></name><name><surname>Taylor</surname><given-names>N. G.</given-names></name></person-group> <year>(2021)</year> <article-title>Arsenals of Lifelong Information Literacy: Educating Users to Navigate Political and Current Events Information in a World of Ever-Evolving Misinformation</article-title><source>The Library Quarterly</source><volume>91</volume><issue>1</issue><fpage>19</fpage><lpage>31</lpage><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1086/711632">https://doi.org/10.1086/711632</ext-link></element-citation></ref>
<ref id="R33"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Jasanoff</surname><given-names>S.</given-names></name><name><surname>Kim</surname><given-names>S.-H.</given-names></name></person-group> <year>(2009)</year> <article-title>Containing the Atom: Sociotechnical Imaginaries and Nuclear Power in the United States and South Korea</article-title><source>Minerva: A Review of Science, Learning &#x0026; Policy</source><volume>47</volume><issue>2</issue><fpage>119</fpage><lpage>146</lpage><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1007/s11024-009-9124-4">https://doi.org/10.1007/s11024-009-9124-4</ext-link></element-citation></ref>
<ref id="R34"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Kothare-Arora</surname><given-names>M.</given-names></name></person-group> <year>(2020)</year> <source>Our Fear of AI: Exploring Its Creators and Creations in Fiction</source><ext-link ext-link-type="uri" xlink:href="https://hdl.handle.net/2152/84406">https://hdl.handle.net/2152/84406</ext-link></element-citation></ref>
<ref id="R35"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Kwasnik</surname><given-names>B. H.</given-names></name></person-group> <year>(1999)</year> <article-title>The Role of Classification in Knowledge Representation and Discovery</article-title><source>Library Trends</source><volume>48</volume><issue>1</issue><fpage>22</fpage><lpage>47</lpage></element-citation></ref>
<ref id="R36"><element-citation publication-type="book"><person-group person-group-type="author"><name><surname>Maruyama</surname><given-names>Y.</given-names></name></person-group> <year>(2021)</year> <article-title>Post-Truth AI and Big Data Epistemology: From the Genealogy of Artificial Intelligence to the Nature of Data Science as a New Kind of Science</article-title><person-group person-group-type="editor"><name><surname>Abraham</surname><given-names>A.</given-names></name><name><surname>Siarry</surname><given-names>P.</given-names></name><name><surname>Ma</surname><given-names>K.</given-names></name><name><surname>Kaklauskas</surname><given-names>A.</given-names></name></person-group><source>Intelligent Systems Design and Applications</source><fpage>540</fpage><lpage>549</lpage><publisher-name>Springer International Publishing</publisher-name><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1007/978-3-030-49342-4_52">https://doi.org/10.1007/978-3-030-49342-4_52</ext-link></element-citation></ref>
<ref id="R37"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>McClure</surname><given-names>P. K.</given-names></name></person-group> <year>(2018)</year> <article-title>&#x2018;You&#x2019;re Fired,&#x2019; Says the Robot: The Rise of Automation in the Workplace, Technophobes, and Fears of Unemployment</article-title><source>Social Science Computer Review</source><volume>36</volume><issue>2</issue><fpage>139</fpage><lpage>156</lpage><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1177/0894439317698637">https://doi.org/10.1177/0894439317698637</ext-link></element-citation></ref>
<ref id="R38"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>McFarland</surname><given-names>M.</given-names></name></person-group> <year>(2014)</year> <comment>October 24</comment><article-title>Elon Musk: &#x2018;With artificial intelligence we are summoning the demon.&#x2019;</article-title><source>The Washington Post</source><ext-link ext-link-type="uri" xlink:href="https://www.washingtonpost.com/news/innovations/wp/2014/10/24/elon-musk-with-artificial-intelligence-we-are-summoning-the-demon/">https://www.washingtonpost.com/news/innovations/wp/2014/10/24/elon-musk-with-artificial-intelligence-we-are-summoning-the-demon/</ext-link></element-citation></ref>
<ref id="R39"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Nguyen</surname><given-names>D.</given-names></name><name><surname>Hekman</surname><given-names>E.</given-names></name></person-group> <year>(2022)</year> <article-title>The news framing of artificial intelligence: A critical exploration of how media discourses make sense of automation</article-title><source>AI &#x0026; SOCIETY</source><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1007/s00146-022-01511-1">https://doi.org/10.1007/s00146-022-01511-1</ext-link></element-citation></ref>
<ref id="R40"><element-citation publication-type="book"><person-group person-group-type="author"><name><surname>Noble</surname><given-names>S. U.</given-names></name><name><surname>Roberts</surname><given-names>S. T.</given-names></name></person-group> <year>(2019)</year> <article-title>Technological Elites, the Meritocracy, and Postracial Myths in Silicon Valley</article-title><person-group person-group-type="editor"><name><surname>Benjamin</surname><given-names>R.</given-names></name></person-group><source>Captivating Technology: Race, Carceral Technoscience, and Liberatory Imagination in Everyday Life</source><fpage>113</fpage><lpage>130</lpage><publisher-name>Duke University Press</publisher-name><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1515/9781478003250-007">https://doi.org/10.1515/9781478003250-007</ext-link></element-citation></ref>
<ref id="R41"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Nxumalo</surname><given-names>F.</given-names></name></person-group> <year>(2021)</year> <article-title>Disrupting Anti-Blackness in Early Childhood Qualitative Inquiry: Thinking With Black Refusal and Black Futurity</article-title><source>Qualitative Inquiry</source><volume>27</volume><issue>10</issue><fpage>1191</fpage><lpage>1199</lpage><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1177/10778004211021810">https://doi.org/10.1177/10778004211021810</ext-link></element-citation></ref>
<ref id="R42"><element-citation publication-type="book"><person-group person-group-type="author"><name><surname>Ord</surname><given-names>T.</given-names></name></person-group> <year>(2020)</year> <source>The precipice: Existential risk and the future of humanity</source><publisher-name>Hachette Books</publisher-name></element-citation></ref>
<ref id="R43"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Radford</surname><given-names>G. P.</given-names></name><name><surname>Radford</surname><given-names>M. L.</given-names></name></person-group> <year>(2001)</year> <article-title>Libraries, Librarians, and the Discourse of Fear</article-title><source>The Library Quarterly</source><volume>71</volume><issue>3</issue><fpage>299</fpage><lpage>329</lpage><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1086/603283">https://doi.org/10.1086/603283</ext-link></element-citation></ref>
<ref id="R44"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Radford</surname><given-names>M. L.</given-names></name><name><surname>Radford</surname><given-names>G. P.</given-names></name></person-group> <year>(1997)</year> <article-title>Power, Knowledge, and Fear: Feminism, Foucault, and the Stereotype of the Female Librarian</article-title><source>The Library Quarterly</source><volume>67</volume><issue>3</issue><fpage>250</fpage><lpage>266</lpage><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1086/629951">https://doi.org/10.1086/629951</ext-link></element-citation></ref>
<ref id="R45"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Ricaurte</surname><given-names>P.</given-names></name></person-group> <year>(2023)</year> <article-title>AI for/by the majority world: From technologies of dispossession to technologies of radical care</article-title><source>Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society</source><fpage>3</fpage><lpage>4</lpage><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1145/3600211.3607544">https://doi.org/10.1145/3600211.3607544</ext-link></element-citation></ref>
<ref id="R46"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Ridley</surname><given-names>M.</given-names></name><name><surname>Pawlick-Potts</surname><given-names>D.</given-names></name></person-group> <year>(2021)</year> <article-title>Algorithmic Literacy and the Role for Libraries</article-title><source>Information Technology and Libraries</source><volume>40</volume><issue>2</issue><comment>Article 2</comment><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.6017/ital.v40i2.12963">https://doi.org/10.6017/ital.v40i2.12963</ext-link></element-citation></ref>
<ref id="R47"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Roose</surname><given-names>K.</given-names></name></person-group> <year>(2023)</year> <comment>February 16</comment><article-title>A Conversation With Bing&#x2019;s Chatbot Left Me Deeply Unsettled</article-title><source>The New York Times</source><ext-link ext-link-type="uri" xlink:href="https://www.nytimes.com/2023/02/16/technology/bing-chatbot-microsoft-chatgpt.html">https://www.nytimes.com/2023/02/16/technology/bing-chatbot-microsoft-chatgpt.html</ext-link></element-citation></ref>
<ref id="R48"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Simpson</surname><given-names>A.</given-names></name></person-group> <year>(2007)</year> <article-title>On Ethnographic Refusal: Indigeneity, &#x2018;Voice&#x2019; and Colonial Citizenship</article-title><source>Junctures: The Journal for Thematic Dialogue</source><volume>9</volume><comment>Article 9</comment><ext-link ext-link-type="uri" xlink:href="https://junctures.org/junctures/index.php/junctures/article/view/66">https://junctures.org/junctures/index.php/junctures/article/view/66</ext-link></element-citation></ref>
<ref id="R49"><element-citation publication-type="book"><person-group person-group-type="author"><name><surname>Simpson</surname><given-names>A.</given-names></name></person-group> <year>(2014)</year> <source>Mohawk Interruptus: Political Life Across the Borders of Settler States</source><publisher-name>Duke University Press</publisher-name></element-citation></ref>
<ref id="R50"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Tacheva</surname><given-names>J.</given-names></name><name><surname>Ramasubramanian</surname><given-names>S.</given-names></name></person-group> <year>(2023)</year> <article-title>AI Empire: Unraveling the interlocking systems of oppression in generative AI&#x2019;s global order</article-title><source>Big Data &#x0026; Society</source><volume>10</volume><issue>2</issue><fpage>20539517231219241</fpage><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1177/20539517231219241">https://doi.org/10.1177/20539517231219241</ext-link></element-citation></ref>
<ref id="R51"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Wajcman</surname><given-names>J.</given-names></name></person-group> <year>(2017)</year> <article-title>Automation: Is it really different this time?</article-title><source>The British Journal of Sociology</source><volume>68</volume><issue>1</issue><fpage>119</fpage><lpage>127</lpage><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1111/1468-4446.12239">https://doi.org/10.1111/1468-4446.12239</ext-link></element-citation></ref>
<ref id="R52"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Wang</surname><given-names>L.</given-names></name></person-group> <year>(2018)</year> <article-title>Twinning data science with information science in schools of library and information science</article-title><source>Journal of Documentation</source><volume>74</volume><issue>6</issue><fpage>1243</fpage><lpage>1257</lpage><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1108/JD-02-2018-0036">https://doi.org/10.1108/JD-02-2018-0036</ext-link></element-citation></ref>
</ref-list>
</back>
</article>