<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.0 20120330//EN" "http://jats.nlm.nih.gov/publishing/1.0/JATS-journalpublishing1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:mml="http://www.w3.org/1998/Math/MathML" article-type="research-article" xml:lang="en">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">IR</journal-id>
<journal-title-group>
<journal-title>Information Research</journal-title>
</journal-title-group>
<issn pub-type="epub">1368-1613</issn>
<publisher>
<publisher-name>University of Bor&#x00E5;s</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">ir30iConf46906</article-id>
<article-id pub-id-type="doi">10.47989/ir30iConf46906</article-id>
<article-categories>
<subj-group xml:lang="en">
<subject>Research article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>AI literature review systems: an analysis of performance, affordances, and outputs for a complex topic in the social sciences</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author"><name><surname>Moulaison-Sandy</surname><given-names>Heather</given-names></name>
<xref ref-type="aff" rid="aff0001"/></contrib>
<contrib contrib-type="author"><name><surname>Casta&#x00F1;o-Mu&#x00F1;oz</surname><given-names>Wilson</given-names></name>
<xref ref-type="aff" rid="aff0002"/></contrib>
<contrib contrib-type="author"><name><surname>Ridenour</surname><given-names>Laura</given-names></name>
<xref ref-type="aff" rid="aff0003"/></contrib>
<contrib contrib-type="author"><name><surname>Adkins</surname><given-names>Denice</given-names></name>
<xref ref-type="aff" rid="aff0004"/></contrib>
<aff id="aff0001"><bold>Heather Moulaison-Sandy</bold> is Associate Professor in the iSchool, University of Missouri, USA. She received her Ph.D. from Rutgers University, and her research interests focus on the organization of information, scholarly communication, and technology. She can be contacted at <email xlink:href="moulaisonhe@missouri.edu">moulaisonhe@missouri.edu</email></aff>
<aff id="aff0002"><bold>Wilson Casta&#x00F1;o-Mu&#x00F1;oz</bold> is Associate Professor at the School of Library and Information Science, University of Antioquia, Colombia. He is a Ph.D. student at the University of Missouri. His research interests are in digital reading and information conveyed through aural technologies. He can be contacted at <email xlink:href="wilson.castano@mail.missouri.edu">wilson.castano@mail.missouri.edu</email></aff>
<aff id="aff0003"><bold>Laura Ridenour</bold> is Assistant Professor in the iSchool, University of Missouri, USA. She received her Ph.D. from the University of Wisconsin-Milwaukee, and her research interests include data science, artificial intelligence, information organization, and open data governance and access. They can be contacted at <email xlink:href="lerbhn@missouri.edu">lerbhn@missouri.edu</email></aff>
<aff id="aff0004"><bold>Denice Adkins</bold> is Professor in the School of Information Science &#x0026; Learning Technologies, University of Missouri. She received her Ph.D. from the University of Arizona and her research interests are in services to library users. She can be contacted at <email xlink:href="adkinsde@missouri.edu">adkinsde@missouri.edu</email></aff>
</contrib-group>
<pub-date pub-type="epub"><day>06</day><month>05</month><year>2025</year></pub-date>
<pub-date pub-type="collection"><year>2025</year></pub-date>
<volume>30</volume>
<issue>i</issue>
<fpage>1244</fpage>
<lpage>1252</lpage>
<permissions>
<copyright-year>2025</copyright-year>
<copyright-holder>&#x00A9; 2025 The Author(s).</copyright-holder>
<license license-type="open-access" xlink:href="https://creativecommons.org/licenses/by-nc/4.0/">
<license-p>This is an Open Access article distributed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International License (<ext-link ext-link-type="uri" xlink:href="http://creativecommons.org/licenses/by-nc/4.0/">http://creativecommons.org/licenses/by-nc/4.0/</ext-link>), permitting all non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.</license-p>
</license>
</permissions>
<abstract xml:lang="en">
<title>Abstract</title>
<p><bold>Introduction.</bold> Artificial intelligence has the potential to revolutionize the way that scholars produce work, including through the creation of literature reviews using free or low-cost systems. Despite the importance of the literature review, AI-powered tools to create literature reviews are understudied at present.</p>
<p><bold>Method.</bold> To address this gap, four specialized tools were assessed. The complex area of study chosen was e-reading in English and Spanish, a field where differences are evident in the two languages. Systems were prompted to write a literature review of 500-1000 words, and the resulting outputs were analysed based on the requirements for reviews of the literature.</p>
<p><bold>Results.</bold> Although none of the systems allowed for all the identified criteria to be addressed, Scite allowed many criteria to be met and was able to produce text in Spanish as well as in English. The remaining three systems investigated, Jenni.ai, OpenRead, and Wisio, performed less favourably.</p>
<p><bold>Conclusions.</bold> AI-powered systems show incredible promise, with one unstudied area being the creation of scholarly reviews of the literature. Although the kinds of research questions that might be examined in social sciences like LIS will necessarily be complex, until now little has been done to evaluate these systems.</p>
</abstract>
</article-meta>
</front>
<body>
<sec id="sec1">
<title>Introduction</title>
<p>Artificial intelligence (AI) systems show promise in supporting scholars in the discovery and analysis of articles that are part of the scholarly record. A suite of AI-powered tools is emerging to support the creation of reviews of the literature. Specifically, these systems advertise the ability to analyse articles in ways that create new meaning, to identify lacunae in the scholarly record, and to support understanding of the state of the field and, by extension, of the further steps that should be taken. By including scholarly sources, these systems allow researchers to explore the literature in their field, often for free or at very low cost.</p>
<p>The performance, outputs, and affordances of these systems have only been evaluated anecdotally. Reddit threads have probed this question, in some cases with limited success (e.g., <xref rid="R9" ref-type="bibr">ijmarkandel, 2023</xref>). In other words, although the accuracy of aspects of certain systems (e.g., references in ChatGPT (<xref rid="R14" ref-type="bibr">Mugaanyi et al., 2024</xref>)) has been evaluated, very little is known about free and low-cost AI systems that support the creation of complex reviews of the scholarly literature, and no formal studies have been conducted.</p>
<p>In the social sciences, complexities relating to the position of the authors readily emerge when studying the scholarly record. Different terms can be used to describe a single phenomenon, different methodologies and literatures can serve as sources, and language traditions take different approaches to scholarship. Anecdotally, electronic reading (also known as e-reading, digital reading, or reading on screen) is one topic that is studied differently in anglophone and hispanophone traditions. It represents a complex, social sciences-based problem that has not been adequately addressed in the literature.</p>
<p>This paper focuses on the novel topic of the performance of AI systems and their functionalities relative to traditional work in creating reviews of the literature in the social sciences, especially when working in novel, complex spaces. The complex topic explored is e-reading as studied in the scholarly literature in English and Spanish.</p>
<sec id="sec1_1">
<title>Area for application: e-reading</title>
<p>Some specific concepts in the field of digital reading have been covered using different terms. One such concept is the process by which a new medium reshapes an established one, simultaneously referencing and reinterpreting its conventions and traditions, termed remediation in English (<xref rid="R2" ref-type="bibr">Bolter &#x0026; Grusin, 1996</xref>). Works in Spanish also address this concept in relation to the book and its evolution to the digital book, focusing on the materiality, format and features of the new medium. However, studies in Spanish do not use the term <italic>remediaci&#x00F3;n,</italic> nor do they cite the English-language studies that coined the term (e.g., <xref rid="R3" ref-type="bibr">Canclini et al., 2015</xref>; <xref rid="R4" ref-type="bibr">Cavallo &#x0026; Chartier, 1999</xref>; <xref rid="R8" ref-type="bibr">Cord&#x00F3;n, 2016</xref>; <xref rid="R13" ref-type="bibr">Morales &#x0026; Espinoza, 2003</xref>).</p>
<p>Given that, with e-reading, researchers from different language families address similar concepts using different terminology and by citing different sources, understudied complexities emerge. To address this, we investigate the ability of free and low-cost AI-powered systems to create complex reviews of the literature. To our knowledge, although AI-powered tools and systems have been analysed for their usability in academia, no scholarly papers investigate tools for reviews of the literature by analysing their performance and affordances. The current project aims to address this gap in the literature.</p>
</sec>
</sec>
<sec id="sec2">
<title>Research question</title>
<p>To investigate this topic, the following research question is explored:</p>
<p>RQ1: To what extent are AI systems able to carry out sophisticated reviews of the literature for use in the social sciences?</p>
</sec>
<sec id="sec3">
<title>Review of the literature</title>
<sec id="sec3_1">
<title>Literature reviews in the social sciences</title>
<p>In scholarship, reviews of the literature ground a project in the extant literature, situating it and allowing it to contribute to the field (<xref rid="R11" ref-type="bibr">Machi &#x0026; McEvoy, 2016</xref>). High quality literature reviews organize and synthesise completed work (<xref rid="R18" ref-type="bibr">Zhao et al., 2024</xref>). They help to identify a <italic>&#x2018;gap&#x2019;</italic> and ideally are shaped to provide an argument as to what scholars in a given field should be able to do in the future (<xref rid="R17" ref-type="bibr">Slater, 2018</xref>). Practically, the sources will be carefully chosen, and elements including source, author, and date will play a role. The perspective adopted will support the research questions and will limit the scope of the work meaningfully, presenting an argument (<xref rid="R11" ref-type="bibr">Machi &#x0026; McEvoy, 2016</xref>); the choices made in including sources likewise limit the extent of the analysis of the results and the scope of the discussion. Developing the ability to craft such an argument is a skill that takes years for a human researcher or scholar to develop (<xref rid="R10" ref-type="bibr">Joos et al., 2024</xref>; <xref rid="R17" ref-type="bibr">Slater, 2018</xref>).</p>
</sec>
<sec id="sec3_2">
<title>AI-powered systems to support scholarship</title>
<p>AI-powered tools have emerged that support scholarship. This paper focuses on a set of AI-powered tools that allows for the automated creation of literature reviews or other kinds of analyses of scholarly articles. Because literature reviews are related to the discovery of articles and the connectedness of articles, these tools are related to and often overlap with other sets of AI-powered tools.</p>
<p>For example, to carry out a literature review, scholars must first identify relevant scholarly articles. One example of a searchable corpus is SemanticScholar (<ext-link ext-link-type="uri" xlink:href="https://www.semanticscholar.org">https://www.semanticscholar.org</ext-link>). The AI-powered search engine Consensus (<ext-link ext-link-type="uri" xlink:href="https://consensus.app/">https://consensus.app/</ext-link>) makes use of this corpus (<xref rid="R7" ref-type="bibr">Consensus AI, 2022</xref>). Scopus AI (<ext-link ext-link-type="uri" xlink:href="https://www.elsevier.com/products/scopus/scopus-ai">https://www.elsevier.com/products/scopus/scopus-ai</ext-link>) has a similar approach. Other AI-powered search engines exist (e.g., Supersymmetry (<ext-link ext-link-type="uri" xlink:href="https://www.supersymmetry.ai/">https://www.supersymmetry.ai/</ext-link>)) that also support enhanced search and discovery. Lateral (<ext-link ext-link-type="uri" xlink:href="https://www.lateral.io/">https://www.lateral.io/</ext-link>) and Scispace (<ext-link ext-link-type="uri" xlink:href="https://typeset.io/">https://typeset.io/</ext-link>) support discovery and compiling notes.</p>
<p>Researchers can benefit from other support as well, and many systems have multiple ways of supporting authors. Another class of AI-powered tools is systems designed to help with the analysis and summarization of articles. For example, ChatPDF (<ext-link ext-link-type="uri" xlink:href="https://www.chatpdf.com/">https://www.chatpdf.com/</ext-link>), Explainpaper (<ext-link ext-link-type="uri" xlink:href="https://www.explainpaper.com/">https://www.explainpaper.com/</ext-link>), Humata (<ext-link ext-link-type="uri" xlink:href="https://www.humata.ai/">https://www.humata.ai/</ext-link>), ResearchMate Pro (<ext-link ext-link-type="uri" xlink:href="https://researchmate.pro/">https://researchmate.pro/</ext-link>), Scholarcy (<ext-link ext-link-type="uri" xlink:href="https://www.scholarcy.com/">https://www.scholarcy.com/</ext-link>), Sharly (<ext-link ext-link-type="uri" xlink:href="https://sharly.ai/">https://sharly.ai/</ext-link>), StudyFlo (<ext-link ext-link-type="uri" xlink:href="https://www.studyflo.com/">https://www.studyflo.com/</ext-link>), and Unriddle (<ext-link ext-link-type="uri" xlink:href="https://www.unriddle.ai">https://www.unriddle.ai</ext-link>) support quick overviews of articles, in some cases only if the publisher is willing to expose the source to analysis. Other AI-powered systems enable the discovery or visualization of connections between articles, including Connected Papers (<ext-link ext-link-type="uri" xlink:href="https://www.connectedpapers.com/">https://www.connectedpapers.com/</ext-link>), Inciteful (<ext-link ext-link-type="uri" xlink:href="https://inciteful.xyz/">https://inciteful.xyz/</ext-link>), Litmaps (<ext-link ext-link-type="uri" xlink:href="https://www.litmaps.com/">https://www.litmaps.com/</ext-link>), and ResearchRabbit (<ext-link ext-link-type="uri" xlink:href="https://www.researchrabbit.ai/">https://www.researchrabbit.ai/</ext-link>).</p>
</sec>
<sec id="sec3_3">
<title>Overview: AI-Powered systems supporting scholars and literature-based discovery</title>
<p>Artificial intelligence systems used for developing literature reviews are largely based on some form of large language model (LLM) (<xref rid="R16" ref-type="bibr">Shopovski, 2024</xref>). One of the basic tenets of scholarly communication is the need for scholars to position their work in the field of endeavor. Goals of using LLMs include streamlining the process of surveying the literature (<xref rid="R10" ref-type="bibr">Joos et al., 2024</xref>), identifying documents for inclusion in a review (<xref rid="R1" ref-type="bibr">Antu et al., 2023</xref>), evaluating and categorizing relevant documents (<xref rid="R10" ref-type="bibr">Joos et al., 2024</xref>), interpreting documents in the context of research, coding relevant documents (<xref rid="R10" ref-type="bibr">Joos et al., 2024</xref>), and aiding in the human-driven synthesis of relevant materials.</p>
<p>Free generative AI systems like ChatGPT (<ext-link ext-link-type="uri" xlink:href="https://openai.com/chatgpt/">https://openai.com/chatgpt/</ext-link>) and Gemini (<ext-link ext-link-type="uri" xlink:href="https://gemini.google.com/">https://gemini.google.com/</ext-link>) are well known as sources of inspiration, but their reliability, especially for scholarship where precision is essential, has been repeatedly questioned. For example, the phenomenon of hallucinations extends to sources, with ChatGPT (GPT-3.5) generating either incorrect DOIs or providing the wrong DOI for a source in 61.8% of natural sciences references and 89.4% of humanities ones (<xref rid="R14" ref-type="bibr">Mugaanyi et al., 2024</xref>). Hallucinations in AI systems can be divided into two types, <italic>&#x2018;intrinsic hallucinations&#x2019;</italic> and <italic>&#x2018;extrinsic hallucinations&#x2019;</italic> (<xref rid="R12" ref-type="bibr">Mittal et al., 2024</xref>). In the first type, intrinsic hallucinations, the generated output contradicts the source. In the second type, extrinsic hallucinations, the output cannot be verified against the source.</p>
<p>As mentioned, a new suite of AI-powered tools has emerged, designed to support research. These tools have been organized into systems that help researchers find sources, make sense of primary source texts, and correct their own writing (e.g., <xref rid="R15" ref-type="bibr">Pinzolits, 2023</xref>). In addition, literature-based discovery (LBD) is a domain showing much promise. In this model, systems use AI to analyse the literature, specifically seeking lacunae and new approaches. LBD is generating interest in areas such as medicine (e.g., Cheerkoot-Jalim &#x0026; Khedo, 2021).</p>
<p>Using AI-powered systems to create literature reviews for articles is understudied. Only two studies begin to address the problem. Choe and colleagues (2024) designed a study to assess novice researcher needs in creating reviews of the literature and then created LLM-based scaffolding for the work. Zhao and colleagues (2024) generated literature reviews using three AI-powered document summary systems and one literature review system; they applied Pattern Analysis and Machine Intelligence (PAMI) approaches, comparing AI-generated reviews to human-authored ones using new bibliometric indicators. The results identified limitations in the AI-powered outputs. To our knowledge, no systematic assessment of freely available and specialized AI-powered literature review tools or of their functionalities and their usefulness to academics has been carried out.</p>
</sec>
</sec>
<sec id="sec4">
<title>Method</title>
<p>Focusing on free and low-cost AI-powered systems that are designed and advertised specifically to support scholars in the creation of literature reviews and that are available to individual researchers (i.e., not requiring enterprise-level commitment by an institution, as with Laser (<ext-link ext-link-type="uri" xlink:href="https://www.laser.ai/">https://www.laser.ai/</ext-link>)), the research team identified four systems for analysis:
<list list-type="bullet">
<list-item><p>Jenni.ai (<ext-link ext-link-type="uri" xlink:href="https://app.jenni.ai/">https://app.jenni.ai/</ext-link>)</p></list-item>
<list-item><p>Scite (<ext-link ext-link-type="uri" xlink:href="https://scite.ai/">https://scite.ai/</ext-link>)</p></list-item>
<list-item><p>OpenRead (<ext-link ext-link-type="uri" xlink:href="https://www.openread.academy/">https://www.openread.academy/</ext-link>)</p></list-item>
<list-item><p>Wisio (<ext-link ext-link-type="uri" xlink:href="https://wisio.app/">https://wisio.app/</ext-link>)</p></list-item>
</list></p>
<p>Each system was learned in depth by one team member, who watched tutorials and practiced extensively while piloting it. Based on the requirements for literature reviews in LIS, a social science (e.g., <xref rid="R11" ref-type="bibr">Machi &#x0026; McEvoy, 2016</xref>; <xref rid="R17" ref-type="bibr">Slater, 2018</xref>), and the affordances of AI-powered systems, deductive coding was used for this project.</p>
<p>Paid versions of each system were used to explore maximum benefits, and only each system&#x2019;s own sources were used (no sources were uploaded). The goal was a usable 500 to 1000-word literature review on e-reading research in English and Spanish that 1) focused on both languages, 2) cited current and peer-reviewed sources, and 3) summarized methodologies. The team member who was the expert in a given system led a live demonstration for the research team, beginning with the prompt: &#x2018;<italic>write a one-page literature review (500-1000 words) about e-reading in Spanish and English</italic>&#x2019;.</p>
<p>Prompts were then modified based on system affordances and results. During the demonstration, which was recorded, members of the research team collaboratively coded each system against the requirements of literature reviews in the social sciences, discussing each code until agreement was reached.</p>
</sec>
<sec id="sec5">
<title>Results</title>
<p>Each AI-powered system designed for authoring reviews of the literature was investigated for its ability to produce text for a complex question. Although these systems functioned very differently, the results of their use and observations regarding their feedback, functionality, and responses are presented below.</p>
<p><xref ref-type="table" rid="T1">Table 1</xref> provides an overview of elements relating to authoring a literature review. Specifically, this project sought to produce more than 500 words on the topic, using recent peer-reviewed sources in both English and Spanish, with the methodologies summarized in the analysis. Scite was the most performant of the group, creating a robust review of the literature using recent sources in which methodology was described. None of the systems, however, allowed users to cite only peer-reviewed sources. Jenni.ai and Wisio were the least performant: Jenni.ai was unable to address the majority of the criteria and was only able to isolate methodology in a limited way. Wisio was likewise unable to address the majority of the criteria; although it could cite more recent sources when prompted, it could not restrict them to a specific date or date range.</p>
<table-wrap id="T1">
<label>Table 1.</label>
<caption><p>Overview of free and low-cost AI-powered systems for literature reviews.</p></caption>
<table>
<thead>
<tr>
<th align="left" valign="top">System name</th>
<th align="center" valign="top">>500-word review produced</th>
<th align="center" valign="top">Specify sources as peer reviewed</th>
<th align="center" valign="top">Specify sources by language</th>
<th align="center" valign="top">Specify sources by date</th>
<th align="center" valign="top">Isolate methodology</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top">Jenni.ai</td>
<td align="center" valign="top">no</td>
<td align="center" valign="top">no</td>
<td align="center" valign="top">no</td>
<td align="center" valign="top">no</td>
<td align="center" valign="top">somewhat</td>
</tr>
<tr>
<td align="left" valign="top">Scite</td>
<td align="center" valign="top">yes</td>
<td align="center" valign="top">no</td>
<td align="center" valign="top">no</td>
<td align="center" valign="top">yes</td>
<td align="center" valign="top">yes</td>
</tr>
<tr>
<td align="left" valign="top">OpenRead</td>
<td align="center" valign="top">yes</td>
<td align="center" valign="top">no</td>
<td align="center" valign="top">yes</td>
<td align="center" valign="top">no</td>
<td align="center" valign="top">somewhat</td>
</tr>
<tr>
<td align="left" valign="top">Wisio</td>
<td align="center" valign="top">no</td>
<td align="center" valign="top">no</td>
<td align="center" valign="top">no</td>
<td align="center" valign="top">somewhat</td>
<td align="center" valign="top">no</td>
</tr>
</tbody>
</table>
</table-wrap>
<p><xref ref-type="table" rid="T2">Table 2</xref> provides an overview of the sources cited and the formatting of the references produced in the test review. As the sources are an essential aspect of the literature review, this table focuses on foundational aspects of how they are provided as well as how the references are given and their accuracy. Again, Scite was the superior product, successfully addressing all the criteria listed. Each product linked to sources in one way or another and none produced hallucinations; Scite provided APA-style formatting and references, and the output text was downloadable as a file that included the formatted references. Jenni.ai likewise met these three criteria, but did not always provide references for sources mentioned in the text. OpenRead and Wisio provided links to sources, but did not provide a standard reference list and did not provide an easy way to download or save outputs. Secondary sources were at times provided in Jenni.ai, OpenRead, and Wisio in lieu of primary sources. The number of references provided ranged from 3 to 10.</p>
<table-wrap id="T2">
<label>Table 2.</label>
<caption><p>Sources, references, and output produced.</p></caption>
<table>
<thead>
<tr>
<th align="left" valign="top">System name</th>
<th align="center" valign="top">Link to sources</th>
<th align="center" valign="top">Primary sources only</th>
<th align="center" valign="top">All sources are referenced</th>
<th align="center" valign="top">Num. references provided</th>
<th align="center" valign="top">Hallucinations in references</th>
<th align="center" valign="top">Downloadable output with references</th>
<th align="center" valign="top">Able to output using APA</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top">Jenni.ai</td>
<td align="center" valign="top">some</td>
<td align="center" valign="top">no</td>
<td align="center" valign="top">no</td>
<td align="center" valign="top">3</td>
<td align="center" valign="top">no</td>
<td align="center" valign="top">yes</td>
<td align="center" valign="top">yes</td>
</tr>
<tr>
<td align="left" valign="top">Scite</td>
<td align="center" valign="top">yes</td>
<td align="center" valign="top">yes</td>
<td align="center" valign="top">yes</td>
<td align="center" valign="top">10</td>
<td align="center" valign="top">no</td>
<td align="center" valign="top">yes</td>
<td align="center" valign="top">yes</td>
</tr>
<tr>
<td align="left" valign="top">OpenRead</td>
<td align="center" valign="top">yes</td>
<td align="center" valign="top">no</td>
<td align="center" valign="top">yes</td>
<td align="center" valign="top">3</td>
<td align="center" valign="top">no</td>
<td align="center" valign="top">no</td>
<td align="center" valign="top">no</td>
</tr>
<tr>
<td align="left" valign="top">Wisio</td>
<td align="center" valign="top">yes</td>
<td align="center" valign="top">no</td>
<td align="center" valign="top">no</td>
<td align="center" valign="top">10</td>
<td align="center" valign="top">no</td>
<td align="center" valign="top">no</td>
<td align="center" valign="top">no</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>Additionally, scholars could upload sources of their own for inclusion in all four systems, although this project used only sources provided by the systems.</p>
<sec id="sec5_1">
<title>Observations about using the systems</title>
<p>The same prompt was used to begin each interaction with a given system. However, the way that each system permitted interaction and prompt refinement differed. Jenni.ai and OpenRead were primarily prompt-based, functioning much like generative AI systems such as ChatGPT and Gemini. After the initial prompt, Jenni.ai allowed sentence-by-sentence approval of content as the primary way to add to the generated content, or allowed the selection of text around which additional content could be developed. Scite was the most like a traditional database, distilling prompts down to queries and then displaying those queries. Giving OpenRead an identity as a helpful assistant who speaks both languages produced a more scholarly review of the literature, though the team observed that the content was not topically aligned with the prompt. Wisio seemed designed to assist in the writing process rather than replace it, with tools to check grammar and to align user-input content with scientific writing styles, which Wisio calls <italic>&#x2018;scientifying&#x2019;</italic> the writing.</p>
</sec>
<sec id="sec5_2">
<title>Ability to refine the outputs</title>
<p>Each system allowed for different refinements to the text, whether via changes to the prompts (e.g., Scite) or via the isolation of specific text for editing (e.g., Jenni.ai). Features relating to language were difficult to isolate, with all systems producing either no results or unpredictable ones. For example, Jenni.ai and Wisio were unable to produce a critical mass of references in English, and efforts to request references in Spanish were likewise unsuccessful. Scite was able to change the output language to Spanish, which was unexpected since the prompt was provided in English. In OpenRead, clicking the follow-up option and using the same prompt produced a four-paragraph Spanish-language summary of e-reading research with several APA-style in-text citations. Further prompting was needed before the summary was translated to English and links to references were provided; however, those references were not consistent with the in-text citations.</p>
<p>Scite was the only system that allowed for a date range to be provided for references; however, Jenni.ai, OpenRead, and Scite all provided additional metadata and links to sources. Wisio seemed to use only PubMed and CrossRef to provide citations, but the AI Advisor feature did suggest other databases to search and keywords to use. The overall quality of references varied greatly. For example, OpenRead and Wisio were unable to format references in APA, and OpenRead did not allow for the text of the review to be downloaded with the references appended; instead, each reference needed to be downloaded individually.</p>
</sec>
</sec>
<sec id="sec6">
<title>Discussion</title>
<p>The results of this investigation provide insight into AI-powered systems for the creation of literature reviews. Based on the performance and functionalities as well as the usefulness of the outputs, we find that the systems performed unevenly.</p>
<p>Overall, based on the requirements of reviews of the literature in the social sciences (<xref rid="R11" ref-type="bibr">Machi &#x0026; McEvoy, 2016</xref>; <xref rid="R17" ref-type="bibr">Slater, 2018</xref>), we find that Scite performed admirably by producing text that was both scholarly and well-referenced, in alignment with expectations for reviews of the literature (<xref rid="R17" ref-type="bibr">Slater, 2018</xref>). On the whole, the Scite output was not that far removed from the kind of output one might expect to find in a published article. Jenni.ai, OpenRead, and Wisio produced sub-par results, even after extensive massaging of the prompts and work within the system. Jenni.ai was unable to produce more than 500 words or include more than 3 references, though it mentioned additional sources in the body of the text. OpenRead produced a paragraph of text with several cited sources, but on review, it had included journal submission instructions in the content of the text. OpenRead also produced an extensive list of references, and scrolling produced more; however, only the first 3 references were used in the text, and the others were suggestions for related content. Wisio produced partial paragraphs, often ending with unfinished sentences. References provided were not specific to the prompt, though Wisio&#x2019;s AI Advisor feature did describe and cite one study that met requirements for timeliness and that seemed to be on-topic.</p>
<p>All of the systems allowed for sources to be uploaded and used. Uploading copyrighted articles for use by these systems, however, raises possible intellectual property concerns. Accordingly, this project sought to understand the affordances of the AI-powered systems without such inputs; nevertheless, for users who want to focus on specific sources, the ability to upload those sources might be a useful approach.</p>
<sec id="sec6_1">
<title>Limitations and future research</title>
<p>This exploratory study does not seek to answer questions about e-reading in English and Spanish empirically, nor to weigh in on the ethics of the design, functionality, or use of such systems in academic writing; rather, it compares the functionalities of free and low-cost AI-powered systems for creating literature reviews. The research questions investigated the experience of using the systems and the mechanics of their output, rather than the more subjective question of the veracity of the contents. Further research will investigate the accuracy of the text produced, the relevance of the sources cited, and the ability of the AI to identify the most important and pressing concerns relating to the topic. As AI advances, further research should also continue to monitor the strengths and weaknesses of these and other systems.</p>
</sec>
</sec>
<sec id="sec7">
<title>Conclusion</title>
<p>AI-powered systems show considerable promise, and one understudied area is the creation of scholarly reviews of the literature. Although the kinds of research questions examined in social sciences like LIS will necessarily be complex, until now little has been done to evaluate these systems for their potential use in scholarship and teaching. This paper is a first attempt to systematically analyze the functionalities of four AI-powered systems for reviews of the literature that are available for use by individuals.</p>
</sec>
</body>
<back>
<ref-list>
<title>References</title>
<ref id="R1"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Antu</surname><given-names>S.A.</given-names></name><name><surname>Chen</surname><given-names>H.</given-names></name><name><surname>Richards</surname><given-names>C.K.</given-names></name></person-group><year>2023</year><article-title>Using LLM (Large Language Model) to improve efficiency in literature review for undergraduate research</article-title><source>LLM@AIED</source><fpage>8</fpage><lpage>16</lpage></element-citation></ref>
<ref id="R2"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Bolter</surname><given-names>J.D.</given-names></name><name><surname>Grusin</surname><given-names>R.A.</given-names></name></person-group><year>1996</year><article-title>Remediation</article-title><source>Configurations</source><volume>4</volume><issue>3</issue><fpage>311</fpage><lpage>358</lpage></element-citation></ref>
<ref id="R3"><element-citation publication-type="book"><person-group person-group-type="author"><name><surname>Canclini</surname><given-names>N.</given-names></name><name><surname>Gerber Bicecci</surname><given-names>V.</given-names></name><name><surname>L&#x00F3;pez Ojeda</surname><given-names>A.</given-names></name></person-group><year>2015</year><chapter-title>Hacia una antropolog&#x00ED;a de los lectores</chapter-title><source>D - Ediciones Culturales Paid&#x00F3;s</source><publisher-loc>M&#x00E9;xico</publisher-loc></element-citation></ref>
<ref id="R4"><element-citation publication-type="book"><person-group person-group-type="author"><name><surname>Cavallo</surname> <given-names>G.</given-names></name><name><surname>Chartier</surname> <given-names>R.</given-names></name></person-group><year>1999</year><source>A history of reading in the west</source><publisher-loc>Amherst</publisher-loc><publisher-name>University of Massachusetts Press</publisher-name></element-citation></ref>
<ref id="R5"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Cheerkoot-Jalim</surname><given-names>S.</given-names></name><name><surname>Khedo</surname><given-names>K.K.</given-names></name></person-group><year>2021</year><article-title>Literature-based discovery approaches for evidence-based healthcare: A systematic review</article-title><source>Health and Technology</source><volume>11</volume><issue>6</issue><fpage>1205</fpage><lpage>1217</lpage><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1007/s12553-021-00605-y">https://doi.org/10.1007/s12553-021-00605-y</ext-link></element-citation></ref>
<ref id="R6"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Choe</surname><given-names>K.</given-names></name><name><surname>Park</surname><given-names>S.</given-names></name><name><surname>Jung</surname><given-names>S.</given-names></name><name><surname>Kim</surname><given-names>H.</given-names></name><name><surname>Yang</surname><given-names>J.W.</given-names></name><name><surname>Hong</surname><given-names>H.</given-names></name><name><surname>Seo</surname><given-names>J.</given-names></name></person-group><year>2024</year><article-title>Supporting novice researchers to write literature review using language models</article-title><source>Extended Abstracts of the CHI Conference on Human Factors in Computing Systems</source><fpage>1</fpage><lpage>9</lpage><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1145/3613905.3650787">https://doi.org/10.1145/3613905.3650787</ext-link></element-citation></ref>
<ref id="R7"><element-citation publication-type="other"><person-group person-group-type="author"><collab>Consensus AI</collab></person-group><year>2022</year><comment>September 14</comment><article-title>Consensus and Semantic Scholar Partner for Approachable Information</article-title><ext-link ext-link-type="uri" xlink:href="https://consensus.app/home/blog/science-answer-app-consensus-and-article-aggregator-and-discovery-tool-semantic-scholar-announce-their-partnership/">https://consensus.app/home/blog/science-answer-app-consensus-and-article-aggregator-and-discovery-tool-semantic-scholar-announce-their-partnership/</ext-link></element-citation></ref>
<ref id="R8"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Cord&#x00F3;n</surname><given-names>J. A.</given-names></name></person-group><year>2016</year><article-title>La lectura en el entorno digital: nuevas materialidades y pr&#x00E1;cticas discursivas</article-title><source>Revista chilena de literatura</source><volume>94</volume><fpage>15</fpage><lpage>38</lpage><ext-link ext-link-type="uri" xlink:href="https://www.scielo.cl/pdf/rchilite/n94/art02.pdf">https://www.scielo.cl/pdf/rchilite/n94/art02.pdf</ext-link></element-citation></ref>
<ref id="R9"><element-citation publication-type="other"><person-group person-group-type="author"><collab>ijmarkandel</collab></person-group><year>2023</year><article-title>AI tools for literature review. r/PhD. Reddit</article-title><ext-link ext-link-type="uri" xlink:href="https://www.reddit.com/r/PhD/comments/17nqf59/ai_tools_for_literature_review/?rdt=52038">https://www.reddit.com/r/PhD/comments/17nqf59/ai_tools_for_literature_review/?rdt=52038</ext-link></element-citation></ref>
<ref id="R10"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Joos</surname><given-names>L.</given-names></name><name><surname>Keim</surname><given-names>D.A.</given-names></name><name><surname>Fischer</surname><given-names>M.T.</given-names></name></person-group><year>2024</year><article-title>Cutting through the clutter: The potential of LLMs for efficient filtration in systematic literature reviews</article-title><source>arXiv Preprint arXiv:2407.10652</source><ext-link ext-link-type="uri" xlink:href="https://arxiv.org/pdf/2407.10652">https://arxiv.org/pdf/2407.10652</ext-link></element-citation></ref>
<ref id="R11"><element-citation publication-type="book"><person-group person-group-type="author"><name><surname>Machi</surname><given-names>L.A.</given-names></name><name><surname>McEvoy</surname><given-names>B.T.</given-names></name></person-group><year>2016</year><source>The literature review: Six steps to success (3rd ed.)</source><publisher-loc>Thousand Oaks, California</publisher-loc><publisher-name>Corwin</publisher-name></element-citation></ref>
<ref id="R12"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Mittal</surname><given-names>A.</given-names></name><name><surname>Murthy</surname><given-names>R.</given-names></name><name><surname>Kumar</surname><given-names>V.</given-names></name><name><surname>Bhat</surname><given-names>R.</given-names></name></person-group><year>2024</year><comment>January</comment><article-title>Towards understanding and mitigating the hallucinations in NLP and Speech</article-title><source>Proceedings of the 7th Joint International Conference on Data Science &#x0026; Management of Data (11th ACM IKDD CODS and 29th COMAD)</source><fpage>489</fpage><lpage>492</lpage></element-citation></ref>
<ref id="R13"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Morales</surname><given-names>O.</given-names></name><name><surname>Espinoza</surname><given-names>N.</given-names></name></person-group><year>2003</year><article-title>Lectura y escritura: coexistencia entre lo impreso y lo electr&#x00F3;nico</article-title><source>Educere</source><volume>7</volume><issue>22</issue><fpage>213</fpage><lpage>222</lpage></element-citation></ref>
<ref id="R14"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Mugaanyi</surname><given-names>J.</given-names></name><name><surname>Cai</surname><given-names>L.</given-names></name><name><surname>Cheng</surname><given-names>S.</given-names></name><name><surname>Lu</surname><given-names>C.</given-names></name><name><surname>Huang</surname><given-names>J.</given-names></name></person-group><year>2024</year><article-title>Evaluation of large language model performance and reliability for citations and references in scholarly writing: Cross-disciplinary study</article-title><source>Journal of Medical Internet Research</source><volume>26</volume><fpage>e52935</fpage><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.2196/52935">https://doi.org/10.2196/52935</ext-link></element-citation></ref>
<ref id="R15"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Pinzolits</surname><given-names>R.F.J.</given-names></name></person-group><year>2023</year><article-title>AI in academia: An overview of selected tools and their areas of application</article-title><source>MAP Education and Humanities</source><volume>4</volume><issue>1</issue><fpage>37</fpage><lpage>50</lpage><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.53880/2744-2373.2023.4.37">https://doi.org/10.53880/2744-2373.2023.4.37</ext-link></element-citation></ref>
<ref id="R16"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Shopovski</surname><given-names>J.</given-names></name></person-group><year>2024</year><article-title>Generative artificial intelligence, AI for scientific writing: A literature review</article-title><source>Preprints, 2024060011</source><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.20944/preprints202406.0011.v1">https://doi.org/10.20944/preprints202406.0011.v1</ext-link></element-citation></ref>
<ref id="R17"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Slater</surname><given-names>T.F.</given-names></name></person-group><year>2018</year><article-title>Improving your argument by identifying a literature gap</article-title><source>Journal of Astronomy and Earth Sciences Education</source><volume>5</volume><issue>1</issue><fpage>i</fpage><lpage>ii</lpage></element-citation></ref>
<ref id="R18"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Zhao</surname><given-names>P.</given-names></name><name><surname>Zhang</surname><given-names>X.</given-names></name><name><surname>Cheng</surname><given-names>M.M.</given-names></name><name><surname>Yang</surname><given-names>J.</given-names></name><name><surname>Li</surname><given-names>X.</given-names></name></person-group><year>2024</year><article-title>A literature review of literature reviews in pattern analysis and machine intelligence</article-title><comment>arXiv preprint arXiv:2402.12928</comment></element-citation></ref>
</ref-list>
</back>
</article>