<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.0 20120330//EN" "http://jats.nlm.nih.gov/publishing/1.0/JATS-journalpublishing1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:mml="http://www.w3.org/1998/Math/MathML" article-type="research-article" xml:lang="en">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">IR</journal-id>
<journal-title-group>
<journal-title>Information Research</journal-title>
</journal-title-group>
<issn pub-type="epub">1368-1613</issn>
<publisher>
<publisher-name>University of Bor&#x00E5;s</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">ir30iConf47584</article-id>
<article-id pub-id-type="doi">10.47989/ir30iConf47584</article-id>
<article-categories>
<subj-group xml:lang="en">
<subject>Research article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Evaluating techniques of artificial intelligence for social robots</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author"><name><surname>Barfield</surname><given-names>Jessica K.</given-names></name>
<xref ref-type="aff" rid="aff0001"/></contrib>
<aff id="aff0001"><bold>Jessica K. Barfield</bold> is Assistant Professor in the School of Information Science and Human-Robot Interaction Lead Faculty of the Intelligent Robotic Arms (IRA) lab in the College of Engineering, University of Kentucky, Lexington, Kentucky, USA. She can be contacted at <email xlink:href="mailto:jessicabarfield@uky.edu">jessicabarfield@uky.edu</email>.</aff>
</contrib-group>
<pub-date pub-type="epub"><day>06</day><month>05</month><year>2025</year></pub-date>
<pub-date pub-type="collection"><year>2025</year></pub-date>
<volume>30</volume>
<issue>i</issue>
<fpage>74</fpage>
<lpage>80</lpage>
<permissions>
<copyright-year>2025</copyright-year>
<copyright-holder>&#x00A9; 2025 The Author(s).</copyright-holder>
<license license-type="open-access" xlink:href="https://creativecommons.org/licenses/by-nc/4.0/">
<license-p>This is an Open Access article distributed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International License (<ext-link ext-link-type="uri" xlink:href="http://creativecommons.org/licenses/by-nc/4.0/">http://creativecommons.org/licenses/by-nc/4.0/</ext-link>), permitting all non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.</license-p>
</license>
</permissions>
<abstract xml:lang="en">
<title>Abstract</title>
<p><bold>Introduction.</bold> Based on techniques of artificial intelligence (AI), robots equipped with algorithms are becoming more social and intelligent as they enter society. As a relatively unexplored topic of research, the current study evaluated whether the perception of robot intelligence was influenced by different techniques of AI.</p>
<p><bold>Method.</bold> In an online study, participants viewed two versions of a humanoid robot, which varied in surface colour and stated AI abilities. For each image, participants rated the perceived intelligence of the robot.</p>
<p><bold>Results.</bold> The results of the online survey showed no statistically significant effect of robot surface colour on judgments of robot intelligence, but robot voice enablement and the ability to detect a user&#x0027;s face and emotions added significantly to the perception of robot intelligence. In addition, amongst the AI techniques evaluated, text used for human-robot communication was the least effective method for conveying the perception of intelligence for a humanoid robot.</p>
<p><bold>Conclusion.</bold> As tentative conclusions, the perception of robot intelligence can be based on the specific AI technique used to design the robot, and it appears that the more human-like the robot&#x2019;s AI abilities, the more likely users are to view the robot as intelligent.</p>
</abstract>
</article-meta>
</front>
<body>
<sec id="sec1">
<title>Introduction</title>
<p>We live in an age of algorithms in which the technologies that we interact with are gaining in intelligence through neural nets, machine learning techniques, and the use of different statistical procedures. In this paper I take a broad view of intelligence and propose that it reflects a machine or software&#x2019;s ability to perform tasks that are typically associated with human intelligence. One type of intelligent technology entering society is the social robot, which may operate using the latest techniques of artificial intelligence (AI). For example, using computer vision, robots can navigate the environment and avoid obstacles (<xref rid="R13" ref-type="bibr">Russell &#x0026; Norvig, 2020</xref>), and using machine learning algorithms, robots can detect a person&#x2019;s face and, to some extent, their emotional state (<xref rid="R8" ref-type="bibr">Liu et al., 2017</xref>). Robots also have the ability to communicate with users through text or spoken language (<xref rid="R6" ref-type="bibr">Edwards et al., 2019</xref>). Surprisingly, despite the increased use of algorithms to create smart technologies, there has been relatively little research to determine the effectiveness of AI-driven techniques from a usability perspective. To address this gap in the literature, this paper explored how people evaluated different techniques of AI used to guide the behaviour of a social robot.</p>
<p>As background, past studies have shown that different AI abilities controlling a robot&#x2019;s performance can influence the user&#x2019;s perception of the robot and its problem-solving ability (<xref rid="R7" ref-type="bibr">Flores-Fuentes et al., 2014</xref>; <xref rid="R10" ref-type="bibr">McKee, 2003</xref>). For example, Dou et al. (<xref rid="R5" ref-type="bibr">2021</xref>) found that a robot equipped with natural language processing was effective for shopping, for education, and as a home companion. Further, Makibuchi et al. (<xref rid="R9" ref-type="bibr">2010</xref>) showed that an algorithmic problem-solving approach combined with a humanoid robot was effective for solving problems encountered in a real-world setting. In a different application, Chen et al. (<xref rid="R3" ref-type="bibr">2021</xref>) investigated the effectiveness of AI for customer experiences when chatbots were used. Using a quantitative approach, they found that the usability of the chatbot had a positive influence on extrinsic values of customer experience, whereas the responsiveness of the chatbot had a positive impact on intrinsic values of customer experience.</p>
<p>Evaluating another AI technique, Roundtree and Moallem (<xref rid="R12" ref-type="bibr">2021</xref>) investigated the use of facial recognition ability for young adult and adolescent populations. In a review of the literature, they concluded that facial recognition raised important questions about the clinical and ethical applications of the technology. They also noted that prior studies had not adequately addressed the complexities of facial change over time and across different ethnicities, thus influencing the accuracy of facial recognition software (<xref rid="R12" ref-type="bibr">Roundtree &#x0026; Moallem, 2021</xref>). They concluded that while facial recognition is a promising technology, usability studies were largely lacking for facial recognition applications. In a different application, Riaz et al. (<xref rid="R11" ref-type="bibr">2022</xref>) looked at the use of facial recognition as a biometric platform, particularly in the context of employment in a real-world setting. Specifically, they evaluated the usability and vulnerability of biometric technologies implemented in the UAE public transportation system to boost security and public service delivery. They noted that the use of facial recognition technology had raised numerous concerns about biometric data security and privacy. Based on participants&#x2019; responses collected using a survey instrument, they concluded that public transport users had a poorer impression of facial recognition technology than of iris recognition and fingerprint authentication.</p>
<p>Considering the usability of robots, in addition to the use of AI to control the behaviour of robots, the physical design of the robot has been shown to influence user evaluations of the robot. For example, using the shooter bias paradigm, Bartneck et al. (<xref rid="R2" ref-type="bibr">2018</xref>) found that people would shoot faster at a robot with a darker surface colour than at a lighter-coloured robot. Additionally, Sparrow (<xref rid="R14" ref-type="bibr">2020</xref>) commented that people racialized a robot based on its design, particularly its surface colour. Barfield (<xref rid="R1" ref-type="bibr">2021</xref>) also found that a robot&#x2019;s surface colour influenced the tasks that robots would be selected to perform, with lighter-coloured robots selected for tasks which required problem-solving ability and darker-coloured robots selected for tasks which were more labour intensive. From the above brief discussion, missing in the literature are studies which looked at features of a robot&#x2019;s design in the context of evaluating the AI techniques guiding the robot&#x2019;s performance. This led to the following question addressed in the current study: would the surface colour of a robot, thought by Sparrow (<xref rid="R14" ref-type="bibr">2020</xref>) to racialize a robot, influence its evaluation when the robot is equipped with different techniques of AI? More specifically, based on the above discussion of the literature, the primary objective of this study was to evaluate an individual&#x2019;s perception of robot intelligence as a function of different AI techniques controlling the robot. To reiterate, the main interest was to determine whether robot surface colour influenced the evaluation of robot intelligence when combined with different AI abilities. The research is guided by the following research questions.</p>
<speech><speaker>RQ1:</speaker><p>Will robot surface colour (black or white) influence the perception of robot intelligence?</p></speech>
<speech><speaker>RQ2:</speaker><p>Will different techniques of AI result in the same or different ratings of robot intelligence?</p></speech>
</sec>
<sec id="sec2">
<title>Methodology</title>
<sec id="sec2_1">
<title>Experiment design</title>
<p>After obtaining IRB approval, 27 participants (21 male, 6 female) with a mean age of 32.25 years were recruited from online social media sites to participate in the study. The study used two levels of robot surface colour (black, white) and four descriptions of robot intelligence based on the use of different AI techniques (see <xref ref-type="table" rid="T1">Table 1</xref>). As protocol, participants viewed all AI conditions and robot colours, presented in a random order for each participant. After giving consent to participate in the study, participants viewed the robot images and then completed an online questionnaire rating the intelligence of the robot just viewed. Further, after viewing all robot conditions, participants answered questions rating robot intelligence considering the different AI techniques presented with the robots.</p>
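<p>The within-subjects randomisation described above (every participant sees all eight colour-by-narrative conditions in a fresh random order) can be sketched in Python. This is an illustrative sketch only, not the study&#x2019;s materials; the condition labels below are hypothetical.</p>

```python
import random

# Hypothetical labels for the 2 (surface colour) x 4 (AI narrative)
# within-subjects design; the names below are illustrative only.
COLOURS = ["black", "white"]
NARRATIVES = [
    "enabled_with_voice",
    "less_enabled_with_voice",
    "enabled_without_voice",
    "less_enabled_without_voice",
]

def presentation_order(seed=None):
    """Return all 8 colour x narrative conditions in random order.

    A fresh shuffle is drawn per call, i.e. per participant; pass a
    seed only when a reproducible order is needed.
    """
    conditions = [(c, n) for c in COLOURS for n in NARRATIVES]
    rng = random.Random(seed)
    rng.shuffle(conditions)
    return conditions
```

<p>Each participant would then view the robot image and its narrative for each condition in the returned order before rating intelligence.</p>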
</sec>
<sec id="sec2_2">
<title>Procedure, dependent variable, and robot narrative</title>
<p>The dependent variable consisted of answers to an online survey which used 1-7 Likert items to evaluate perceived robot intelligence. When the robot used voice to communicate, the participant heard the robot speak a narrative indicating that the robot (<xref ref-type="fig" rid="F1">Figure 1</xref>) and user would be working together to solve problems. Each narrative supported the experiment protocol by informing the participants of the AI techniques associated with a particular robot. For the non-voice condition, the narrative was placed on the screen as text and read by the participant. In either case, the robot appeared on the screen while the participant read the narrative or heard it spoken by the robot. The text-to-speech software used to produce the robot voice was TTSApp for Visual Basic. The robot voice was gendered male, and the language spoken was English. The audio format was 16 kHz with 16-bit resolution.</p>
<fig id="F1">
<label>Figure 1.</label>
<caption><p>The two robots used in the study</p></caption>
<graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="images/c7-fig1.jpg"><alt-text>The two robots used in the study</alt-text></graphic>
</fig>
<p>The narrative spoken by the robot allowed the robot to inform the user of the AI techniques it was equipped with (basically, low or high levels of AI crossed with the two robot colours). For example, the narratives for the condition with the most AI techniques indicated that the robot had motion and collision detection abilities, the capacity for facial and emotion detection, was mobile, and had natural language processing ability. When the narrative excluded these capabilities and the robot communicated with text only (<xref rid="R1" ref-type="bibr">Barfield, 2021</xref>), the narrative described the fewest AI techniques. <xref ref-type="table" rid="T1">Table 1</xref> shows the four narratives used to prime the AI condition. Thus, for each condition a white or black robot appeared on the participant&#x2019;s screen and the narrative indicating different AI abilities was either read by the participant or spoken by the robot.</p>
<table-wrap id="T1">
<label>Table 1.</label>
<caption><p>The AI narratives for each of the two robot colours</p></caption>
<table>
<thead>
<tr>
<th align="left" valign="top">Condition</th>
<th align="left" valign="top">Narrative</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top"><bold>AI Robot Enabled Condition with Voice</bold></td>
<td align="center" valign="top">Hi, I&#x2019;m a robot and I will be working with you to solve a number of problems that we have been assigned to work on together. I hope you find my help useful. I have the following abilities: I can recognize your face and emotions with facial recognition and emotion recognition software, I am mobile and can avoid obstacles, I can understand your speech and talk back to you with natural language ability.</td>
</tr>
<tr>
<td align="left" valign="top"><bold>AI Robot Less Enabled Condition with Voice</bold></td>
<td align="center" valign="top">Hi, I&#x2019;m a robot and I will be working with you to solve a number of problems that we have been assigned to work on together. I hope you find my help useful. I have the following abilities: I can understand your speech and talk back to you with natural language ability.</td>
</tr>
<tr>
<td align="left" valign="top"><bold>AI Robot Enabled Condition without Voice</bold></td>
<td align="center" valign="top">Hi, I&#x2019;m a robot and I will be working with you to solve a number of problems that we have been assigned to work on together. I hope you find my help useful. I have the following abilities: I can detect your face and emotions with facial recognition and emotion recognition software, I am mobile and can avoid obstacles. I can understand what you text to me and I can text you back.</td>
</tr>
<tr>
<td align="left" valign="top"><bold>AI Robot Less Enabled Condition without Voice</bold></td>
<td align="center" valign="top">Hi, I&#x2019;m a robot and I will be working with you to solve a number of problems that we have been assigned to work on together. I hope you find my help useful. I can understand what you text to me and I can text you back.</td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
</sec>
<sec id="sec3">
<title>Results and discussion</title>
<p>The following is a preliminary analysis of the data, as further analyses are being performed. Participants were asked to rate robot intelligence using 1-7 Likert items; from these ratings, an ANOVA was performed on robot surface colour crossed with the four types of AI enablement presented in the narratives.</p>
<p>For judgments of robot intelligence, the ANOVA procedure indicated that the main effect for robot surface colour was not statistically significant (p &#x003E; .05). Thus, there was no significant difference among participants in the perception of robot intelligence as a function of whether the participant viewed a black or white-coloured robot. However, the main effect for the different AI enablement narratives was highly significant (p &#x003C; .001). The Tukey HSD multiple comparison test indicated that voice enablement resulted in the perception of a more intelligent robot than a robot that communicated with text (p &#x003C; .01). Further, facial recognition ability led to the perception of greater intelligence than text communication (p &#x003C; .01), and similarly, the ability to detect the user&#x2019;s emotions was evaluated as more useful for judging robot intelligence than the ability to communicate with text (p &#x003C; .01). Interestingly, the two-way interaction between AI enablement and robot surface colour was not statistically significant (p &#x003E; .05), indicating that the black or white colour of the robot did not affect the evaluation of the robot&#x2019;s intelligence when the different AI techniques were considered.</p>
<p>Another question asked how intelligent robots were thought to be (with black and white robots combined into one group) when considering the AI techniques of voice communication, facial recognition, and emotion recognition. A one-way ANOVA with voice, facial recognition, and emotion recognition abilities was not statistically significant (p &#x003E; .05). More broadly, <xref ref-type="fig" rid="F2">Figure 2</xref> shows a graphical depiction of the participants&#x2019; evaluation of robot intelligence as a function of voice or text communication and facial and emotion recognition ability. Supporting the results of the multiple comparison test presented earlier, the figure shows that text communication was rated lowest in the evaluation of robot intelligence.</p>
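<p>As a minimal sketch of the statistic behind the one-way comparison above, the ANOVA F ratio can be computed directly from per-condition Likert ratings in plain Python. The numbers in the example are made up for illustration and are not the study&#x2019;s data.</p>

```python
def one_way_anova_f(groups):
    """One-way ANOVA F statistic for a list of rating groups.

    groups: one list of numeric ratings (e.g. 1-7 Likert scores) per
    condition, such as voice, facial recognition, emotion recognition.
    """
    all_vals = [v for g in groups for v in g]
    grand_mean = sum(all_vals) / len(all_vals)
    k = len(groups)      # number of conditions
    n = len(all_vals)    # total number of ratings
    # Between-group sum of squares (variation of condition means)
    ss_between = sum(
        len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups
    )
    # Within-group sum of squares (variation inside each condition)
    ss_within = sum(
        (v - sum(g) / len(g)) ** 2 for g in groups for v in g
    )
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical ratings for three AI conditions (not the study's data)
f_stat = one_way_anova_f([[5, 6, 7, 6], [5, 5, 6, 6], [4, 5, 5, 6]])
```

<p>The resulting F value is then compared against the F distribution with (k - 1, n - k) degrees of freedom to obtain a p value of the kind reported above.</p>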
<fig id="F2">
<label>Figure 2.</label>
<caption><p>Robot intelligence based on AI techniques</p></caption>
<graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="images/c7-fig2.jpg"><alt-text>Robot intelligence based on AI techniques</alt-text></graphic>
</fig>
<p>Overall, the results indicated that emotion detection ability, facial recognition, and natural language ability increased ratings of robot intelligence compared to text communication. However, it is interesting to note that these AI techniques were not judged to be significantly different from each other when participants evaluated robot intelligence. But consistently, participants evaluated a robot that communicated with text as being less intelligent. It was also shown that colourising a robot as black or white did not affect the evaluation of robot intelligence. In fact, across the different types of AI enablement, the evaluation of robot intelligence was remarkably similar for white and black-coloured robots (<xref ref-type="fig" rid="F3">Figure 3</xref>).</p>
<fig id="F3">
<label>Figure 3.</label>
<caption><p>Perceived robot intelligence as a function of robot colour</p></caption>
<graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="images/c7-fig3.jpg"><alt-text>Perceived robot intelligence as a function of robot colour</alt-text></graphic>
</fig>
<p>As a discussion point, I should note that these results are based on the viewing of a static robot image and not on interacting with a robot to perform a task; for that reason, I postulate that the type of task performed could influence participants&#x2019; ratings of robot intelligence, and this will be the topic of a future study (<xref rid="R4" ref-type="bibr">Ciocirlan et al., 2019</xref>). As another point to emphasize, the current study should be considered exploratory in that it reflects a beginning effort by the author to investigate a relatively unexplored topic: whether different techniques of AI influence the user&#x2019;s evaluation of a social entity. In the current study, the use of voice was a strong cue to increase the perception of robot intelligence, and with large language models beginning to be used with social robots, going forward, it is likely that robots will be perceived as even more intelligent entities during social interactions. Thus, voice communication seems to be a factor in the usability of social robots, which leads to another point: the perception of intelligence for an artificial entity could be an important factor to consider when evaluating the usability of an AI-equipped entity. From this observation, perhaps a robot intelligence scale should be developed and used in usability studies. It was also an interesting finding that robot surface colour had no effect on the perception of robot intelligence. If robots are racialized as indicated by Sparrow (<xref rid="R15" ref-type="bibr">2020a</xref>), it is possible that racial stereotypes may not be a factor in the evaluation of robot intelligence. This conclusion warrants further exploration. Finally, given the viewing of a static robot image, facial and emotion recognition, robot mobility, and natural language ability were not different in user evaluations of robot intelligence. Whether the results presented here are replicated in future studies using different robots and tasks, with more interactivity between user and robot, represents a future direction of my research.</p>
</sec>
</body>
<back>
<ref-list>
<title>References</title>
<ref id="R1"><element-citation publication-type="book"><person-group person-group-type="author"><name><surname>Barfield</surname><given-names>J. K.</given-names></name></person-group> <year>(2021)</year> <article-title>Discrimination and stereotypical responses to robots as a function of robot colorization</article-title><source>Adjunct Proceedings of the 29th ACM Conference on User Modeling, Adaptation and Personalization (UMAP &#x2019;21 Adjunct)</source><publisher-loc>Utrecht, the Netherlands</publisher-loc></element-citation></ref>
<ref id="R2"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Bartneck</surname><given-names>C.</given-names></name><name><surname>Yogeeswaran</surname><given-names>K.</given-names></name><name><surname>Sparrow</surname><given-names>R.</given-names></name><name><surname>Wang</surname><given-names>S.</given-names></name><name><surname>Eyssel</surname><given-names>F.</given-names></name></person-group> <year>(2018)</year> <article-title>Robots and racism</article-title><source>ACM/IEEE International Conference on Human-Robot Interaction</source><fpage>196</fpage><lpage>204</lpage></element-citation></ref>
<ref id="R3"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Chen</surname><given-names>J.-S.</given-names></name><name><surname>Le</surname><given-names>T.-T.</given-names></name><name><surname>Florence</surname><given-names>D.</given-names></name></person-group> <year>(2021)</year> <article-title>Usability and responsiveness of artificial intelligence chatbot on online customer experience in e-retailing</article-title><source>International Journal of Retail &#x0026; Distribution Management</source><volume>49</volume><issue>11</issue><fpage>1512</fpage><lpage>1531</lpage></element-citation></ref>
<ref id="R4"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Ciocirlan</surname><given-names>S.</given-names></name><name><surname>Agrigoroaie</surname><given-names>R.</given-names></name><name><surname>Tapus</surname><given-names>A.</given-names></name></person-group> <year>(2019)</year> <article-title>Human-robot team: Effects of communication in analyzing trust</article-title><source>IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)</source><fpage>1</fpage><lpage>7</lpage></element-citation></ref>
<ref id="R5"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Dou</surname><given-names>X.</given-names></name><name><surname>Wu</surname><given-names>C. F.</given-names></name><name><surname>Linz</surname><given-names>K. C.</given-names></name><name><surname>Gan</surname><given-names>S. Z.</given-names></name><name><surname>Tseng</surname><given-names>T. M.</given-names></name></person-group> <year>(2021)</year> <article-title>Effects of different types of social robot voices on affective evaluations in different application fields</article-title><source>International Journal of Social Robotics</source><volume>13</volume><issue>4</issue><fpage>615</fpage><lpage>628</lpage></element-citation></ref>
<ref id="R6"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Edwards</surname><given-names>C.</given-names></name><name><surname>Edwards</surname><given-names>A.</given-names></name><name><surname>Stoll</surname><given-names>B.</given-names></name><name><surname>Lin</surname><given-names>X.</given-names></name><name><surname>Massey</surname><given-names>N.</given-names></name></person-group> <year>(2019)</year> <article-title>Evaluations of an artificial intelligence instructor&#x2019;s voice: Social identity theory in human-robot interactions</article-title><source>Computers in Human Behavior</source><volume>90</volume><fpage>357</fpage><lpage>362</lpage></element-citation></ref>
<ref id="R7"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Flores-Fuentes</surname><given-names>W.</given-names></name><name><surname>Rodriguez-Quinonez</surname><given-names>J. C.</given-names></name><name><surname>Hernandez-Balbuena</surname><given-names>D.</given-names></name></person-group><etal/> <year>(2014)</year> <article-title>Machine vision supported by artificial intelligence, applied to rotatory mirror scanners</article-title><source>IEEE 23rd International Symposium on Industrial Electronics (ISIE)</source><fpage>1949</fpage><lpage>1954</lpage></element-citation></ref>
<ref id="R8"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Liu</surname><given-names>Z.</given-names></name></person-group><etal/> <year>(2017)</year> <article-title>A facial expression emotion recognition based human-robot interaction system</article-title><source>IEEE/CAA Journal of Automatica Sinica</source><volume>4</volume><issue>4</issue><fpage>668</fpage><lpage>676</lpage></element-citation></ref>
<ref id="R9"><element-citation publication-type="book"><person-group person-group-type="author"><name><surname>Makibuchi</surname><given-names>N.</given-names></name><name><surname>Shen</surname><given-names>F. R.</given-names></name><name><surname>Hasegawa</surname><given-names>O.</given-names></name></person-group> <year>(2010)</year> <article-title>Online knowledge acquisition and general problem solving in a real world by humanoid robots</article-title><person-group person-group-type="editor"><name><surname>Diamantaras</surname><given-names>K.</given-names></name></person-group><person-group person-group-type="editor"><name><surname>Duch</surname><given-names>W.</given-names></name></person-group><person-group person-group-type="editor"><name><surname>Iliadis</surname><given-names>L. S.</given-names></name></person-group><source>ICANN, Part III, LNCS 6354</source><fpage>551</fpage><lpage>556</lpage><publisher-name>Springer-Verlag Berlin Heidelberg</publisher-name></element-citation></ref>
<ref id="R10"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>McKee</surname><given-names>G. T.</given-names></name></person-group> <year>(2003)</year> <article-title>An online robot system for projects in robot intelligence</article-title><source>International Journal of Engineering Education</source><volume>19</volume><issue>3</issue><fpage>356</fpage><lpage>362</lpage></element-citation></ref>
<ref id="R11"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Riaz</surname><given-names>S.</given-names></name><name><surname>Mushtaq</surname><given-names>A.</given-names></name><name><surname>Ibrar</surname><given-names>H.</given-names></name></person-group> <year>(2022)</year> <article-title>Analyzing and comparing public perception of facial recognition, iris verification and fingerprints based authentication systems</article-title><source>International Conference on Control Decision and Information Technologies</source><fpage>641</fpage><lpage>646</lpage></element-citation></ref>
<ref id="R12"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Roundtree</surname><given-names>A. K.</given-names></name><name><surname>Moallem</surname><given-names>A.</given-names></name></person-group> <year>(2021)</year> <article-title>Testing facial recognition software for young adults and adolescents: An integrative review</article-title><source>HCI for Cybersecurity, Privacy and Trust (HCI-CPT 2021), 12788</source><fpage>50</fpage><lpage>65</lpage></element-citation></ref>
<ref id="R13"><element-citation publication-type="book"><person-group person-group-type="author"><name><surname>Russell</surname><given-names>S.</given-names></name><name><surname>Norvig</surname><given-names>P.</given-names></name></person-group> <year>(2020)</year> <source>Artificial intelligence: A modern approach</source><publisher-name>Pearson</publisher-name></element-citation></ref>
<ref id="R14"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Sparrow</surname><given-names>R.</given-names></name></person-group> <year>(2020)</year> <article-title>Do robots have race? Race, social construction, and HRI</article-title><source>IEEE Robotics and Automation Magazine</source><volume>27</volume><issue>3</issue><fpage>144</fpage><lpage>150</lpage></element-citation></ref>
<ref id="R15"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Sparrow</surname><given-names>R.</given-names></name></person-group> <year>(2020a)</year> <article-title>Robotics has a race problem</article-title><source>Science, Technology, &#x0026; Human Values</source><volume>45</volume><issue>3</issue><fpage>538</fpage><lpage>560</lpage></element-citation></ref>
</ref-list>
</back>
</article>