From human to AI: University students’ trust in generative AI as reference librarians
DOI: https://doi.org/10.47989/ir31iConf64203
Keywords: Trust in AI, University students, Reference services, University library
Abstract
Introduction. We report an investigation comparing students’ trust in generative AI (GAI) versus human librarians in the context of university library reference services, and explore the factors affecting students’ trust in GAI.
Method. A within-subject experiment with five tasks spanning five disciplines was conducted, based on a research model of antecedents of trust in AI comprising AI (trustee) characteristics, human (trustor) characteristics, and decision-situation (context) characteristics. Two versions of feedback (ChatGPT versus a human librarian) were generated for each task and presented to participants in random order without revealing the providers’ identities.
Analysis. Quantitative analyses were conducted on data from 146 participants, including descriptive statistics, reliability tests, bivariate correlations, and hierarchical multiple linear regression.
Results. Students’ trust in GAI versus human librarians showed no significant difference. AI characteristics emerged as the strongest predictors of students’ trust in GAI as reference librarians, especially the perceived knowledge of the AI. Human and decision-situation characteristics had limited effects on students’ trust in GAI.
Conclusion. The findings refine the antecedents of trust in AI, emphasising system-centric factors over user-centric ones. They support a shift toward ‘trustworthy by design’ AI systems and suggest prioritising verifiable knowledge bases and clear knowledge presentation when developing GAI-embedded reference service systems.
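The hierarchical multiple linear regression named in the Analysis section can be sketched as follows: predictor blocks are entered stepwise, and the change in R² at each step shows that block’s added explanatory power. This is a minimal illustration with simulated data; all variable names, block contents, and effect sizes are hypothetical, chosen only to mirror the reported pattern in which AI characteristics dominate.

```python
# Hedged sketch of a hierarchical multiple linear regression, as described
# in the abstract. The data below are simulated, not the study's data.
import numpy as np

rng = np.random.default_rng(0)
n = 146  # matches the reported sample size

# Hypothetical standardised predictors for the three antecedent blocks
human = rng.normal(size=(n, 2))    # trustor characteristics (hypothetical)
context = rng.normal(size=(n, 2))  # decision-situation characteristics (hypothetical)
ai = rng.normal(size=(n, 2))       # AI (trustee) characteristics (hypothetical)

# Simulated trust score dominated by the AI block, mirroring the finding
trust = (0.1 * human[:, 0] + 0.1 * context[:, 0] + 0.8 * ai[:, 0]
         + rng.normal(scale=0.5, size=n))

def r_squared(X, y):
    """R-squared of an OLS fit with an intercept, via least squares."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()

# Enter blocks hierarchically: human -> + context -> + AI
r2_step1 = r_squared(human, trust)
r2_step2 = r_squared(np.hstack([human, context]), trust)
r2_step3 = r_squared(np.hstack([human, context, ai]), trust)

print(f"Step 1 (human block):   R^2       = {r2_step1:.3f}")
print(f"Step 2 (+ context):     delta R^2 = {r2_step2 - r2_step1:.3f}")
print(f"Step 3 (+ AI):          delta R^2 = {r2_step3 - r2_step2:.3f}")
```

With blocks entered in this order, a large ΔR² at the final step indicates that AI characteristics explain variance in trust beyond the human and context blocks, which is the comparison the reported analysis relies on.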
License
Copyright (c) 2026 Di Wang , Xizhou Deng , Xinyu Lu , Jianting Guo

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
