Towards a typology of epistemic relationships in human–AI interaction

Authors

  • Shengnan Yang, The University of Western Ontario
  • Rongqian Ma, Indiana University Bloomington

DOI:

https://doi.org/10.47989/ir31iConf64143

Keywords:

Epistemic relationship, Human–AI interaction (HAI) and collaboration, AI metaphor, Semi-structured interview

Abstract

Introduction. This study investigates how academics negotiate epistemic relationships (ERs) with artificial intelligence (AI) in research and teaching. We conceptualize ERs as the positioning of humans and AI in relation to knowledge, asking: What epistemic relationships emerge in researchers’ interactions with AI, and how are they enacted across academic knowledge practices? 

Method. Semi-structured interviews were conducted with 31 academics across regions and disciplines, yielding two independently collected datasets.

Analysis. We developed a five-dimensional codebook including three epistemic attributes (assessment perspective, trust type, human epistemic status) and two analytic devices (activities and metaphors). We refined the coding through iterative analysis and identified recurring patterns across participants’ accounts in both datasets.

Results. Five recognizable ER types emerged: Epistemic Abstention, Instrumental Reliance, Contingent Delegation, Co-agency Collaboration, and Authority Displacement. These types reflect variations in how participants assess and trust AI, position themselves epistemically, and embed AI in academic activities.

Conclusion(s). Our study contributes to conversations on information literacies and the ethical use of AI. We highlight the contingent and practice-dependent nature of ERs, suggesting a shift beyond static metaphors of AI toward a more nuanced account that captures the epistemic dimensions of human–AI knowledge co-construction.

Published

2026-03-20

How to Cite

Yang, S., & Ma, R. (2026). Towards a typology of epistemic relationships in human–AI interaction. Information Research: An International Electronic Journal, 31(iConf), 1465–1480. https://doi.org/10.47989/ir31iConf64143

Section

Conference proceedings
