Redefining critical literacies and ethics in human–machine conversations in the emergent AI-human zone

DOI:

https://doi.org/10.47989/ir31iConf64250

Keywords:

Conversational information retrieval systems, Critical literacies, AI-human interactions, Information ethics, Information behaviour

Abstract

Introduction. The ‘Emergent Zone’ is a boundary space where humans and AI co-construct meaning, authorship, and responsibility in response to the transformative impact of conversational information retrieval systems.

Method. A conceptual framework is developed by synthesising information foraging theory, Vygotsky’s Zone of Proximal Development (ZPD), and Kuhlthau’s Zones of Intervention, all grounded in LIS literature and expanded through critical AI perspectives.

Analysis. AI’s roles as forager, scaffolder, and intervener are illustrated through exemplar cases drawn from library reference work, disinformation detection, and learning environments, revealing both efficiencies and profound challenges to agency and accountability.

Results. The framework underscores the urgent need for new critical literacies (e.g., prompt, interpretive, algorithmic, and ethical) and positions meta-literacy as an integrative foundation for LIS.

Conclusion. The Emergent Zone advances scholarship on AI–human interaction, calling for new literacies and reimagined pedagogies and ethics in LIS, and proposes future research emphasising justice-oriented and cross-cultural approaches.

References

Adelakun, N. O. (2024). Exploring the impact of artificial intelligence on information retrieval systems. Information Matters, 4(5). https://informationmatters.org/2024/05/exploring-the-impact-of-artificial-intelligence-on-information-retrieval-systems/

Agrawal, K. (2025). The future of AI in digital search: towards a fully conversational experience. Journal of Computer Science and Technology Studies, 7(4), 298–306. https://doi.org/10.32996/jcsts.2025.7.4.34

Atoum, I. (2025). Revolutionising AI governance: addressing bias and ensuring accountability through the holistic AI governance framework. International Journal of Advanced Computer Science and Applications, 16(2). https://dx.doi.org/10.14569/IJACSA.2025.0160283

Atreja, S., Ashkinaze, J., Li, L., Mendelsohn, J. & Hemphill, L. (2025). What's in a prompt?: a large-scale experiment to assess the impact of prompt design on the compliance and accuracy of LLM-generated text annotations. In Proceedings of the International AAAI Conference on Web and Social Media (Vol. 19, pp. 122–145). https://doi.org/10.1609/icwsm.v19i1.35807

Bawden, D. & Robinson, L. (2022). Introduction to Information Science (2nd ed.). London, United Kingdom: Facet Publishing.

Bender, E. M., Gebru, T., McMillan-Major, A. & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? FAccT '21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–623. https://doi.org/10.1145/3442188.3445922

Boyd, D. (2014). It's complicated: The social lives of networked teens. Yale University Press. http://www.jstor.org/stable/j.ctt5vm5gk

Capurro, R. (2008). Intercultural information ethics. In K. E. Himma & H. T. Tavani (Eds.), The Handbook of Information and Computer Ethics (pp. 639–665). Hoboken, New Jersey: John Wiley & Sons.

Chavan, V., Cenaj, A., Shen, S., Bar, A., Binwani, S., Del Becaro, T., ... & Fresquet, X. (2025). Feeling machines: ethics, culture, and the rise of emotional AI. arXiv preprint arXiv:2506.12437. https://doi.org/10.48550/arXiv.2506.12437

Chaves-de-Plaza, N. F., Mody, P., Hildebrandt, K., Staring, M., Astreinidou, E., de Ridder, M., ... & van Egmond, R. (2025). Implementation of delineation error detection systems in time-critical radiotherapy: do AI-supported optimisation and human preferences meet? Cognition, Technology and Work, 27(1), 41–57. https://doi.org/10.1007/s10111-024-00784-4

Chehak, M., Debelius, M., Holtschlag, J., Kim, G., Le, K., Lyons, S., ... & Oh, S. (2025). Designing an AI policy: an experiment in co-creation. International Journal for Students as Partners, 9(1), 151–160. https://doi.org/10.15173/ijsap.v9i1.5836

Couldry, N. & Mejias, U. A. (2019). The costs of connection: How data is colonising human life and appropriating it for capitalism. Redwood City, CA: Stanford University Press.

Crawford, K. & Paglen, T. (2021). Excavating AI: the politics of images in machine learning training sets. AI and Society, 36(4), 1105–1116. https://doi.org/10.1007/s00146-021-01162-8

Czerniewicz, L. (2018). Unbundling and rebundling higher education in an age of inequality. EDUCAUSE Review (Online). https://www.proquest.com/scholarly-journals/unbundling-rebundling-higher-education-age/docview/3224663262/se-2

D'Ignazio, C. & Klein, L. F. (2020). Data feminism. Cambridge, MA: The MIT Press.

Doshi-Velez, F. & Kim, B. (2017). Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608. https://arxiv.org/abs/1702.08608

Ellis, D. (1989). A behavioural approach to information retrieval system design. Journal of Documentation, 45(3), 171–212. https://doi.org/10.1108/eb026843

Elmborg, J. (2006). Critical information literacy: implications for instructional practice. Journal of Academic Librarianship, 32(2), 192–199. https://doi.org/10.1016/j.acalib.2005.12.004

Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor. New York, NY: St. Martin's Press.

Floridi, L. & Cowls, J. (2019). A unified framework of five principles for AI in society. Harvard Data Science Review, 1(1). https://doi.org/10.1162/99608f92.8cd550d1

Fourie, I. (2013). Twenty‐first century librarians: time for Zones of Intervention and Zones of Proximal Development? Library Hi Tech, 31(1), 171–181. https://doi.org/10.1108/07378831311304001

Fraser, N. (2007). Re-framing justice in a globalising world. In (Mis)recognition, social inequality and social justice (pp. 29–47). Milton Park, United Kingdom: Routledge.

Gillespie, T. (2018). Custodians of the internet: Platforms, content moderation, and the hidden decisions that shape social media. New Haven, CT: Yale University Press.

Haider, J. & Bawden, D. (2007). Conceptions of ‘information poverty’ in LIS: a discourse analysis. Journal of Documentation, 63(4), 534–557. https://doi.org/10.1108/00220410710759002

Håkansson, A. & Phillips-Wren, G. (2024). Generative AI and large language models: benefits, drawbacks, future and recommendations. Procedia Computer Science, 246, 5458–5468. https://doi.org/10.1016/j.procs.2024.09.689

Hou, I., Man, O., Hamilton, K., Muthusekaran, S., Johnykutty, J., Zadeh, L. & MacNeil, S. (2025). 'All roads lead to ChatGPT': how generative AI is eroding social interactions and student learning communities. In Proceedings of the 30th ACM Conference on Innovation and Technology in Computer Science Education V. 1 (pp. 79–85). https://doi.org/10.48550/arXiv.2504.09779

Krakowska, M. & Zych, M. (2025). (Un)conventional ways of dialogic information retrieval using prompt engineering and the role of AI literacy. Information Research: An International Electronic Journal, 30(CoLIS), 121–140. https://doi.org/10.47989/ir30CoLIS52243

Kuhlthau, C. C. (1991). Inside the search process: Information seeking from the user's perspective. Journal of the American Society for Information Science, 42(5), 361–371. https://doi.org/10.1002/(SICI)1097-4571(199106)42:5<361::AID-ASI6>3.0.CO;2-%23

Kuhlthau, C. C. (1994). Students and the information search process: zones of intervention for librarians. Advances in Librarianship, 18, 57–72. https://doi.org/10.1108/S0065-2830(1994)0000018004

Łabędzki, R., Mikołajczyk, K., Biłyk, A. & Trojanowska, M. (2025). Understanding human-AI collaboration: a systematic review of challenges and research methods in management. In International Conference on Human-Computer Interaction (pp. 332–348). Cham: Springer Nature Switzerland.

Limberg, L., Sundin, O. & Talja, S. (2012). Three theoretical perspectives on information literacy. Human IT: Journal for Information Technology Studies as a Human Science, 11(2), 93–130. https://web.archive.org/save/https://humanit.hb.se/article/view/69/51

Lloyd, A. (2010). Information literacy landscapes: Information literacy in education, workplace and everyday contexts. Hull, United Kingdom: Chandos Publishing.

Liu, F., Liu, Y., Shi, L., Huang, H., Wang, R., Yang, Z., ... & Ma, Y. (2024). Exploring and evaluating hallucinations in LLM-powered code generation. arXiv preprint arXiv:2404.00971. https://doi.org/10.48550/arXiv.2404.00971

Long, D. X., Dinh, D., Nguyen, N. H., Kawaguchi, K., Chen, N. F., Joty, S., & Kan, M. Y. (2025). What makes a good natural language prompt? arXiv preprint arXiv:2506.06950. https://doi.org/10.48550/arXiv.2506.06950

Mackey, T. P. & Jacobson, T. E. (2014). Metaliteracy: Reinventing information literacy to empower learners. Chicago, IL: ALA Neal-Schuman.

Metzger, M. J. & Flanagin, A. J. (2013). Credibility and trust of information in online environments: the use of cognitive heuristics. Journal of Pragmatics, 59(Part B), 210–220. https://doi.org/10.1016/j.pragma.2013.07.012

Moore, H. (2022). Mind the gap! From traditional and instrumental approaches of source evaluation towards source consciousness. Nordic Journal of Library and Information Studies, 3(2), 1–15. https://doi.org/10.7146/njlis.v3i2.125485

Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. New York, NY: New York University Press.

O'Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. New York, NY: Crown Publishing Group.

Pangrazio, L. & Selwyn, N. (2018). ‘It’s not like it’s life or death or whatever’: young people’s understandings of social media data. Social Media + Society, 4(3). https://doi.org/10.1177/2056305118787808

Panteli, D., Adib, K., Buttigieg, S., Goiana-da-Silva, F., Ladewig, K., Azzopardi-Muscat, N., ... & McKee, M. (2025). Artificial intelligence in public health: promises, challenges, and an agenda for policy makers and public health institutions. The Lancet Public Health, 10(5), e428–e432. https://doi.org/10.1016/S2468-2667(25)00036-2

Park, J. (2025). A systematic literature review of generative artificial intelligence (GenAI) literacy in schools. Computers and Education: Artificial Intelligence, 100487. https://doi.org/10.1016/j.caeai.2025.100487

Pasquale, F. (2015). The black box society: the secret algorithms that control money and information. Cambridge, MA: Harvard University Press.

Pirolli, P. & Card, S. (1999). Information foraging. Psychological Review, 106(4), 643–675. https://doi.org/10.1037/0033-295X.106.4.643

Ravichander, A., Ghela, S., Wadden, D. & Choi, Y. (2025). Halogen: fantastic LLM hallucinations and where to find them. arXiv preprint arXiv:2501.08292. https://doi.org/10.48550/arXiv.2501.08292

Reisman, D., Schultz, J., Crawford, K. & Whittaker, M. (2018). Algorithmic impact assessments: a practical framework for public agency accountability. AI Now Institute. https://web.archive.org/web/20250915153613/https://ainowinstitute.org/publications/algorithmic-impact-assessments-report-2

Revez, J. & Corujo, L. (2021). Librarians against fake news: a systematic literature review of library practices (Jan. 2018–Sept. 2020). The Journal of Academic Librarianship, 47(2), 102304. https://doi.org/10.1016/j.acalib.2020.102304

Robayo-Pinzon, O., Rojas-Berrio, S., Rincon-Novoa, J. & Ramirez-Barrera, A. (2024). Artificial intelligence and the value co-creation process in higher education institutions. International Journal of Human–Computer Interaction, 40(20), 6659–6675. https://doi.org/10.1080/10447318.2023.2259722

Roy, B. K. & Mukhopadhyay, P. (2023). Theoretical backbone of library and information science: a quest. LIBER Quarterly: The Journal of the Association of European Research Libraries, 33(1), 1–57. https://doi.org/10.53377/lq.13269

Shah, C. & White, R. W. (2025). From to-do to ta-da: transforming task-focused IR with generative AI. In Proceedings of the 48th International ACM SIGIR Conference on Research and Development in Information Retrieval (pp. 3911–3921). https://doi.org/10.1145/3726302.3730352

Shumailov, I., Shumaylov, Z., Zharov, D., Papernot, N. & Anderson, R. (2023). The curse of recursion: training on generated data makes models forget. arXiv preprint arXiv:2305.17493. https://arxiv.org/abs/2305.17493

Steigenberger, N. (2025). Deceptive signalling: causes, consequences and remedies. International Journal of Management Reviews, 27(2), 283–305. https://doi.org/10.1111/ijmr.12392

Stokel-Walker C. (2023). ChatGPT listed as author on research papers: many scientists disapprove. Nature, 613(7945), 620–621. https://doi.org/10.1038/d41586-023-00107-z

UNESCO. (2021). Recommendation on the ethics of artificial intelligence. UNESCO Publishing. https://web.archive.org/save/https://unesdoc.unesco.org/ark:/48223/pf0000381137

Vasquez, V. M., Janks, H. & Comber, B. (2019). Critical literacy as a way of being and doing. Language Arts, 96(5), 300–311. https://www.jstor.org/stable/26779071 Archived: https://web.archive.org/web/20250915121912/https://www.jstor.org/stable/26779071

Vidergor, H. E. (2023). Teaching futures thinking literacy and futures studies in schools. Futures, 146, 103083. https://doi.org/10.1016/j.futures.2022.103083

Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes (M. Cole, V. John-Steiner, S. Scribner & E. Souberman, Eds.). Cambridge, MA: Harvard University Press. https://doi.org/10.2307/j.ctvjf9vz4

Wilson, T. D. (1999). Models in information behaviour research. Journal of Documentation, 55(3), 249–270. https://doi.org/10.1108/EUM0000000007145

Zhai, C., Wibowo, S. & Li, L. D. (2024). The effects of over-reliance on AI dialogue systems on students' cognitive abilities: a systematic review. Smart Learning Environments, 11(1), 28. https://doi.org/10.1186/s40561-024-00316-7

Zhang, B. (2023). Prompt engineers or librarians? An exploration. Medical Reference Services Quarterly, 42(4), 381–386. https://doi.org/10.1080/02763869.2023.2250680

Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. New York, NY: PublicAffairs.

Published

2026-03-20

How to Cite

Meyer, A., Holmner, M., & Bothma, T. J. (2026). Redefining critical literacies and ethics in human–machine conversations in the emergent AI-human zone. Information Research: An International Electronic Journal, 31(iConf), 1585–1596. https://doi.org/10.47989/ir31iConf64250

Section

Conference proceedings
