See, trust, and interact: how AI disclosure shapes high school students’ trust

Authors

  • Nuo Chen, Peking University
  • Zhiyuan Lai, Peking University
  • Yichu Liu, Peking University
  • Jia Li, Peking University
  • Rui Wang, Peking University
  • Pu Yan, Peking University

DOI:

https://doi.org/10.47989/ir31iConf64165

Keywords:

Artificial Intelligence Generated Content (AIGC), AI label, user trust, human-AI interaction, information disclosure

Abstract

Introduction. The rise of AI-generated content challenges adolescents’ ability to evaluate information and calibrate trust. This study explores how AI disclosure influences high school students’ attention to, trust in, and interaction with AI-generated news and comments.

Method. A field experiment was conducted with 60 students at a county-level high school in Henan, China. Participants were randomly assigned to a control group (no disclosure) or to one of two experimental groups (simple vs. detailed disclosure), enabling examination of group-level effects. Data collection combined eye-tracking, post-test questionnaires, and interviews.
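
As an illustration of the assignment procedure, here is a minimal sketch assuming balanced allocation (20 students per condition) by simple shuffling; the condition labels, equal group sizes, and fixed seed are assumptions, since the abstract reports only that assignment was random.

```python
# Hypothetical sketch of balanced random assignment to the three conditions.
# Equal group sizes (60 / 3 = 20) and the recorded seed are assumptions.
import random

CONDITIONS = ["control", "simple_disclosure", "detailed_disclosure"]

def assign_conditions(n_participants: int = 60, seed: int = 42) -> list[str]:
    """Return a shuffled list mapping each participant to a condition."""
    assignment = CONDITIONS * (n_participants // len(CONDITIONS))
    random.Random(seed).shuffle(assignment)  # fixed seed keeps the allocation reproducible
    return assignment

if __name__ == "__main__":
    groups = assign_conditions()
    print(groups[:6])  # conditions for the first six participants
```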

Analysis. Eye-tracking metrics and survey data were analysed quantitatively to examine the main and moderating effects of AI disclosure, while interview transcripts were thematically coded to provide qualitative insights.
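
To make the quantitative strategy concrete, the sketch below fits a main-effect model and a condition-by-moderator interaction with ordinary least squares. The synthetic data, column names (group, internet_use, fixation_ms, trust), and model specifications are illustrative assumptions, not the authors' reported analysis.

```python
# Illustrative sketch only: synthetic data stand in for the study's
# eye-tracking and survey measures, which are not public.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 60
df = pd.DataFrame({
    "group": rng.choice(["control", "simple", "detailed"], size=n),  # disclosure condition
    "internet_use": rng.choice(["light", "heavy"], size=n),          # hypothetical moderator
    "fixation_ms": rng.normal(1200, 300, size=n),                    # hypothetical eye-tracking metric
    "trust": rng.integers(1, 8, size=n).astype(float),               # hypothetical 7-point trust rating
})

# Main effect of disclosure condition on attention (fixation duration).
main = smf.ols("fixation_ms ~ C(group)", data=df).fit()
print(main.summary())

# Moderation: condition x internet-use interaction on self-reported trust.
mod = smf.ols("trust ~ C(group) * C(internet_use)", data=df).fit()
print(mod.summary())
```

Treating the condition as a categorical factor (C(group)) makes the control group the reference level here, so each coefficient is a contrast against no disclosure, and the interaction terms test whether those contrasts differ between light and heavy internet users.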

Results. Simple disclosure increased attention to and trust in AI bots but reduced trust in, and willingness to share, news content. Detailed disclosure lowered engagement overall, slightly reducing trust in conversational settings and strongly reducing news-sharing. Individual differences moderated these effects: light internet users benefited most from simple labels, whereas heavy users showed stronger gains in AI trust and technical understanding from detailed explanations.

Conclusion. AI disclosure produces context-dependent effects. Effective design should align label complexity with content type and user experience, offering guidance for the ethical integration of AI in education and social media.

Published

2026-03-20

How to Cite

Chen, N., Lai, Z., Liu, Y., Li, J., Wang, R., & Yan, P. (2026). See, trust, and interact: how AI disclosure shapes high school students’ trust. Information Research: An International Electronic Journal, 31(iConf), 1099–1145. https://doi.org/10.47989/ir31iConf64165

Section

Conference proceedings
