How do users respond to AI fact-checkers?

Authors

Chua, A. Y., & Han, J.

DOI:

https://doi.org/10.47989/ir31iConf64149

Keywords:

Fact-checking, Artificial intelligence, Fake news

Abstract

Introduction. This paper empirically validates a conceptual model that explains how users respond to AI fact-checkers originating from different countries. Guided by the country-of-origin effect, source credibility theory, and the elaboration likelihood model, the model comprises five variables: AI fact-checkers, fact-checker source credibility, perceived credibility of flagged news, issue involvement, and AI literacy.

Method. An online experiment was conducted to examine how participants responded to AI fact-checkers from two countries, namely the United States of America (U.S.) and China.

Analysis. A total of 139 responses were collected. Data were analysed using one-way analysis of variance (ANOVA) and PROCESS Models 4 and 9.
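The analysis described above can be sketched in Python. This is an illustrative reconstruction, not the authors' code: the variable names, effect sizes, and simulated data are all hypothetical, and PROCESS itself is an SPSS/SAS/R macro. The regressions below only mirror the simple-mediation logic of PROCESS Model 4 (Model 9 additionally places two moderators on the a-path).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 139  # sample size reported in the study

# Hypothetical condition coding: 0 = U.S. AI fact-checker, 1 = China
condition = rng.integers(0, 2, n).astype(float)
# Simulated mediator and outcome with illustrative effect sizes
source_cred = 4.0 - 0.6 * condition + rng.normal(0, 1, n)   # mediator
flagged_cred = 3.0 + 0.5 * source_cred + rng.normal(0, 1, n)  # outcome

# One-way ANOVA: does country of origin shift perceived credibility?
f_stat, p_val = stats.f_oneway(flagged_cred[condition == 0],
                               flagged_cred[condition == 1])

# Simple mediation, the logic behind PROCESS Model 4:
# path a: condition -> mediator
a = stats.linregress(condition, source_cred).slope
# path b: mediator -> outcome, controlling for condition (OLS via lstsq)
X = np.column_stack([np.ones(n), source_cred, condition])
coefs, *_ = np.linalg.lstsq(X, flagged_cred, rcond=None)
b = coefs[1]
indirect_effect = a * b  # PROCESS tests this with bootstrapped CIs
```

In the actual macro, the significance of the indirect effect is judged by whether a bootstrapped confidence interval excludes zero, rather than by a single point estimate as shown here.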

Results. The results showed that AI fact-checkers (country of origin: U.S. vs. China) directly influenced the perceived credibility of flagged news. Fact-checker source credibility mediated the effects of AI fact-checkers. Issue involvement moderated the indirect effect of AI fact-checkers on the perceived credibility of flagged news via fact-checker source credibility, whereas AI literacy did not.

Conclusion(s). Theoretically, this paper adds to the scholarly understanding of the effectiveness of AI fact-checkers from different countries of origin. Practically, it highlights the importance of considering the country-of-origin effect when deploying AI fact-checkers for social media platforms.

Published

2026-03-20

How to Cite

Chua, A. Y., & Han, J. (2026). How do users respond to AI fact-checkers? Information Research: An International Electronic Journal, 31(iConf), 1021–1032. https://doi.org/10.47989/ir31iConf64149

Section

Conference proceedings
