Myth, reality, or in between: Unveiling potential geographical biases of ChatGPT

Authors

  • Manjula Wijewickrema, Sabaragamuwa University of Sri Lanka

DOI:

https://doi.org/10.47989/ir31146885

Keywords:

AI chatbots, ChatGPT, Geographical biases, Training data, User survey

Abstract

Introduction. This research examines how geographically biased training data influence the nature of content in ChatGPT responses and assesses, from the users' perspective, the potential occurrence of various geographically biased responses.

Method. ChatGPT was tested with geographically oriented prompts covering ninety-eight countries. The responses were analysed for opinion, fact, and neutral directive sentences, as well as for their qualitative and quantitative characteristics. A user survey was conducted to identify potential geographical biases in ChatGPT responses.

Analysis. The Wilcoxon signed-rank test and the permutation test were employed, in addition to descriptive analysis. The R programming language within RStudio was used for the data analysis.

Results. Central European countries exhibited more opinion sentences and Western European countries more fact sentences in their responses. Qualitative responses had greater meaning consistency than quantitative ones. Sentence type depended on the qualitative or quantitative nature of a prompt, not on its geography. ChatGPT 3.5 was the most used version, with no reports of geographically offensive, racially biased, or religiously biased responses. Views on geographical bias varied by region, though certain trends emerged.

Conclusion. ChatGPT generates responses of similar length irrespective of region. Qualitative responses are generally more consistent and reliable in terms of their meanings. Most users do not perceive geographical biases, though concerns arise in East Asia and South America.

Published

2026-01-15

How to Cite

Wijewickrema, M. (2026). Myth, reality, or in between: Unveiling potential geographical biases of ChatGPT. Information Research an International Electronic Journal, 31(1), 188–225. https://doi.org/10.47989/ir31146885

Issue

Section

Peer-reviewed papers
