Disability misinformation on Facebook: a comparison of LLM-based fact-checking tools

Authors

  • Ian Prazak, Duke University
  • Leah Padovani, University of Maryland
  • Yool Lim, Rice University
  • Julia (Hsin-Ping) Hsu, George Mason University
  • Myeong Lee, George Mason University

DOI:

https://doi.org/10.47989/ir31iConf64259

Keywords:

Misinformation, Disabilities, Social media, Large language models, Fact-checking

Abstract

Introduction. Social media has become a prominent space for seeking and sharing information, but it also enables misinformation to spread. When it comes to disability-related information, such as how to apply for a Medicaid waiver, understanding the prevalence of false information on social media becomes further complicated due to varying content types. To provide an initial exploration of this problem space, we investigated misinformation propensity within disability-related Facebook groups, group factors associated with it, and the performance of AI fact-checking tools in detecting this type of information.

Method. We identified target Facebook groups through a large-scale survey. From 20 public Facebook groups mentioned in the survey, we scraped 1,407 informational, fact-checkable posts. GPT-4o, o1, and Originality.ai were used to classify the posts, and their classifications were compared.

Analysis. The results were validated against manually generated ground-truth labels, providing benchmarks for assessing AI tools' ability to detect misinformation on Facebook.
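The validation step described above can be sketched as a comparison of each tool's labels against the manual ground truth. The labels, the `benchmark` helper, and the toy data below are illustrative assumptions, not the study's actual data or metrics pipeline:

```python
# Hypothetical sketch: scoring one tool's post labels against
# manually generated ground truth. "misinformation" is treated as
# the positive class for precision/recall.

def benchmark(predicted, ground_truth, positive="misinformation"):
    """Return accuracy, precision, and recall for the positive class."""
    assert len(predicted) == len(ground_truth)
    tp = sum(p == positive == g for p, g in zip(predicted, ground_truth))
    fp = sum(p == positive != g for p, g in zip(predicted, ground_truth))
    fn = sum(g == positive != p for p, g in zip(predicted, ground_truth))
    correct = sum(p == g for p, g in zip(predicted, ground_truth))
    accuracy = correct / len(ground_truth)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return accuracy, precision, recall

# Toy example: five posts, one tool's labels vs. manual labels.
truth = ["accurate", "misinformation", "accurate", "misinformation", "accurate"]
preds = ["accurate", "misinformation", "misinformation", "accurate", "accurate"]
acc, prec, rec = benchmark(preds, truth)
```

Reporting precision and recall separately matters here because, as the Results note, a tool can classify accurate posts well (high accuracy overall) while still missing much of the misinformation (low recall on the positive class).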

Results. Our findings reveal that groups centered on developmental disabilities tend to be more vulnerable to misinformation. The AI fact-checking tools were generally effective at classifying accurate information but showed varying performance in detecting misinformation.

Conclusion. This work provides an initial assessment of the prevalence of misinformation about disability services and the performance of LLM-based tools.


Published

2026-03-20

How to Cite

Prazak, I., Padovani, L., Lim, Y., Hsu, J. H.-P., & Lee, M. (2026). Disability misinformation on Facebook: a comparison of LLM-based fact-checking tools. Information Research: An International Electronic Journal, 31(iConf), 1042–1053. https://doi.org/10.47989/ir31iConf64259

Section

Conference proceedings
