AI for research assessment: opportunities and challenges from India

Authors

  • Tatiana Chakravorti, Pennsylvania State University
  • Chuhao Wu, Clemson University
  • Sai Koneru, Pennsylvania State University
  • Sarah Rajtmajer, Pennsylvania State University

DOI:

https://doi.org/10.47989/ir31iConf64130

Keywords:

Replication prediction tool, Research ethics, Open science

Abstract

Introduction. Artificial intelligence (AI) is beginning to reshape the infrastructures of scholarly communication, raising important questions about trust and transparency. This study examines researchers’ perspectives on AI-driven scientific replication prediction tools as emerging components of research assessment.

Method. Qualitative, in-depth, semi-structured interviews were conducted with 19 faculty members and doctoral scholars in India to explore how researchers perceive and engage with an AI-based scientific replication prediction tool.

Analysis. Our analysis focused on how information practices, institutional incentives, and sociocultural contexts shape the adoption of AI technologies in scholarly work. Interview transcripts were analysed thematically through a collaborative and iterative coding process.

Results. Participants reported that limited funding, infrastructure, and access to advanced tools and high-quality datasets make replicating studies difficult in India. They recognised the value of AI tools for surfacing reproducibility assessments during literature reviews and study design, and advocated for hybrid human–AI systems that balance the efficiency of automation with the nuanced judgment of experts.

Conclusion. This study situates AI replication tools within broader scholarly infrastructures, highlighting the need for design features that enhance explainability, fairness, and cultural sensitivity. It also documents the challenges Indian researchers face in pursuing replication and open science.


Published

2026-03-20

How to Cite

Chakravorti, T., Wu, C., Koneru, S., & Rajtmajer, S. (2026). AI for research assessment: opportunities and challenges from India. Information Research: An International Electronic Journal, 31(iConf), 1714–1732. https://doi.org/10.47989/ir31iConf64130

Section

Conference proceedings
