Transparency as organizational accomplishment: How an astronomical collaboration learns to observe its algorithms

Authors

W. Sutherland and A. Tanweer

DOI:

https://doi.org/10.47989/ir31iConf64156

Keywords:

Algorithmic systems, AI for science, Social informatics, Transparency, Organizational learning

Abstract

Introduction. Recent work on algorithmic systems has characterized such systems as relational entities embedded in larger sociotechnical assemblages. We examine some implications of this framing for the practical accomplishment of transparency by organizations and communities. Specifically, we investigate a large astronomical survey project’s effort to render its observation scheduling algorithm transparent to a large international community of researchers.

Method. We report on one part of a larger ethnographic study of the survey project, drawing on interview and documentary data as the primary empirical sources for this analysis.

Analysis. We conducted theory-generative qualitative coding based on grounded theory methodology, developing three codes relevant to the organizational accomplishment of transparency in this case.

Results. We argue that, from the perspective of an organized group wrestling with the opacity of its algorithmic systems, accomplishing transparency requires effortful design and organizing work, is ongoing rather than one-off, and produces novel organizational relations.

Conclusions. Relational conceptions of algorithmic and AI/ML systems highlight the work and challenges of accomplishing transparency beyond processes of publication, visibility, or access. Greater attention to this organizing work is warranted as such systems are rolled out more broadly in scientific and social life.

Published

2026-03-20

How to Cite

Sutherland, W., & Tanweer, A. (2026). Transparency as organizational accomplishment: How an astronomical collaboration learns to observe its algorithms. Information Research: An International Electronic Journal, 31(iConf), 1639–1651. https://doi.org/10.47989/ir31iConf64156

Section

Conference proceedings
