Competing visions of ethical AI: a case study of OpenAI

Authors

DOI:

https://doi.org/10.47989/ir31iConf64115

Keywords:

AI ethics, Ethics discourse, AI governance

Abstract

Introduction. AI ethics is framed differently across actors and stakeholder groups. We report results from a case study of OpenAI analysing its ethical AI discourse.

Method. The research asked: how has OpenAI’s public discourse leveraged ‘ethics’, ‘safety’, ‘alignment’, and adjacent concepts over time, and what does this discourse signal about the company’s framing of ethics in practice? A structured corpus, differentiating communication aimed at a general audience from communication with an academic audience, was assembled from public documentation.

Analysis. Qualitative content analysis of ethical themes combined inductively derived and deductively applied codes. Quantitative analysis used computational content analysis via NLP to model topics and quantify changes in rhetoric over time. Visualizations report aggregate results. For reproducibility, we have released our code at https://github.com/famous-blue-raincoat/AI_Ethics_Discourse.

Results. Safety and risk discourse dominates OpenAI’s public communication and documentation, largely without drawing on the frameworks or vocabularies of academic and advocacy ethics.

Conclusions. Implications for governance are presented, along with discussion of ethics-washing practices in industry.

References

Barocas, S., Hardt, M., & Narayanan, A. (2023). Fairness and machine learning: Limitations and opportunities. The MIT Press.

Bietti, E. (2020). From ethics washing to ethics bashing: A view on tech ethics from within moral philosophy. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 210–219. https://doi.org/10.1145/3351095.3372860

Burns, C., Leike, J., Aschenbrenner, L., Wu, J., Izmailov, P., Gao, L., Baker, B., & Kirchner, J. H. (2023, December 14). Weak-to-strong generalization. https://web.archive.org/web/20251122235243/https://openai.com/index/weak-to-strong-generalization/

Cath, C. (2018). Governing artificial intelligence: Ethical, legal, and technical opportunities and challenges. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133), 20180080. https://doi.org/10.1098/rsta.2018.0080

Fairclough, N. (1992). Discourse and text: Linguistic and intertextual analysis within discourse analysis. Discourse & Society, 3(2), 193–217. https://doi.org/10.1177/0957926592003002004

Floridi, L., & Cowls, J. (2019). A unified framework of five principles for AI in society. Harvard Data Science Review. https://doi.org/10.1162/99608f92.8cd550d1

Gebru, T., Morgenstern, J., Vecchione, B., Vaughan, J. W., Wallach, H., Daumé, H., III, & Crawford, K. (2021). Datasheets for datasets. Communications of the ACM, 64(12), 86–92. https://doi.org/10.1145/3458723

Green, B. (2021). The contestation of tech ethics: A sociotechnical approach to technology ethics in practice. Journal of Social Computing, 2(3), 209–225. https://doi.org/10.23919/JSC.2021.0018

Greene, D., Hoffmann, A. L., & Stark, L. (2019). Better, nicer, clearer, fairer: A critical assessment of the movement for ethical artificial intelligence and machine learning. Hawaii International Conference on System Sciences. https://doi.org/10.24251/HICSS.2019.258

Griffin, T. A., Green, B. P., & Welie, J. V. M. (2025). The ethical wisdom of AI developers. AI and Ethics, 5(2), 1087–1097. https://doi.org/10.1007/s43681-024-00458-x

Guan, M. Y., Joglekar, M., Wallace, E., Jain, S., Barak, B., Helyar, A., Dias, R., Vallone, A., Ren, H., Wei, J., Chung, H. W., Toyer, S., Heidecke, J., Beutel, A., & Glaese, A. (2025). Deliberative alignment: Reasoning enables safer language models (No. arXiv:2412.16339). arXiv. https://doi.org/10.48550/arXiv.2412.16339

Hajer, M. A. (1995). The politics of environmental discourse: Ecological modernization and the policy process. Clarendon Press; Oxford University Press.

Hao, K. (2025). Empire of AI: Dreams and nightmares in Sam Altman’s OpenAI. Penguin Press.

Heilinger, J.-C. (2022). The ethics of AI ethics: A constructive critique. Philosophy & Technology, 35(3), 61. https://doi.org/10.1007/s13347-022-00557-9

Herzog, C., & Blank, S. (2024). A systemic perspective on bridging the principles-to-practice gap in creating ethical artificial intelligence solutions – a critique of dominant narratives and proposal for a collaborative way forward. Journal of Responsible Innovation, 11(1), 2431350. https://doi.org/10.1080/23299460.2024.2431350

IEEE. (2019). Ethically aligned design: A vision for prioritizing human well-being with autonomous and intelligent systems. https://ieeexplore.ieee.org/document/9398613

Irving, G., & Askell, A. (2019). AI safety needs social scientists. Distill, 4(2). https://doi.org/10.23915/distill.00014

Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389–399. https://doi.org/10.1038/s42256-019-0088-2

Lazar, S., & Nelson, A. (2023). AI safety on whose terms? Science, 381(6654), 138. https://doi.org/10.1126/science.adi8982

Lloyd, S. (1982). Least squares quantization in PCM. IEEE Transactions on Information Theory, 28(2), 129–137.

McInnes, L., Healy, J., & Astels, S. (2017). hdbscan: Hierarchical density based clustering. Journal of Open Source Software, 2(11), 205. https://doi.org/10.21105/joss.00205

Metcalf, J., Moss, E., & boyd, danah. (2019). Owning ethics: Corporate logics, Silicon Valley, and the institutionalization of ethics. Social Research: An International Quarterly, 86(2), 449–476. https://doi.org/10.1353/sor.2019.0022

Metcalf, J., Moss, E., Watkins, E. A., Singh, R., & Elish, M. C. (2021). Algorithmic impact assessments and accountability: The co-construction of impacts. Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 735–746. https://doi.org/10.1145/3442188.3445935

Metz, C., Isaac, M., Mickle, T., Weise, K., & Roose, K. (2023, November 22). Sam Altman Is Reinstated as OpenAI’s Chief Executive. The New York Times. Retrieved January 2, 2026. https://web.archive.org/web/20250912211038/https://www.nytimes.com/2023/11/22/technology/openai-sam-altman-returns.html

Mitchell, M., Wu, S., Zaldivar, A., Barnes, P., Vasserman, L., Hutchinson, B., Spitzer, E., Raji, I. D., & Gebru, T. (2019). Model cards for model reporting. Proceedings of the Conference on Fairness, Accountability, and Transparency, 220–229. https://doi.org/10.1145/3287560.3287596

Mittelstadt, B. (2019). Principles alone cannot guarantee ethical AI. Nature Machine Intelligence, 1(11), 501–507. https://doi.org/10.1038/s42256-019-0114-4

Morley, J., Elhalal, A., Garcia, F., Kinsey, L., Mökander, J., & Floridi, L. (2021). Ethics as a Service: A pragmatic operationalisation of AI ethics. Minds and Machines, 31(2), 239–256. https://doi.org/10.1007/s11023-021-09563-w

OpenAI, Achiam, J., Adler, S., Agarwal, S., Ahmad, L., Akkaya, I., Aleman, F. L., Almeida, D., Altenschmidt, J., Altman, S., Anadkat, S., Avila, R., Babuschkin, I., Balaji, S., Balcom, V., Baltescu, P., Bao, H., Bavarian, M., Belgum, J., … Zoph, B. (2024). GPT-4 technical report (arXiv:2303.08774). arXiv. https://doi.org/10.48550/arXiv.2303.08774

OpenAI appoints Scott Schools as Chief Compliance Officer. (2024, October 22). OpenAI. Retrieved September 12, 2025. https://web.archive.org/web/20250810102318/https://openai.com/global-affairs/openai-chief-compliance-officer-announcement/

OpenAI’s approach to AI and national security. (2024, October 24). OpenAI. Retrieved November 20, 2025. https://web.archive.org/web/20251112113838/https://openai.com/global-affairs/openais-approach-to-ai-and-national-security/

OpenAI charter. (n.d.). OpenAI. Retrieved September 12, 2025. https://web.archive.org/web/20250912002726/https://openai.com/charter/

OpenAI news. (2025, August 7). OpenAI. Retrieved September 12, 2025. https://web.archive.org/web/20250808015203/https://openai.com/news/

OpenAI research. (2025, September 5). OpenAI. Retrieved September 12, 2025. https://web.archive.org/web/20250723120604/https://openai.com/research/index/

OpenAI’s commitment to child safety: Adopting safety by design principles. (2024, April 23). OpenAI. Retrieved September 12, 2025. https://web.archive.org/web/20250910175627/https://openai.com/index/child-safety-adopting-sbd-principles/

Operator system card. (2025, January 23). OpenAI. Retrieved December 20, 2025. https://web.archive.org/web/20250910181725/https://openai.com/index/operator-system-card/

Our updated preparedness framework. (2025, April 15). OpenAI. Retrieved September 12, 2025. https://web.archive.org/web/20250928131055/https://openai.com/index/updating-our-preparedness-framework/

Orr, W., & Davis, J. L. (2020). Attributions of ethical responsibility by Artificial Intelligence practitioners. Information, Communication & Society, 23(5), 719–735. https://doi.org/10.1080/1369118X.2020.1713842

Papagiannidis, E., Mikalef, P., & Conboy, K. (2025). Responsible artificial intelligence governance: A review and research framework. The Journal of Strategic Information Systems, 34(2), 101885. https://doi.org/10.1016/j.jsis.2024.101885

Pioneering an AI clinical copilot with Penda Health. (2025, July 22). OpenAI. Retrieved September 12, 2025. https://web.archive.org/web/20250916204527/https://openai.com/index/ai-clinical-copilot-penda-health/

Planning for AGI and beyond. (2023, February 24). OpenAI. Retrieved September 12, 2025. https://web.archive.org/web/20250928123016/https://openai.com/index/planning-for-agi-and-beyond/

Raji, I. D., Smart, A., White, R. N., Mitchell, M., Gebru, T., Hutchinson, B., Smith-Loud, J., Theron, D., & Barnes, P. (2020). Closing the AI accountability gap: Defining an end-to-end framework for internal algorithmic auditing. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 33–44. https://doi.org/10.1145/3351095.3372873

Reimers, N., & Gurevych, I. (2019). Sentence-BERT: Sentence embeddings using Siamese BERT-networks (arXiv:1908.10084). arXiv. https://arxiv.org/abs/1908.10084

Ryan, M., Antoniou, J., Brooks, L., Jiya, T., Macnish, K., & Stahl, B. (2021). Research and practice of AI ethics: A case study approach juxtaposing academic discourse with organisational reality. Science and Engineering Ethics, 27(2), 16. https://doi.org/10.1007/s11948-021-00293-x

Sam Altman returns as CEO, OpenAI has a new initial board. (2023, November 29). OpenAI. Retrieved December 30, 2025. https://web.archive.org/web/20251213233127/https://openai.com/index/sam-altman-returns-as-ceo-openai-has-a-new-initial-board/

Seele, P., & Schultz, M. D. (2022). From greenwashing to machinewashing: A model and future directions derived from reasoning by analogy. Journal of Business Ethics, 178(4), 1063–1089. https://doi.org/10.1007/s10551-022-05054-9

Shilton, K. (2018). Values and ethics in human-computer interaction. Foundations and Trends in Human–Computer Interaction, 12(2), 107–171. https://doi.org/10.1561/1100000073

Shoker, S., & Reddie, A. (2023, August 1). Confidence-building measures for artificial intelligence: Workshop proceedings. OpenAI. Retrieved September 12, 2025. https://web.archive.org/web/20251113040959/https://openai.com/index/confidence-building-measures-for-artificial-intelligence/

Sora system card. (2024, December 9). OpenAI. Retrieved December 23, 2025. https://web.archive.org/web/20250912002625/https://openai.com/index/sora-system-card/

Stahl, B. C., Antoniou, J., Ryan, M., Macnish, K., & Jiya, T. (2022). Organisational responses to the ethical issues of artificial intelligence. AI & SOCIETY, 37(1), 23–37. https://doi.org/10.1007/s00146-021-01148-6

Stamboliev, E., & Christiaens, T. (2025). How empty is trustworthy AI? A discourse analysis of the ethics guidelines of trustworthy AI. Critical Policy Studies, 19(1), 39–56. https://doi.org/10.1080/19460171.2024.2315431

U.S. Senate Committee on the Judiciary, Subcommittee on Privacy, Technology, and the Law. (2023, May 16). Oversight of A.I.: Rules for artificial intelligence [Hearing]. U.S. Government Publishing Office. Retrieved September 12, 2025. https://web.archive.org/web/20250913112015/https://www.judiciary.senate.gov/committee-activity/hearings/oversight-of-ai-rules-for-artificial-intelligence

van Maanen, G. (2022). AI ethics, ethics washing, and the need to politicize data ethics. Digital Society, 1(2), 9. https://doi.org/10.1007/s44206-022-00013-3

Published

2026-03-20

How to Cite

Wilfley, M., Ai, M., & Sanfilippo, M. R. (2026). Competing visions of ethical AI: A case study of OpenAI. Information Research: An International Electronic Journal, 31(iConf), 684–698. https://doi.org/10.47989/ir31iConf64115

Section

Conference proceedings
